

The first half of 2024 will go down as the period when we shed the last vestiges of the “wait and see” culture that has dominated the quantum computing industry. Thanks to a run of recent achievements, we have helped to lead the entire quantum computing industry into a new, post-classical era.
Today we are announcing the latest of these achievements: a major upgrade to our flagship System Model H2 quantum computer, raising its qubit count from 32 to 56. We also reveal meaningful results from our work with our partner JPMorgan Chase & Co. that showcase a significant lift in performance.
But to understand the full importance of today’s announcements, it is worth recapping the succession of breakthroughs that confirm that we are entering a new era of quantum computing in which classical simulation will be infeasible.
Between January and June 2024, Quantinuum’s pioneering teams published a series of results that accelerate our path to universal fault-tolerant quantum computing.
Our technical teams first presented a long-sought solution to the “wiring problem”, an engineering challenge that affects all types of quantum computers. In short, most current designs would require an impossible number of wires connected to the quantum processor in order to reach large qubit numbers. Our solution removes this bottleneck, demonstrating that our QCCD architecture can scale to high qubit counts.
Next, we became the first quantum computing company in the world to hit “three 9s” (99.9%) two-qubit gate fidelity across all qubit pairs in a production device. This level of fidelity in two-qubit gate operations was long thought to herald the point at which error-corrected quantum computing could become a reality. It has accelerated and intensified our focus on quantum error correction (QEC). Our scientists and engineers are working with our customers and partners to achieve multiple breakthroughs in QEC in the coming months, many of which will be incorporated into products such as the H-Series and our chemistry simulation platform, InQuanto™.
Following that, with our long-time partner Microsoft, we hit an error correction performance threshold that many believed was still years away. The System Model H2 became the first – and only – quantum computer in the world capable of creating and computing with highly reliable logical (error corrected) qubits. In this demonstration, the H2-1 configured with 32 physical qubits supported the creation of four highly reliable logical qubits operating at “better than break-even”. In the same demonstration, we also shared that logical circuit error rates were shown to be up to 800x lower than the corresponding physical circuit error rates. No other quantum computing company is even close to matching this achievement (despite many feverish claims in the past 12 months).
The quantum computing industry is departing the era when quantum computers could be simulated by a classical computer. Today, we are making two milestone announcements. The first is that our H2-1 processor has been upgraded to 56 trapped-ion qubits, placing it beyond the reach of exact classical simulation while retaining its market-leading fidelity, all-to-all qubit connectivity, mid-circuit measurement, qubit reuse, and feed-forward.
The second is that the upgrade of H2-1 from 32 to 56 qubits makes our processor capable of challenging the world’s most powerful supercomputers. This demonstration was achieved in partnership with our long-term collaborator JPMorgan Chase & Co. and researchers from Caltech and Argonne National Lab.
Our collaboration tackled a well-known benchmarking task, Random Circuit Sampling (RCS), and measured the quality of our results with a suite of tests including the linear cross entropy benchmark (XEB) – an approach first made famous by Google in 2019 in a bid to demonstrate “quantum supremacy”. An XEB score close to 0 indicates noisy results that do not utilize the full potential of quantum computing; the closer the score is to 1, the more the results demonstrate the power of quantum computing. The results on H2-1 are excellent, revealing, and worth exploring in a little detail. The complete data is available on GitHub.
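For readers who want to see how the score is computed: the linear XEB estimator takes the bitstrings sampled from the device and averages their ideal output probabilities (obtained from a classical simulation of the same circuit), giving a score of 2^n * (average ideal probability) - 1. Below is a minimal sketch in Python; the function and variable names are our own illustration, not the code used in this benchmark.

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Estimate the linear cross-entropy benchmark (XEB) fidelity.

    ideal_probs: dict mapping bitstring -> ideal output probability,
                 computed by classically simulating the circuit.
    samples:     list of bitstrings measured on the quantum device.
    """
    mean_p = np.mean([ideal_probs.get(s, 0.0) for s in samples])
    # A uniformly random (fully noisy) sampler gives a score near 0;
    # a noiseless device on a deep random circuit gives a score near 1.
    return (2 ** n_qubits) * mean_p - 1
```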
Our results show how far quantum hardware has come since Google’s initial demonstration. Google originally ran circuits on 53 superconducting qubits that were deep enough to severely frustrate high-fidelity classical simulation at the time, achieving an estimated XEB score of ~0.002. While that small value was shown to be statistically inconsistent with zero, improvements in classical algorithms and hardware have steadily raised the XEB scores achievable by classical computers, to the point that classical computers can now match Google’s scores on the original circuits.

In contrast, we have been able to run circuits on all 56 qubits in H2-1 that are deep enough to challenge high-fidelity classical simulation while achieving an estimated XEB score of ~0.35. This >100x improvement implies the following: even for circuits large and complex enough to frustrate all known classical simulation methods, the H2 quantum computer produces results without making even a single error about 35% of the time. Unlike past XEB demonstrations, whose scores sat barely above zero, 35% is a significant step towards the idealized 100% fidelity limit, at which the computational advantage of quantum computers is clearly in sight.
This huge jump in quality is made possible by Quantinuum’s market-leading high fidelity and our unique all-to-all connectivity. This flexible connectivity, a product of our QCCD architecture, lets us implement circuits with much more complex geometries than the 2D layouts supported by superconducting-based quantum computers, which means our circuits become difficult to simulate classically with significantly fewer operations (or gates). This has an enormous impact on how our computational power scales as we add more qubits: since noisy quantum computers can only run a limited number of gates before returning unusable results, needing fewer gates ultimately translates into solving complex tasks with consistent and dependable accuracy.
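To see why connectivity matters for gate counts, consider a two-qubit gate between distant qubits on a 2D grid: the qubits must first be brought together with a chain of SWAPs, each of which typically costs three two-qubit gates. The toy calculation below is our own illustration, under the common three-CNOTs-per-SWAP assumption; with all-to-all connectivity this overhead vanishes, since the QCCD architecture physically transports ions instead.

```python
def swap_overhead_2d(pos_a, pos_b):
    """Extra two-qubit gates needed to make two grid qubits adjacent.

    Assumes nearest-neighbor connectivity and 3 CNOTs per SWAP.
    With all-to-all connectivity this overhead is zero.
    """
    manhattan = abs(pos_a[0] - pos_b[0]) + abs(pos_a[1] - pos_b[1])
    swaps = max(manhattan - 1, 0)
    return 3 * swaps

# A gate between opposite corners of an 8x7 grid (56 qubits):
print(swap_overhead_2d((0, 0), (7, 6)))  # 36 extra two-qubit gates
```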
This is a vitally important moment for companies and governments watching this space and deciding when to invest in quantum: these results underscore both the performance capabilities and the rapid rate of improvement of our processors, and they mark the System Model H2 as a prime candidate for achieving near-term value.
A direct comparison can be made between the time it took H2-1 to perform RCS and the time it took a classical supercomputer. However, classical simulations of RCS can be made faster by building a larger supercomputer (or by distributing the workload across many existing supercomputers). A more robust comparison is to consider the amount of energy that must be expended to perform RCS on either H2-1 or on classical computing hardware, which ultimately controls the real cost of performing RCS. An analysis based on the most efficient known classical algorithm for RCS and the power consumption of leading supercomputers indicates that H2-1 can perform RCS at 56 qubits with an estimated 30,000x reduction in power consumption. These early results should be seen as very attractive for data center owners and supercomputing facilities looking to add quantum computers as “accelerators” for their users.
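As a rough template for how such a comparison is constructed (energy = average power draw × runtime on each platform), here is a sketch. The numbers below are illustrative placeholders only, not the measured values behind the 30,000x estimate.

```python
def energy_kwh(power_kw: float, hours: float) -> float:
    """Energy consumed = average power draw x runtime."""
    return power_kw * hours

# Placeholder inputs, NOT the actual measurements:
quantum_energy = energy_kwh(power_kw=25.0, hours=2.0)
classical_energy = energy_kwh(power_kw=20_000.0, hours=150.0)
print(f"energy ratio: {classical_energy / quantum_energy:,.0f}x")
```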
Today’s milestone announcements are clear evidence that the H2-1 quantum processor can perform computational tasks with far greater efficiency than classical computers. They underpin the expectation that as our quantum computers scale beyond today’s 56 qubits to hundreds, thousands, and eventually millions of high-quality qubits, classical supercomputers will quickly fall behind. As scrutiny of the power consumed by classical computers on highly intensive workloads continues to grow – workloads such as simulating molecules and material structures, tasks widely expected to be amenable to quantum speedups – Quantinuum’s quantum computers are likely to become the device of choice.
With this upgrade in our qubit count to 56, we will no longer be offering a commercial “fully encompassing” emulator – a mathematically exact simulation of our H2-1 quantum processor is no longer feasible, as it would exceed the combined memory of the world’s best supercomputers. With 56 qubits, the only way to get exact results is to run on the actual hardware, a trend the leaders in this field have already embraced.
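The memory claim is easy to verify with back-of-the-envelope arithmetic: an exact statevector of n qubits stores 2^n complex amplitudes, each taking 16 bytes at double precision.

```python
n = 56
amplitudes = 2 ** n             # 2^56 ≈ 7.2e16 amplitudes
bytes_needed = amplitudes * 16  # 16 bytes per double-precision complex
print(f"{bytes_needed / 1e18:.2f} exabytes")  # ≈ 1.15 exabytes
```

For comparison, the memory of today’s leading supercomputers is measured in petabytes, roughly a hundred times short of what an exact 56-qubit statevector requires.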
More generally, this work demonstrates that connectivity, fidelity, and speed are all interconnected when measuring the power of a quantum computer. Our competitive edge will persist in the long run; as we move to running more algorithms at the logical level, connectivity and fidelity will continue to play a crucial role in performance.
“We are entirely focused on the path to universal fault tolerant quantum computers. This objective has not changed, but what has changed in the past few months is clear evidence of the advances that have been made possible due to the work and the investment that has been made over many, many years. These results show that whilst the full benefits of fault tolerant quantum computers have not changed in nature, they may be reachable earlier than was originally expected, and crucially, that along the way, there will be tangible benefits to our customers in their day-to-day operations as quantum computers start to perform in ways that are not classically simulatable. We have an exciting few months ahead of us as we unveil some of the applications that will start to matter in this context with our partners across a number of sectors.”
– Ilyas Khan, Chief Product Officer
Stay tuned for results in error correction, physics, chemistry, and more on our new 56-qubit processor.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Quantinuum is focused on redefining what’s possible in hybrid quantum–classical computing by integrating its best-in-class systems with high-performance NVIDIA accelerated computing to create powerful new architectures that can solve the world’s most pressing challenges.
The launch of Helios, Powered by Honeywell, the world’s most accurate quantum computer, marks a major milestone in quantum computing. Helios is now available to all customers through the cloud or on-premise deployment, launched with a go-to-market offering that seamlessly pairs Helios with the NVIDIA Grace Blackwell platform, targeting specific end markets such as drug discovery, finance, materials science, and advanced AI research.
We are also working with NVIDIA to adopt NVIDIA NVQLink, an open system architecture, as a standard for advancing hybrid quantum-classical supercomputing. Using this technology with Quantinuum Guppy and the NVIDIA CUDA-Q platform, Quantinuum has implemented NVIDIA accelerated computing across Helios and future systems to perform real-time decoding for quantum error correction.
In an industry-first demonstration, an NVIDIA GPU-based decoder integrated in the Helios control engine improved the logical fidelity of quantum operations by more than 3% — a notable gain given Helios’ already exceptionally low error rate. These results demonstrate how integration with NVIDIA accelerated computing through NVQLink can directly enhance the accuracy and scalability of quantum computation.

This unique collaboration spans the full Quantinuum technology stack. Quantinuum’s next-generation software development environment allows users to interleave quantum and GPU-accelerated classical computations in a single workflow. Developers can build hybrid applications using tools such as NVIDIA CUDA-Q, NVIDIA CUDA-QX, and Quantinuum’s Guppy, to make advanced quantum programming accessible to a broad community of innovators.
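For a flavor of what such a hybrid workflow looks like, here is a minimal CUDA-Q kernel in Python. This is a generic Bell-state example written against the public CUDA-Q programming model, not Quantinuum production code; the classical post-processing of the returned counts is where GPU-accelerated steps would slot in.

```python
import cudaq

@cudaq.kernel
def bell():
    # Prepare and measure a two-qubit Bell pair.
    q = cudaq.qvector(2)
    h(q[0])
    x.ctrl(q[0], q[1])
    mz(q)

# Sample the kernel; the resulting counts can feed GPU-accelerated
# classical routines (e.g., training a generative model) in the
# same workflow.
counts = cudaq.sample(bell, shots_count=1000)
print(counts)
```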
The collaboration also reaches into applied research through the NVIDIA Accelerated Quantum Computing Research Center (NVAQC), where an NVIDIA GB200 NVL72 supercomputer can be paired with Quantinuum’s Helios to further drive hybrid quantum-GPU research, including the development of breakthrough quantum-enhanced AI applications.
A recent achievement illustrates this potential. The ADAPT-GQE framework, a transformer-based Generative Quantum AI (GenQAI) approach, uses a generative AI model to efficiently synthesize circuits that prepare the ground state of a chemical system on a quantum computer. Developed by Quantinuum, NVIDIA, and a pharmaceutical industry leader, and leveraging NVIDIA CUDA-Q with GPU-accelerated methods, ADAPT-GQE achieved a 234x speed-up in generating training data for complex molecules. The team used the framework to explore imipramine, a molecule crucial to pharmaceutical development. The transformer was trained on imipramine conformers and synthesized ground-state circuits orders of magnitude faster than ADAPT-VQE; the circuit produced by the transformer was run on Helios to prepare the ground state using InQuanto, Quantinuum's computational chemistry platform.
From hardware and software integration to GenQAI applications, the collaboration between Quantinuum and NVIDIA is building the bridge between classical and quantum computing, creating a future where AI becomes more expansive through quantum computing, and quantum computing becomes more powerful through AI.
By Dr. Noah Berthusen
The earliest works on quantum error correction showed that by combining many noisy physical qubits into a complex entangled state called a "logical qubit," quantum information could be made to survive for arbitrarily long times. QEC researchers devote much effort to hunting for codes that function well as "quantum memories," as such codes are called. Many promising code families have been found, but this is only half of the story.
Being able to keep a qubit around for a long time is one thing, but to realize the theoretical advantages of quantum computing we need to run quantum circuits, and to make sure noise doesn't ruin the computation, those circuits need to be run on the logical qubits of our code. This is often much more challenging than performing gates on the physical qubits of the device, as these "logical gates" often require many physical operations to implement. What's more, it is often not immediately obvious which logical gates a code has, so converting a physical circuit into a logical circuit can be rather difficult.
Some codes, like the famous surface code, are good quantum memories and also have easy logical gates. The drawback is that the ratio of logical qubits to physical qubits (the "encoding rate") is low, and so many physical qubits are required to implement large logical algorithms. High-rate codes that are good quantum memories have also been found, but computing on them is much more difficult. The holy grail of QEC, so to speak, would be a high-rate code that is a good quantum memory and also has easy logical gates. Here, we make progress on that front by developing a new code with those properties.
A recent work from Quantinuum QEC researchers introduced genon codes. The underlying construction method for these codes, called the "symplectic double cover," also provided a way to obtain logical gates that are well suited to Quantinuum's QCCD architecture. Namely, these "SWAP-transversal" gates are performed by applying single-qubit operations and relabeling the physical qubits of the device. Thanks to the all-to-all connectivity facilitated by qubit movement in the QCCD architecture, this relabeling can be done in software essentially for free. Combined with extremely low single-qubit error rates (~1.2 × 10⁻⁵), the resulting logical gates are similarly high fidelity.
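To make "relabeling in software" concrete: a SWAP can be tracked as a permutation of the compiler's logical-to-physical qubit map rather than executed as physical gates. A toy sketch (our own illustration, not Quantinuum's compiler):

```python
# Map from logical wire index to physical ion label.
qubit_map = {0: "ion_A", 1: "ion_B", 2: "ion_C", 3: "ion_D"}

def swap_in_software(qmap, a, b):
    """Perform a SWAP by relabeling: no physical gate is applied,
    so the operation is essentially error-free."""
    qmap[a], qmap[b] = qmap[b], qmap[a]

swap_in_software(qubit_map, 0, 2)
print(qubit_map)  # logical wire 0 now addresses ion_C
```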
Given the promise of these codes, we take them a step further in our new paper. We combine the symplectic double codes with the [[4,2,2]] Iceberg code using a procedure called "code concatenation". A concatenated code is a bit like a set of nesting dolls: an outer code contains codes within it, and those inner codes may themselves contain further codes. More technically, in a concatenated code the logical qubits of one code act as the physical qubits of another code.
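The parameter bookkeeping can be sketched as follows. If an outer [[n, k, d]] code has each pair of its physical qubits supplied by the two logical qubits of a [[4,2,2]] Iceberg block, the result uses 2n physical qubits, keeps k logical qubits, and has distance at least 2d, since distances multiply under concatenation. This is a simplified illustration of the general idea, not the precise construction in the paper.

```python
def concatenated_params(n_outer, k_outer, d_outer):
    """Toy parameter bookkeeping: an [[n,k,d]] outer code whose
    physical qubits are the logical qubits of [[4,2,2]] Iceberg blocks."""
    assert n_outer % 2 == 0, "outer qubits are packed in pairs"
    n_total = 4 * (n_outer // 2)  # each Iceberg block holds 2 qubits
    d_total = 2 * d_outer         # distance lower bound multiplies
    return n_total, k_outer, d_total

# A hypothetical [[12, 10, 2]] outer code:
print(concatenated_params(12, 10, 2))  # (24, 10, 4)
```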
The new codes, which we call "concatenated symplectic double codes", were designed so that they have many of these easily implementable SWAP-transversal gates. Central to their construction, we show how the concatenation method allows us to "upgrade" logical gates in terms of their ease of implementation; this procedure may provide insights for constructing other codes with convenient logical gates. Notably, the SWAP-transversal gate set on this code is so powerful that only two additional operations (logical T and S) are necessary for universal computation. Furthermore, these codes have many logical qubits, and we also present numerical evidence suggesting that they are good quantum memories.
Concatenated symplectic double codes have one of the easiest logical computation schemes, and we didn’t have to sacrifice rate to achieve it. Looking ahead on our roadmap, we are targeting hundreds of logical qubits at a ~1 × 10⁻⁸ logical error rate by 2029. These codes put us in a prime position to leverage the best characteristics of our hardware and create a device that can achieve real commercial advantage.
Every year, the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC) brings together the global supercomputing community to explore the technologies driving the future of computing.
Join Quantinuum at this year’s conference, taking place November 16th – 21st in St. Louis, Missouri, where we will showcase how our quantum hardware, software, and partnerships are helping define the next era of high-performance and quantum computing.
The Quantinuum team will be on-site at booth #4432 to showcase how we’re building the bridge between HPC and quantum.
On Tuesday and Wednesday, our quantum computing experts will host daily tutorials at our booth covering Helios, our next-generation hardware platform; Nexus, our all-in-one quantum computing platform; and Hybrid Workflows, featuring the integration of NVIDIA CUDA-Q with Quantinuum Systems.
Join our team as they share insights on the opportunities and challenges of quantum integration within the HPC ecosystem:
Panel Session: The Quantum Era of HPC: Roadmaps, Challenges and Opportunities in Navigating the Integration Frontier
November 19th | 10:30am – 12:00pm CST
During this panel session, Kentaro Yamamoto from Quantinuum will join experts from Lawrence Berkeley National Laboratory, IBM, QuEra, RIKEN, and the Pawsey Supercomputing Research Centre to explore how quantum and classical systems are being brought together to accelerate scientific discovery and industrial innovation.
BoF Session: Bridging the Gap: Making Quantum-Classical Hybridization Work in HPC
November 19th | 5:15 – 6:45pm CST
Quantum-classical hybrid computing is moving from theory to reality, yet no clear roadmap exists for how best to integrate quantum processing units (QPUs) into established HPC environments. In this Birds of a Feather discussion, co-led by Quantinuum’s Grahame Vittorini and representatives from BCS, DOE, EPCC, Inria, ORNL, NVIDIA, and RIKEN, we hope to bring together a global community of HPC practitioners, system architects, quantum computing specialists, and workflow researchers, including participants in the Workflow Community Initiative, to assess the state of hybrid integration and identify practical steps toward scalable, impactful deployment.