Quantinuum extends its significant lead in quantum computing, achieving historic milestones for hardware fidelity and Quantum Volume

Quantinuum has raised the bar for the global ecosystem by achieving the historic and much-vaunted “three 9's” 2-qubit gate fidelity in its commercial quantum computer and announcing that its Quantum Volume has surpassed one million – exponentially higher than its nearest competitors.

April 16, 2024

By Ilyas Khan, Founder and Chief Product Officer, and Jenni Strabley, Senior Director of Offering Management

All quantum error correction schemes depend for their success on physical hardware achieving high enough fidelity. If there are too many errors in the physical qubit operations, the error correcting code has the effect of amplifying rather than diminishing overall error rates. For decades now, it has been hoped that one day a quantum computer would achieve “three 9's” – an iconic, inherent 99.9% 2-qubit physical gate fidelity – at which point many of the error-correcting codes required for universal fault tolerant quantum computing would successfully be able to squeeze errors out of the system.

That day has now arrived. Building on several previous laboratory demonstrations [1, 2, 3], Quantinuum has become the first company ever to achieve “three 9's” in a commercially-available quantum computer, with the first demonstration of 99.914(3)% 2-qubit gate fidelity, showing repeatable performance across all qubit pairs on our H1-1 system that is constantly available to customers. This production-environment announcement is a marked difference from one-offs recorded in carefully contrived laboratory conditions, and it demonstrates what will fast become the expected standard for the entire quantum computing sector.

Quantinuum is also announcing another milestone, a seven-figure Quantum Volume (QV) of 1,048,576 – or in terms preferred by the experts, 2^20 – reinforcing our commitment to building, by a significant margin, the highest-performing quantum computers in the world.

These announcements follow a historic month that started when we proved our ability to scale our systems to the sizes needed to solve some of the world’s most pressing problems – and in a way that offers the best path to universal quantum computing.  

On March 5th, 2024, Quantinuum researchers disclosed details of our experiments that provide a solution to a totemic problem faced by all quantum computing architectures, known as the wiring problem. Supported by a video showing qubits being shuffled through a 2-dimensional grid ion-trap, our team presented concrete proof of the scalability of the quantum charge-coupled device (QCCD) architecture used in our H-Series quantum computers.

Stop-motion ion transport video showing a chosen sorting operation implemented on an 8-site 2D grid trap with the swap-or-stay primitive. The sort is implemented by discrete choices of swaps or stays between neighboring sites. The numbers shown (indicated by dashed circles) at the beginning and end of the video show the initial and final location of the ions after the sort, e.g. the ion that starts at the top left site ends at the bottom right site. The stop-motion video was collected by segmenting the primitive operation and pausing mid-operation such that Yb fluorescence could be detected with a CMOS camera exposure.

On April 3rd, 2024 in partnership with Microsoft, our teams announced a breakthrough in quantum error correction that delivered as its crowning achievement the most reliable logical qubits on record.

We revealed detailed demonstrations in an arXiv pre-print paper of the reliability achieved via 4 logical qubits encoded into just 30 physical qubits on our System Model H2 quantum computer. Our joint teams demonstrated logical circuit error rates far below physical circuit error rates – a capability that, today, only our full-stack quantum computer has the fidelity to achieve.

Explaining the importance of 2-qubit gate fidelity

Reaching this level of physical fidelity is not optional for commercial scale computers – it is essential for error correction to work, and that in turn is a necessary foundation for any useful quantum computer. Our record two-qubit gate fidelity of 99.914(3)% marks a symbolic inflection point for the industry: at ”three 9's” fidelity, we are nearing or surpassing the break-even point (where logical qubits outperform physical qubits) for many quantum error correction protocols, and this will generate great interest among research and industrial teams exploring fault-tolerant methods for tackling real-world problems.

Without hardware fidelity this good, error-corrected calculations are noisier than un-corrected computations. This is why we call it a “threshold” – when gate errors are “above threshold”, quantum computers will remain noisy no matter what you do. Below threshold, you can use quantum error correction to push error rates way, way down, so that quantum computers eventually become as reliable as classical computers.  
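That threshold behavior can be illustrated with a minimal sketch. The constants below are hypothetical, and the scaling used is the common heuristic for distance-d codes rather than any Quantinuum-specific model, but it captures the point: below threshold, a bigger code shrinks the logical error rate; above threshold, it makes things worse.

```python
# Illustrative only: heuristic logical error rate p_L ~ A * (p/p_th)**((d+1)//2)
# for a distance-d code, with a hypothetical threshold p_th and constant A.

def logical_error_rate(p: float, p_th: float, d: int, A: float = 0.1) -> float:
    """Heuristic logical error rate for a distance-d code (toy model)."""
    return A * (p / p_th) ** ((d + 1) // 2)

p_th = 1e-2  # hypothetical 1% threshold
for p in (2e-2, 1e-3):  # one rate above threshold, one below
    rates = [logical_error_rate(p, p_th, d) for d in (3, 5, 7)]
    trend = "grows" if rates[-1] > rates[0] else "shrinks"
    print(f"p={p}: logical error {trend} with code distance: {rates}")
```

Running it shows the qualitative divide: at p = 2e-2 (above threshold) the logical error rate grows with distance, while at p = 1e-3 it falls rapidly.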

Four years ago, Quantinuum pledged to improve the performance of its H-Series quantum computers by 10x each year for five years, when measured by the industry’s most widely recognized benchmark, QV (an industry standard not to be confused with less comprehensive metrics such as Algorithmic Qubits).

Today’s achievement of a 2^20 QV – which, as with all our demonstrations, was achieved on our commercially-available machine – shows that our team is living up to this audacious commitment. We are completely confident we can continue to overcome the technical problems that stand in the way of even better fidelity and QV performance. Our QV data is available on GitHub, as are our hardware specifications.

The combination of high QV and gate fidelities puts the Quantinuum system in a class by itself – it is far and away the best of any commercially-available quantum computer.

Figure 1: Quantum Volume (QV) heavy output probability (HOP) as a function of time-ordered circuit index. The solid blue line shows the cumulative average while the green region shows the two-sigma confidence interval based on bootstrap resampling. A QV test is passed when the lower two-sigma confidence interval crosses 2/3.
Figure 2. Quantum volume vs time for our commercial systems. Quantinuum’s new world record quantum volume of 1,048,576 maintains our self-imposed goal of a 10-fold increase each year. In fact, in 2023 we achieved an overall increase in quantum volume of >100x.
Figure 3. Two-qubit randomized benchmarking data from H1-1 across the five gate zones (dashed lines) and the average over all five gate zones (solid blue line). The survival probability decays as a function of sequence length, which can be related to the average fidelity of the two-qubit gates with standard randomized benchmarking theory. With this data, we can claim not only that all zones are consistent with 99.9% fidelity, but that all zones are >99.9% outside of error bars.
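The pass criterion described in Figure 1 can be sketched in a few lines. The per-circuit heavy-output probabilities below are made up rather than real H-Series data, but the procedure is the standard one: bootstrap-resample the per-circuit HOPs and check that the lower two-sigma bound on the mean stays above 2/3.

```python
# Sketch of the QV pass test: bootstrap the mean heavy-output probability (HOP)
# and require the lower two-sigma confidence bound to exceed 2/3.
import random

def qv_passes(hops, n_boot=2000, seed=0):
    """Return True if the lower 2-sigma bootstrap bound on mean HOP is > 2/3."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        sample = [rng.choice(hops) for _ in hops]  # resample with replacement
        means.append(sum(sample) / len(sample))
    mu = sum(means) / n_boot
    sigma = (sum((m - mu) ** 2 for m in means) / n_boot) ** 0.5
    return mu - 2 * sigma > 2 / 3

rng = random.Random(1)
hops = [rng.gauss(0.72, 0.02) for _ in range(200)]  # hypothetical per-circuit HOPs
print(qv_passes(hops))
```

Passing this test on circuits of n qubits (and depth n) certifies a Quantum Volume of 2^n, which is why QV 1,048,576 corresponds to 20-qubit test circuits.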
QCCD: the path to fault tolerance

Additionally, and notably, these benchmarks were achieved “inherently”, without error mitigation, thanks to the H Series’ all-to-all connectivity and QCCD architecture. Full connectivity results in fewer errors when running large, complicated circuits. While other modalities depend on error mitigation techniques, such techniques are not scalable and offer only modest near-term value.

Lower physical error rates and high connectivity mean our quantum computers have a provably lower overhead for error-corrected computation.

Looking more deeply, experts look for high fidelities that hold in all operating zones and between any pair of qubits – and this is precisely what our H Series delivers. Unlike our competitors, we do not suffer from a broad distribution of gate fidelities in which some pairs of qubits perform significantly worse than others. Quantinuum is the only quantum computing company with all qubit pairs boasting above 99.9% fidelity.

Alongside these demonstrations of scalability, fidelity, connectivity, and reliability, it is worth noting how these features impact what arguably matters most to users – time to solution. In the QCCD architecture, raw operation speed is decoupled from the time needed to reach a computational solution, thanks to a combination of:

  • a better signal-to-noise ratio than other modalities,
  • a drastic reduction in (or elimination of) the swap gates required, because we can move our ions through space, and
  • a reduction in the number of trials required for an accurate result.

The net effect is that for increasingly complex circuits it takes a high-fidelity QCCD-type quantum computer less time to achieve accurate results than other 2D connected or lower-fidelity architectures.

“Getting to three 9’s in the QCCD architecture means that ~1000 entangling operations can be done before an error occurs. Our quantum computers are right at the edge of being able to do computations at the physical level that are beyond the reach of classical computers, which would occur somewhere between 3 nines and 4 nines. Some tasks become hard for classical computers before this regime (e.g. Google’s random circuit sampling problem) but this new regime allows for much less contrived problems to be solved. At that point, these machines become real tools for new discoveries – albeit they will still be limited in what they can probe, likely to be physics simulations or closely related problems,” said Dave Hayes, a Senior R&D manager at Quantinuum.

“Additionally, these fidelities put us, some would say comfortably, within the regime needed to build fault-tolerant machines. These fidelities allow us to start adding more qubits without needing to improve performance further, and to take advantage of quantum error correction to improve the computational power necessary for tackling truly large problems. This scaling problem gets easier with even better fidelities (which is why we’re not satisfied with 3 nines) but it is possible in principle.”

Quantinuum’s new records in fidelity and quantum volume on our commercial H1 device are expected to be achieved on the H2, once upgrades are implemented, underscoring the value that we offer to users for whom stability, reliability and robust performance are prerequisites. The quantum computing landscape is complex and changing, but we remain at the head of the pack in all key metrics. The relationship with our world-class applications teams means that co-designed devices for solving some of the world’s most intractable problems are a big step closer to reality.

Quantinuum is the world’s leading quantum computing company, and our world-class scientists and engineers are continually driving our technology forward while expanding the possibilities for our users. Their work on applications includes cybersecurity, quantum chemistry, quantum Monte Carlo integration, quantum topological data analysis, condensed matter physics, high energy physics, quantum machine learning, and natural language processing – and we are privileged to support them to bring new solutions to bear on some of the greatest challenges we face.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

Blog
July 3, 2025
We’re taking a transformational approach to quantum computing

Our quantum algorithms team has been hard at work exploring solutions to continually optimize our system’s performance. Recently, they’ve invented a novel technique, called the Quantum Paldus Transform (QPT), that can offer significant resource savings in future applications.

The transform takes complex representations and makes them simple, by transforming them into a different “basis”. This is like looking at a cube from one angle, then rotating it and seeing just a square instead. Transformations like this save resources because the more complex your problem looks, the more expensive it is to represent and manipulate on qubits.

You’ve changed

While it might sound like magic, transforms are a commonly used tool in science and engineering. Transforms simplify problems by reshaping them into something that is easier to deal with, or that provides a new perspective on the situation. For example, sound engineers use Fourier transforms every day to look at complex musical pieces in terms of their frequency components. Electrical engineers use Laplace transforms; people who work in image processing use the Abel transform; physicists use the Legendre transform, and so on.
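A minimal numerical illustration of the Fourier case: a two-tone signal that looks complicated sample-by-sample reduces to just two peaks in the frequency basis. (The tones and sample rate here are arbitrary choices for the demo.)

```python
# A mix of 440 Hz and 660 Hz tones is messy in the time basis but trivially
# simple in the frequency basis: a Fourier transform reveals just two peaks.
import numpy as np

fs = 8000                               # sample rate (Hz)
t = np.arange(fs) / fs                  # one second of samples
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peaks = freqs[spectrum > 0.25 * spectrum.max()]
print(peaks)   # → [440. 660.]
```

Nothing was lost in the transformation; the same information is simply expressed in a basis where it is cheap to describe – which is exactly the kind of saving the QPT aims for on qubits.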

In a new paper outlining the necessary tools to implement the QPT, Dr. Nathan Fitzpatrick and Mr. Jędrzej Burkat explain how the QPT will be widely applicable in quantum computing simulations, spanning areas like molecular chemistry, materials science, and semiconductor physics. The paper also describes how the algorithm can lead to significant resource savings by offering quantum programmers a more efficient way of representing problems on qubits.

Symmetry is key

The efficiency of the QPT stems from its use of one of the most profound findings in the field of physics: that symmetries drive the properties of a system.

While the average person can “appreciate” symmetry, for example in design or aesthetics, physicists understand symmetry as a much more profound element present in the fabric of reality. Symmetries are like the universe’s DNA; they lead to conservation laws, which are the most immutable truths we know.

Back in the 1920s, when women were largely excluded from academic physics, one of the great mathematicians of the century, Emmy Noether, turned her attention to the field when she was asked to help Einstein with his work. In her attempt to solve a problem Einstein had encountered, Dr. Noether realized that all the most powerful and fundamental laws of physics, such as “energy can neither be created nor destroyed”, are in fact the consequence of a deep simplicity – symmetry – hiding behind the curtains of reality. Dr. Noether’s theorem would have a profound effect on the trajectory of physics.

Beyond its many direct consequences, Noether’s theorem inspired a longstanding tradition amongst physicists of treating symmetry thoughtfully. Because of its role in the fabric of our universe, carefully considering the symmetries of a system often leads to invaluable insights.

Einstein, Pauli and Noether walk into a bar...

Many of the systems we are interested in simulating with quantum computers are, at their heart, systems of electrons. Whether we are looking at how electrons move in a paired dance inside superconductors, or how they form orbitals and bonds in a chemical system, the motion of electrons is at the core.

Seven years after Noether published her blockbuster results, Wolfgang Pauli made waves when he published the work describing his Pauli exclusion principle, which relies heavily on symmetry to explain basic tenets of quantum theory. Pauli’s principle has enormous consequences: for starters, it explains how the objects we interact with every day are solid even though atoms are mostly empty space, and it underpins the rules of bonds, orbitals, and all of chemistry, among other things.

Symmetry in motion

It is the symmetry at the heart of Pauli's principle, coupled with a deep respect for its consequences, that led our team at Quantinuum to the discovery published today.

In their work, they considered the act of designing quantum algorithms, and how one’s design choices may lead to efficiency or inefficiency.

When you design quantum algorithms, there are many choices you can make that affect the final result. Extensive work goes into optimizing each individual step in an algorithm – a cyclical process of finding subroutine improvements and then bringing it all together. The significant cost and time required is a limiting factor in optimizing many algorithms of interest.

This is again where symmetry comes into play. The authors realized that by better exploiting the deepest symmetries of the problem, they could make the entire edifice more efficient, from state preparation to readout. Over the course of a few years, a team led by Dr. Fitzpatrick and his colleague Jędrzej Burkat slowly polished their approach into a full algorithm for performing the QPT.

The QPT functions by using Pauli’s symmetry to discard unimportant details and strip the problem down to its bare essentials. Starting with a Paldus transform allows the algorithm designer to enjoy knock-on effects throughout the entire structure, making it overall more efficient to run.

“It’s amazing to think how something discovered one hundred years ago is making quantum computing easier and more efficient,” said Dr. Nathan Fitzpatrick.

Ultimately, this innovation will lead to more efficient quantum simulation. Projects we believed to still be many years out can now be realized in the near term.

Transforming the future

The discovery of the Quantum Paldus Transform is a powerful reminder that enduring ideas—like symmetry—continue to shape the frontiers of science. By reaching back into the fundamental principles laid down by pioneers like Noether and Pauli, and combining them with modern quantum algorithm design, Dr. Fitzpatrick and Mr. Burkat have uncovered a tool with the potential to reshape how we approach quantum computation.

As quantum technologies continue their crossover from theoretical promise to practical implementation, innovations like this will be key in unlocking their full potential.

July 2, 2025
Cracking the code of superconductors: Quantum computers just got closer to the dream

In a new paper in Nature Physics, we've made a major breakthrough in one of quantum computing’s most elusive promises: simulating the physics of superconductors. A deeper understanding of superconductivity would have an enormous impact: greater insight could pave the way to real-world advances, like phone batteries that last for months, “lossless” power grids that drastically reduce your bills, or MRI machines that are widely available and cheap to use.  The development of room-temperature superconductors would transform the global economy.

A key promise of quantum computing is that it has a natural advantage when studying inherently quantum systems, like superconductors. In many ways, it is precisely the deeply ‘quantum’ nature of superconductivity that makes it both so transformative and so notoriously difficult to study.

Now, we are pleased to report that we just got a lot closer to that ultimate dream.

Making the impossible possible

To study something like a superconductor with a quantum computer, you need to first “encode” the elements of the system you want to study onto the qubits – in other words, you want to translate the essential features of your material onto the states and gates you will run on the computer.

For superconductors in particular, you want to encode the behavior of particles known as “fermions” (like the familiar electron). Naively simulating fermions using qubits will result in garbage data, because qubits alone lack the key properties that make a fermion so unique.

Until recently, scientists used something called the “Jordan-Wigner” encoding to properly map fermions onto qubits. People have argued that the Jordan-Wigner encoding is one of the main reasons fermionic simulations have not progressed beyond simple one-dimensional chain geometries: it requires too many gates as the system size grows.  

Even worse, the Jordan-Wigner encoding has the nasty property that it is, in a sense, maximally non-fault-tolerant: one error occurring anywhere in the system affects the whole state, which generally leads to an exponential overhead in the number of shots required. Due to this, until now, simulating relevant systems at scale – one of the big promises of quantum computing – has remained a daunting challenge.
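A short sketch makes the scaling problem concrete. Under Jordan-Wigner, the operator for fermionic mode j carries a string of Z operators on every lower-indexed qubit, so its Pauli weight grows linearly with the mode index. (This is illustrative code for the textbook encoding, not the alternative encoding used in the paper.)

```python
# Jordan-Wigner sketch: the X-type Majorana operator for mode j acts as
# Z on all qubits 0..j-1, X on qubit j, and identity on the rest.
# The operator weight (non-identity factors) grows linearly with j.
def jw_majorana_string(j: int, n: int) -> str:
    """Pauli string for the X-type Majorana operator of mode j (0-indexed)."""
    return "Z" * j + "X" + "I" * (n - j - 1)

n = 8
for j in (0, 3, 7):
    s = jw_majorana_string(j, n)
    weight = sum(1 for p in s if p != "I")
    print(f"mode {j}: {s}  (weight {weight})")
```

For an n-mode system the worst-case string touches all n qubits, which is both why circuit depth balloons beyond 1D chains and why a single fault on any qubit in the string can corrupt the encoded fermionic state.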

Theorists have addressed the issues of the Jordan-Wigner encoding and have suggested alternative fermionic encodings. In practice, however, the circuits created from these alternative encodings come with large overheads and have so far not been practically useful.

We are happy to report that our team has developed a new way to compile one of these alternative encodings, dramatically improving both efficiency and accuracy and overcoming the limitations of older approaches. Their new compilation scheme is the most efficient yet, slashing the cost of simulating fermionic hopping by an impressive 42%. On top of that, the team also introduced new, targeted error mitigation techniques that ensure even larger systems can be simulated with far fewer computational "shots" – a critical advantage in quantum computing.

Using their innovative methods, the team was able to simulate the Fermi-Hubbard model – a cornerstone of condensed matter physics – at a previously unattainable scale. By encoding 36 fermionic modes into 48 physical qubits on System Model H2, they achieved the largest quantum simulation of this model to date.

This marks an important milestone in quantum computing: it demonstrates that large-scale simulations of complex quantum systems, like superconductors, are now within reach.

Unlocking the Quantum Age, One Breakthrough at a Time

This breakthrough doesn’t just show how we can push the boundaries of what quantum computers can do; it brings one of the most exciting use cases of quantum computing much closer to reality. With this new approach, scientists can soon begin to simulate materials and systems that were once thought too complex for the most powerful classical computers alone. And in doing so, they’ve unlocked a path to potentially solving one of the most exciting and valuable problems in science and technology: understanding and harnessing the power of superconductivity.

The future of quantum computing—and with it, the future of energy, electronics, and beyond—just got a lot more exciting.

July 1, 2025
Quantinuum with partners Princeton and NIST deliver seminal result in quantum error correction

In an experiment led by Princeton and NIST, we’ve just delivered a crucial result in Quantum Error Correction (QEC), demonstrating key principles of scalable quantum computing developed by Drs. Peter Shor, Dorit Aharonov, and Michael Ben-Or. In this latest paper, we showed that by using “concatenated codes”, noise can be exponentially suppressed – proving that quantum computing will scale.

When noise is low enough, the results are transformative

Quantum computing is already producing results, but high-profile applications like Shor’s algorithm—which can break RSA encryption—require error rates about a billion times lower than what today’s machines can achieve.

Achieving such low error rates is a holy grail of quantum computing. Peter Shor was the first to hypothesize a way forward, in the form of quantum error correction. Building on his results, Dorit Aharonov and Michael Ben-Or proved that by concatenating quantum error correcting codes, a sufficiently high-quality quantum computer can suppress error rates arbitrarily at the cost of a very modest increase in the required number of qubits. Without that insight, building a truly fault-tolerant quantum computer would be impossible.
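The flavor of the Aharonov-Ben-Or argument can be captured in a toy recursion (illustrative constants, not the paper's analysis): each level of concatenation squares the ratio of the error rate to the threshold, so below threshold the error falls doubly exponentially in the number of levels.

```python
# Toy model of concatenation: if one level of encoding maps an error rate p to
# roughly p_th * (p / p_th)**2, then k levels give p_th * (p / p_th)**(2**k) -
# doubly exponential suppression whenever p < p_th.
def concatenated_error(p: float, p_th: float, levels: int) -> float:
    """Effective error rate after `levels` rounds of code concatenation."""
    for _ in range(levels):
        p = p_th * (p / p_th) ** 2
    return p

p_th = 1e-2  # hypothetical threshold
for k in range(4):
    print(f"{k} levels: effective error rate {concatenated_error(1e-3, p_th, k):.1e}")
```

Starting a factor of 10 below threshold, three levels of concatenation in this toy model already push the effective error rate down by many orders of magnitude, while the qubit count grows only polynomially in the number of levels.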

Their results, now widely referred to as the “threshold theorem”, laid the foundation for realizing fault-tolerant quantum computing. At the time, many doubted that the error rates required for large-scale quantum algorithms could ever be achieved in practice. The threshold theorem made clear that large scale quantum computing is a realistic possibility, giving birth to the robust quantum industry that exists today.

Realizing a legendary vision

Until now, nobody had realized the original vision for the threshold theorem. Last year, Google performed a beautiful demonstration of the threshold theorem in a different context (without concatenated codes). This year, we are proud to report the first experimental realization of that seminal work – demonstrating fault-tolerant quantum computing using concatenated codes, just as Aharonov and Ben-Or envisioned.

The benefits of concatenation

The team demonstrated that their family of protocols achieves high error thresholds—making them easier to implement—while requiring minimal ancilla qubits, meaning lower overall qubit overhead. Remarkably, their protocols are so efficient that fault-tolerant preparation of basis states requires zero ancilla overhead, making the process maximally efficient.

This approach to error correction has the potential to significantly reduce qubit requirements across multiple areas, from state preparation to the broader QEC infrastructure. Additionally, concatenated codes offer greater design flexibility, which makes them especially attractive. Taken together, these advantages suggest that concatenation could provide a faster and more practical path to fault-tolerant quantum computing than popular approaches like the surface code.

We’re always looking forward

From a broader perspective, this achievement highlights the power of collaboration between industry, academia, and national laboratories. Quantinuum’s commercial quantum systems are so stable and reliable that our partners were able to carry out this groundbreaking research remotely – over the cloud – without needing detailed knowledge of the hardware. While we very much look forward to welcoming them to our labs before long, it’s notable that they never needed to step inside to harness the full capabilities of our machines.

As we make quantum computing more accessible, the rate of innovation will only increase. The era of plug-and-play quantum computing has arrived. Are you ready?
