By Ilyas Khan, Founder and Chief Product Officer, and Jenni Strabley, Sr. Director of Offering Management
All quantum error correction schemes depend for their success on physical hardware achieving high enough fidelity. If there are too many errors in the physical qubit operations, the error-correcting code amplifies rather than diminishes overall error rates. For decades, it has been hoped that one day a quantum computer would achieve “three 9's” – an iconic, inherent 99.9% two-qubit physical gate fidelity – the point at which many of the error-correcting codes required for universal fault-tolerant quantum computing can successfully squeeze errors out of the system.
That day has now arrived. Building on several previous laboratory demonstrations [1, 2, 3], Quantinuum has become the first company ever to achieve “three 9's” in a commercially available quantum computer, demonstrating 99.914(3)% two-qubit gate fidelity with repeatable performance across all qubit pairs on our H1-1 system, which is constantly available to customers. This production-environment result is a marked difference from one-offs recorded in carefully contrived laboratory conditions, and it demonstrates what will fast become the expected standard for the entire quantum computing sector.
Quantinuum is also announcing another milestone, a seven-figure Quantum Volume (QV) of 1,048,576 – or in terms preferred by the experts, 2^20 – reinforcing our commitment to building the highest-performing quantum computers in the world, by a significant margin.
These announcements follow a historic month that started when we proved our ability to scale our systems to the sizes needed to solve some of the world’s most pressing problems – and in a way that offers the best path to universal quantum computing.
On March 5th, 2024, Quantinuum researchers disclosed details of our experiments that provide a solution to a totemic problem faced by all quantum computing architectures, known as the wiring problem. Supported by a video showing qubits being shuffled through a two-dimensional grid ion trap, our team presented concrete proof of the scalability of the quantum charge-coupled device (QCCD) architecture used in our H-Series quantum computers.
Stop-motion ion transport video showing a chosen sorting operation implemented on an 8-site 2D grid trap with the swap-or-stay primitive. The sort is implemented by discrete choices of swaps or stays between neighboring sites. The numbers shown (indicated by dashed circles) at the beginning and end of the video show the initial and final location of the ions after the sort, e.g. the ion that starts at the top left site ends at the bottom right site. The stop-motion video was collected by segmenting the primitive operation and pausing mid-operation such that Yb fluorescence could be detected with a CMOS camera exposure.
On April 3rd, 2024 in partnership with Microsoft, our teams announced a breakthrough in quantum error correction that delivered as its crowning achievement the most reliable logical qubits on record.
We revealed detailed demonstrations in an arXiv pre-print paper of the reliability achieved via 4 logical qubits encoded into just 30 physical qubits on our System Model H2 quantum computer. Our joint teams demonstrated logical circuit error rates far below physical circuit error rates – a capability that, at present, only our full-stack quantum computer has the fidelity to achieve.
Reaching this level of physical fidelity is not optional for commercial-scale computers – it is essential for error correction to work, and that in turn is a necessary foundation for any useful quantum computer. Our record two-qubit gate fidelity of 99.914(3)% marks a symbolic inflection point for the industry: at “three 9's” fidelity, we are nearing or surpassing the break-even point (where logical qubits outperform physical qubits) for many quantum error correction protocols, and this will generate great interest among research and industrial teams exploring fault-tolerant methods for tackling real-world problems.
Without hardware fidelity this good, error-corrected calculations are noisier than uncorrected computations. This is why we call it a “threshold”: when gate errors are above threshold, quantum computers remain noisy no matter what you do. Below threshold, quantum error correction can push error rates down dramatically, so that quantum computers eventually become as reliable as classical computers.
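To make the threshold behavior concrete, here is a minimal sketch assuming the common heuristic scaling for a distance-d code, p_logical ≈ A·(p/p_th)^((d+1)/2); the constants A and p_th below are illustrative assumptions, not H-Series or code-specific figures.

```python
# Illustrative only: heuristic logical error scaling p_logical ~ A * (p/p_th)^((d+1)/2).
# A and p_th are assumed example values, not measured parameters.
p_th, A = 1e-2, 0.1

for p in (2e-2, 1e-3):                  # above threshold vs. roughly "three 9's" territory
    for d in (3, 5, 7):                 # code distance
        p_logical = A * (p / p_th) ** ((d + 1) // 2)
        print(f"p={p:.0e}  d={d}  p_logical≈{p_logical:.1e}")

# Above threshold (p > p_th), growing the code makes the logical error rate worse;
# below threshold, each increase in distance suppresses it further.
```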
Four years ago, Quantinuum claimed that it would improve the performance of its H-Series quantum computers by 10x each year for five years, when measured by the industry’s most widely recognized benchmark, QV (an industry standard not to be confused with less comprehensive metrics such as Algorithmic Qubits).
Today’s achievement of a 2^20 QV – which, as with all our demonstrations, was achieved on our commercially available machine – shows that our team is living up to this audacious commitment. We are completely confident we can continue to overcome the technical problems that stand in the way of even better fidelity and QV performance. Our QV data is available on GitHub, as are our hardware specifications.
The combination of high QV and gate fidelities puts the Quantinuum system in a class by itself – it is far and away the best of any commercially available quantum computer.
Additionally, and notably, these benchmarks were achieved “inherently”, without error mitigation, thanks to the H Series’ all-to-all connectivity and QCCD architecture. Full connectivity results in fewer errors when running large, complicated circuits. Other modalities depend on error mitigation techniques, but such techniques are not scalable and offer only modest near-term value.
Lower physical error rates and high connectivity mean our quantum computers have a provably lower overhead for error-corrected computation.
Looking more deeply, experts seek high fidelities that hold in all operating zones and between any pair of qubits. This is precisely what our H Series delivers, in contrast to our competitors. We do not suffer from a broad distribution of gate fidelities across qubit pairs, in which some pairs perform significantly worse than others. Quantinuum is the only quantum computing company with all qubit pairs boasting above 99.9% fidelity.
Alongside these benefits and demonstrations of scalability, fidelity, connectivity, and reliability, it is worth noting how these features impact what arguably matters most to users – time to solution. In the QCCD architecture, the speed of individual operations is decoupled from the time to reach a computational solution, thanks to a combination of architectural features.
The net effect is that, for increasingly complex circuits, a high-fidelity QCCD-type quantum computer reaches accurate results in less time than architectures with only 2D connectivity or lower fidelity.
“Getting to three 9’s in the QCCD architecture means that ~1000 entangling operations can be done before an error occurs. Our quantum computers are right at the edge of being able to do computations at the physical level that are beyond the reach of classical computers, which would occur somewhere between 3 nines and 4 nines. Some tasks become hard for classical computers before this regime (e.g. Google’s random circuit sampling problem) but this new regime allows for much less contrived problems to be solved. At that point, these machines become real tools for new discoveries – albeit they will still be limited in what they can probe, likely to be physics simulations or closely related problems,” said Dave Hayes, a Senior R&D manager at Quantinuum.
“Additionally, these fidelities put us, some would say comfortably, within the regime needed to build fault-tolerant machines. These fidelities allow us to start adding more qubits without needing to improve performance further, and to take advantage of quantum error correction to improve the computational power necessary for tackling truly large problems. This scaling problem gets easier with even better fidelities (which is why we’re not satisfied with 3 nines) but it is possible in principle.”
Quantinuum’s new records in fidelity and quantum volume on our commercial H1 device are expected to be achieved on the H2, once upgrades are implemented, underscoring the value that we offer to users for whom stability, reliability and robust performance are pre-requisites. The quantum computing landscape is complex and changing, but we remain at the head of the pack in all key metrics. The relationship with our world-class applications teams means that co-designed devices for solving some of the world’s most intractable problems are a big step closer to reality.
Quantinuum is the world’s leading quantum computing company, and our world-class scientists and engineers are continually driving our technology forward while expanding the possibilities for our users. Their work on applications includes cybersecurity, quantum chemistry, quantum Monte Carlo integration, quantum topological data analysis, condensed matter physics, high energy physics, quantum machine learning, and natural language processing – and we are privileged to support them to bring new solutions to bear on some of the greatest challenges we face.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
In the world of physics, ideas can lie dormant for decades before revealing their true power. What begins as a quiet paper in an academic journal can eventually reshape our understanding of the universe itself.
In 1993, nestled deep in the halls of Yale University, physicist Subir Sachdev and his graduate student Jinwu Ye stumbled upon such an idea. Their work, originally aimed at unraveling the mysteries of “spin fluids”, would go on to ignite one of the most surprising and profound connections in modern physics—a bridge between the strange behavior of quantum materials and the warped spacetime of black holes.
Two decades after the paper was published, it would be pulled into the orbit of a radically different domain: quantum gravity. Thanks to work by renowned physicist Alexei Kitaev in 2015, the model found new life as a testing ground for the mind-bending theory of holography – the idea that the universe we live in might be a projection from a lower-dimensional reality.
Holography is an exotic approach to understanding reality in which scientists describe higher-dimensional systems in one fewer dimension, as a hologram does. So, if our world is 3+1 dimensional (3 spatial directions plus time), there exists a 2+1 dimensional (that is, 3-dimensional) description of it. In the words of Leonard Susskind, a pioneer in quantum holography, "the three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface."
The “SYK” model, as it is known today, is now considered a quintessential framework for studying strongly correlated quantum phenomena, which occur in everything from superconductors to strange metals – and even in black holes. In fact, the SYK model has also been used to study one of physics’ true final frontiers, quantum gravity, with the authors of the paper calling it “a paradigmatic model for quantum gravity in the lab.”
The SYK model involves Majorana fermions, a type of particle that is its own antiparticle. A key feature of the model is that these fermions are all-to-all connected, leading to strong correlations. This connectivity makes the model particularly challenging to simulate on classical computers, where such correlations are difficult to capture. Our quantum computers, however, natively support all-to-all connectivity, making them a natural fit for studying the SYK model.
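For readers who want the explicit form, the standard q = 4 SYK Hamiltonian, written here in the normalization most common in the literature (a textbook convention, not a detail taken from our paper), couples every quadruple of the N Majorana fermions with independent Gaussian random strengths:

```latex
H_{\mathrm{SYK}} \;=\; \sum_{1 \le i < j < k < l \le N} J_{ijkl}\,\chi_i \chi_j \chi_k \chi_l,
\qquad \{\chi_i,\chi_j\} = \delta_{ij},
\qquad \overline{J_{ijkl}^{\,2}} = \frac{3!\,J^2}{N^3}.
```

Every term touches four different fermions, which is why all-to-all connectivity is so valuable when running the model on hardware.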
Now, 10 years after Kitaev’s watershed lectures, we’ve made new progress in studying the SYK model. In a new paper, we’ve completed the largest ever SYK study on a quantum computer. By exploiting our system’s native high fidelity and all-to-all connectivity, as well as our scientific team’s deep expertise across many disciplines, we were able to study the SYK model at a scale three times larger than the previous best experimental attempt.
While this work does not exceed classical techniques, it is very close to the classical state-of-the-art. The biggest ever classical study was done on 64 fermions, while our recent result, run on our smallest processor (System Model H1), included 24 fermions. Modelling 24 fermions costs us only 12 qubits (plus one ancilla), making it clear that we can quickly scale these studies: our System Model H2 supports 56 qubits (or ~100 fermions), and Helios, which is coming online this year, will have over 90 qubits (or ~180 fermions).
However, working with the SYK model takes more than just qubits. The SYK model has a complex Hamiltonian that is difficult to work with when encoded on a computer—quantum or classical. Studying the real-time dynamics of the SYK model means first representing the initial state on the qubits, then evolving it properly in time according to an intricate set of rules that determine the outcome. This means deep circuits (many circuit operations), which demand very high fidelity, or else an error will occur before the computation finishes.
Our cross-disciplinary team worked to ensure that we could pull off such a large simulation on a relatively small quantum processor, laying the groundwork for quantum advantage in this field.
First, the team adopted a randomized quantum algorithm called TETRIS to run the simulation. By using random sampling, among other methods, the TETRIS algorithm computes the time evolution of a system without the pernicious discretization errors or sizable overheads that plague other approaches. TETRIS is particularly well suited to the SYK model: because the model is highly disordered, simulating it means averaging over many random Hamiltonians. TETRIS generates random circuits to compute the time evolution (even for a deterministic Hamiltonian), so when applying it to SYK, each shot can draw a fresh random instance of the Hamiltonian and a fresh random circuit at the same time. This approach requires fewer gates per shot, meaning users can run more shots, naturally mitigating noise.
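To illustrate the flavor of this per-shot sampling, here is a minimal Python sketch of a qDRIFT-style randomized product formula, which shares the idea of drawing a random circuit (and, for SYK, a random disorder instance) for every shot. This is not the TETRIS algorithm itself; every function name, convention, and parameter below is an illustrative assumption.

```python
# Illustrative sketch only: qDRIFT-style randomized simulation, NOT the TETRIS
# algorithm from the paper. All names and parameters here are assumptions.
import numpy as np

def random_syk_couplings(n_majorana, J=1.0, rng=None):
    """Draw Gaussian couplings J_ijkl for one disorder instance of the q=4 SYK model."""
    rng = rng or np.random.default_rng()
    var = 6.0 * J**2 / n_majorana**3          # standard convention: <J_ijkl^2> = 3! J^2 / N^3
    couplings = {}
    for i in range(n_majorana):
        for j in range(i + 1, n_majorana):
            for k in range(j + 1, n_majorana):
                for l in range(k + 1, n_majorana):
                    couplings[(i, j, k, l)] = rng.normal(0.0, np.sqrt(var))
    return couplings

def qdrift_schedule(couplings, time, n_steps, rng=None):
    """Sample a random circuit: each step applies one term, chosen with probability
    proportional to |J_ijkl|, for a fixed rotation angle lambda * t / n_steps."""
    rng = rng or np.random.default_rng()
    terms = list(couplings)
    weights = np.array([abs(couplings[t]) for t in terms])
    lam = weights.sum()                        # 1-norm of the Hamiltonian coefficients
    picks = rng.choice(len(terms), size=n_steps, p=weights / lam)
    angle = lam * time / n_steps
    return [(terms[i], np.sign(couplings[terms[i]]) * angle) for i in picks]

# One "shot": a fresh disorder realization AND a fresh random circuit, mirroring
# the per-shot strategy described above (here for 12 Majorana fermions).
schedule = qdrift_schedule(random_syk_couplings(12), time=1.0, n_steps=200)
print(len(schedule), schedule[0])
```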
In addition, the team “sparsified” the SYK model, which means “pruning” the fermion interactions to reduce the complexity while still maintaining its crucial features. By combining sparsification and the TETRIS algorithm, the team was able to significantly reduce the circuit complexity, allowing it to be run on our machine with high fidelity.
They didn’t stop there. The team also proposed two new noise mitigation techniques, ensuring that they could run circuits deep enough without devolving entirely into noise. The two techniques both worked quite well, and the team was able to show that their algorithm, combined with the noise mitigation, performed significantly better and delivered more accurate results. The perfect agreement between the circuit results and the true theoretical results is a remarkable feat coming from a co-design effort between algorithms and hardware.
As we scale to larger systems, we come closer than ever to realizing quantum gravity in the lab, and thus, answering some of science’s biggest questions.
At Quantinuum, we pay attention to every detail. From quantum gates to teleportation, we work hard every day to ensure our quantum computers operate as effectively as possible. This means not only building the most advanced hardware and software, but that we constantly innovate new ways to make the most of our systems.
A key step in any computation is preparing the initial state of the qubits. Much like lining up dominoes before toppling the first one, you first need a specific setup to get meaningful results. This process, known as state preparation or “state prep,” is an open field of research that can mean the difference between realizing the next breakthrough or falling short. Done ineffectively, state prep can carry steep computational costs, scaling exponentially with the number of qubits.
Recently, our algorithm teams have been tackling this challenge from all angles. We’ve published three new papers on state prep, covering state prep for chemistry, materials, and fault tolerance.
In the first paper, our team tackled the issue of preparing states for quantum chemistry. Representing chemical systems on gate-based quantum computers is a tricky task, partly because you often want to prepare multiconfigurational states, which are very complex. Preparing states like this can cost a lot of resources, so our team worked to ensure we can do it without breaking the (quantum) bank.
To do this, our team investigated two different state prep methods. The first method uses Givens rotations, implemented to save computational costs. The second method exploits the sparsity of the molecular wavefunction to maximize efficiency.
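As a concrete picture of the first method, here is a minimal numpy sketch of a single two-qubit Givens rotation, the basic building block chained together in such circuits; this illustrates the standard textbook gate (with one common sign convention), not the paper's actual implementation.

```python
# Illustrative only: the 4x4 unitary of a two-qubit Givens rotation
# (basis order |00>, |01>, |10>, |11>). It mixes the singly occupied
# states |01> and |10> while leaving |00> and |11> untouched.
import numpy as np

def givens(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]])

# Starting from |10> (one electron in the first orbital), a rotation by theta
# moves amplitude into |01>; chaining such rotations builds up a
# multiconfigurational state one pair of orbitals at a time.
state = np.zeros(4)
state[2] = 1.0
print(givens(np.pi / 6) @ state)
```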
Once the team perfected the two methods, they implemented them in InQuanto to explore the benefits across a range of applications, including calculating the ground and excited states of a strongly correlated molecule (twisted C₂H₄). The results showed that the “sparse state preparation” scheme performed especially well, requiring fewer gates and shorter runtimes than alternative methods.
In the second paper, our team focused on state prep for materials simulation. Generally, it’s much easier for computers to simulate materials at zero temperature, which is, obviously, unrealistic. Much more relevant to most scientists is what happens when a material is not at zero temperature. In that case, there are two situations to consider: the material may sit steadily at a given temperature, which scientists call thermal equilibrium, or it may be going through some change, also known as being out of equilibrium. Both are much harder for classical computers to work with.
In this paper, our team looked to solve an outstanding problem: there is no standard protocol for preparing thermal states. In this work, our team targeted only equilibrium states but, interestingly, used an out-of-equilibrium protocol to do so. By slowly and gently evolving from a simple state that we know how to prepare, they were able to prepare the desired thermal states in a way that was remarkably insensitive to noise.
Ultimately, this work could prove crucial for studying materials like superconductors. After all, no practical superconductor will ever be used at zero temperature. In fact, we want to use them at room temperature – and approaches like this are what will allow us to perform the necessary studies to one day get us there.
Finally, as we advance toward the fault-tolerant era, we encounter a new set of challenges: making computations fault-tolerant at every step can be an expensive venture, eating up qubits and gates. In the third paper, our team made fault-tolerant state preparation—the critical first step in any fault-tolerant algorithm—roughly twice as efficient. With our new “flag at origin” technique, gate counts are significantly reduced, bringing fault-tolerant computation closer to an everyday reality.
The method our researchers developed is highly modular: in the past, performing optimized state prep like this meant solving one big, expensive optimization problem. In this new work, we’ve figured out how to break that problem into a set of much smaller ones. This means that now, for the first time, developers can prepare fault-tolerant states for much larger error correction codes, a crucial step forward in the early fault-tolerant era.
On top of this, our new method is highly general: it applies to almost any QEC code one can imagine. Normally, fault-tolerant state prep techniques must be anchored to a single code (or a family of codes), making it so that when you want to use a different code, you need a new state prep method. Now, thanks to our team’s work, developers have a single, general-purpose, fault-tolerant state prep method that can be widely applied and ported between different error correction codes. Like the modularity, this is a huge advance for the whole ecosystem—and is quite timely given our recent advances into true fault-tolerance.
This generality isn’t just about different codes, it also applies to the states you are preparing: while other methods are optimized for preparing only the |0> state, this method is useful for a wide variety of states needed to set up a fault-tolerant computation. This “state diversity” is especially valuable when working with high-rate codes, which pack many logical qubits into each block of physical qubits. This new approach to fault-tolerant state prep will likely be the method used for fault-tolerant computations across the industry, and if not, it will inform new approaches moving forward.
From the initial state preparation to the final readout, we are ensuring that not only is our hardware the best, but that every single operation is as close to perfect as we can get it.
Twenty-five years ago, scientists accomplished a task likened to a biological moonshot: the sequencing of the entire human genome.
The Human Genome Project revealed a complete human blueprint comprising around 3 billion base pairs, the chemical building blocks of DNA. It led to breakthrough medical treatments, scientific discoveries, and a new understanding of the biological functions of our body.
Thanks to technological advances in the quarter-century since, what took 13 years and cost $2.7 billion then can now be done in under 12 minutes for a few hundred dollars. Improved instruments such as next-generation sequencers and a better understanding of the human genome – including the availability of a “reference genome” – have aided progress, alongside enormous advances in algorithms and computing power.
But even today, some genomic challenges remain so complex that they stretch beyond the capabilities of the most powerful classical computers operating in isolation. This has sparked a bold search for new computational paradigms, and in particular, quantum computing.
The Wellcome Leap Quantum for Bio (Q4Bio) challenge is pioneering this new frontier. The program funds research to develop quantum algorithms that can overcome current computational bottlenecks. It aims to test the classical boundaries of computational genetics in the next 3-5 years.
One consortium – led by the University of Oxford and supported by prestigious partners including the Wellcome Sanger Institute, the Universities of Cambridge and Melbourne, and Kyiv Academic University – is taking a leading role.
“The overall goal of the team’s project is to perform a range of genomic processing tasks for the most complex and variable genomes and sequences – a task that can go beyond the capabilities of current classical computers” – Wellcome Sanger Institute press release, July 2025
Earlier this year, the Sanger Institute selected Quantinuum as a technology partner in their bid to succeed in the Q4Bio challenge.
Our flagship quantum computer, System Model H2, has for many years led commercially available systems in qubit fidelity and consistently holds the global record for Quantum Volume, currently benchmarked at 8,388,608 (2^23).
In this collaboration, the scientific research team can take advantage of Quantinuum’s full stack approach to technology development, including hardware, software, and deep expertise in quantum algorithm development.
“We were honored to be selected by the Sanger Institute to partner in tackling some of the most complex challenges in genomics. By bringing the world’s highest performing quantum computers to this collaboration, we will help the team push the limits of genomics research with quantum algorithms and open new possibilities for health and medical science.” – Rajeeb Hazra, President and CEO of Quantinuum
At the heart of this endeavor, the consortium has announced a bold central mission for the coming year: to encode and process an entire genome using a quantum computer. This achievement would be a potential world-first and provide evidence for quantum computing’s readiness for tackling real-world use cases.
Their chosen genome, the bacteriophage PhiX174, carries symbolic weight, as its sequencing earned Fred Sanger his second Nobel Prize for Chemistry in 1980. Successfully encoding this genome quantum mechanically would represent a significant milestone for both genomics and quantum computing.
Sooner than many expect, quantum computing may play an essential role in tackling genomic challenges at the very frontier of human health. The Sanger Institute and Quantinuum’s partnership reminds us that we may soon reach an important step forward in human health research – one that could change medicine and computational biology as dramatically as the original Human Genome Project did a quarter-century ago.
“Quantum computational biology has long inspired us at Quantinuum, as it has the potential to transform global health and empower people everywhere to lead longer, healthier, and more dignified lives.” – Ilyas Khan, Founder and Chief Product Officer of Quantinuum