By Ilyas Khan, Chief Product Officer, and Jenni Strabley, Senior Director of Offering Management
Quantinuum and Microsoft have announced a vital breakthrough in quantum computing that Microsoft described as “a major achievement for the entire quantum ecosystem.”
By combining Microsoft’s innovative qubit-virtualization system with the unique architectural features and fidelity of Quantinuum’s System Model H2 quantum computer, our teams have demonstrated the most reliable logical qubits on record with logical circuit error rates 800 times lower than the corresponding physical circuit error rates.
This achievement is not just monumental for Quantinuum and Microsoft; it is a major advancement for the entire quantum ecosystem. It is a crucial milestone on the path to building a hybrid supercomputing system that can truly transform research and innovation across many industries for decades to come, and it further bolsters H2’s title as the highest-performing quantum computer in the world.
Entering a new era of quantum computing
Historically, there have been widely held assumptions about the number of physical qubits needed for large-scale fault-tolerant quantum computing and about the timeline for quantum computers to deliver real-world value. It was previously thought that an achievement like this one was still years away – but together, Quantinuum and Microsoft have proved that fault-tolerant quantum computing is in fact a reality.
In enabling today’s announcement, Quantinuum’s System Model H2 becomes the first quantum computer to advance to Microsoft’s Level 2 – Resilient phase of quantum computing – an incredible milestone. Until now, no other computer had been capable of producing reliable logical qubits.
Using Microsoft’s qubit-virtualization system, our teams used reliable logical qubits to perform 14,000 individual instances of a quantum circuit with no errors, an overall result that is unprecedented. Microsoft also demonstrated multiple rounds of active syndrome extraction – an essential error correction capability for measuring and detecting the occurrence of errors without destroying the quantum information encoded in the logical qubit.
As we prepare to bring today’s logical quantum computing breakthrough to commercial users, there is palpable anticipation about what this new era means for our partners, customers, and the global quantum computing ecosystem that has grown up around our hardware, middleware, and software.
To understand this achievement, it is helpful to shed some light on the joint work that went into it. Our breakthrough would not have been possible without the close collaboration of the two exceptional teams at Quantinuum and Microsoft over many years.
Building on a relationship that stretches back five years, we collaborated deeply with Microsoft Azure Quantum to execute their innovative qubit-virtualization system, including error diagnostics and correction. The Microsoft team optimized their error correction innovation, reducing an original estimate of 300 required physical qubits 10-fold to create four logical qubits with only 30 physical qubits – bringing it into scope for the 32-qubit H2 quantum computer.
This massive compression of the code and efficient virtualization challenges a consensus view about the resources needed to do fault-tolerant quantum computing, where it has been routinely stated that a logical qubit will require hundreds, even thousands of physical qubits. Through our collaboration, Microsoft’s far more efficient encoding was made possible by architectural features unique to the System Model H2, including our market-leading 99.8% two-qubit gate fidelity, 32 fully-connected qubits, and compatibility with Quantum Intermediate Representation (QIR).
Thanks to this powerful combination of collaboration, engineering excellence, and resource efficiency, quantum computing has taken a major step into a new era, introducing reliable logical qubits which will soon be available to industrial and research users.
It is widely recognized that for a quantum computer to be useful, it must be able to compute correctly even when errors (or faults) occur – this is what scientists and engineers describe as fault-tolerance.
In classical computing, fault-tolerance is well understood, and we have come to take it for granted: we simply assume that our computers will be reliable and fault-free. Multiple advances over the course of decades have led to this state of affairs, including incredibly robust hardware with very low error rates, and classical error correction schemes based on the ability to copy information across multiple bits to create redundancy.
Getting to the same point in quantum computing is more challenging, although the solution to this problem has been known for some time. Qubits are incredibly delicate: one must control the precise quantum states of single atoms, which are prone to errors. Additionally, a fundamental law of quantum physics known as the no-cloning theorem says that you can’t simply copy qubits – meaning some of the techniques used in classical error correction are unavailable in quantum machines.
The solution involves entangling groups of physical qubits (thereby creating a logical qubit), storing the relevant quantum information in the entangled state, and performing computations with error correction via carefully designed operations. The sole purpose of this process is to achieve logical error rates lower than the error rates at the physical level.
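The idea can be made concrete with the simplest possible example. The sketch below is a toy classical simulation of the 3-qubit bit-flip repetition code – not Quantinuum’s or Microsoft’s actual scheme – showing how parity measurements (the “syndrome”) locate an error without ever reading out the encoded logical value:

```python
# Toy sketch (not the production code): the 3-qubit bit-flip
# repetition code. A logical qubit is spread across three physical
# qubits; the stabilizers Z0Z1 and Z1Z2 reveal WHERE a flip
# happened without revealing the encoded logical information.

def syndrome(errors):
    """Parities measured by the stabilizers Z0Z1 and Z1Z2."""
    return (errors[0] ^ errors[1], errors[1] ^ errors[2])

# Map each syndrome to the single qubit most likely to have flipped.
DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(errors):
    flip = DECODE[syndrome(errors)]
    if flip is not None:
        errors[flip] ^= 1          # apply the corrective X
    return errors

# Any single bit flip is corrected back to the no-error state.
for q in range(3):
    errs = [0, 0, 0]
    errs[q] = 1
    assert correct(errs) == [0, 0, 0]
print("all single-qubit flips corrected")
```

Real codes entangle many more qubits and must handle phase errors too, but the principle is the same: measure parities, infer the error, correct it.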
However, implementing quantum error correction requires a significant number of qubit operations. No matter how cleverly a code is implemented, if the underlying physical fidelity is poor, the error correcting code will add more noise to the circuit than it removes. But once the physical fidelity is good enough – that is, once the physical error rate is “below threshold” – the error correcting code starts to actually help, producing logical error rates below the physical error rates.
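This threshold behavior can be sketched numerically. The model below is a standard phenomenological scaling law with assumed, illustrative parameters (the threshold value, prefactor, and distances are not Quantinuum measurements): below threshold, increasing the code distance suppresses the logical error rate; above it, bigger codes make things worse.

```python
# Illustrative threshold model (assumed parameters, not measured
# data): for a distance-d code, the logical error rate scales
# roughly as p_logical ~ A * (p / p_th) ** ((d + 1) // 2).

def logical_error_rate(p, d, p_th=0.01, A=0.1):
    return A * (p / p_th) ** ((d + 1) // 2)

for p in (0.02, 0.002):            # above vs below an assumed 1% threshold
    p3 = logical_error_rate(p, 3)
    p7 = logical_error_rate(p, 7)
    trend = "improves" if p7 < p3 else "worsens"
    print(f"p={p}: d=3 -> {p3:.2e}, d=7 -> {p7:.2e} (scaling up {trend})")
```

The crossover at the threshold is exactly why high physical fidelity matters so much: it determines whether adding qubits to a code buys you anything.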
Today’s results are an exciting marker on the path to fault-tolerant quantum computing. The focus must and will now shift from quantum computing companies simply stating the number of qubits they have to explaining their connectivity, the underlying quality of the qubits with reference to gate fidelities, and their approach to fault-tolerance.
Our H-Series hardware roadmap has focused not only on scaling qubits, but also on developing usable quantum computers as part of a vertically integrated stack. Our work across the full stack includes major advances at every level; just last month, for instance, we proved that our qubits could scale when we announced solutions to the wiring problem and the sorting problem. By maintaining higher qubit counts and world-class fidelity, we enable our customers and partners to advance further and faster in fields such as materials science, drug discovery, AI, and finance.
In 2025, we will introduce a new H-Series quantum computer, Helios, that takes the very best the H-Series has to offer, improving both physical qubit count and physical fidelity. This will take us and our users below threshold for a wider set of error correcting codes and make that device capable of supporting at least 10 highly reliable logical qubits.
A path to real-world impact
As we build upon today’s milestone and lead the field on the path to fault-tolerance, we are committed to continuing to make significant strides in the research that enables the rapid advance of our technologies. We were the first to demonstrate real-time quantum error correction (meaning a fully fault-tolerant QEC protocol), a result that made us the first to show repeated real-time error correction, the ability to perform quantum "loops" (repeat-until-success protocols), and real-time decoding to determine corrections during the computation. We were also the first to create non-Abelian topological quantum matter and braid its anyons, a route toward topological qubits.
The native flexibility of our QCCD architecture has allowed us to efficiently investigate a large variety of fault-tolerant methods, and our best-in-class fidelity means we expect to lead the way in achieving reduced error rates with additional error correcting codes – and supporting our partners to do the same. We are already working on making reliable quantum computing a commercial reality so that our customers and partners can unlock the enormous real-world economic value that is waiting to be unleashed by the development of these systems.
In the short term, with a hybrid supercomputer powered by a hundred reliable logical qubits, we believe organizations will start to see scientific advantages and will accelerate valuable progress on some of the most important problems humankind faces, such as modeling the materials used in batteries and hydrogen fuel cells or accelerating the development of meaning-aware AI language models. Over the long term, if we can scale to roughly 1,000 reliable logical qubits, we will unlock commercial advantages that can ultimately transform the commercial world.
Quantinuum customers have always had access to the most cutting-edge quantum computing, and we look forward to seeing how they, and our own world-leading teams, drive ahead developing new solutions based on the state-of-the-art tools we continue to put into their hands. We were the early leaders in quantum computing, and now we are thrilled to be positioned at the forefront of fault-tolerant quantum computing. We are excited to see what today’s milestone unlocks for our customers in the days ahead.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
At Quantinuum, we pay attention to every detail. From quantum gates to teleportation, we work hard every day to ensure our quantum computers operate as effectively as possible. This means not only building the most advanced hardware and software, but that we constantly innovate new ways to make the most of our systems.
A key step in any computation is preparing the initial state of the qubits. Like lining up dominoes, you first need a careful setup to get meaningful results. This process, known as state preparation or “state prep,” is an open field of research that can mean the difference between realizing the next breakthrough and falling short. Done ineffectively, state prep can carry steep computational costs that scale exponentially with the number of qubits.
Recently, our algorithm teams have been tackling this challenge from all angles. We’ve published three new papers on state prep, covering state prep for chemistry, materials, and fault tolerance.
In the first paper, our team tackled the issue of preparing states for quantum chemistry. Representing chemical systems on gate-based quantum computers is a tricky task; partly because you often want to prepare multiconfigurational states, which are very complex. Preparing states like this can cost a lot of resources, so our team worked to ensure we can do it without breaking the (quantum) bank.
To do this, our team investigated two different state prep methods. The first method uses Givens rotations, implemented to save computational costs. The second method exploits the sparsity of the molecular wavefunction to maximize efficiency.
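A Givens rotation is, at heart, a two-dimensional rotation that mixes one pair of basis states while leaving everything else untouched; chains of such rotations build up the desired orbital transformation one pair at a time. The sketch below is a minimal illustration with an assumed angle, not the circuits from the paper:

```python
import numpy as np

# Minimal sketch (assumed angle, not the paper's construction):
# a Givens rotation by theta acting on one pair of basis states.
def givens(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

G = givens(np.pi / 3)

# Rotations are unitary, so they compose without losing norm ...
assert np.allclose(G @ G.T, np.eye(2))

# ... and rotating a basis state redistributes amplitude between
# the two configurations the rotation couples.
state = G @ np.array([1.0, 0.0])
print(np.round(state ** 2, 3))    # occupation probabilities of the pair
```

Because each rotation touches only one pair of states, the gate cost of a state-prep circuit built this way grows with the number of rotations rather than with the full dimension of the Hilbert space.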
Once the team perfected the two methods, they implemented them in InQuanto to explore the benefits across a range of applications, including calculating the ground and excited states of a strongly correlated molecule (twisted C₂H₄). The results showed that the “sparse state preparation” scheme performed especially well, requiring fewer gates and shorter runtimes than alternative methods.
In the second paper, our team focused on state prep for materials simulation. Generally, it’s much easier for computers to simulate materials at zero temperature – which is, of course, unrealistic. Much more relevant to most scientists is what happens when a material is not at zero temperature. In that case there are two regimes: the material can sit steadily at a given temperature, which scientists call thermal equilibrium, or it can be undergoing some change, known as being out of equilibrium. Both are much harder for classical computers to work with.
In this paper, our team looked to solve an outstanding problem: there is no standard protocol for preparing thermal states. In this work, our team only targeted equilibrium states but, interestingly, they used an out of equilibrium protocol to do the work. By slowly and gently evolving from a simple state that we know how to prepare, they were able to prepare the desired thermal states in a way that was remarkably insensitive to noise.
Ultimately, this work could prove crucial for studying materials like superconductors. After all, no practical superconductor will ever be used at zero temperature. In fact, we want to use them at room temperature – and approaches like this are what will allow us to perform the necessary studies to one day get us there.
Finally, as we advance toward the fault-tolerant era, we encounter a new set of challenges: making computations fault-tolerant at every step can be an expensive venture, eating up qubits and gates. In the third paper, our team made fault-tolerant state preparation—the critical first step in any fault-tolerant algorithm—roughly twice as efficient. With our new “flag at origin” technique, gate counts are significantly reduced, bringing fault-tolerant computation closer to an everyday reality.
The method our researchers developed is highly modular: in the past, performing optimized state prep like this required solving one big, expensive optimization problem. In this new work, we’ve figured out how to break that problem into a set of much smaller ones. This means that, for the first time, developers can prepare fault-tolerant states for much larger error correction codes – a crucial step forward in the early fault-tolerant era.
On top of this, our new method is highly general: it applies to almost any QEC code one can imagine. Normally, fault-tolerant state prep techniques must be anchored to a single code (or a family of codes), making it so that when you want to use a different code, you need a new state prep method. Now, thanks to our team’s work, developers have a single, general-purpose, fault-tolerant state prep method that can be widely applied and ported between different error correction codes. Like the modularity, this is a huge advance for the whole ecosystem—and is quite timely given our recent advances into true fault-tolerance.
This generality isn’t just applicable to different codes, it’s also applicable to the states that you are preparing: while other methods are optimized for preparing only the |0⟩ state, this method is useful for a wide variety of states that are needed to set up a fault-tolerant computation. This “state diversity” is especially valuable when working with the best codes – codes that give you many logical qubits per physical qubit. This new approach to fault-tolerant state prep will likely be the method used for fault-tolerant computations across the industry, and if not, it will inform new approaches moving forward.
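“Logical qubits per physical qubit” is the encoding rate k/n of an [[n, k, d]] code. The comparison below uses two published distance-12 codes from the quantum error correction literature as examples – they are not necessarily the codes discussed above – to show why high-rate codes are attractive:

```python
# Encoding rate k/n for two published [[n, k, d]] codes of equal
# size and distance (literature examples, not necessarily the
# codes referenced in the text): the rotated surface code
# [[144, 1, 12]] vs the bivariate-bicycle "gross" code
# [[144, 12, 12]], which packs 12x as many logical qubits into
# the same 144 physical qubits at the same distance.
codes = {
    "surface [[144,1,12]]":  (144, 1, 12),
    "bicycle [[144,12,12]]": (144, 12, 12),
}
for name, (n, k, d) in codes.items():
    print(f"{name}: rate k/n = {k/n:.4f}")
```

A state-prep method that works across such different codes, and for many input states, is what lets developers actually exploit that extra rate.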
From the initial state preparation to the final readout, we are ensuring that not only is our hardware the best, but that every single operation is as close to perfect as we can get it.
Twenty-five years ago, scientists accomplished a task likened to a biological moonshot: the sequencing of the entire human genome.
The Human Genome Project revealed a complete human blueprint comprising around 3 billion base pairs, the chemical building blocks of DNA. It led to breakthrough medical treatments, scientific discoveries, and a new understanding of the biological functions of our body.
Thanks to technological advances in the quarter-century since, what took 13 years and cost $2.7 billion then can now be done in under 12 minutes for a few hundred dollars. Improved instruments such as next-generation sequencers and a better understanding of the human genome – including the availability of a “reference genome” – have aided progress, alongside enormous advances in algorithms and computing power.
But even today, some genomic challenges remain so complex that they stretch beyond the capabilities of the most powerful classical computers operating in isolation. This has sparked a bold search for new computational paradigms, and in particular, quantum computing.
The Wellcome Leap Quantum for Bio (Q4Bio) challenge is pioneering this new frontier. The program funds research to develop quantum algorithms that can overcome current computational bottlenecks. It aims to test the classical boundaries of computational genetics in the next 3-5 years.
One consortium – led by the University of Oxford and supported by prestigious partners including the Wellcome Sanger Institute, the Universities of Cambridge and Melbourne, and Kyiv Academic University – is taking a leading role.
“The overall goal of the team’s project is to perform a range of genomic processing tasks for the most complex and variable genomes and sequences – a task that can go beyond the capabilities of current classical computers” – Wellcome Sanger Institute press release, July 2025
Earlier this year, the Sanger Institute selected Quantinuum as a technology partner in their bid to succeed in the Q4Bio challenge.
Our flagship quantum computer, the System Model H2, has for many years led the field of commercially available systems in qubit fidelity and consistently holds the global record for Quantum Volume, currently benchmarked at 8,388,608 (2²³).
In this collaboration, the scientific research team can take advantage of Quantinuum’s full stack approach to technology development, including hardware, software, and deep expertise in quantum algorithm development.
“We were honored to be selected by the Sanger Institute to partner in tackling some of the most complex challenges in genomics. By bringing the world’s highest performing quantum computers to this collaboration, we will help the team push the limits of genomics research with quantum algorithms and open new possibilities for health and medical science.” – Rajeeb Hazra, President and CEO of Quantinuum
At the heart of this endeavor, the consortium has announced a bold central mission for the coming year: to encode and process an entire genome using a quantum computer. This achievement would be a potential world-first and provide evidence for quantum computing’s readiness for tackling real-world use cases.
Their chosen genome, the bacteriophage PhiX174, carries symbolic weight, as its sequencing earned Fred Sanger his second Nobel Prize for Chemistry in 1980. Successfully encoding this genome quantum mechanically would represent a significant milestone for both genomics and quantum computing.
Sooner than many expect, quantum computing may play an essential role in tackling genomic challenges at the very frontier of human health. The Sanger Institute and Quantinuum’s partnership reminds us that we may soon reach an important step forward in human health research – one that could change medicine and computational biology as dramatically as the original Human Genome Project did a quarter-century ago.
“Quantum computational biology has long inspired us at Quantinuum, as it has the potential to transform global health and empower people everywhere to lead longer, healthier, and more dignified lives.” – Ilyas Khan, Founder and Chief Product Officer of Quantinuum
Every year, the IEEE International Conference on Quantum Computing and Engineering – or IEEE Quantum Week – brings together engineers, scientists, researchers, students, and others to learn about advancements in quantum computing.
This year’s conference, held August 31st – September 5th in Albuquerque, New Mexico, takes place in a burgeoning epicenter of quantum technology innovation and the home of our new facility, which will support ongoing collaborative efforts to advance the photonics technologies critical to our product development.
Throughout IEEE Quantum Week, our quantum experts will be on-site to share insights on upgrades to our hardware, enhancements to our software stack, our path to error correction, and more.
Meet our team at Booth #507 and join the sessions below to discover how Quantinuum is forging the path to fault-tolerant quantum computing with our integrated full stack.
Quantum Software Workshop
Quantum Software 2.1: Open Problems, New Ideas, and Paths to Scale
1:15 – 2:10pm MDT | Mesilla
We recently shared the details of our new software stack for our next-generation systems, including Helios (launching in 2025). Quantinuum’s Agustín Borgna will deliver a lightning talk introducing Guppy, our new open-source programming language based on Python, one of the most popular general-purpose programming languages for classical computing.
PAN08: Progress and Platforms in the Era of Reliable Quantum Computing
1:00 – 2:30pm MDT | Apache
We are entering the era of reliable quantum computing. Across the industry, quantum hardware and software innovators are enabling this transformation by creating reliable logical qubits and building integrated technology stacks that span the application layer, middleware and hardware. Attendees will hear about current and near-term developments from Microsoft, Quantinuum and Atom Computing. They will also gain insights into challenges and potential solutions from across the ecosystem, learn about Microsoft’s qubit-virtualization system, and get a peek into future developments from Quantinuum and Microsoft.
BOF03: Exploring Distributed Quantum Simulators on Exa-scale HPC Systems
3:00 – 4:30pm MDT | Apache
The core agenda of the session is dedicated to addressing key technical and collaborative challenges in this rapidly evolving field. Discussions will concentrate on innovative algorithm design tailored for HPC environments, the development of sophisticated hybrid frameworks that seamlessly combine classical and quantum computational resources, and the crucial task of establishing robust performance benchmarks on large-scale CPU/GPU HPC infrastructures.
PAN11: Real-time Quantum Error Correction: Achievements and Challenges
1:00 – 2:30pm MDT | La Cienega
This panel will explore the current state of real-time quantum error correction, identifying key challenges and opportunities as we move toward large-scale, fault-tolerant systems. Real-time decoding is a multi-layered challenge involving algorithms, software, compilation, and computational hardware that must work in tandem to meet the speed, accuracy, and scalability demands of FTQC. We will examine how these challenges manifest for multi-logical qubit operations, and discuss steps needed to extend the decoding infrastructure from intermediate-scale systems to full-scale quantum processors.
Keynote by NVIDIA
8:00 – 9:30am MDT | Kiva Auditorium
During his keynote talk, NVIDIA’s Head of Quantum Computing Product, Sam Stanwyck, will detail our partnership to fast-track commercially scalable quantum supercomputers. Discover how Quantinuum and NVIDIA are pushing the boundaries to deliver on the power of hybrid quantum and classical compute – from integrating NVIDIA’s CUDA-Q Platform with access to Quantinuum’s industry-leading hardware to the recently announced NVIDIA Quantum Research Center (NVAQC).
Visible Photonic Component Development for Trapped-Ion Quantum Computing
September 2nd from 6:30 - 8:00pm MDT | September 3rd from 9:30 - 10:00am MDT | September 4th from 11:30am - 12:30pm MDT
Authors: Elliot Lehman, Molly Krogstad, Molly P. Andersen, Sara Campbell, Kirk Cook, Bryan DeBono, Christopher Ertsgaard, Azure Hansen, Duc Nguyen, Adam Ollanik, Daniel Ouellette, Michael Plascak, Justin T. Schultz, Johanna Zultak, Nicholas Boynton, Christopher DeRose, Michael Gehl, and Nicholas Karl
Scaling Up Trapped-Ion Quantum Processors with Integrated Photonics
September 2nd from 6:30 - 8:00pm MDT and 2:30 - 3:00pm MDT | September 4th from 9:30 - 10:00am MDT
Authors: Molly Andersen, Bryan DeBono, Sara Campbell, Kirk Cook, David Gaudiosi, Christopher Ertsgaard, Azure Hansen, Todd Klein, Molly Krogstad, Elliot Lehman, Gregory MacCabe, Duc Nguyen, Nhung Nguyen, Adam Ollanik, Daniel Ouellette, Brendan Paver, Michael Plascak, Justin Schultz and Johanna Zultak
In a partnership that is part of a long-standing relationship with Los Alamos National Laboratory, we have been working on new methods to make quantum computing operations more efficient, and ultimately, scalable.
Learn more in our Research Paper: Classical shadows with symmetries
Our teams collaborated with Sandia National Laboratories, demonstrating our leadership in benchmarking. In this paper, we implemented a technique devised by researchers at Sandia to measure errors in mid-circuit measurement and reset. Understanding these errors helps us reduce them, while helping our customers understand what to expect when using our hardware.
Learn more in our Research Paper: Measuring error rates of mid-circuit measurements