While it sounds like a gadget from Star Trek, teleportation is real – and it is happening at Quantinuum. In a new paper published in Science, our researchers moved a quantum state from one place to another without physically moving it through space - and they accomplished this feat with fault-tolerance and excellent fidelity. This is an important milestone for the whole quantum computing community and the latest example of Quantinuum achieving critical milestones years ahead of expectations.
While it seems exotic, teleportation is a critical piece of technology needed for full-scale fault-tolerant quantum computing, and it is used widely in algorithm and architecture design. In addition to being essential on its own, teleportation has historically been used to demonstrate a high level of system maturity. The protocol requires multiple qubits, high-fidelity state preparation, single-qubit operations, entangling operations, mid-circuit measurement, and conditional operations, making it an excellent system-level benchmark.
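For readers who want to see the moving parts, here is a minimal statevector sketch of the standard one-qubit teleportation protocol in Python with NumPy. It is an illustration of the textbook circuit, not the logical-level experiment reported in the paper, and it exercises exactly the ingredients listed above: state preparation, an entangling Bell pair, mid-circuit measurement, and classically conditioned corrections.

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(gate, q, n=3):
    """Lift a single-qubit gate to an n-qubit operator (qubit 0 = leftmost)."""
    mats = [gate if i == q else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(c, t, n=3):
    """CNOT as a projector sum: |0><0| (x) I  +  |1><1| (x) X."""
    a = [np.diag([1, 0]) if i == c else I2 for i in range(n)]
    b = [np.diag([0, 1]) if i == c else (X if i == t else I2) for i in range(n)]
    oa, ob = a[0], b[0]
    for m, k in zip(a[1:], b[1:]):
        oa, ob = np.kron(oa, m), np.kron(ob, k)
    return oa + ob

def measure(state, q, n=3):
    """Projective Z measurement of qubit q; returns (outcome, collapsed state)."""
    bits = np.array([(i >> (n - 1 - q)) & 1 for i in range(2 ** n)])
    p1 = (np.abs(state) ** 2)[bits == 1].sum()
    outcome = int(np.random.rand() < p1)
    state = np.where(bits == outcome, state, 0)
    return outcome, state / np.linalg.norm(state)

rng = np.random.default_rng(7)
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amps / np.linalg.norm(amps)              # arbitrary state to teleport

state = np.kron(psi, np.kron([1, 0], [1, 0]))  # |psi>|0>|0>
state = cnot(1, 2) @ op(H, 1) @ state          # Bell pair on qubits 1 and 2
state = op(H, 0) @ cnot(0, 1) @ state          # Bell-basis rotation on 0 and 1
m0, state = measure(state, 0)                  # mid-circuit measurements...
m1, state = measure(state, 1)
if m1: state = op(X, 2) @ state                # ...feeding forward into
if m0: state = op(Z, 2) @ state                # conditional Pauli corrections

idx = (m0 << 2) | (m1 << 1)                    # surviving branch of qubits 0, 1
print(np.allclose(state[[idx, idx | 1]], psi)) # True: qubit 2 now holds psi
```

In the Science experiment, each of these steps happens at the logical level, on encoded qubits, with error correction running underneath.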
Our team was motivated to do this work by the US Government's Intelligence Advanced Research Projects Activity (IARPA), which set a challenge to perform high-fidelity teleportation with the goal of advancing the state of the science in universal fault-tolerant quantum computing. IARPA further specified that the entanglement and teleportation protocols must also maintain fault-tolerance, a key property that keeps errors local and correctable.
These ambitious goals required developing highly complex systems, protocols, and other infrastructure to enable exquisite control and operation of quantum-mechanical hardware. We are proud to have accomplished these goals ahead of schedule, demonstrating the flexibility, performance, and power of Quantinuum’s Quantum Charge Coupled Device (QCCD) architecture.
Quantinuum’s demonstration marks the first time that an arbitrary quantum state has been teleported at the logical level (using a quantum error correcting code). This means that instead of teleporting the quantum state of a single physical qubit we have teleported the quantum information encoded in an entangled set of physical qubits, known as a logical qubit. In other words, the collective state of a bunch of qubits is teleported from one set of physical qubits to another set of physical qubits. This is, in a sense, a lot closer to what you see in Star Trek – they teleport the state of a big collection of atoms at once. Except for the small detail of coming up with a pile of matter with which to reconstruct a human body...
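To make "logical qubit" concrete, here is the schematic form of the encoding. This is the standard textbook construction, using the seven-qubit Steane code with its conventional codewords as the illustrative example:

```latex
\[
  |\psi\rangle_L = \alpha\,|\bar{0}\rangle + \beta\,|\bar{1}\rangle ,
\]
where $|\bar{0}\rangle$ and $|\bar{1}\rangle$ are fixed entangled states of all
the physical qubits in the code block. In the seven-qubit Steane code, for
instance,
\[
  |\bar{0}\rangle = \tfrac{1}{\sqrt{8}}\bigl(
    |0000000\rangle + |1010101\rangle + |0110011\rangle + |1100110\rangle
  + |0001111\rangle + |1011010\rangle + |0111100\rangle + |1101001\rangle \bigr),
\]
and $|\bar{1}\rangle$ is the same superposition with every bit flipped.
```

Teleporting the logical state moves the amplitudes $\alpha$ and $\beta$ to a fresh block of physical qubits without any single qubit ever carrying them on its own.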
This is also the first demonstration of a fully fault-tolerant version of the state teleportation circuit using real-time quantum error correction (QEC): syndromes were measured mid-circuit, decoded, and corrected while the protocol was still running. It is critical for quantum computers to be able to catch and correct errors as they happen, and this is not something other groups have managed to do in any robust sense. In addition, our team achieved the result with high fidelity (97.5% ± 0.2%), a powerful demonstration of the quality of our H2 quantum processor, Powered by Honeywell.
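The "measure, decode, correct" loop at the heart of real-time QEC can be illustrated with a deliberately tiny classical stand-in: a three-bit repetition code protecting one logical bit against flips. This sketch is for intuition only; the real experiment uses a genuine quantum code, with ancilla qubits and mid-circuit measurements supplying the syndromes.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.02                          # per-bit flip probability per round

logical = 1
data = np.array([logical] * 3)    # encode: logical 1 -> 111

for _ in range(10):
    # Noise: each data bit may flip independently
    data ^= (rng.random(3) < p).astype(int)

    # Syndrome extraction: parities of neighboring bits (the classical
    # analogue of what ancillas + mid-circuit measurement provide)
    s = (int(data[0] ^ data[1]), int(data[1] ^ data[2]))

    # Real-time decode: a lookup table maps each syndrome to the single
    # bit flip that explains it; apply the fix before the next round
    flip = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[s]
    if flip is not None:
        data[flip] ^= 1

# Majority vote recovers the logical bit as long as no round saw 2+ flips
print("decoded logical bit:", int(round(data.mean())))
```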
Our team also tried several variations of logical teleportation circuits, using both transversal gates and lattice surgery protocols, thanks to the flexibility of our QCCD architecture. This marks the first demonstration of lattice surgery performed on a QEC code.
Lattice surgery is a strategy for implementing logical gates that requires only 2D nearest-neighbor interactions, making it especially useful for architectures whose qubit locations are fixed, such as superconducting architectures. QCCD and other technologies that do not have fixed qubit positioning might employ this method, another method, or some mixture. We are fortunate that our QCCD architecture allows us to explore the use of different logical gating options so that we can optimize our choices for experimental realities.
While the teleportation demonstration is the big result, sometimes it is the behind-the-scenes technology advancements that make the big differences. The experiments in this paper were designed at the logical level using an internally developed logical-level programming language dubbed Simple Logical Representation (SLR). This is yet another marker of our system's maturity: we are no longer programming at the physical level but have moved up one "layer of abstraction". Someday, all quantum algorithms will need to run at the logical level with rounds of quantum error correction. That is markedly different from most present-day experiments, which run at the physical level without error correction. It is also worth noting that these results were generated using the software stack available to any user of Quantinuum's H-Series quantum computers, and that the experiments ran alongside customer jobs – underlining that these are commercial performance numbers, not hero data from a bespoke system.
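Quantinuum has not published SLR's syntax in full, so the following is a purely hypothetical Python-flavored sketch (invented names throughout), meant only to convey what programming "one layer up" feels like: the programmer touches logical qubits and QEC rounds, never individual physical qubits.

```python
# Hypothetical illustration only: this is NOT SLR syntax. Every class and
# method name here is invented to show the logical level of abstraction.

class LogicalQubit:
    """Stand-in for a whole code block of physical qubits seen as one qubit."""
    def __init__(self, code="steane"):
        self.code, self.program = code, []
    def h(self):         self.program.append("H_L")           # logical Hadamard
    def cx(self, other): self.program.append(("CX_L", id(other)))
    def qec(self):       self.program.append("QEC_ROUND")     # syndrome + correct
    def measure(self):   self.program.append("M_L"); return 0

# At this abstraction level, logical teleportation reads like the textbook
# circuit, with error correction interleaved as just another instruction:
msg, a, b = LogicalQubit(), LogicalQubit(), LogicalQubit()
a.h(); a.cx(b)                       # logical Bell pair
msg.cx(a); msg.h()                   # logical Bell-basis rotation
for q in (msg, a, b):
    q.qec()                          # real-time QEC mid-protocol
m0, m1 = msg.measure(), a.measure()  # conditional logical Paulis on b follow
```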
Ironically, a key element in this work is our ability to move our qubits through space the “normal” way - this capacity gives us all-to-all connectivity, which was essential for some of the QEC protocols used in the complex task of fault-tolerant logical teleportation. We recently demonstrated solutions to the sorting problem and wiring problem in a new 2D grid trap, which will be essential as we scale up our devices.
In an experiment led by Princeton and NIST, we’ve just delivered a crucial result in Quantum Error Correction (QEC), demonstrating key principles of scalable quantum computing developed by Drs Peter Shor, Dorit Aharonov, and Michael Ben-Or. In this latest paper, we showed that, by using “concatenated codes”, noise can be exponentially suppressed — proving that quantum computing will scale.
Quantum computing is already producing results, but high-profile applications like Shor’s algorithm—which can break RSA encryption—require error rates about a billion times lower than what today’s machines can achieve.
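As a rough back-of-envelope reading of that gap (our framing, not a figure from the paper): the best physical two-qubit error rates today sit near $10^{-3}$, while running Shor's algorithm at RSA-breaking scale calls for effective error rates around $10^{-12}$ per operation:

```latex
\[
  \frac{10^{-3}}{10^{-12}} = 10^{9}
  \quad \text{(a billion-fold improvement required)} .
\]
```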
Achieving such low error rates is a holy grail of quantum computing. Peter Shor was the first to hypothesize a way forward, in the form of quantum error correction. Building on his results, Dorit Aharonov and Michael Ben-Or proved that by concatenating quantum error correcting codes, a sufficiently high-quality quantum computer can suppress error rates arbitrarily at the cost of a very modest increase in the required number of qubits. Without that insight, building a truly fault-tolerant quantum computer would be impossible.
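The quantitative heart of their result is simple to state (standard textbook form, not notation from the new paper): if the physical error rate $p$ is below a threshold $p_{\mathrm{th}}$, then nesting a code inside itself $k$ levels deep squares the rescaled error rate at every level:

```latex
\[
  p_L^{(k)} \;\approx\; p_{\mathrm{th}}\left(\frac{p}{p_{\mathrm{th}}}\right)^{2^{k}} ,
\]
```

The logical error rate therefore falls doubly exponentially in $k$, while the qubit count grows only exponentially in $k$, i.e. polylogarithmically in the target accuracy. That polylog overhead is the "very modest increase" referenced above.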
Their results, now widely referred to as the “threshold theorem”, laid the foundation for realizing fault-tolerant quantum computing. At the time, many doubted that the error rates required for large-scale quantum algorithms could ever be achieved in practice. The threshold theorem made clear that large scale quantum computing is a realistic possibility, giving birth to the robust quantum industry that exists today.
Until now, nobody has realized the original vision for the threshold theorem. Last year, Google performed a beautiful demonstration of the threshold theorem in a different context (without concatenated codes). This year, we are proud to report the first experimental realization of that seminal work—demonstrating fault-tolerant quantum computing using concatenated codes, just as they envisioned.
The team demonstrated that their family of protocols achieves high error thresholds—making them easier to implement—while requiring minimal ancilla qubits, meaning lower overall qubit overhead. Remarkably, fault-tolerant preparation of basis states with these protocols requires no ancilla overhead at all.
This approach to error correction has the potential to significantly reduce qubit requirements across multiple areas, from state preparation to the broader QEC infrastructure. Additionally, concatenated codes offer greater design flexibility, which makes them especially attractive. Taken together, these advantages suggest that concatenation could provide a faster and more practical path to fault-tolerant quantum computing than popular approaches like the surface code.
From a broader perspective, this achievement highlights the power of collaboration between industry, academia, and national laboratories. Quantinuum’s commercial quantum systems are so stable and reliable that our partners were able to carry out this groundbreaking research remotely—over the cloud—without needing detailed knowledge of the hardware. While we very much look forward to welcoming them to our labs before long, it's notable that they never needed to step inside to harness the full capabilities of our machines.
As we make quantum computing more accessible, the rate of innovation will only increase. The era of plug-and-play quantum computing has arrived. Are you ready?
Quantum computing companies are poised to exceed $1 billion in revenues by the close of 2025, according to McKinsey & Company, underscoring how today’s quantum computers are already delivering customer value in their current phase of development.
This figure is projected to reach upwards of $37 billion by 2030, rising in parallel with escalating demand as well as with the scale of the machines and the complexity of the problems they will be able to address.
Several systems on the market today are fault-tolerant by design, meaning they are capable of suppressing error-causing noise to yield reliable calculations. However, the full potential of quantum computing to tackle problems of true industrial relevance, in areas like medicine, energy, and finance, remains contingent on an architecture that supports a fully fault-tolerant universal gate set with repeatable error correction—a capability that, until now, has eluded the industry.
Quantinuum is the first—and only—company to achieve this critical technical breakthrough, widely recognized as the essential precursor to scalable, industrial-scale quantum computing. This milestone provides us with the most de-risked development roadmap in the industry and positions us to fulfill our promise to deliver our universal, fully fault-tolerant quantum computer, Apollo, by 2029.
In this regard, Quantinuum is the first company to step from the so-called “NISQ” (noisy intermediate-scale quantum) era towards utility-scale quantum computers.
A quantum computer uses operations called gates to process information in ways that even today’s fastest supercomputers cannot. The industry typically refers to two types of gates for quantum computers: Clifford gates (such as Hadamard, S, and CNOT), which are comparatively easy to perform fault-tolerantly but can be efficiently simulated by classical computers, and non-Clifford gates (such as the T gate), which are harder to protect but are required for universal quantum computation.
A system that can run both types of gates is classified as universal and has the machinery to tackle the widest range of problems. Without non-Clifford gates, a quantum computer is non-universal, restricted to smaller and easier sets of tasks, and can always be simulated by classical computers. This is like painting with a full palette of colors versus having only one or two to work with. Simply put, a quantum computer that cannot implement non-Clifford gates is not really a quantum computer.
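The distinction has a crisp algebraic form: Clifford gates map Pauli operators to Pauli operators under conjugation (which is exactly why Clifford-only circuits are classically simulable, per the Gottesman-Knill theorem), while non-Clifford gates break out of that set. A few lines of NumPy make the point; this is a quick check of our own rather than anything from the papers:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
S = np.diag([1, 1j])                        # Clifford phase gate
T = np.diag([1, np.exp(1j * np.pi / 4)])    # non-Clifford T gate

# Clifford: conjugating a Pauli yields another Pauli (S X S† = Y)
print(np.allclose(S @ X @ S.conj().T, Y))                     # True

# Non-Clifford: T sends X to (X + Y)/sqrt(2), which is not a Pauli at all
print(np.allclose(T @ X @ T.conj().T, (X + Y) / np.sqrt(2)))  # True
```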
A fault-tolerant, or error-corrected, quantum computer detects and corrects its own errors (or faults) to produce reliable results. Quantinuum has the best and brightest scientists dedicated to keeping our systems’ error rates the lowest in the world.
For a quantum computer to be fully fault-tolerant, every operation must be error-resilient, across both Clifford and non-Clifford gates, thereby realizing a full gate set with error correction. While some groups have performed fully fault-tolerant gate sets in academic settings, these demonstrations were done with only a few qubits and error rates near 10%—too high for any practical use.
Today, we have published two papers that establish Quantinuum as the first company to develop a complete solution for a universal, fully fault-tolerant quantum computer with repeatable error correction and error rates low enough for real-world applications.
The first paper describes how scientists at Quantinuum used our System Model H1-1 to perfect magic state production, a crucial technique for achieving a fully fault-tolerant universal gate set. In doing so, they set a record magic state infidelity (7×10⁻⁵), 10x better than any previously published result.
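For context (standard definitions, not notation from the paper): a magic state is a resource state that, once prepared, lets a machine with only fault-tolerant Clifford operations enact the non-Clifford T gate by teleportation, and the infidelity being measured is the prepared state's distance from the ideal:

```latex
\[
  |T\rangle \;=\; T\,|{+}\rangle \;=\; \frac{|0\rangle + e^{i\pi/4}|1\rangle}{\sqrt{2}},
  \qquad
  \text{infidelity} \;=\; 1 - \langle T|\,\rho_{\text{prepared}}\,|T\rangle .
\]
```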
Our simulations show that our system could reach a magic state infidelity of 10⁻¹⁰, or about one error per 10 billion operations, on a larger-scale computer with our current physical error rate. We anticipate reaching 10⁻¹⁴, or about one error per 100 trillion operations, as we continue to advance our hardware. This means that our roadmap is now de-risked.
Setting a record magic state infidelity was just the beginning. The paper also presents the first break-even two-qubit non-Clifford gate, demonstrating a logical error rate below the physical one. In doing so, the team set another record for two-qubit non-Clifford gate infidelity (2×10⁻⁴, almost 10x better than our physical error rate). Putting everything together, the team ran the first circuit that used a fully fault-tolerant universal gate set, a critical moment for our industry.
In the second paper, co-authored with researchers at the University of California at Davis, we demonstrated an important technique for universal fault-tolerance called “code switching”.
Code switching means transferring logical information between different error correcting codes, letting each code supply the fault-tolerant operations it handles best. The team used the technique to demonstrate the key ingredients for universal computation, this time in a code where we had previously demonstrated full error correction and the other ingredients for universality.
In the process, the team set a new record for magic states in a distance-3 error correcting code, over 10x better than the best previous attempt with error correction. Notably, the process cost only 28 qubits instead of hundreds. This completes, for the first time, the ingredient list for a universal gate set in a system that also has real-time and repeatable QEC.
Innovations like those described in these two papers can reduce estimates for qubit requirements by an order of magnitude, or more, bringing powerful quantum applications within reach far sooner.
With all of the required pieces finally in place, we are fully equipped to become the first company to perform universal, fully fault-tolerant computing—just in time for the arrival of Helios, our next-generation system launching this year, which is likely to remain the most powerful quantum computer on the market until its successor, Sol, arrives in 2027.
If we are to create ‘next-gen’ AI that takes full advantage of the power of quantum computers, we need to start with quantum-native transformers. Today, Quantinuum once again demonstrates concrete progress, advancing from theoretical models to real quantum deployment.
The future of AI won't be built on yesterday’s tech. If we're serious about creating next-generation AI that unlocks the full promise of quantum computing, then we must build quantum-native models—designed for quantum, from the ground up.
Around this time last year, we introduced Quixer, a state-of-the-art quantum-native transformer. Today, we’re thrilled to announce a major milestone: one year on, Quixer is now running natively on quantum hardware.
This marks a turning point for the industry: realizing quantum-native AI opens a world of possibilities.
Classical transformers revolutionized AI. They power everything from ChatGPT to real-time translation, computer vision, drug discovery, and algorithmic trading. Now, Quixer sets the stage for a similar leap — but for quantum-native computation. Because quantum computers differ fundamentally from classical computers, we expect a whole new host of valuable applications to emerge.
Achieving that future requires models that are efficient, scalable, and actually run on today’s quantum hardware.
That’s what we’ve built.
Until Quixer, quantum transformers were the result of a brute-force “copy-paste” approach: taking the math from a classical model and putting it onto a quantum circuit. However, this approach does not account for the considerable differences between quantum and classical architectures, leading to substantial resource requirements.
Quixer is different: it’s not a translation – it's an innovation.
With Quixer, our team introduced an explicitly quantum transformer, built from the ground up using quantum algorithmic primitives. Because Quixer is tailored for quantum circuits, it's more resource efficient than most competing approaches.
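One of the primitives in that toolbox is the Linear Combination of Unitaries (LCU), which the Quixer paper employs alongside the quantum singular value transformation. Here is a minimal NumPy statevector sketch of LCU itself, our own illustration rather than Quixer's actual circuits: it applies a weighted sum of unitaries to a state by preparing an ancilla, selecting on it, unpreparing, and postselecting.

```python
import numpy as np

# Goal: apply A = a0*U0 + a1*U1 (generally non-unitary) to |psi> using
# one ancilla qubit: PREPARE the ancilla, SELECT a unitary on it,
# un-PREPARE, then keep the run only if the ancilla returns to |0>.

U0 = np.array([[0, 1], [1, 0]], dtype=complex)    # X
U1 = np.array([[1, 0], [0, -1]], dtype=complex)   # Z
a = np.array([0.7, 0.3])                          # positive weights

# PREPARE: ancilla amplitudes proportional to sqrt(a_i)
prep = np.array([[np.sqrt(a[0]), -np.sqrt(a[1])],
                 [np.sqrt(a[1]),  np.sqrt(a[0])]]) / np.sqrt(a.sum())

# SELECT: ancilla |i> triggers U_i on the data qubit
select = np.kron(np.diag([1, 0]), U0) + np.kron(np.diag([0, 1]), U1)

psi = np.array([1, 0], dtype=complex)             # data register |0>
state = np.kron(prep @ [1, 0], psi)               # prepared ancilla (x) data
state = np.kron(prep.conj().T, np.eye(2)) @ (select @ state)

# The ancilla-|0> block of the state is A|psi> / sum(a): check it
print(np.allclose(state[:2] * a.sum(), (a[0] * U0 + a[1] * U1) @ psi))  # True
```

The appeal of the primitive is that weighted sums of operators, which is the sort of object attention mechanisms are built from, become implementable on quantum hardware at the cost of an ancilla register and some postselection or amplitude amplification.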
As quantum computing advances toward fault tolerance, Quixer is built to scale with it.
We’ve already deployed Quixer on real-world data: genomic sequence analysis, a high-impact classification task in biotech. We're happy to report that its performance is already approaching that of classical models, even in this first implementation.
This is just the beginning.
Looking ahead, we’ll explore using Quixer anywhere classical transformers have proven useful: language modeling, image classification, quantum chemistry, and beyond. More excitingly, we expect quantum-specific use cases to emerge that are impossible on classical hardware.
This milestone isn’t just about one model. It’s a signal that the quantum AI era has begun, and that Quantinuum is leading the charge with real results, not empty hype.
Stay tuned. The revolution is only getting started.