We're announcing the world’s first scalable, error-corrected, end-to-end computational chemistry workflow. With this, we are entering the future of computational chemistry.
Quantum computers are uniquely equipped to perform the complex computations that describe chemical reactions – computations so complex that they are intractable even for the world’s most powerful supercomputers.
However, realizing this potential is a herculean task: one must first build a large-scale, universal, fully fault-tolerant quantum computer – something nobody in our industry has done yet. We are the farthest along that path, as our roadmap and robust body of research prove. At the moment, we have the world’s most powerful quantum processors, and we are moving quickly towards universal fault tolerance. Our commitment to building the best quantum computers is borne out again and again in our world-leading results.
While we do the work to build the world’s best quantum computers, we aren’t waiting to develop their applications. We have teams working right now on making sure that we hit the ground running with each new hardware generation. In fact, our team has just taken a huge leap forward for computational chemistry using our System Model H2.
In our latest paper, we have announced the first-ever demonstration of a scalable, end-to-end workflow for simulating chemical systems with quantum error correction (QEC). This milestone shows that quantum computing will play an essential role, in tandem with HPC and AI, in unlocking new frontiers in scientific discovery.
In the paper, we showcase the first practical combination of quantum phase estimation (QPE) with logical qubits for molecular energy calculations – an essential ingredient of fault-tolerant quantum simulation. It builds on our previous work implementing quantum error detection with QPE and marks a critical step toward achieving quantum advantage in chemistry.
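For readers who want a feel for how phase estimation turns quantum dynamics into an energy estimate, here is a minimal, self-contained Python sketch of textbook QPE on a toy two-level Hamiltonian. It is purely illustrative: the Hamiltonian, evolution time, and readout-register size are our own assumptions, and there is no error correction involved; it is not the logical-qubit implementation reported in the paper.

```python
import numpy as np

# Toy two-level "molecular" Hamiltonian (illustrative only, not from the paper).
H = np.array([[-1.1, 0.2],
              [ 0.2, -0.4]])

t = 1.0          # evolution time, chosen so all eigenphases stay in [0, 1)
n_readout = 8    # number of phase-readout (ancilla) qubits

evals = np.linalg.eigvalsh(H)
# Assume the ground state of H is prepared exactly; under U = exp(-i H t)
# it acquires the eigenphase below.
phase_true = (-evals[0] * t / (2 * np.pi)) % 1.0

# Textbook QPE: after the controlled-U^(2^k) ladder and the inverse QFT, the
# amplitude on readout outcome j has the standard closed form used here.
N = 2 ** n_readout
j = np.arange(N)
k = np.arange(N)
amps = np.exp(2j * np.pi * np.outer(phase_true * N - j, k) / N).sum(axis=1) / N
probs = np.abs(amps) ** 2

phase_est = int(np.argmax(probs)) / N
energy_est = -2 * np.pi * phase_est / t   # valid because evals lie in (-2*pi/t, 0)

print(f"true ground-state energy: {evals[0]:+.4f}")
print(f"QPE estimate ({n_readout} readout bits): {energy_est:+.4f}")
```

In the demonstration itself, the corresponding phase-estimation circuits run on error-corrected logical qubits on H2, which is what makes the result a milestone.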
By demonstrating this end-to-end workflow on our H2 quantum computer using our state-of-the-art chemistry platform InQuanto™, we are proving that quantum error-corrected chemistry simulations are not only feasible, but also scalable and, crucially, implementable in our quantum computing stack.
This work sets key benchmarks on the path to fully fault-tolerant quantum simulations. Building such capabilities into an industrial workflow will be a milestone for quantum computing, and the demonstration reported here represents a new high-water mark as we continue to lead the global industry in pushing towards universal fault-tolerant computers capable of widespread scientific and commercial advantage.
As we look ahead, this workflow will serve as the foundation for future quantum-HPC integration, enabling chemistry simulations that are impossible today.
Today’s achievement wouldn’t be possible without Quantinuum’s full-stack approach. Our vertical integration, from hardware to software to applications, ensures that each layer works together seamlessly.
Our H2 quantum computer, based on the scalable QCCD architecture with its unique combination of high-fidelity operations, all-to-all connectivity, mid-circuit measurement, and conditional logic, enabled us to run more complex quantum simulations than were previously possible. The work also leverages Quantinuum’s real-time QEC decoding capability and benefits from the error-correction advantages inherent to the QCCD architecture.
We will make this workflow available to customers via InQuanto, our quantum chemistry platform, allowing users to easily replicate and build upon this work. The integration of high-quality quantum computing hardware with sophisticated software creates a robust environment for iterating and accelerating breakthroughs in fields like chemistry and materials science.
Achieving quantum advantage in chemistry will require more than quantum hardware alone; it will require a synergistic approach that combines quantum workflows like the one demonstrated here with classical supercomputing and AI. Our strategic partnerships with leading supercomputing providers – including Quantinuum’s selection as a founding collaborator in NVIDIA’s Accelerated Quantum Research Center – and our commitment to exploring generative quantum AI place us in a unique position to maximize the benefit of quantum computing and to supercharge quantum advantage through that integration.
Quantum computing holds immense potential for transforming industries across the globe. Our work experimentally demonstrates the first complete, scalable quantum chemistry workflow with error correction, showing that the long-awaited quantum advantage in simulating chemical systems is not only possible, but within reach. With the development of new error-correction techniques and the continued advancement of our quantum hardware and software, we are paving the way for a future where quantum simulations can address challenges that are impossible today. Quantinuum’s ongoing collaborations with HPC providers and its exploration of AI-driven quantum techniques position the company to capitalize on this trifecta of computing power and achieve meaningful breakthroughs in quantum chemistry and beyond.
We encourage you to explore this breakthrough further by reading our latest research on arXiv and trying out the Python code for yourself.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
If we are to create ‘next-gen’ AI that takes full advantage of the power of quantum computers, we need to start with quantum-native transformers. Today, Quantinuum once again demonstrates its leadership with concrete progress, advancing from theoretical models to real quantum deployment.
The future of AI won't be built on yesterday’s tech. If we're serious about creating next-generation AI that unlocks the full promise of quantum computing, then we must build quantum-native models—designed for quantum, from the ground up.
Around this time last year, we introduced Quixer, a state-of-the-art quantum-native transformer. Today, we’re thrilled to announce a major milestone: one year on, Quixer is now running natively on quantum hardware.
This marks a turning point for the industry: realizing quantum-native AI opens a world of possibilities.
Classical transformers revolutionized AI. They power everything from ChatGPT to real-time translation, computer vision, drug discovery, and algorithmic trading. Now, Quixer sets the stage for a similar leap — but for quantum-native computation. Because quantum computers differ fundamentally from classical computers, we expect a whole new host of valuable applications to emerge.
Achieving that future requires models that are efficient, scalable, and actually run on today’s quantum hardware.
That’s what we’ve built.
Until Quixer, quantum transformers were the result of a brute force “copy-paste” approach: taking the math from a classical model and putting it onto a quantum circuit. However, this approach does not account for the considerable differences between quantum and classical architectures, leading to substantial resource requirements.
Quixer is different: it’s not a translation – it's an innovation.
With Quixer, our team introduced an explicitly quantum transformer, built from the ground up using quantum algorithmic primitives. Because Quixer is tailored for quantum circuits, it's more resource efficient than most competing approaches.
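To give a sense of what we mean by quantum algorithmic primitives, the sketch below simulates one widely used primitive of this kind, the linear combination of unitaries (LCU), in plain Python. The operators, coefficients, and single post-selected ancilla are toy assumptions of ours, chosen only to show the prepare-select-unprepare pattern; this is not Quixer's actual circuit or parameterization.

```python
import numpy as np

# Toy linear-combination-of-unitaries (LCU) demo: one system qubit, one ancilla.
# Goal: apply A = c0*U0 + c1*U1 (generally non-unitary) to |psi> by
# PREPARE on the ancilla, a controlled SELECT of U0/U1, un-PREPARE, and
# post-selection of the ancilla on |0>.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

c = np.array([0.7, 0.3])        # non-negative coefficients (illustrative)
U0, U1 = Z, X                   # the unitaries being combined
A = c[0] * U0 + c[1] * U1

# PREPARE: ancilla |0> -> sqrt(c0/s)|0> + sqrt(c1/s)|1>, with s = c0 + c1.
a = np.sqrt(c / c.sum())
prepare = np.array([[a[0], -a[1]],
                    [a[1],  a[0]]], dtype=complex)

# SELECT = |0><0| (x) U0 + |1><1| (x) U1, with the ancilla as the first factor.
select = np.block([[U0, np.zeros((2, 2))],
                   [np.zeros((2, 2)), U1]])

# Full LCU block: un-PREPARE . SELECT . PREPARE
W = np.kron(prepare.conj().T, I2) @ select @ np.kron(prepare, I2)

psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # arbitrary system state
state_in = np.kron([1.0, 0.0], psi)                      # ancilla starts in |0>
state_out = W @ state_in

# Keeping only the ancilla=|0> branch yields A|psi> / (c0 + c1) (unnormalized).
sys_out = state_out[:2]
print("post-selected state:", np.round(sys_out, 6))
print("A|psi> / (c0+c1):   ", np.round(A @ psi / c.sum(), 6))
```

Primitives in this style let the linear algebra at the heart of a transformer be expressed natively as quantum circuits, rather than copy-pasted from the classical formulation.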
As quantum computing advances toward fault tolerance, Quixer is built to scale with it.
We’ve already deployed Quixer on real-world data: genomic sequence analysis, a high-impact classification task in biotech. We're happy to report that its performance is already approaching that of classical models, even in this first implementation.
This is just the beginning.
Looking ahead, we’ll explore using Quixer anywhere classical transformers have proven useful, such as language modeling, image classification, quantum chemistry, and beyond. More excitingly, we expect quantum-specific use cases to emerge that are impossible on classical hardware.
This milestone isn’t just about one model. It’s a signal that the quantum AI era has begun, and that Quantinuum is leading the charge with real results, not empty hype.
Stay tuned. The revolution is only getting started.
Our team is participating in ISC High Performance 2025 (ISC 2025) from June 10-13 in Hamburg, Germany!
As quantum computing accelerates, so does the urgency to integrate its capabilities into today’s high-performance computing (HPC) and AI environments. At ISC 2025, meet the Quantinuum team to learn how the highest performing quantum systems on the market, combined with advanced software and powerful collaborations, are helping organizations take the next step in their compute strategy.
Quantinuum is leading the industry across every major vector: performance, hybrid integration, scientific innovation, global collaboration and ease of access.
From June 10–13, in Hamburg, Germany, visit us at Booth B40 in the Exhibition Hall or attend one of our technical talks to explore how our quantum technologies are pushing the boundaries of what’s possible across HPC.
Throughout ISC, our team will present on the most important topics in HPC and quantum computing integration—from near-term hybrid use cases to hardware innovations and future roadmaps.
Multicore World Networking Event
H1 x CUDA-Q Demonstration
HPC Solutions Forum
Whether you're exploring hybrid solutions today or planning for large-scale quantum deployment tomorrow, ISC 2025 is the place to begin the conversation.
We look forward to seeing you in Hamburg!
Quantinuum has once again raised the bar—setting a record in teleportation, and advancing our leadership in the race toward universal fault-tolerant quantum computing.
Last year, we published a paper in Science demonstrating the first-ever fault-tolerant teleportation of a logical qubit. At the time, we outlined how crucial teleportation is to realizing large-scale fault-tolerant quantum computers. Given the high degree of system performance and the capabilities required to run the protocol (e.g., multiple qubits, high-fidelity state preparation, entangling operations, mid-circuit measurement, etc.), teleportation is recognized as an excellent measure of system maturity.
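For readers unfamiliar with the protocol itself, the sketch below is a minimal Python simulation of textbook single-qubit teleportation, exercising the same ingredients listed above: state preparation, entangling operations, mid-circuit measurement, and conditional corrections. It is a toy, physical-qubit illustration only; the fault-tolerant, logical-qubit version reported in our work is substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-qubit gates; qubit 0 is the most significant tensor factor.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Hd = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def apply_1q(gate, k, state):
    ops = [I2, I2, I2]
    ops[k] = gate
    return np.kron(np.kron(ops[0], ops[1]), ops[2]) @ state

def apply_cx(ctrl, targ, state):
    # CX = |0><0|_ctrl (x) I  +  |1><1|_ctrl (x) X_targ
    ops0, ops1 = [I2] * 3, [I2] * 3
    ops0[ctrl] = np.diag([1, 0]).astype(complex)
    ops1[ctrl] = np.diag([0, 1]).astype(complex)
    ops1[targ] = X
    U = (np.kron(np.kron(ops0[0], ops0[1]), ops0[2])
         + np.kron(np.kron(ops1[0], ops1[1]), ops1[2]))
    return U @ state

def measure(k, state):
    # Projective Z measurement of qubit k with collapse (a mid-circuit measurement).
    proj = [I2] * 3
    proj[k] = np.diag([1, 0]).astype(complex)
    P0 = np.kron(np.kron(proj[0], proj[1]), proj[2])
    p0 = np.linalg.norm(P0 @ state) ** 2
    if rng.random() < p0:
        return 0, (P0 @ state) / np.sqrt(p0)
    P1 = np.eye(8) - P0
    return 1, (P1 @ state) / np.linalg.norm(P1 @ state)

# Qubit 0 holds an arbitrary state to teleport; qubits 1 and 2 start in |0>.
psi_in = np.array([0.6, 0.8j])
state = np.kron(psi_in, np.kron([1, 0], [1, 0])).astype(complex)

state = apply_cx(1, 2, apply_1q(Hd, 1, state))   # 1) Bell pair on qubits 1 and 2
state = apply_1q(Hd, 0, apply_cx(0, 1, state))   # 2) Bell-basis rotation on 0 and 1
m0, state = measure(0, state)                    #    mid-circuit measurements
m1, state = measure(1, state)
if m1:                                           # 3) conditional Pauli fix-ups
    state = apply_1q(X, 2, state)
if m0:
    state = apply_1q(Z, 2, state)

psi_out = state.reshape(2, 2, 2)[m0, m1, :]      # qubit 2 now holds the input state
print("input :", psi_in)
print("output:", np.round(psi_out, 6))
```

The logical version replaces each of these physical qubits with an error-corrected logical qubit and adds real-time decoding, which is why it is such a demanding benchmark.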
Today we’re building on last year’s breakthrough, having recently achieved a record logical teleportation fidelity of 99.82% – up from 97.5% in last year’s result. What’s more, our logical qubit teleportation fidelity now exceeds our physical qubit teleportation fidelity, passing the break-even point that establishes our H2 system as the gold standard for complex quantum operations.
This progress reflects the strength and flexibility of our Quantum Charge Coupled Device (QCCD) architecture. The architecture’s native high fidelity enables us to perform highly complex demonstrations like this one, which nobody else has yet matched. Further, our ability to perform conditional logic and real-time decoding was crucial for implementing the Steane error correction code used in this work, and our all-to-all connectivity was essential for performing the high-fidelity transversal gates that drove the protocol.
Teleportation schemes like this allow us to “trade space for time,” meaning that we can do quantum error correction more quickly, reducing our time to solution. Additionally, teleportation enables long-range communication during logical computation, which translates to higher connectivity in logical algorithms, improving computational power.
This demonstration underscores our ongoing commitment to reducing logical error rates, which is critical for realizing the promise of quantum computing. Quantinuum continues to lead in quantum hardware performance, algorithms, and error correction, and we will extend that leadership with the launch of our next-generation system, Helios, in just a few months.