Recently, a new benchmark called algorithmic qubits (AQ) has started to be confused with quantum volume measurements. Quantum volume (QV) was specifically designed to be hard to “game”; the algorithmic qubits test, by contrast, turns out to be very susceptible to tricks that can make a quantum computer look much better than it actually is. While it is not clear how the algorithmic qubits test could be fixed, it is already clear that it is much easier to pass than QV and is a poor substitute for measuring performance. It is also important to note that algorithmic qubits are not the same as logical qubits, which are necessary for full fault-tolerant quantum computing.
To make this point clear, we simulated what algorithmic qubits data would look like for two machines, one clearly much higher performing than the other. We applied two tricks that are typically used when sharing algorithmic qubits results: gate compilation and error mitigation with plurality voting. From the data above, you can see how these tricks are misleading without further information. For example, if you compare data from the higher-fidelity machine without any compilation or plurality voting (bottom left) to data from the inferior machine with both tricks (top right), you may incorrectly believe the inferior machine is performing better. Unfortunately, this inaccurate and misleading comparison has been made in the past. It is important to note that algorithmic qubits uses a subset of algorithms from a QED-C paper that introduced a suite of application-oriented tests and created a repository to test available quantum computers. Importantly, that work explicitly forbids the compilation and error mitigation techniques that are causing the issue here.
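To make the plurality-voting effect concrete, below is a minimal Python sketch. It is illustrative only: the outcome probabilities are made up rather than drawn from real hardware data, and the batching is a simplified stand-in for how voting is applied to repeated circuit executions.

```python
from collections import Counter
import random

# Minimal sketch of plurality voting with made-up outcome
# probabilities (not real hardware data). The "correct" bitstring
# '00' occurs in only 40% of raw shots, but because it is the single
# most common outcome, voting over batches reports it almost always.
random.seed(0)
population = ['00'] * 40 + ['01'] * 20 + ['10'] * 20 + ['11'] * 20

def plurality_vote(batch):
    """Return the most frequent bitstring in one batch of shots."""
    return Counter(batch).most_common(1)[0][0]

batches = [[random.choice(population) for _ in range(100)] for _ in range(50)]
votes = [plurality_vote(b) for b in batches]
print("raw success probability: 0.40")
print("voted success rate:", votes.count('00') / len(votes))
```

The voted success rate comes out near 1.0 even though the underlying circuit succeeds only 40% of the time, which is exactly why mixing mitigated and unmitigated AQ results invites misleading comparisons.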
As a demonstration of the perils of AQ as a benchmark, we look at data obtained on Quantinuum’s H2-1 system as well as publicly available data from IonQ’s Forte system.
We reproduce the unmitigated results from IonQ’s publicly released data, shared in association with a preprint posted to the arXiv, and compare them to data taken on our H2-1 device. Without error mitigation, IonQ Forte achieves an AQ score of 9, whereas Quantinuum H2-1 achieves an AQ of 26. Here you can clearly see the improved circuit fidelities on the H2-1 device, as one would expect from its higher reported two-qubit gate fidelity (an average of 99.816(5)% for Quantinuum’s H2-1 vs. 99.35% for IonQ’s Forte). However, after you apply error mitigation, in this case plurality voting, to both sets of data, the picture changes substantially, hiding each underlying computer’s true capabilities.
Here the H2-1 algorithmic performance still exceeds Forte (from the publicly released data), but the perceived gap has been reduced by error mitigation.
“Error mitigation, including plurality voting, may be a useful tool for some near-term quantum computing but it doesn’t work for every problem and it’s unlikely to be scalable to larger systems. In order to achieve the lofty goals of quantum computing we’ll need serious device performance upgrades. If we allow error mitigation in benchmarking it will conflate the error mitigation with the underlying device performance. This will make it hard for users to appreciate actual device improvements that translate to all applications and larger problems,” explained Dr. Charlie Baldwin, a leader in Quantinuum’s benchmarking efforts.
There are other issues with the algorithmic qubits test. The circuits used in the test can be reduced to very easy-to-run circuits with basic quantum circuit compilation tools that are freely available in packages like pytket. For example, the largest phase estimation and amplitude estimation tests required to pass AQ=32 are specified with 992 and 868 entangling gates respectively, but applying pytket optimization reduces the circuits to 141 and 72 entangling gates, as the sketch below illustrates. This reduction is only possible due to choices made in constructing the benchmarks and will not be universally available when using the algorithms in applications. Since AQ reports the precompiled gate counts, this may also lead users to expect a machine to be able to run many more entangling gates than is actually possible on the benchmarked hardware.
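To illustrate the kind of reduction involved, here is a minimal pytket sketch. The circuit below is a toy stand-in with deliberately redundant entangling gates, not the actual AQ phase-estimation benchmark, and FullPeepholeOptimise is just one generic, hardware-agnostic optimization pass.

```python
from pytket.circuit import Circuit, OpType
from pytket.passes import FullPeepholeOptimise

# Toy circuit with deliberately redundant structure: back-to-back CX
# pairs cancel to the identity, so a peephole pass removes them.
circ = Circuit(4)
for _ in range(10):
    for i in range(3):
        circ.CX(i, i + 1)
        circ.CX(i, i + 1)  # redundant: cancels with the previous CX
    circ.Rz(0.1, 0)

print("entangling gates before:", circ.n_gates_of_type(OpType.CX))
FullPeepholeOptimise().apply(circ)  # generic pytket optimization pass
print("entangling gates after:", circ.n_gates_of_type(OpType.CX))
```

Counting entangling gates after compilation, rather than from the precompiled specification, gives a truer picture of what the benchmarked hardware actually executed.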
What makes a good quantum benchmark? Quantum benchmarking is extremely useful for charting hardware progress and providing roadmaps for future development. However, it is an evolving field and still an open area of research. At Quantinuum, we believe in testing the limits of our machines with a variety of different benchmarks to learn as much as possible about the errors present in our systems and how they affect different circuits. We are open to working with the larger community on refining benchmarks and creating new ones as the field evolves.
To learn more about the Algorithmic Qubits benchmark and the issues with it, please watch this video where Dr. Charlie Baldwin walks us through the details, starting at 32:40.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
If we are to create ‘next-gen’ AI that takes full advantage of the power of quantum computers, we need to start with quantum-native transformers. Today, Quantinuum once again demonstrates its leadership with concrete progress, advancing from theoretical models to real quantum deployment.
The future of AI won't be built on yesterday’s tech. If we're serious about creating next-generation AI that unlocks the full promise of quantum computing, then we must build quantum-native models—designed for quantum, from the ground up.
Around this time last year, we introduced Quixer, a state-of-the-art quantum-native transformer. Today, we’re thrilled to announce a major milestone: one year on, Quixer is now running natively on quantum hardware.
This marks a turning point for the industry: realizing quantum-native AI opens a world of possibilities.
Classical transformers revolutionized AI. They power everything from ChatGPT to real-time translation, computer vision, drug discovery, and algorithmic trading. Now, Quixer sets the stage for a similar leap — but for quantum-native computation. Because quantum computers differ fundamentally from classical computers, we expect a whole new host of valuable applications to emerge.
Achieving that future requires models that are efficient, scalable, and actually run on today’s quantum hardware.
That’s what we’ve built.
Until Quixer, quantum transformers were the result of a brute-force “copy-paste” approach: taking the math from a classical model and putting it onto a quantum circuit. However, this approach does not account for the considerable differences between quantum and classical architectures, leading to substantial resource requirements.
Quixer is different: it’s not a translation – it's an innovation.
With Quixer, our team introduced an explicitly quantum transformer, built from the ground up using quantum algorithmic primitives. Because Quixer is tailored for quantum circuits, it's more resource efficient than most competing approaches.
As quantum computing advances toward fault tolerance, Quixer is built to scale with it.
We’ve already deployed Quixer on real-world data: genomic sequence analysis, a high-impact classification task in biotech. We're happy to report that its performance is already approaching that of classical models, even in this first implementation.
This is just the beginning.
Looking ahead, we’ll explore using Quixer anywhere classical transformers have proven useful, such as language modeling, image classification, quantum chemistry, and beyond. More excitingly, we expect quantum-specific use cases to emerge that are impossible on classical hardware.
This milestone isn’t just about one model. It’s a signal that the quantum AI era has begun, and that Quantinuum is leading the charge with real results, not empty hype.
Stay tuned. The revolution is only getting started.
Our team is participating in ISC High Performance 2025 (ISC 2025) from June 10-13 in Hamburg, Germany!
As quantum computing accelerates, so does the urgency to integrate its capabilities into today’s high-performance computing (HPC) and AI environments. At ISC 2025, meet the Quantinuum team to learn how the highest performing quantum systems on the market, combined with advanced software and powerful collaborations, are helping organizations take the next step in their compute strategy.
Quantinuum is leading the industry across every major vector: performance, hybrid integration, scientific innovation, global collaboration, and ease of access.
From June 10–13, in Hamburg, Germany, visit us at Booth B40 in the Exhibition Hall or attend one of our technical talks to explore how our quantum technologies are pushing the boundaries of what’s possible across HPC.
Throughout ISC, our team will present on the most important topics in HPC and quantum computing integration—from near-term hybrid use cases to hardware innovations and future roadmaps.
Multicore World Networking Event
H1 x CUDA-Q Demonstration
HPC Solutions Forum
Whether you're exploring hybrid solutions today or planning for large-scale quantum deployment tomorrow, ISC 2025 is the place to begin the conversation.
We look forward to seeing you in Hamburg!
Quantinuum has once again raised the bar—setting a record in teleportation, and advancing our leadership in the race toward universal fault-tolerant quantum computing.
Last year, we published a paper in Science demonstrating the first-ever fault-tolerant teleportation of a logical qubit. At the time, we outlined how crucial teleportation is to realizing large-scale fault-tolerant quantum computers. Given the high degree of system performance and the capabilities required to run the protocol (e.g., multiple qubits, high-fidelity state preparation, entangling operations, mid-circuit measurement, etc.), teleportation is recognized as an excellent measure of system maturity.
Today we’re building on last year’s breakthrough, having recently achieved a record logical teleportation fidelity of 99.82% – up from 97.5% in last year’s result. What’s more, our logical qubit teleportation fidelity now exceeds our physical qubit teleportation fidelity, passing the break-even point that establishes our H2 system as the gold standard for complex quantum operations.
This progress reflects the strength and flexibility of our Quantum Charge Coupled Device (QCCD) architecture. The native high fidelity of our QCCD architecture enables us to perform highly complex demonstrations like this one, which no other platform has yet matched. Further, our ability to perform conditional logic and real-time decoding was crucial for implementing the Steane error correction code used in this work, and our all-to-all connectivity was essential for performing the high-fidelity transversal gates that drove the protocol.
Teleportation schemes like this allow us to “trade space for time,” meaning that we can do quantum error correction more quickly, reducing our time to solution. Additionally, teleportation enables long-range communication during logical computation, which translates to higher connectivity in logical algorithms, improving computational power.
This demonstration underscores our ongoing commitment to reducing logical error rates, which is critical for realizing the promise of quantum computing. Quantinuum continues to lead in quantum hardware performance, algorithms, and error correction, and we’ll extend our leadership with the launch of our next-generation system, Helios, in just a matter of months.