In a meaningful advance for an area of broad industrial relevance, Quantinuum researchers have demonstrated a quantum algorithm that solves complex combinatorial optimization problems while making the most of available quantum resources.
Results on the new H2 quantum computer showed a remarkable ability to solve combinatorial optimization problems using no more quantum resources than a single layer of the quantum approximate optimization algorithm (QAOA), the traditional workhorse of quantum heuristic algorithms.
Optimization problems are common in industry in contexts such as route planning, scheduling, cost optimization and logistics. However, as the number of variables increases and optimization problems grow larger and more complex, finding satisfactory solutions using classical algorithms becomes increasingly difficult.
Recent research suggests that certain quantum algorithms might be capable of solving combinatorial optimization problems better than classical algorithms. The realization of such quantum algorithms can therefore potentially increase the efficiency of industrial processes.
However, running these algorithms effectively on near-term quantum devices, and even on future generations of more capable quantum computers, poses a technical challenge: quantum resources must be kept to a minimum to protect the quantum algorithm from the unavoidable effects of quantum noise.
Sebastian Leontica and Dr. David Amaro, a senior research scientist at Quantinuum, explain their advances in a new paper, “Exploring the neighborhood of 1-layer QAOA with Instantaneous Quantum Polynomial circuits,” published on arXiv. This is one of several papers published at the launch of Quantinuum’s H2 that highlight the unparalleled power of the newest generation of the H-Series, Powered by Honeywell.
“We should strive to use as few quantum resources as possible no matter how good a quantum computer we are operating on, which means using the smallest possible number of qubits that fit within the problem size and a circuit that is as shallow as possible,” Dr. Amaro said. “Our algorithm uses the fewest possible resources and still achieves good performance.”
The researchers use a parameterized instantaneous quantum polynomial (IQP) circuit of the same depth as 1-layer QAOA to incorporate corrections that would otherwise require multiple layers. Another differentiating feature of the algorithm is that the parameters in the IQP circuit can be efficiently trained on a classical computer, avoiding some of the training issues of other algorithms like QAOA. Critically, the circuit takes full advantage of features available on Quantinuum’s devices, including parameterized two-qubit gates, all-to-all connectivity, and high-fidelity operations.
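For readers who want to see the shape of such a circuit, below is a minimal sketch in pytket, Quantinuum’s open-source SDK. The ansatz structure (Hadamard layer, diagonal single- and two-qubit phases, Hadamard layer) is the standard IQP form described above; the random angles are placeholders for parameters that would in practice be trained classically, warm-started from 1-layer QAOA, and the function name is ours, not from the paper.

```python
# Minimal sketch of a fully connected, parameterized IQP circuit in pytket.
# Angles in pytket are given in half-turns (multiples of pi).
import numpy as np
from pytket import Circuit

def iqp_circuit(n: int, singles: np.ndarray, pairs: np.ndarray) -> Circuit:
    """IQP form: H layer, diagonal phase layer, H layer, then measure."""
    circ = Circuit(n)
    for q in range(n):
        circ.H(q)
    for q in range(n):
        circ.Rz(singles[q], q)       # single-qubit Z phases
    # All-to-all connectivity: up to n(n-1)/2 parameterized two-qubit
    # gates, matching the two-qubit depth budget of 1-layer QAOA.
    for i in range(n):
        for j in range(i + 1, n):
            circ.ZZPhase(pairs[i, j], i, j)
    for q in range(n):
        circ.H(q)
    circ.measure_all()
    return circ

n = 6
rng = np.random.default_rng(0)
circ = iqp_circuit(n, rng.uniform(-1, 1, n), rng.uniform(-1, 1, (n, n)))
print(circ.n_gates, circ.depth())
```

Because every gate in the diagonal layer commutes with the others, the two-qubit gates can be scheduled freely, which is exactly where the H-Series’ all-to-all connectivity and parameterized two-qubit gates pay off.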
“Our numerical simulations and experiments on the new H2 quantum computer at small scale indicate that this heuristic algorithm, compared to 1-layer QAOA, is expected to amplify the probability of sampling good or even optimal solutions of large optimization problems,” Dr. Amaro said. “We now want to understand how the solution quality and runtime of our algorithm compares to the best classical algorithms.”
This algorithm will be useful for current quantum computers as well as larger machines farther along the Quantinuum hardware roadmap.
The goal of this project was to provide a quantum heuristic algorithm for combinatorial optimization that returns better solutions, and uses fewer quantum resources, than state-of-the-art quantum heuristics. The researchers used a fully connected parameterized IQP circuit, warm-started from 1-layer QAOA. For a problem with n binary variables, the circuit contained up to n(n-1)/2 two-qubit gates, and the researchers employed only 2^(0.32n) shots.
The algorithm showed improved performance on the Sherrington-Kirkpatrick (SK) optimization problem compared to 1-layer QAOA. Numerical simulations showed that the runtime to find the optimal solution scales as 2^(0.31n), compared to 2^(0.5n) for 1-layer QAOA.
Experimental results on the new H2 quantum computer and its emulator confirmed that the new optimization algorithm outperforms 1-layer QAOA and reliably solves complex optimization problems. The optimal solution was found for 136 out of 312 instances, four of which were at the maximum size of 32 qubits. A 30-qubit instance was solved optimally on the H2 device, which means, remarkably, that at least one of the 776 shots measured after performing 432 two-qubit gates corresponds to the unique optimal solution among the 2^30 > 10^9 candidate solutions.
These results indicate that the algorithm, in combination with H2 hardware, is capable of solving hard optimization problems using minimal quantum resources in the presence of real hardware noise.
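The headline numbers above can be sanity-checked in a few lines of Python. The exponents are taken directly from the text; note that the reported 432 executed two-qubit gates for the 30-qubit instance sits just under the n(n-1)/2 = 435 upper bound.

```python
# Back-of-the-envelope resource counts from the reported scalings.
n = 30                               # qubits in the instance solved on H2
two_qubit_gates = n * (n - 1) // 2   # upper bound: 435 (432 were executed)
shots = round(2 ** (0.32 * n))       # reported shot budget 2^(0.32n) -> 776
search_space = 2 ** n                # candidate bitstrings: 2^30 > 10^9
print(two_qubit_gates, shots, search_space > 10**9)

# Runtime to reach the optimum scales as 2^(0.31n) for the IQP ansatz
# versus 2^(0.5n) for 1-layer QAOA, a 2^(0.19n) relative advantage.
print(round(2 ** (0.5 * n) / 2 ** (0.31 * n)))   # ~52x at n = 30
```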
Quantinuum researchers expect that these promising results at small scale will encourage the further study of new quantum heuristic algorithms at the relevant scale for real-world optimization problems, which requires a better understanding of their performance under realistic conditions.
Figure: Numerical simulations of 256 random SK instances for each problem size from 4 to 29 qubits. Graph A shows the probability of sampling the optimal solution from the IQP circuit, whose average is 2^(-0.31n). Graph B shows the enhancement factor compared to 1-layer QAOA, whose average is 2^(0.23n). These results indicate that Quantinuum’s algorithm has a significantly better runtime than 1-layer QAOA.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Quantum computing companies are poised to exceed $1 billion in revenues by the close of 2025, according to McKinsey & Company, underscoring how today’s quantum computers are already delivering customer value in their current phase of development.
This figure is projected to reach upwards of $37 billion by 2030, rising in parallel with escalating demand, the growing scale of the machines, and the complexity of the problem sets they will be able to address.
Several systems on the market today are fault-tolerant by design, meaning they are capable of suppressing error-causing noise to yield reliable calculations. However, the full potential of quantum computing to tackle problems of true industrial relevance, in areas like medicine, energy, and finance, remains contingent on an architecture that supports a fully fault-tolerant universal gate set with repeatable error correction—a capability that, until now, has eluded the industry.
Quantinuum is the first—and only—company to achieve this critical technical breakthrough, universally recognized as the essential precursor to scalable, industrial-scale quantum computing. This milestone provides us with the most de-risked development roadmap in the industry and positions us to fulfill our promise to deliver our universal, fully fault-tolerant quantum computer, Apollo, by 2029.
In this regard, Quantinuum is the first company to step from the so-called “NISQ” (noisy intermediate-scale quantum) era towards utility-scale quantum computers.
A quantum computer uses operations called gates to process information in ways that even today’s fastest supercomputers cannot. The industry typically refers to two types of gates for quantum computers: Clifford gates, which can be simulated efficiently on classical computers, and non-Clifford gates, which cannot.
A system that can run both types of gates is classified as universal and has the machinery to tackle the widest range of problems. Without non-Clifford gates, a quantum computer is non-universal, restricted to smaller, easier sets of tasks, and can always be simulated by classical computers. This is like painting with a full palette of colors versus only having one or two to work with. Simply put, a quantum computer that cannot implement non-Clifford gates is not really a quantum computer.
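As a concrete illustration, here is a toy sketch in pytket: the first circuit uses only Clifford gates (H, S, CX) and can always be simulated efficiently on a classical computer (the Gottesman-Knill theorem); adding a single non-Clifford T gate is what tips the gate set into universality. These circuits are our own illustrative examples, not taken from either paper.

```python
from pytket import Circuit

# Clifford-only circuit: H, S, and CX are all Clifford gates, so this
# circuit is efficiently simulable on classical hardware.
clifford_only = Circuit(2)
clifford_only.H(0).S(0).CX(0, 1)

# Adding the non-Clifford T gate yields a universal gate set; running
# gates like T fault-tolerantly is what magic states enable.
universal = Circuit(2)
universal.H(0).T(0).CX(0, 1)

print(clifford_only.get_commands())
print(universal.get_commands())
```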
A fault-tolerant, or error-corrected, quantum computer detects and corrects its own errors (or faults) to produce reliable results. Quantinuum has the best and brightest scientists dedicated to keeping our systems’ error rates the lowest in the world.
For a quantum computer to be fully fault-tolerant, every operation must be error-resilient, across both Clifford and non-Clifford gates, so that the full gate set runs with error correction. While some groups have performed fully fault-tolerant gate sets in academic settings, these demonstrations used only a few qubits and had error rates near 10%, too high for any practical use.
Today, we have published two papers that establish Quantinuum as the first company to develop a complete solution for a universal, fully fault-tolerant quantum computer with repeatable error correction and error rates low enough for real-world applications.
The first paper describes how scientists at Quantinuum used our System Model H1-1 to perfect magic state production, a crucial technique for achieving a fully fault-tolerant universal gate set. In doing so, they set a record for magic state infidelity (7x10^-5), 10x better than any previously published result.
Our simulations show that our system could reach a magic state infidelity of 10^-10, or about one error per 10 billion operations, on a larger-scale computer with our current physical error rate. We anticipate reaching 10^-14, or about one error per 100 trillion operations, as we continue to advance our hardware. This means that our roadmap is now de-risked.
Setting a record for magic state infidelity was just the beginning. The paper also presents the first break-even two-qubit non-Clifford gate, demonstrating a logical error rate below the physical one. In doing so, the team set another record, this time for two-qubit non-Clifford gate infidelity (2x10^-4, almost 10x better than our physical error rate). Putting everything together, the team ran the first circuit to use a fully fault-tolerant universal gate set, a critical moment for our industry.
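To see what “break-even” means numerically: the logical error rate must fall below the physical error rate of the underlying gates. In the sketch below, the physical rate is our assumption, inferred only from the phrase “almost 10x better”; the two logical figures are the reported records.

```python
# Break-even arithmetic for the fault-tolerant gate records.
magic_state_infidelity = 7e-5   # reported record on H1-1
logical_non_clifford = 2e-4     # reported record two-qubit non-Clifford gate
physical_two_qubit = 2e-3       # ASSUMED physical rate, inferred from
                                # "almost 10x better"; not a reported figure

# Break-even: the logical gate outperforms the physical one.
print(physical_two_qubit / logical_non_clifford)   # ~10x margin
```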
In the second paper, co-authored with researchers at the University of California at Davis, we demonstrated an important technique for universal fault-tolerance called “code switching”.
Code switching means moving logical information between different error-correcting codes, each of which supports a different set of fault-tolerant operations. The team used the technique to demonstrate the key ingredients for universal computation, this time in a code where we have previously demonstrated full error correction and the other ingredients for universality.
In the process, the team set a new record for magic states in a distance-3 error-correcting code, over 10x better than the best previous attempt with error correction. Notably, this process required only 28 qubits instead of hundreds. It completes, for the first time, the ingredient list for a universal gate set in a system that also has real-time, repeatable QEC.
Innovations like those described in these two papers can reduce estimates for qubit requirements by an order of magnitude, or more, bringing powerful quantum applications within reach far sooner.
With all of the required pieces now, finally, in place, we are fully equipped to become the first company to perform universal, fully fault-tolerant computing, just in time for the arrival of Helios, our next-generation system launching this year, which is likely to remain the most powerful quantum computer on the market until the launch of its successor, Sol, in 2027.
If we are to create next-gen AI that takes full advantage of the power of quantum computers, we need to start with quantum-native transformers. Today, Quantinuum once again demonstrates concrete progress, advancing from theoretical models to real quantum deployment.
The future of AI won't be built on yesterday’s tech. If we're serious about creating next-generation AI that unlocks the full promise of quantum computing, then we must build quantum-native models—designed for quantum, from the ground up.
Around this time last year, we introduced Quixer, a state-of-the-art quantum-native transformer. Today, we’re thrilled to announce a major milestone: one year on, Quixer is now running natively on quantum hardware.
This marks a turning point for the industry: realizing quantum-native AI opens a world of possibilities.
Classical transformers revolutionized AI. They power everything from ChatGPT to real-time translation, computer vision, drug discovery, and algorithmic trading. Now, Quixer sets the stage for a similar leap, but for quantum-native computation. Because quantum computers differ fundamentally from classical computers, we expect a whole host of new and valuable applications to emerge.
Achieving that future requires models that are efficient, scalable, and actually run on today’s quantum hardware.
That’s what we’ve built.
Until Quixer, quantum transformers were the result of a brute-force “copy-paste” approach: taking the math from a classical model and mapping it directly onto a quantum circuit. This approach ignores the considerable differences between quantum and classical architectures, leading to substantial resource requirements.
Quixer is different: it’s not a translation – it's an innovation.
With Quixer, our team introduced an explicitly quantum transformer, built from the ground up using quantum algorithmic primitives. Because Quixer is tailored for quantum circuits, it's more resource efficient than most competing approaches.
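Quixer’s construction draws on primitives such as the linear combination of unitaries (LCU), which applies a classically weighted sum of unitaries to a quantum state. The toy emulation below illustrates the idea in plain numpy; it is not Quixer’s implementation, and on real hardware the sum is realized coherently using ancilla qubits and post-selection.

```python
# Toy classical emulation of the LCU primitive.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

weights = np.array([0.5, 0.3, 0.2])   # trainable classical coefficients
unitaries = [I, X, Z]

# The (generally non-unitary) operator LCU applies after post-selection:
A = sum(w * U for w, U in zip(weights, unitaries))

state = np.array([1, 0], dtype=complex)   # |0>
out = A @ state
out /= np.linalg.norm(out)                # renormalize post-selected state
print(out)
```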
As quantum computing advances toward fault tolerance, Quixer is built to scale with it.
We’ve already deployed Quixer on real-world data: genomic sequence analysis, a high-impact classification task in biotech. We're happy to report that its performance is already approaching that of classical models, even in this first implementation.
This is just the beginning.
Looking ahead, we’ll explore using Quixer anywhere classical transformers have proven useful: language modeling, image classification, quantum chemistry, and beyond. More excitingly, we expect quantum-specific use cases to emerge that are impossible on classical hardware.
This milestone isn’t just about one model. It’s a signal that the quantum AI era has begun, and that Quantinuum is leading the charge with real results, not empty hype.
Stay tuned. The revolution is only getting started.
Our team is participating in ISC High Performance 2025 (ISC 2025) from June 10-13 in Hamburg, Germany!
As quantum computing accelerates, so does the urgency to integrate its capabilities into today’s high-performance computing (HPC) and AI environments. At ISC 2025, meet the Quantinuum team to learn how the highest performing quantum systems on the market, combined with advanced software and powerful collaborations, are helping organizations take the next step in their compute strategy.
Quantinuum is leading the industry across every major vector: performance, hybrid integration, scientific innovation, global collaboration and ease of access.
From June 10–13, in Hamburg, Germany, visit us at Booth B40 in the Exhibition Hall or attend one of our technical talks to explore how our quantum technologies are pushing the boundaries of what’s possible across HPC.
Throughout ISC, our team will present on the most important topics in HPC and quantum computing integration—from near-term hybrid use cases to hardware innovations and future roadmaps.
Multicore World Networking Event
H1 x CUDA-Q Demonstration
HPC Solutions Forum
Whether you're exploring hybrid solutions today or planning for large-scale quantum deployment tomorrow, ISC 2025 is the place to begin the conversation.
We look forward to seeing you in Hamburg!