

By Konstantinos Meichanetzidis
One of the greatest privileges of working directly with the world’s most powerful quantum computer at Quantinuum is building meaningful experiments that convert theory into practice. The privilege becomes even more compelling when considering that our current quantum processor – our H2 system – will soon be joined by Helios, a quantum computer potentially a stunning trillion times more powerful, due for launch in just a few months. The moment has arrived when we can lay out an experimentally supported timeline for applications that quantum computing professionals have anticipated for decades.
Quantinuum’s applied algorithms team has released an end-to-end implementation of a quantum algorithm to solve a central problem in knot theory. Along with an efficiently verifiable benchmark for quantum processors, it allows for concrete resource estimates for quantum advantage in the near term. The research team included Quantinuum researchers Enrico Rinaldi, Chris Self, Eli Chertkov, Matthew DeCross, David Hayes, Brian Neyenhuis, and Marcello Benedetti, as well as Tuomas Laakkonen of the Massachusetts Institute of Technology. In this article, Konstantinos Meichanetzidis, a team leader from Quantinuum’s AI group who led the project, writes about the problem being addressed and how the team, adopting an aggressively practical mindset, quantified the resources required for quantum advantage:
Knot theory is a branch of mathematics known as ‘low-dimensional topology’, with a rich history stemming from a wild idea proposed by Lord Kelvin, who conjectured that chemical elements are different knots formed by vortices in the aether. Of course, we know today that the aether theory was falsified by the Michelson-Morley experiment, but mathematicians have been classifying, tabulating, and studying knots ever since. Regarding applications, the pure mathematics of knots can find its way into cryptography, but knot theory is also intrinsically related to many aspects of the natural sciences. For example, it shows up naturally in certain spin models in statistical mechanics when one studies thermodynamic quantities, and the magnetohydrodynamic properties of knotted magnetic fields on the surface of the sun are an important indicator of solar activity. Remarkably, physical properties of knots are also important in understanding the stability of macromolecular structures. This is highlighted by the work of Cozzarelli and Sumners in the 1980s on the topology of DNA, particularly how it forms knots and supercoils. Their interdisciplinary research helped explain how enzymes untangle and manage DNA topology, which is crucial for replication and transcription, laying the foundation for using mathematical models to predict and manipulate DNA behavior, with broad implications in drug development and synthetic biology. Serendipitously, this work was carried out during the same decade in which Richard Feynman, David Deutsch, and Yuri Manin were forming the first ideas for a quantum computer.
Most importantly for our context, knot theory has fundamental connections to quantum computation, originally outlined by Witten’s work in topological quantum field theory, which concerns spacetimes without any notion of distance, only shape. In fact, this connection formed the very motivation for attempting to build topological quantum computers, in which anyons – exotic quasiparticles that live in two-dimensional materials – are braided to perform quantum gates. The relation between knot theory and quantum physics is one of the most beautiful and bizarre facts you have never heard of.
The fundamental problem in knot theory is distinguishing knots, or more generally, links. To this end, mathematicians have defined link invariants, which serve as ‘fingerprints’ of a link. As there are many equivalent representations of the same link, an invariant, by definition, is the same for all of them. If the invariant is different for two links then they are not equivalent. The specific invariant our team focused on is the Jones polynomial.
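For concreteness, the left-handed trefoil, the simplest nontrivial knot, has Jones polynomial V(t) = -t^(-4) + t^(-3) + t^(-1), whereas the unknot has V(t) = 1. Since the two values differ, no amount of deformation can turn a trefoil into a plain loop.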


The mind-blowing fact here is that any quantum computation corresponds to evaluating the Jones polynomial of some link, as shown by the works of Freedman, Larsen, Kitaev, Wang, Shor, Arad, and Aharonov. This reveals that this abstract mathematical problem is truly quantum native. In particular, the problem our team tackled was estimating the value of the Jones polynomial at the 5th root of unity. This is a well-studied case due to its relation to the celebrated Fibonacci anyons, whose braiding is capable of universal quantum computation.
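As a toy illustration (this is just arithmetic at the evaluation point, not the team’s quantum algorithm), one can plug the 5th root of unity into the trefoil’s Jones polynomial quoted above:

```python
import cmath

# The evaluation point: a primitive 5th root of unity, t = e^(2*pi*i/5),
# the case related to Fibonacci anyons.
t = cmath.exp(2j * cmath.pi / 5)

# Jones polynomial of the left-handed trefoil: V(t) = -t^(-4) + t^(-3) + t^(-1)
V = -t**-4 + t**-3 + t**-1
print(V)  # the complex number a quantum computer would estimate for this link
```

For a knot this small the value is trivial to compute classically; the quantum algorithm matters for large links, where the best known classical methods scale exponentially.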
Building and improving on the work of Shor, Aharonov, Landau, Jones, and Kauffman, our team developed an efficient quantum algorithm that works end-to-end. That is, given a link, it outputs a highly optimized quantum circuit that is readily executable on our processors and estimates the desired quantity. Furthermore, our team designed problem-tailored error detection and error mitigation strategies to achieve higher accuracy.

In addition to providing a full pipeline for solving this problem, a major aspect of this work was to use the fact that the Jones polynomial is an invariant to introduce a benchmark for noisy quantum computers. Most importantly, this benchmark is efficiently verifiable, a rare property since for most applications, exponentially costly classical computations are necessary for verification. Given a link whose Jones polynomial is known, the benchmark constructs a large set of topologically equivalent links of varying sizes. In turn, these result in a set of circuits of varying numbers of qubits and gates, all of which should return the same answer. Thus, one can characterize the effect of noise present in a given quantum computer by quantifying the deviation of its output from the known result.
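In pseudocode, the benchmark reduces to a simple loop. The helper run_on_qpu below is a hypothetical placeholder for the paper’s compile-and-execute pipeline, not an actual Quantinuum API:

```python
def jones_benchmark(equivalent_links, known_value, run_on_qpu):
    """Sketch of the benchmark: each entry of `equivalent_links` is a
    topologically equivalent presentation of the same link, which
    `run_on_qpu` compiles to a circuit (of growing width and depth) and
    runs, returning an estimate of the Jones polynomial. Noiselessly,
    every estimate equals `known_value`, so the deviations directly
    quantify the hardware's noise."""
    return [abs(run_on_qpu(link) - known_value) for link in equivalent_links]
```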
The benchmark introduced in this work allows one to identify the link sizes for which there is exponential quantum advantage in time to solution over state-of-the-art classical methods. These resource estimates indicate that our next processor, Helios, with 96 qubits and at least 99.95% two-qubit gate fidelity, comes extremely close to meeting these requirements. Furthermore, Quantinuum’s hardware roadmap includes even more powerful machines that will come online by the end of the decade. Notably, an advantage in energy consumption emerges for even smaller link sizes. Meanwhile, our teams aim to continue reducing errors through improvements in both hardware and software, thereby moving deeper into quantum advantage territory.

The importance of this work, indeed its uniqueness in the quantum computing sector, is its practical end-to-end approach. The advantage-hunting strategies introduced here are transferable to other “quantum-easy, classically-hard” problems. Our team’s efforts motivate shifting the focus toward specific problem instances rather than broad problem classes, promoting an engineering-oriented approach to identifying quantum advantage. This involves first carefully considering how quantum advantage should be defined and quantified, thereby setting a high standard for quantum advantage in scientific and mathematical domains, and instilling confidence in our customers and partners.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
By Dr. Noah Berthusen
The earliest works on quantum error correction showed that many noisy physical qubits can be combined into a complex entangled state called a "logical qubit" that can survive for arbitrarily long times. QEC researchers devote much effort to hunting for codes that function well as "quantum memories," as such codes are called. Many promising code families have been found, but this is only half of the story.
Being able to keep a qubit around for a long time is one thing, but to realize the theoretical advantages of quantum computing we need to run quantum circuits. And to make sure noise doesn't ruin our computation, these circuits need to be run on the logical qubits of our code. This is often much more challenging than performing gates on the physical qubits of our device, as these "logical gates" often require many physical operations in their implementation. What's more, it is often not immediately obvious which logical gates a code has, and so converting a physical circuit into a logical circuit can be rather difficult.
Some codes, like the famous surface code, are good quantum memories and also have easy logical gates. The drawback is that the ratio of logical qubits to physical qubits (the "encoding rate") is low, and so many physical qubits are required to implement large logical algorithms. High-rate codes that are good quantum memories have also been found, but computing on them is much more difficult. The holy grail of QEC, so to speak, would be a high-rate code that is a good quantum memory and also has easy logical gates. Here, we make progress on that front by developing a new code with those properties.
A recent work from Quantinuum QEC researchers introduced genon codes. The underlying construction method for these codes, called the "symplectic double cover," also provided a way to obtain logical gates that are well suited for Quantinuum's QCCD architecture. Namely, these "SWAP-transversal" gates are performed by applying single-qubit operations and relabeling the physical qubits of the device. Thanks to the all-to-all connectivity facilitated through qubit movement on the QCCD architecture, this relabeling can be done in software essentially for free. Combined with extremely high-fidelity single-qubit operations (error rate ~1.2 x 10^-5), the resulting logical gates are similarly high fidelity.
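As a toy sketch (not Quantinuum's actual compiler), the relabeling half of a SWAP-transversal gate is nothing more than a classically tracked permutation of which ion plays which logical role:

```python
def swap_transversal(labels, permutation):
    """Toy illustration: `labels[i]` records which physical ion currently
    plays role i. A SWAP-transversal gate permutes these assignments in
    software, so no physical two-qubit gate is applied at all."""
    return [labels[p] for p in permutation]

# Swapping the roles of ions 0 and 1 costs nothing physically:
print(swap_transversal(["ion0", "ion1", "ion2", "ion3"], [1, 0, 2, 3]))
```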
Given the promise of these codes, we take them a step further in our new paper. We combine the symplectic double codes with the [[4,2,2]] Iceberg code using a procedure called "code concatenation". A concatenated code is a bit like nesting dolls, with an outer code containing codes within it, and these inner codes potentially containing codes of their own. More technically, in a concatenated code the logical qubits of one code act as the physical qubits of another code.
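For reference, the [[4,2,2]] Iceberg code stores two logical qubits in four physical qubits and detects any single-qubit error; its stabilizers are simply X on all four qubits and Z on all four qubits. A minimal numerical check that these commute, as stabilizers must:

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Stabilizers of the [[4,2,2]] Iceberg code: XXXX and ZZZZ.
SX = reduce(np.kron, [X] * 4)
SZ = reduce(np.kron, [Z] * 4)

# X and Z anticommute on each qubit, and (-1)^4 = 1, so SX and SZ commute
# and can be measured jointly to detect single-qubit errors.
assert np.allclose(SX @ SZ, SZ @ SX)
```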
The new codes, which we call "concatenated symplectic double codes", were designed in such a way that they have many of these easily implementable SWAP-transversal gates. Central to their construction, we show how the concatenation method allows us to "upgrade" logical gates in terms of their ease of implementation; this procedure may provide insights for constructing other codes with convenient logical gates. Notably, the SWAP-transversal gate set on this code is so powerful that only two additional operations (logical T and S) are necessary for universal computation. Furthermore, these codes have many logical qubits, and we also present numerical evidence to suggest that they are good quantum memories.
Concatenated symplectic double codes have one of the easiest logical computation schemes, and we didn’t have to sacrifice rate to achieve it. Looking forward in our roadmap, we are targeting hundreds of logical qubits at a ~1 x 10^-8 logical error rate by 2029. These codes put us in a prime position to leverage the best characteristics of our hardware and create a device that can achieve real commercial advantage.
Every year, the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC) brings together the global supercomputing community to explore the technologies driving the future of computing.
Join Quantinuum at this year’s conference, taking place November 16th – 21st in St. Louis, Missouri, where we will showcase how our quantum hardware, software, and partnerships are helping define the next era of high-performance and quantum computing.
The Quantinuum team will be on-site at booth #4432 to showcase how we’re building the bridge between HPC and quantum.
From Monday through Wednesday, our quantum computing experts will host daily tutorials at our booth on Helios, our next-generation hardware platform; Nexus, our all-in-one quantum computing platform; and Hybrid Workflows, featuring the integration of NVIDIA CUDA-Q with Quantinuum Systems.
Register for a tutorial
Join our team as they share insights on the opportunities and challenges of quantum integration within the HPC ecosystem:
Panel Session: The Quantum Era of HPC: Roadmaps, Challenges and Opportunities in Navigating the Integration Frontier
November 19th | 10:30 am – 12:00 pm CST
During this panel session, Kentaro Yamamoto from Quantinuum will join experts from Lawrence Berkeley National Laboratory, IBM, QuEra, RIKEN, and the Pawsey Supercomputing Research Centre to explore how quantum and classical systems are being brought together to accelerate scientific discovery and industrial innovation.
BoF Session: Bridging the Gap: Making Quantum-Classical Hybridization Work in HPC
November 19th | 5:15 – 6:45 pm CST
Quantum-classical hybrid computing is moving from theory to reality, yet no clear roadmap exists for how best to integrate quantum processing units (QPUs) into established HPC environments. In this Birds of a Feather discussion, co-led by Quantinuum’s Grahame Vittorini and representatives from BCS, DOE, EPCC, Inria, ORNL, NVIDIA, and RIKEN, we hope to bring together a global community of HPC practitioners, system architects, quantum computing specialists, and workflow researchers, including participants in the Workflow Community Initiative, to assess the state of hybrid integration and identify practical steps toward scalable, impactful deployment.
Quantinuum’s real-world experiment, on the world’s most powerful quantum computer, is the largest of its kind, so large that no amount of classical computing could match it

In 1911, a student working under famed physicist Heike Kamerlingh Onnes made a discovery that would rewire our understanding of electricity. The student was studying the electrical resistance of wires, a seemingly simple question that held secrets destined to surprise the world.
Kamerlingh Onnes had recently succeeded in liquefying helium, a feat so impressive it earned him the Nobel Prize in Physics two years later. With this breakthrough, scientists could now immerse other materials in a cold bath of liquid helium, cooling them to unprecedented temperatures and observing their behavior.
Many theories existed about what would happen to a wire at such low temperatures. Lord Kelvin predicted that electrons would freeze in place, making the resistance infinite and stopping the conduction of electricity. Others expected resistance to decrease linearly with temperature—a hypothesis that led to thermometer designs still in use today.
When the student cooled a mercury wire to about 4.2 degrees above absolute zero, he found something remarkable: the electrical resistance suddenly vanished.
As a diligent researcher, Onnes knew he needed to validate these surprising findings, so he devised an ingenious experiment: he took a closed loop of wire, set a current running through it, and watched as it flowed endlessly without fading, a kind of perpetual motion that seemed to defy everything we know about physics. And so, superconductivity was born.
More than a century later, all known superconductors still require extreme conditions like brutal cold or high pressure. If we could instead design a material that superconducts at room temperature and under normal conditions, our world would be profoundly reshaped. “Room-temperature superconductivity”, as it is generally called, would enable a raft of technological breakthroughs, from affordable MRI machines to nearly lossless power grids.
Designing such a material means answering many open questions, and scientists are pursuing diverse strategies to find answers. One promising approach is light-induced superconductivity. In one astonishing study, researchers at the Max Planck Institute in Hamburg used light to coax a material that normally superconducts at roughly -180 °C into superconducting at room temperature, but only for a few picoseconds. This effect raised new questions: how does light achieve something that scientists have been grappling with for decades? What is the microscopic mechanism behind this phenomenon? Could understanding it unlock practical room-temperature superconductors?
Physics is a surprisingly profound field when you stop to think about it. At its core lies the idea that nature speaks the language of mathematics, and that by discovering the right equations, we can reveal her secrets. As bold as that sounds, history has proven it true time and again. Whenever we peek behind the veil, mathematics is there.
To understand a phenomenon like superconductivity, physicists first need a mathematical model: a set of equations that describe how it works. With the right model, they can predict and even design new superconductors that operate under more practical conditions. This is a key frontier in the search for room-temperature superconductors, one of science’s holy grails.
Since the discovery of superconductivity, a lot of work has gone into finding the right model, one that can act as a sort of ‘Rosetta stone’ for harnessing this phenomenon. One of the best bets for describing high-temperature superconductors like the one in the Hamburg study is the “non-equilibrium Fermi-Hubbard” model, which describes how electrons interact and move in a crystal.
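In standard notation, the Fermi-Hubbard Hamiltonian is

H = -t Σ_{<i,j>,σ} (c†_{iσ} c_{jσ} + h.c.) + U Σ_i n_{i↑} n_{i↓}

where the first term lets electrons of spin σ hop between neighboring sites i and j, and the second charges an energy penalty U whenever two electrons occupy the same site. In the non-equilibrium setting, a light pulse is typically modeled as a time-dependent phase on the hopping term (the Peierls substitution), which drives the system away from equilibrium.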
A surprising element of models that describe superconductivity is the prediction that electrons ‘pair up’ when the material becomes superconducting, dancing around in a waltz, two at a time. These pairs are referred to as “Cooper pairs” after the famous physicist Leon Cooper. Now, scientists studying superconductors look for “pairing correlations”, a key signature of superconductivity.
Even armed with the Fermi-Hubbard model, light-induced superconductivity has been very difficult to study. The world’s most powerful supercomputers can only handle very small versions of the model, limiting their utility. Even quantum platforms, like analog simulators, limit researchers to observing ‘average’ quantities, obscuring the microscopic details that are crucial for unravelling this mystery.
Light-induced superconductivity has proved challenging to study with quantum computers as well, as doing so requires low error rates, many qubits, and extreme flexibility to measure the fickle symptoms of superconductivity.
That was, until now: Quantinuum’s Helios is one of the first machines in the world able to handle the complexity of the non-equilibrium Fermi-Hubbard model at scales previously out of reach.
Before Helios, we were limited to small explorations of this model, stalling research on this critical frontier. Now, with Helios, we have a quantum computer uniquely suited for this problem. With a novel fermionic encoding and using up to 90 qubits (72 system qubits plus 18 ancillas), Helios can simulate the dynamics of a 6×6 lattice, a system so large that its full quantum state spans a 2^72-dimensional space.
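The counting is easy to check, assuming (as the encoding here does) roughly one system qubit per fermionic mode:

```python
# A 6x6 lattice has 36 sites, each hosting a spin-up and a spin-down mode.
modes = 6 * 6 * 2
print(modes)       # 72 fermionic modes -> 72 system qubits
print(2 ** modes)  # Hilbert-space dimension: 4722366482869645213696 (~4.7e21)
```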

Using Helios to study a system like this offers researchers a sort of “qubit-based laboratory.” Capable of handling complex quantum mechanical effects better than classical computers, Helios allows researchers to thoroughly explore phenomena like this without wasting expensive laboratory time and materials, or spending lots of money and energy running it on a supercomputer.
Our qubit-based laboratory is a dream come true for several reasons. First, it allows arbitrary state preparation – preparing states far from equilibrium, a challenging task for classical computers. Second, it allows for meaningfully long ‘dynamical simulation’ – seeing how the state evolves in time as entanglement spreads and complexity increases. This is notoriously difficult for classical computers, in part because distinctly quantum phenomena like entanglement are costly to represent classically. Finally, it allows for flexible measurements and experimental parameters – you can measure any observable, including the critical “off-diagonal” observables that carry the signature of superconductivity, and simulate any system, such as those driven by laser pulses or electric fields.
This last point is the most significant. While analog quantum simulators, like cold atom systems, can take snapshots of atom positions or measure densities, they struggle with off-diagonal observables—the very ones that signal the formation of Cooper pairs in superconductors.
In our work, we've simulated three different regimes of the Fermi-Hubbard model and successfully measured non-zero superconducting pairing correlations — a first for any quantum computing platform.
We began by preparing a low-energy state of the model at half-filling — a standard benchmark for testing quantum simulations. Then, using simulated laser pulses or electric fields, we perturbed the system and observed how it responded.
After these perturbations, we measured a notable increase in the so-called “eta” pairing correlations, a mathematical signature of superconducting behavior. These results demonstrate that our computers can help us understand light-induced superconductivity, including effects like those reported by the Max Planck researchers. However, unlike those physical experiments, Helios offers a new level of control and insight. By tuning every aspect of the simulation, from pulse shape, to field strength, to lattice geometry, researchers can explore scenarios that are completely inaccessible to real materials or analog simulators.
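For the curious, the eta-pairing operators (introduced by C. N. Yang for the Hubbard model) create a doubly occupied site with a staggered sign across the lattice, η†_j = (-1)^j c†_{j↑} c†_{j↓}, and the measured pairing correlations are expectation values like <η†_i η_j> between distinct sites i and j. These are precisely the off-diagonal observables, mentioned earlier, that analog simulators struggle to access.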
Why does any of this matter? If we could predict which materials will become superconducting — and at what temperature, field, or current — it would transform how we search for new superconductors. Instead of trial-and-error in the lab, scientists could design and test new materials digitally first, saving huge amounts of time and money.
In the long run, Helios and its successors will become essential tools for materials science — not just confirming theories but generating new ones. And perhaps, one day, they’ll help us crack the code behind room-temperature superconductors.
Until then, the quantum revolution continues, one entangled pair at a time.