Untangling the Mysteries of Knots with Quantum Computers

What Quantum Advantage actually looks like

March 25, 2025

One of the greatest privileges of working directly with the world’s most powerful quantum computer at Quantinuum is building meaningful experiments that convert theory into practice. The privilege becomes even more compelling considering that our current quantum processor – our H2 system – will soon be joined by Helios, a quantum computer potentially a stunning trillion times more powerful, due for launch in just a few months. The moment has now arrived when we can set out an experimentally supported timeline for applications that quantum computing professionals have anticipated for decades.

Quantinuum’s applied algorithms team has released an end-to-end implementation of a quantum algorithm to solve a central problem in knot theory. Along with an efficiently verifiable benchmark for quantum processors, it allows for concrete resource estimates for quantum advantage in the near term. The research team included Quantinuum researchers Enrico Rinaldi, Chris Self, Eli Chertkov, Matthew DeCross, David Hayes, Brian Neyenhuis, Marcello Benedetti, and Tuomas Laakkonen of the Massachusetts Institute of Technology. In this article, Konstantinos Meichanetzidis, a team leader from Quantinuum’s AI group who led the project, writes about the problem being addressed and how the team, adopting an aggressively practical mindset, quantified the resources required for quantum advantage:

Knot theory is a field of mathematics within ‘low-dimensional topology’, with a rich history stemming from a wild idea proposed by Lord Kelvin, who conjectured that chemical elements are different knots formed by vortices in the aether. Of course, we know today that the aether theory was falsified by the Michelson-Morley experiment, but mathematicians have been classifying, tabulating, and studying knots ever since. Regarding applications, the pure mathematics of knots finds its way into cryptography, but knot theory is also intrinsically related to many aspects of the natural sciences. For example, it shows up naturally in certain spin models in statistical mechanics when one studies thermodynamic quantities, and the magnetohydrodynamical properties of knotted magnetic fields on the surface of the sun are an important indicator of solar activity. Remarkably, physical properties of knots are also important for understanding the stability of macromolecular structures. This is highlighted by the work of Cozzarelli and Sumners in the 1980s on the topology of DNA, particularly how it forms knots and supercoils. Their interdisciplinary research helped explain how enzymes untangle and manage DNA topology, which is crucial for replication and transcription, laying the foundation for using mathematical models to predict and manipulate DNA behavior, with broad implications for drug development and synthetic biology. Serendipitously, this work was carried out in the same decade in which Richard Feynman, David Deutsch, and Yuri Manin formed the first ideas for a quantum computer.

Most importantly for our context, knot theory has fundamental connections to quantum computation, originally outlined in Witten’s work on topological quantum field theory, which concerns spacetimes with no notion of distance, only shape. In fact, this connection formed the very motivation for attempting to build topological quantum computers, in which anyons – exotic quasiparticles that live in two-dimensional materials – are braided to perform quantum gates. The relation between knot theory and quantum physics is one of the most beautiful and bizarre facts you have never heard of.

The fundamental problem in knot theory is distinguishing knots, or more generally, links. To this end, mathematicians have defined link invariants, which serve as ‘fingerprints’ of a link. As there are many equivalent representations of the same link, an invariant, by definition, is the same for all of them. If the invariant differs for two links, then they are not equivalent; the converse does not hold in general, as inequivalent links can share the same invariant. The specific invariant our team focused on is the Jones polynomial.

Figure: Four equivalent representations of the trefoil knot, the simplest non-trivial knot. They all have the same Jones polynomial, as it is an invariant.
Figure: These knots have different Jones polynomials, so they are not equivalent.

The mind-blowing fact here is that any quantum computation corresponds to evaluating the Jones polynomial of some link, as shown by the works of Freedman, Larsen, Kitaev, Wang, Shor, Arad, and Aharonov. This reveals that this abstract mathematical problem is truly quantum native. In particular, the problem our team tackled was estimating the value of the Jones polynomial at the 5th root of unity. This is a well-studied case due to its relation to the famous Fibonacci anyons, whose braiding is capable of universal quantum computation.
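
To make the object concrete, here is a minimal classical sketch (not the team’s quantum algorithm): it computes the Kauffman bracket of the trefoil by brute force over all 2^n smoothings of its n crossings, then evaluates the Jones polynomial at the 5th root of unity. The planar-diagram (PD) code, smoothing convention, and writhe below are standard textbook choices (chirality depends on convention); the point is that naive classical evaluation sums over exponentially many smoothing states, which is the blow-up a quantum algorithm aims to beat.

```python
# Minimal classical sketch: the Kauffman bracket of the trefoil by brute force
# over all 2^n smoothings, then the Jones polynomial at t = exp(2*pi*i/5).
from itertools import product
from collections import defaultdict
from math import comb
import cmath

# Planar-diagram (PD) code of a trefoil: each crossing lists four edge labels.
TREFOIL = [(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]
WRITHE = -3  # writhe of this diagram under the orientation convention used here

def kauffman_bracket(pd):
    """Return the bracket as {power_of_A: coefficient} (chirality is convention-dependent)."""
    poly = defaultdict(int)
    edges = {e for crossing in pd for e in crossing}
    for state in product((0, 1), repeat=len(pd)):
        parent = {e: e for e in edges}  # union-find to count loops after smoothing
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (a, b, c, d), s in zip(pd, state):
            pairs = [(a, b), (c, d)] if s == 0 else [(a, d), (b, c)]
            for x, y in pairs:  # smooth the crossing by joining its strands
                parent[find(x)] = find(y)
        loops = len({find(e) for e in edges})
        # Each state contributes A^(n0 - n1) * d^(loops - 1), with d = -A^2 - A^-2.
        a_exp = len(pd) - 2 * sum(state)
        k = loops - 1
        for j in range(k + 1):  # expand d^k binomially
            poly[a_exp + 2 * j - 2 * (k - j)] += (-1) ** k * comb(k, j)
    return {e: c for e, c in poly.items() if c}

bracket = kauffman_bracket(TREFOIL)  # -> {7: 1, 3: -1, -5: -1}, i.e. A^7 - A^3 - A^-5
# Jones polynomial: V(t) = (-A^3)^(-writhe) * <L>, with t = A^(-4).
t = cmath.exp(2j * cmath.pi / 5)     # the 5th root of unity
A = t ** (-0.25)
V = (-A ** 3) ** (-WRITHE) * sum(c * A ** e for e, c in bracket.items())
print("Kauffman bracket:", bracket)
print("V(exp(2*pi*i/5)) ≈", V)
```

For this diagram the bracket recovers the familiar three-term trefoil polynomial, and the final value is what a quantum circuit would estimate statistically from measurement outcomes rather than compute exactly.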

Building and improving on the work of Shor, Aharonov, Landau, Jones, and Kauffman, our team developed an efficient quantum algorithm that works end-to-end. That is, given a link, it outputs a highly optimized quantum circuit that is readily executable on our processors and estimates the desired quantity. Furthermore, our team designed problem-tailored error-detection and error-mitigation strategies to achieve higher accuracy.

Figure: Demonstration of the quantum algorithm on the H2 quantum computer, estimating the value of the Jones polynomial of a link with ~100 crossings. The raw signal (orange) can be amplified (green) with error detection, and corrected via a problem-tailored error-mitigation method (purple), bringing the experimental estimate closer to the actual value (blue).

In addition to providing a full pipeline for solving this problem, a major aspect of this work was to use the fact that the Jones polynomial is an invariant to introduce a benchmark for noisy quantum computers. Most importantly, this benchmark is efficiently verifiable, a rare property since for most applications, exponentially costly classical computations are necessary for verification. Given a link whose Jones polynomial is known, the benchmark constructs a large set of topologically equivalent links of varying sizes. In turn, these result in a set of circuits of varying numbers of qubits and gates, all of which should return the same answer. Thus, one can characterize the effect of noise present in a given quantum computer by quantifying the deviation of its output from the known result.
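
To illustrate the scoring logic, here is a toy sketch. All numbers are hypothetical placeholders (the known value, gate counts, fidelity, and noise model are our own assumptions, not the paper’s data); the point is that every circuit in the family targets the same known answer, so the deviations directly expose the device’s noise as a function of circuit size.

```python
# Toy sketch of the benchmark's scoring logic (hypothetical numbers): circuits
# built from topologically equivalent links all target the same known Jones
# value, so deviation versus circuit size is a noise fingerprint of the device.
import numpy as np

KNOWN_VALUE = -0.809                    # placeholder for the known invariant value
rng = np.random.default_rng(0)

gate_counts = np.array([500, 1000, 2000, 4000, 8000])  # equivalent-link circuits
p2q = 0.999                             # assumed two-qubit gate fidelity
signal = p2q ** gate_counts             # simple depolarizing-style signal decay
estimates = KNOWN_VALUE * signal + rng.normal(0.0, 0.01, size=signal.shape)

for gates, est in zip(gate_counts, estimates):
    print(f"{gates:>5} two-qubit gates: |estimate - known| = {abs(est - KNOWN_VALUE):.3f}")
```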

The benchmark introduced in this work allows one to identify the link sizes at which there is an exponential quantum advantage, in terms of time to solution, over state-of-the-art classical methods. These resource estimates indicate that our next processor, Helios, with 96 qubits and at least 99.95% two-qubit gate fidelity, is extremely close to meeting these requirements. Furthermore, Quantinuum’s hardware roadmap includes even more powerful machines that will come online by the end of the decade. Notably, an advantage in energy consumption emerges at even smaller link sizes. Meanwhile, our teams aim to continue reducing errors through improvements in both hardware and software, thereby moving deeper into quantum advantage territory.

Figure: Rigorous resource estimation of our quantum algorithm pinpoints the exponential quantum advantage, quantified in terms of time to solution, namely the time necessary for the classical state of the art to reach the same error as that achieved by the quantum computer. The advantage crossover happens at large link sizes, requiring circuits with ~85 qubits and ~8.5k two-qubit gates, assuming 99.99% two-qubit gate fidelity and 30 ms per circuit layer. The classical algorithms are assumed to run on the Frontier supercomputer.
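
As a back-of-the-envelope sanity check on these figures: the gate count, fidelity, and layer time below come straight from the caption above, while the depth estimate and the 1/(F²ε²) shot-count model are our own simplifying assumptions, not the paper’s resource model.

```python
# Back-of-the-envelope check of the quoted crossover point. Gate count,
# fidelity, and layer time are from the text; the depth estimate and the
# 1/(F^2 * eps^2) shot-count model are our own simplifying assumptions.
import math

gates = 8500                 # two-qubit gates at the crossover (from the text)
fid_2q = 0.9999              # assumed two-qubit gate fidelity (from the text)
layer_time = 0.030           # seconds per circuit layer (from the text)
layers = gates / (85 // 2)   # rough depth if ~42 two-qubit gates fire per layer

F = fid_2q ** gates          # whole-circuit fidelity under independent gate errors
print(f"whole-circuit fidelity ~ {F:.2f}")          # ~0.43

eps = 0.05                   # target additive precision (assumption)
shots = math.ceil(1 / (F**2 * eps**2))
print(f"shots for eps={eps}: {shots}")
print(f"quantum wall-clock ~ {shots * layers * layer_time / 3600:.1f} hours")
```

Under these assumptions the quantum run completes in hours, which is the kind of wall-clock figure the classical state of the art must be compared against as link size grows.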

The importance of this work, indeed its uniqueness in the quantum computing sector, is its practical end-to-end approach. The advantage-hunting strategies introduced here are transferable to other “quantum-easy, classically-hard” problems. Our team’s efforts motivate shifting the focus toward specific problem instances rather than broad problem classes, promoting an engineering-oriented approach to identifying quantum advantage. This involves first carefully considering how quantum advantage should be defined and quantified, thereby setting a high standard for quantum advantage in scientific and mathematical domains and instilling confidence in our customers and partners.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

June 10, 2025
Our Hardware is Now Running Quantum Transformers!

If we are to create ‘next-gen’ AI that takes full advantage of the power of quantum computers, we need to start with quantum-native transformers. Today we announce that Quantinuum continues to lead by demonstrating concrete progress — advancing from theoretical models to real quantum deployment.

The future of AI won't be built on yesterday’s tech. If we're serious about creating next-generation AI that unlocks the full promise of quantum computing, then we must build quantum-native models—designed for quantum, from the ground up.

Around this time last year, we introduced Quixer, a state-of-the-art quantum-native transformer. Today, we’re thrilled to announce a major milestone: one year on, Quixer is now running natively on quantum hardware.

Why this matters: Quantum AI, born native

This marks a turning point for the industry: realizing quantum-native AI opens a world of possibilities.

Classical transformers revolutionized AI. They power everything from ChatGPT to real-time translation, computer vision, drug discovery, and algorithmic trading. Now, Quixer sets the stage for a similar leap — but for quantum-native computation. Because quantum computers differ fundamentally from classical computers, we expect a whole host of valuable new applications to emerge.

Achieving that future requires models that are efficient, scalable, and actually run on today’s quantum hardware.

That’s what we’ve built.

What makes Quixer different?

Until Quixer, quantum transformers were the result of a brute-force “copy-paste” approach: taking the math from a classical model and putting it onto a quantum circuit. However, this approach does not account for the considerable differences between quantum and classical architectures, leading to substantial resource requirements.

Quixer is different: it’s not a translation – it's an innovation.

With Quixer, our team introduced an explicitly quantum transformer, built from the ground up using quantum algorithmic primitives. Because Quixer is tailored for quantum circuits, it's more resource efficient than most competing approaches.

As quantum computing advances toward fault tolerance, Quixer is built to scale with it.

What’s next for Quixer?

We’ve already deployed Quixer on real-world data: genomic sequence analysis, a high-impact classification task in biotech. We're happy to report that its performance is already approaching that of classical models, even in this first implementation.

This is just the beginning.

Looking ahead, we’ll explore using Quixer anywhere classical transformers have proven useful: language modeling, image classification, quantum chemistry, and beyond. More excitingly, we expect quantum-specific use cases to emerge that are impossible on classical hardware.

This milestone isn’t just about one model. It’s a signal that the quantum AI era has begun, and that Quantinuum is leading the charge with real results, not empty hype.

Stay tuned. The revolution is only getting started.

June 9, 2025
Join us at ISC25

Our team is participating in ISC High Performance 2025 (ISC 2025) from June 10–13 in Hamburg, Germany!

As quantum computing accelerates, so does the urgency to integrate its capabilities into today’s high-performance computing (HPC) and AI environments. At ISC 2025, meet the Quantinuum team to learn how the highest performing quantum systems on the market, combined with advanced software and powerful collaborations, are helping organizations take the next step in their compute strategy.

Quantinuum is leading the industry across every major vector: performance, hybrid integration, scientific innovation, global collaboration, and ease of access.

  • Our industry-leading quantum computer holds the record for performance, with a Quantum Volume of 2²³ = 8,388,608 and the highest fidelity of any commercially available QPU, delivered to our users every time they access our systems.
  • Our systems have been validated by a #1 ranking against competitors in a recent benchmarking study by Jülich Research Centre.
  • We’ve laid out a clear roadmap to reach universal, fully fault-tolerant quantum computing by the end of the decade and will launch our next-generation system, Helios, later this year.
  • We are advancing real-world hybrid compute with partners such as RIKEN, NVIDIA, SoftBank, and the STFC Hartree Centre, and we are pioneering applications such as our own GenQAI framework.
Exhibit Hall

From June 10–13, in Hamburg, Germany, visit us at Booth B40 in the Exhibition Hall or attend one of our technical talks to explore how our quantum technologies are pushing the boundaries of what’s possible across HPC.

Presentations & Demos

Throughout ISC, our team will present on the most important topics in HPC and quantum computing integration—from near-term hybrid use cases to hardware innovations and future roadmaps.

Multicore World Networking Event

  • Monday, June 9 | 7:00 – 9:00 PM at Hofbräu Wirtshaus Esplanade
    In partnership with Multicore World, join us for a Quantinuum-sponsored Happy Hour to explore the present and future of quantum computing with Quantinuum CCO, Dr. Nash Palaniswamy, and network with our team.
    Register here

H1 x CUDA-Q Demonstration

  • All Week at Booth B40
    We’re showcasing a live demonstration of NVIDIA’s CUDA-Q platform running on Quantinuum’s industry-leading quantum hardware. This new integration paves the way for hybrid compute solutions in optimization, AI, and chemistry.
    Register for a demo

HPC Solutions Forum

  • Wednesday, June 11 | 2:20 – 2:40 PM
    “Enabling Scientific Discovery with Generative Quantum AI” – presented by Maud Einhorn, Technical Account Executive at Quantinuum. Discover how hybrid quantum-classical workflows are powering novel use cases in scientific discovery.
See You There!

Whether you're exploring hybrid solutions today or planning for large-scale quantum deployment tomorrow, ISC 2025 is the place to begin the conversation.

We look forward to seeing you in Hamburg!

May 27, 2025
Teleporting to new heights

Quantinuum has once again raised the bar—setting a record in teleportation, and advancing our leadership in the race toward universal fault-tolerant quantum computing.

Last year, we published a paper in Science demonstrating the first-ever fault-tolerant teleportation of a logical qubit. At the time, we outlined how crucial teleportation is to realizing large-scale fault-tolerant quantum computers. Given the high degree of system performance and the capabilities required to run the protocol (multiple qubits, high-fidelity state preparation, entangling operations, mid-circuit measurement, and so on), teleportation is recognized as an excellent measure of system maturity.

Today we’re building on last year’s breakthrough, having recently achieved a record logical teleportation fidelity of 99.82% – up from 97.5% in last year’s result. What’s more, our logical qubit teleportation fidelity now exceeds our physical qubit teleportation fidelity, passing the break-even point that establishes our H2 system as the gold standard for complex quantum operations.

Figure 1: Fidelity of two-bit state teleportation for physical-qubit and logical-qubit experiments using the d=3 color code (Steane code). The same QASM programs that were run in March 2024 on Quantinuum’s H2-1 device were rerun on the same device between March and April 2025. Thanks to the improvements made to H2-1 from 2024 to 2025, physical error rates have been reduced, leading to increased fidelity for both the physical- and logical-level teleportation experiments. The results imply a logical error rate 2.3 times smaller than the physical error rate, with the two statistically well separated, indicating that the logical error rate is below the break-even point for teleportation.
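
The quoted numbers pin down the implied physical figures; here is a trivial sketch of that arithmetic (illustrative only, using just the values quoted above):

```python
# Quick arithmetic implied by the quoted numbers (illustrative only).
logical_fidelity = 0.9982                 # record logical teleportation fidelity
logical_error = 1 - logical_fidelity      # 0.18%
physical_error = 2.3 * logical_error      # ratio quoted in the caption
print(f"implied physical error    ~ {physical_error:.2%}")      # ~0.41%
print(f"implied physical fidelity ~ {1 - physical_error:.2%}")  # ~99.59%
```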

This progress reflects the strength and flexibility of our Quantum Charge Coupled Device (QCCD) architecture. The native high fidelity of our QCCD architecture enables us to perform highly complex demonstrations like this, which nobody else has yet matched. Further, our ability to perform conditional logic and real-time decoding was crucial for implementing the Steane error-correction code used in this work, and our all-to-all connectivity was essential for performing the high-fidelity transversal gates that drove the protocol.

Teleportation schemes like this allow us to “trade space for time,” meaning that we can do quantum error correction more quickly, reducing our time to solution. Additionally, teleportation enables long-range communication during logical computation, which translates to higher connectivity in logical algorithms, improving computational power.

This demonstration underscores our ongoing commitment to reducing logical error rates, which is critical for realizing the promise of quantum computing. Quantinuum continues to lead in quantum hardware performance, algorithms, and error correction—and we’ll extend our leadership come the launch of our next generation system, Helios, in just a matter of months.
