One of the greatest privileges of working directly with the world’s most powerful quantum computer at Quantinuum is building meaningful experiments that convert theory into practice. The privilege becomes even more compelling considering that our current quantum processor, the H2 system, will soon be joined by Helios, a quantum computer potentially a stunning trillion times more powerful, due for launch in just a few months. The moment has arrived when we can build an experimentally supported timeline for applications that quantum computing professionals have anticipated for decades.
Quantinuum’s applied algorithms team has released an end-to-end implementation of a quantum algorithm to solve a central problem in knot theory. Along with an efficiently verifiable benchmark for quantum processors, it allows for concrete resource estimates for quantum advantage in the near term. The research team included Quantinuum researchers Enrico Rinaldi, Chris Self, Eli Chertkov, Matthew DeCross, David Hayes, Brian Neyenhuis, and Marcello Benedetti, as well as Tuomas Laakkonen of the Massachusetts Institute of Technology. In this article, Konstantinos Meichanetzidis, a team leader from Quantinuum’s AI group who led the project, writes about the problem being addressed and how the team, adopting an aggressively practical mindset, quantified the resources required for quantum advantage:
Knot theory is part of a field of mathematics called ‘low-dimensional topology’, with a rich history stemming from a wild idea of Lord Kelvin, who conjectured that the chemical elements are different knots formed by vortices in the aether. Of course, we know today that the aether theory was falsified by the Michelson-Morley experiment, but mathematicians have been classifying, tabulating, and studying knots ever since. Regarding applications, the pure mathematics of knots finds its way into cryptography, but knot theory is also intrinsically related to many aspects of the natural sciences. For example, it shows up naturally in certain spin models in statistical mechanics when one studies thermodynamic quantities, and the magnetohydrodynamic properties of knotted magnetic fields on the surface of the Sun are an important indicator of solar activity, to name a few examples. Remarkably, physical properties of knots are important in understanding the stability of macromolecular structures. This is highlighted by the work of Cozzarelli and Sumners in the 1980s on the topology of DNA, particularly how it forms knots and supercoils. Their interdisciplinary research helped explain how enzymes untangle and manage DNA topology, which is crucial for replication and transcription, laying the foundation for using mathematical models to predict and manipulate DNA behavior, with broad implications in drug development and synthetic biology. Serendipitously, this work was carried out during the same decade in which Richard Feynman, David Deutsch, and Yuri Manin formed the first ideas for a quantum computer.
Most importantly for our context, knot theory has fundamental connections to quantum computation, originally outlined by Witten’s work in topological quantum field theory, concerning spacetimes without any notion of distance but only shape. In fact, this connection formed the very motivation for attempting to build topological quantum computers, where anyons – exotic quasiparticles that live in two-dimensional materials – are braided to perform quantum gates. The relation between knot theory and quantum physics is one of the most beautiful and bizarre facts you have never heard of.
The fundamental problem in knot theory is distinguishing knots, or more generally, links. To this end, mathematicians have defined link invariants, which serve as ‘fingerprints’ of a link. As there are many equivalent representations of the same link, an invariant, by definition, is the same for all of them. If the invariant is different for two links then they are not equivalent. The specific invariant our team focused on is the Jones polynomial.
The mind-blowing fact here is that any quantum computation corresponds to evaluating the Jones polynomial of some link, as shown by the works of Freedman, Larsen, Kitaev, Wang, Shor, Arad, and Aharonov. It reveals that this abstract mathematical problem is truly quantum native. In particular, the problem our team tackled was estimating the value of the Jones polynomial at the 5th root of unity. This is a well-studied case due to its relation to the infamous Fibonacci anyons, whose braiding is capable of universal quantum computation.
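To make this concrete, here is a small illustration using standard textbook values (not the team’s algorithm): the Jones polynomial of the trefoil, in one chirality convention, is V(t) = -t^4 + t^3 + t, while the unknot has V(t) = 1 everywhere. Evaluating the trefoil’s polynomial at the 5th root of unity therefore yields a value that certifies it is not the unknot:

```python
import cmath

def jones_trefoil(t: complex) -> complex:
    """Jones polynomial of the trefoil (one chirality): V(t) = -t^4 + t^3 + t."""
    return -t**4 + t**3 + t

# Primitive 5th root of unity, the evaluation point studied in this work.
t5 = cmath.exp(2j * cmath.pi / 5)
value = jones_trefoil(t5)

# The unknot has V(t) = 1 for all t, so a value far from 1 certifies
# that the trefoil is not equivalent to the unknot.
```

Of course, evaluating a known closed-form polynomial is trivial; the quantum algorithm’s job is to estimate this value for links whose polynomial is exponentially costly to compute classically.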
Building and improving on the work of Shor, Aharonov, Landau, Jones, and Kauffman, our team developed an efficient quantum algorithm that works end-to-end. That is, given a link, it outputs a highly optimized quantum circuit that is readily executable on our processors and estimates the desired quantity. Furthermore, our team designed problem-tailored error detection and error mitigation strategies to achieve a higher accuracy.
In addition to providing a full pipeline for solving this problem, a major aspect of this work was to use the fact that the Jones polynomial is an invariant to introduce a benchmark for noisy quantum computers. Most importantly, this benchmark is efficiently verifiable, a rare property since for most applications, exponentially costly classical computations are necessary for verification. Given a link whose Jones polynomial is known, the benchmark constructs a large set of topologically equivalent links of varying sizes. In turn, these result in a set of circuits of varying numbers of qubits and gates, all of which should return the same answer. Thus, one can characterize the effect of noise present in a given quantum computer by quantifying the deviation of its output from the known result.
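The benchmark’s logic can be sketched in a few lines; this is an illustrative snippet with hypothetical names, not the paper’s code. A device is scored by the deviation of its estimates from the known invariant, grouped by circuit size:

```python
from statistics import mean

def benchmark_score(known: complex,
                    estimates: dict[int, list[complex]]) -> dict[int, float]:
    """Mean deviation from the known Jones value, keyed by circuit size.

    `estimates` maps a circuit size (number of qubits) to the values a
    device returned for topologically equivalent links of that size.
    """
    return {n: mean(abs(e - known) for e in vals)
            for n, vals in estimates.items()}

# A noiseless device would score 0.0 at every size; scores that grow with
# circuit size characterize how noise accumulates on the hardware.
scores = benchmark_score(1 + 0j, {4: [1 + 0j, 1.1 + 0j], 8: [0.5 + 0j]})
```

Because all the circuits should return the same known value, no classical recomputation is needed to verify the result, which is what makes the benchmark efficiently verifiable.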
The benchmark introduced in this work allows one to identify the link sizes for which there is exponential quantum advantage, in terms of time to solution, over the state-of-the-art classical methods. These resource estimates indicate our next processor, Helios, with 96 qubits and at least 99.95% two-qubit gate fidelity, is extremely close to meeting these requirements. Furthermore, Quantinuum’s hardware roadmap includes even more powerful machines that will come online by the end of the decade. Notably, an advantage in energy consumption emerges for even smaller link sizes. Meanwhile, our teams aim to continue reducing errors through improvements in both hardware and software, thereby moving deeper into quantum advantage territory.
The importance of this work, indeed the uniqueness of this work in the quantum computing sector, is its practical end-to-end approach. The advantage-hunting strategies introduced are transferable to other “quantum-easy classically-hard” problems. Our team’s efforts motivate shifting the focus toward specific problem instances rather than broad problem classes, promoting an engineering-oriented approach to identifying quantum advantage. This involves first carefully considering how quantum advantage should be defined and quantified, thereby setting a high standard for quantum advantage in scientific and mathematical domains, and making sure we instill confidence in our customers and partners.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Today, the Quantinuum software team is excited to announce Guppy, a new quantum programming language for the next generation of quantum computing—designed to work with upcoming hardware like Helios, our most powerful system yet. You can download Guppy today and start experimenting with it using our custom-built Selene emulator. Both Guppy and Selene are open source and are capable of handling everything from traditional circuits to dynamic, measurement-dependent programs such as quantum error correction protocols.
Guppy is a quantum-first programming language designed from the ground up to meet the needs of state-of-the-art quantum computers. Embedded in Python, it uses syntax that closely resembles Python, making it instantly familiar to developers. Guppy also provides powerful abstractions and compile-time safety that go far beyond traditional circuit builders like pytket or Qiskit.
Guppy is designed to be readable and expressive, while enabling precise, low-level quantum programming.
This example implements the gate V3 = (I + 2iZ)/√5 using a probabilistic repeat-until-success scheme [1].
If both X-basis measurements on the top two qubits return 0, the V3 gate is successfully applied to the input state |ψ⟩; otherwise, the identity is applied. Since this succeeds with a probability of 5/8, we can repeat the procedure until success.
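As a quick sanity check (not part of the original text), V3 is indeed unitary, using the fact that $Z^2 = I$:

```latex
V_3^\dagger V_3
  = \frac{(I - 2iZ)(I + 2iZ)}{5}
  = \frac{I + 2iZ - 2iZ + 4Z^2}{5}
  = \frac{5I}{5}
  = I .
```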
Let’s implement this in Guppy.
First, we’ll define a helper function to prepare a scratch qubit in the |+⟩ state:
@guppy
def plus_q() -> qubit:
    """Allocate and prepare a qubit in the |+> state."""
    q = qubit()
    h(q)
    return q
Next, a function for performing X-basis measurement:
@guppy
def x_measure(q: qubit @ owned) -> bool:
    """Measure the qubit in the X basis and return the result."""
    h(q)
    return measure(q)
The @owned annotation tells the Guppy compiler that we’re taking ownership of the qubit, not just borrowing it—a concept familiar to Rust programmers. This is required because measurement deallocates the qubit, and the compiler uses this information to track lifetimes and prevent memory leaks.
The @guppy decorator marks functions as Guppy source code. Outside these functions, we can use regular Python, like setting a maximum attempt limit:
MAX_ATTEMPTS = 1000
With these pieces in place, we can now implement the full protocol:
@guppy
def v3_rus(q: qubit) -> int:
    attempt = 0
    while attempt < comptime(MAX_ATTEMPTS):
        attempt += 1
        a, b = plus_q(), plus_q()
        toffoli(a, b, q)
        s(q)
        toffoli(a, b, q)
        a_x, b_x = x_measure(a), x_measure(b)
        if not (a_x or b_x):
            break
        z(q)
    return attempt
What’s happening here? Each iteration prepares two scratch qubits in the |+⟩ state, applies a Toffoli, an S gate on the target, and a second Toffoli, then measures both scratch qubits in the X basis. If both results are 0, V3 has been applied to the target and we exit the loop; otherwise the z(q) correction restores the input state and we try again, up to MAX_ATTEMPTS times.
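Since each round succeeds independently with probability 5/8, the number of attempts follows a geometric distribution with mean 8/5 = 1.6, comfortably below MAX_ATTEMPTS. A purely classical sketch of the attempt statistics (a hypothetical helper, not Guppy code) makes this concrete:

```python
import random

def simulate_attempts(p_success: float = 5 / 8, max_attempts: int = 1000) -> int:
    """Classical stand-in for the repeat-until-success loop: rounds until success."""
    for attempt in range(1, max_attempts + 1):
        if random.random() < p_success:
            return attempt
    return max_attempts

random.seed(0)
trials = [simulate_attempts() for _ in range(10_000)]
mean_attempts = sum(trials) / len(trials)  # should be close to 8/5 = 1.6
```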
There's a lot more to Guppy; explore more in the Guppy documentation.
Helios represents a major leap forward for Quantinuum hardware—with more qubits, lower error rates, and advanced runtime features that require a new class of programming tools. Guppy provides the expressive power needed to fully harness Helios's capabilities—features that traditional circuit-building tools simply can't support.
See our latest roadmap update for more on Helios and what's coming.
Quantum hardware access is limited—but development shouldn't be. Selene is our new open-source emulator, designed to run compiled Guppy programs accurately—including support for noise modeling. Unlike generic simulators, Selene models advanced runtime behavior unique to Helios, such as measurement-dependent control flow and hybrid quantum-classical logic.
Selene supports multiple simulation backends, including state-vector and tensor-network methods.
Whether you're prototyping new algorithms or testing low-level error correction, Selene offers a realistic, flexible environment to build and iterate.
Guppy is available now on GitHub and PyPI under the Apache 2 license. Try it out with Selene, read the docs, and start building for the future of quantum computing today.
👉 Getting started with Guppy and Selene
1. Paetznick, A., & Svore, K. M. (2014). Repeat-Until-Success: Non-deterministic decomposition of single-qubit unitaries. arXiv:1311.1074.
Our next-generation quantum computer, Helios, will come online this year as more than a new chip. It will arrive as a full-stack platform that sets a new standard for the industry.
With our current and previous generation systems, H2 and H1, we have set industry records for the highest fidelities, pioneered the teleportation of logical qubits, and introduced the world’s first commercial application for quantum computers. Much of this success stems from the deep integration between our software and hardware.
Today, we are excited to share the details of our new software stack. Its features and benefits, outlined below, enable a lower barrier to entry, faster time-to-solution, industry-standard access, and the best possible user experience on Helios.
Most importantly, this stack is designed with the future in mind as Quantinuum advances toward universal, fully fault-tolerant quantum computing.
Register for our September 18th webinar on our new software stack
Our Current Generation Software Stack
Currently, the solutions our customers explore on our quantum hardware, which span cybersecurity, quantum chemistry, and quantum AI, plus third-party programs, are all powered by two middleware technologies: TKET, our compiler and optimization toolkit, and Nexus, our all-in-one quantum computing platform.
Our Next Generation Software Stack
The launch of Helios will come with an upgraded software stack with new features. We’re introducing two key additions to the stack: Guppy, our new quantum programming language, and Selene, our new open-source emulator.
Moving forward, users will leverage Guppy to run software applications on Helios and our future systems. TKET will be used solely as a compiler toolchain and for the optimization of Guppy programs.
Nexus, which remains the default pathway to access our hardware and third-party hardware, has been upgraded to support Guppy and provide access to Selene. Nexus also supports Quantum Intermediate Representation (QIR), an industry standard, which enables developers to program with languages like NVIDIA CUDA-Q, ensuring our stack stays accessible to the whole ecosystem.
With this new stack running on our next generation Helios system, several benefits will be delivered to the end user, including, but not limited to, improved time-to-solution and reduced memory error for programs critical to quantum error correction and utility-scale algorithms.
Below, we dive deeper into these upgrades and what they mean for our customers.
Designed for the Next Era of Quantum Computing
Guppy is a new programming language hosted in Python, providing developers with a familiar, accessible entry point into the next era of quantum computing.
As Quantinuum leads the transition from the noisy intermediate scale quantum (NISQ) era to fault-tolerant quantum computing, Guppy represents a fundamental departure from legacy circuit-building tools. Instead of forcing developers to construct programs gate-by-gate, a tedious and error-prone process, Guppy treats quantum programs as structured, dynamic software.
With native support for real-time feedback and common programming constructs like ‘if’ statements and ‘for’ loops, Guppy enables developers to write complex, readable programs that adapt as the quantum system evolves. This approach unlocks unprecedented power and clarity, far surpassing traditional tools.
Designed with fault-tolerance in mind, Guppy also optimizes qubit resource management automatically, improving efficiency and reducing developer overhead.
All Guppy programs can be seamlessly submitted and managed through Nexus, our all-in-one quantum computing platform.
Find out more at guppylang.org
The Most Flexible Approach to Quantum Error Correction
When it comes to quantum error correction (QEC), flexibility is everything. That is why we designed Guppy to reduce the barriers to accessing the features necessary for QEC.
Unlike platforms locked into rigid, hardware-specific codes, Quantinuum’s QCCD architecture gives developers the freedom to implement any QEC code. In a rapidly evolving field, this adaptability is critical: the ability to test and deploy the latest techniques can mean the difference between achieving quantum advantage and falling behind.
With Guppy, developers can implement advanced protocols such as magic state distillation and injection, quantum teleportation, and other measurement-based routines, all executed dynamically through our real-time control system. This creates an environment where researchers can push the limits of fault-tolerance now—not years from now.
In addition, users can employ NVIDIA’s CUDA-QX for out-of-the-box QEC, without needing to worry about writing their own decoders, simplifying the development of novel QEC codes.
By enabling a modular, programmable approach to QEC, our stack accelerates the path to fault-tolerance and positions us to scale quickly as more efficient codes emerge from the research frontier.
Real-Time Control for True Quantum Computing
Integrated seamlessly with Guppy is a next-generation control system powered by a new real-time engine, a key breakthrough for large-scale quantum computing.
This control layer makes our software stack the first commercial system to deliver full measurement-dependent control with unbounded sequence length. In practical terms, that means operations can now be guided dynamically by quantum measurements as they occur, a critical step toward truly adaptive, fault-tolerant algorithms.
At the hardware level, features like real-time transport enable dynamic software capabilities, such as conditionals, loops, and recursion, which are all foundational for scaling from thousands to millions of qubits.
These advances deliver tangible performance gains, including faster time-to-solution, reduced memory error, and greater algorithmic efficiency, providing the foundational support required to convert algorithmic advances into useful real-world applications.
Quantum hardware access is limited, but development shouldn't be. Selene is our new open-source emulator, built to model realistic, entangled quantum behavior with exceptional detail and speed.
Unlike generic simulators, Selene captures advanced runtime behavior unique to Helios, including measurement-dependent control flow and hybrid quantum-classical logic. It runs Guppy programs out of the box, allowing developers to start building and testing immediately without waiting for machine time.
Selene supports multiple simulation backends, giving users state-of-the-art options for their specific needs, including backends optimized for matrix product state and tensor network simulations using NVIDIA GPUs and cuQuantum. This ensures maximum performance both on the quantum processor and in simulation.
These new features, and more, are available through Nexus, our all-in-one quantum computing platform.
Nexus serves as the middle layer that connects every part of the stack, providing a cloud-native SaaS environment for full-stack workflows, including server-side Selene instances. Users can manage Guppy programs, analyze results, and collaborate with others, all within a single, streamlined platform.
Further, Selene users who submit quantum state-vector simulations—the most complete and powerful method to simulate a general quantum circuit on a classical computer—through Nexus will be leveraging the NVIDIA cuQuantum library for efficient GPU-powered simulation.
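To see why state-vector simulation is the most complete but also the most expensive method (an n-qubit state needs 2**n complex amplitudes, and every gate touches all of them), here is a toy single-gate update. This is illustrative only and unrelated to cuQuantum’s internals:

```python
import math

def apply_h(state: list[complex], target: int) -> list[complex]:
    """Apply a Hadamard to qubit `target` of a state vector of length 2**n."""
    s = 1 / math.sqrt(2)
    step = 1 << target           # stride between paired amplitudes
    out = list(state)
    for i in range(len(state)):
        if i & step == 0:        # visit each amplitude pair exactly once
            a, b = state[i], state[i | step]
            out[i] = s * (a + b)
            out[i | step] = s * (a - b)
    return out

# Two qubits: H on qubit 0 takes |00> to (|00> + |01>)/sqrt(2).
state = apply_h([1, 0, 0, 0], target=0)
```

Because the cost doubles with each added qubit, backends such as matrix product state and tensor-network simulation exist for circuits where full state-vector simulation is out of reach.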
Our entire stack, including Nexus and Selene, supports the industry-standard Quantum Intermediate Representation (QIR) as input, allowing users to program in their preferred programming language. QIR provides a common format for accessing a range of quantum computing backends, and Quantinuum Helios will support the full QIR Adaptive Profile. This means developers can generate programs for Helios using tools like NVIDIA CUDA-Q, Microsoft Q#, and ORNL XACC.
Our customers choose Quantinuum as their top quantum computing partner because no one else matches our team or our results. We remain the leaders in quantum computing and the only provider of integrated quantum resources that will address our society’s most complex problems.
That future is already taking shape. With Helios and our new software stack, we are building the foundation for scalable, programmable, real-time quantum computing.
Wherever you’re sitting right now, you’re probably surrounded by the fruits of modern semiconductor technology. Chips aren't only in your laptops and cell phones – they're in your car, your doorbell, your thermostat, and even your toaster. Importantly, semiconductor-based chips are also in the heart of most quantum computers.
While quantum computing holds transformative potential, it faces two major challenges: first, achieving low-error operations (say, one error in a billion), and second, scaling systems to enough qubits to address complex, real-world problems (say, on the order of a million). Quantinuum is proud to provide the lowest error rates in the industry, but some continue to question whether our chosen modality, trapped-ion technology, can scale to meet these ambitious goals.
Why the doubt? Well, early demonstrations of trapped-ion quantum computers relied on bulky, expensive laser sources, large glass optics, and sizeable ion traps assembled by hand. By comparison, other modalities, such as semiconductor and superconductor qubits, resemble conventional computer chips. However, our quantum-charge-coupled device (QCCD) architecture shares the same path to scaling: at their core, our quantum computers are also chip-based. By leveraging modern microfabrication techniques, we can scale effectively while maintaining the advantage of low error rates that trapped ions provide.
Fortunately, we are at a point in history where QCCD quantum computing is already far more compact than it was in the early days. Traditional oversized laser sources have been replaced by tiny diode lasers based on semiconductor chips, and our ion traps have evolved from bulky, hand-assembled objects to traps fabricated on silicon wafers. The biggest remaining challenge lies in the control and manipulation of laser light.
For this next stage in our journey, we have turned to Infineon. Infineon not only builds some of the world’s leading classical computer chips, but they also bring in-house expertise in ion-trap quantum computing. Together, we are developing a chip with integrated photonics, bringing the control and manipulation of light fully onto our chips. This innovation drastically reduces system complexity and paves the way for serious scaling.
Since beginning work with Infineon, our pace of innovation has accelerated. Their expertise in fabricating waveguides, building grating couplers, and optimizing deposition processes for ultra-low optical loss gives us a significant advantage. In fact, Infineon has already developed deposition processes with the lowest optical losses in the world—a critical capability for building high-performance photonic systems.
Their impressive suite of failure analysis tools, such as electron microscopes, SIMS, FIB, AFMs, and Kelvin probes, allows us to diagnose and correct failures in days rather than weeks. Some of these tools are in-line, meaning analysis can be performed without removing devices from the cleanroom environment, minimizing contamination risk and further accelerating development.
Together, we are demonstrating that QCCD quantum computing is fundamentally a semiconductor technology, just like conventional computing. While it may seem like a world away, quantum computing is now closer to home than ever.