April 16, 2024

*By Ilyas Khan, Founder and Chief Product Officer, and Jenni Strabley, Sr. Director of Offering Management*

All quantum error correction schemes depend for their success on the physical hardware achieving sufficiently high fidelity. If there are too many errors in the physical qubit operations, the error-correcting code amplifies rather than diminishes overall error rates. For decades it has been hoped that one day a quantum computer would achieve “three 9's” – an iconic, inherent 99.9% two-qubit physical gate fidelity – at which point many of the error-correcting codes required for universal fault-tolerant quantum computing would successfully be able to squeeze errors out of the system.

That day has now arrived. Building on several previous laboratory demonstrations ^{1} ^{2} ^{3}, Quantinuum has become the first company ever to achieve “three 9's” in a commercially available quantum computer, with the first demonstration of 99.914(3)% two-qubit gate fidelity, showing repeatable performance across all qubit pairs on our H1-1 system, which is constantly available to customers. This production-environment result is a marked difference from one-offs recorded in carefully contrived laboratory conditions, and it sets what will fast become the expected standard for the entire quantum computing sector.

Quantinuum is also announcing another milestone, a seven-figure Quantum Volume (QV) of 1,048,576 – or in terms preferred by the experts, 2^{20} – reinforcing our commitment to building, by a significant margin, the highest-performing quantum computers in the world.

These announcements follow a historic month that started when we proved our ability to scale our systems to the sizes needed to solve some of the world’s most pressing problems – and in a way that offers the best path to universal quantum computing.

On March 5^{th}, 2024, Quantinuum researchers disclosed details of our experiments that provide a solution to a totemic problem faced by all quantum computing architectures, known as the *wiring problem*. Supported by a video showing qubits being shuffled through a 2-dimensional grid ion-trap, our team presented concrete proof of the scalability of the quantum charge-coupled device (QCCD) architecture used in our H-Series quantum computers.

*Stop-motion ion transport video showing a chosen sorting operation implemented on an 8-site 2D grid trap with the swap-or-stay primitive. The sort is implemented by discrete choices of swaps or stays between neighboring sites. The numbers shown (indicated by dashed circles) at the beginning and end of the video show the initial and final location of the ions after the sort, e.g. the ion that starts at the top left site ends at the bottom right site. The stop-motion video was collected by segmenting the primitive operation and pausing mid-operation such that Yb fluorescence could be detected with a CMOS camera exposure.*

On April 3^{rd}, 2024 in partnership with Microsoft, our teams announced a breakthrough in quantum error correction that delivered as its crowning achievement the most reliable logical qubits on record.

We revealed detailed demonstrations in an arXiv pre-print paper of the reliability achieved via 4 logical qubits encoded into just 30 physical qubits on our System Model H2 quantum computer. Our joint teams demonstrated logical circuit error rates far below physical circuit error rates – a capability that currently demands fidelity only our full-stack quantum computer can deliver.

Reaching this level of physical fidelity is not optional for commercial scale computers – it is *essential* for error correction to work, and that in turn is a necessary foundation for any useful quantum computer. Our record two-qubit gate fidelity of 99.914(3)% marks a symbolic inflection point for the industry: at “three 9's” fidelity, we are nearing or surpassing the break-even point (where logical qubits outperform physical qubits) for many quantum error correction protocols, and this will generate great interest among research and industrial teams exploring fault-tolerant methods for tackling real-world problems.

Without hardware fidelity this good, error-corrected calculations are *noisier* than un-corrected computations. This is why we call it a “threshold” – when gate errors are “above threshold”, quantum computers will remain noisy no matter what you do. Below threshold, you can use quantum error correction to push error rates way, way down, so that quantum computers eventually become as reliable as classical computers.
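To make the threshold behavior concrete, here is a toy scaling model (a textbook-style approximation with illustrative prefactor and threshold values, not the parameters of any specific code we run): below threshold, increasing the code distance suppresses the logical error rate exponentially; above threshold, growing the code only makes things worse.

```python
# Illustrative below/above-threshold scaling model (textbook-style
# approximation, not a model of any specific Quantinuum code): the
# logical error rate behaves roughly like A * (p/p_th)**((d+1)//2)
# for physical error rate p, threshold p_th, and code distance d.
def logical_error_rate(p_phys, p_threshold=1e-2, d=3, prefactor=0.1):
    """Approximate logical error rate for a distance-d code."""
    return prefactor * (p_phys / p_threshold) ** ((d + 1) // 2)

# "Three 9's" (p = 1e-3) is below this toy threshold: bigger codes help.
below = [logical_error_rate(1e-3, d=d) for d in (3, 5, 7)]
# A noisier machine (p = 3e-2) is above threshold: bigger codes hurt.
above = [logical_error_rate(3e-2, d=d) for d in (3, 5, 7)]

assert below[0] > below[1] > below[2]   # errors suppressed as d grows
assert above[0] < above[1] < above[2]   # errors amplified as d grows
```

The assertions capture the qualitative point of the paragraph above: the same code family either squeezes errors out or piles them on, depending on which side of threshold the physical gates sit.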

Four years ago, Quantinuum committed to improving the performance of its H-Series quantum computers by 10x each year for five years, when measured by the industry’s most widely recognized benchmark, QV (an industry standard not to be confused with less comprehensive metrics such as Algorithmic Qubits).

Today’s achievement of a 2^{20} QV – which as with all our demonstrations was achieved on our commercially-available machine – shows that our team is living up to this audacious commitment. We are completely confident we can continue to overcome the technical problems that stand in the way of even better fidelity and QV performance. Our QV data is available on GitHub, as are our hardware specifications.
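For readers unfamiliar with the benchmark, QV follows a simple reporting convention: random square circuits on n qubits (depth n) must pass the heavy-output test (heavy-output probability above 2/3, with high confidence), and the machine is credited with QV = 2^n for the largest passing width. A minimal sketch of that convention (the list of passing widths below is hypothetical stand-in data, not real heavy-output statistics):

```python
import math

# Sketch of the standard Quantum Volume reporting convention: QV = 2**n
# for the largest width n whose random square circuits pass the
# heavy-output test. The passing widths here are hypothetical.
def quantum_volume(passing_widths):
    """QV from the largest circuit width n whose square circuits pass."""
    return 2 ** max(passing_widths)

# A seven-figure QV of 2^20 corresponds to passing up to width 20:
qv = quantum_volume(range(2, 21))
assert qv == 1_048_576
assert math.log2(qv) == 20
```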

The combination of high QV and gate fidelities puts the Quantinuum system in a class by itself – it is far and away the best of any commercially available quantum computer.

Additionally, and notably, these benchmarks were achieved “inherently”, without error mitigation, thanks to the H Series’ all-to-all connectivity and QCCD architecture. Full connectivity results in fewer errors when running large, complicated circuits. While other modalities depend on error mitigation techniques, such techniques are not scalable and offer only modest near-term value.

Lower physical error rates and high connectivity mean our quantum computers have a provably *lower overhead* for error-corrected computation.

Looking more deeply, experts look for high fidelities that are valid in *all* operating zones and between *any* pair of qubits. This is precisely what our H Series delivers. Unlike our competitors, we do not suffer from a broad distribution of gate fidelities in which some pairs of qubits perform significantly worse than others. *Quantinuum is the only quantum computing company with all qubit pairs boasting above 99.9% fidelity*.

Alongside these benefits and demonstrations of scalability, fidelity, connectivity, and reliability, it is worth noting how these features impact what arguably matters the most to users – time to solution. In the QCCD architecture, speed of operations is decoupled from speed to reach a computational solution thanks to a combination of:

- a better signal-to-noise ratio than other modalities,
- drastically reduced or eliminated swap gates (because we can move our ions through space), and
- fewer trials required for an accurate result.

The net effect is that for increasingly complex circuits it takes a high-fidelity QCCD-type quantum computer less time to achieve accurate results than other 2D connected or lower-fidelity architectures.
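A toy gate-count model illustrates the swap-gate point above (the qubit positions, fidelities, and gate counts are illustrative, not measured data): on a linear nearest-neighbor device, entangling qubits i and j costs roughly |i - j| - 1 SWAPs at three entangling gates each, while an all-to-all machine pays no such overhead, so the surviving circuit fidelity is higher.

```python
# Toy comparison of entangling-gate overhead (illustrative numbers only):
# all-to-all connectivity needs one entangling gate per interaction,
# while a linear nearest-neighbor layout needs extra SWAPs (3 entangling
# gates each) to bring distant qubits together.
def entangling_gate_count(i, j, all_to_all):
    if all_to_all:
        return 1
    return 1 + 3 * max(0, abs(i - j) - 1)

def circuit_fidelity(gate_pairs, gate_fidelity, all_to_all):
    """Crude fidelity estimate: per-gate fidelity to the power of gate count."""
    total = sum(entangling_gate_count(i, j, all_to_all) for i, j in gate_pairs)
    return gate_fidelity ** total

pairs = [(0, 9), (3, 7), (1, 8)]          # long-range interactions
f_all_to_all = circuit_fidelity(pairs, 0.999, all_to_all=True)
f_nearest = circuit_fidelity(pairs, 0.999, all_to_all=False)
assert f_all_to_all > f_nearest           # fewer gates, fewer errors
```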

“Getting to three 9’s in the QCCD architecture means that ~1000 entangling operations can be done before an error occurs. Our quantum computers are right at the edge of being able to do computations at the physical level that are beyond the reach of classical computers, which would occur somewhere between 3 nines and 4 nines. Some tasks become hard for classical computers before this regime (e.g. Google’s random circuit sampling problem) but this new regime allows for much less contrived problems to be solved. At that point, these machines become real tools for new discoveries – albeit they will still be limited in what they can probe, likely to be physics simulations or closely related problems,” **said Dave Hayes, a Senior R&D manager at Quantinuum.**

“Additionally, these fidelities put us, some would say comfortably, within the regime needed to build fault-tolerant machines. These fidelities allow us to start adding more qubits without needing to improve performance further, and to take advantage of quantum error correction to improve the computational power necessary for tackling truly large problems. This scaling problem gets easier with even better fidelities (which is why we’re not satisfied with 3 nines) but it is possible in principle.”

Quantinuum’s new records in fidelity and quantum volume on our commercial H1 device are expected to be achieved on the H2, once upgrades are implemented, underscoring the value that we offer to users for whom stability, reliability and robust performance are prerequisites. The quantum computing landscape is complex and changing, but we remain at the head of the pack in all key metrics. The relationship with our world-class applications teams means that co-designed devices for solving some of the world’s most intractable problems are a big step closer to reality.

Quantinuum is the world’s leading quantum computing company, and our world-class scientists and engineers are continually driving our technology forward while expanding the possibilities for our users. Their work on applications includes cybersecurity, quantum chemistry, quantum Monte Carlo integration, quantum topological data analysis, condensed matter physics, high energy physics, quantum machine learning, and natural language processing – and we are privileged to support them to bring new solutions to bear on some of the greatest challenges we face.

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.

Blog

September 20, 2024

Quantinuum achieves moonshot years ahead of schedule, demonstrating fault-tolerant high-fidelity teleportation of a logical qubit

While it sounds like a gadget from Star Trek, teleportation is real – and it is happening at Quantinuum. In a new paper published in *Science*, our researchers moved a quantum state from one place to another without physically moving it through space – and they accomplished this feat with *fault-tolerance and excellent fidelity*. This is an important milestone for the whole quantum computing community and the latest example of Quantinuum achieving critical milestones years ahead of expectations.

While it seems exotic, teleportation is a critical piece of technology needed for full scale fault-tolerant quantum computing, and it is used widely in algorithm and architecture design. In addition to being essential on its own, teleportation has historically been used to demonstrate a high level of system maturity. The protocol requires multiple qubits, high-fidelity state-preparation, single-qubit operations, entangling operations, mid-circuit measurement, and conditional operations, making it an excellent system-level benchmark.
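For intuition, the textbook physical-level protocol can be sketched in a few lines of statevector simulation (a toy that omits everything that makes the *Science* result hard: logical encoding, fault tolerance, and real-time QEC):

```python
import numpy as np

# Minimal statevector sketch of single-qubit teleportation. Qubit 0
# holds the state to teleport; qubits 1 and 2 start in |00> and are made
# into a Bell pair; after Alice's Bell measurement on qubits 0 and 1 and
# Bob's conditional Pauli corrections, qubit 2 holds the original state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def apply(gate, qubit, state, n=3):
    ops = [I2] * n
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

def cnot(control, target, state, n=3):
    new = np.zeros_like(state)
    for idx in range(len(state)):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        new[sum(b << (n - 1 - q) for q, b in enumerate(bits))] = state[idx]
    return new

def teleport(psi, m0, m1):
    """Teleport psi, post-selecting Alice's measurement outcome (m0, m1)."""
    state = np.kron(psi, np.kron([1, 0], [1, 0])).astype(complex)
    state = cnot(1, 2, apply(H, 1, state))       # Bell pair on qubits 1, 2
    state = apply(H, 0, cnot(0, 1, state))       # rotate into the Bell basis
    kept = np.array([state[(m0 << 2) | (m1 << 1) | b] for b in (0, 1)])
    kept = kept / np.linalg.norm(kept)           # project onto outcome (m0, m1)
    if m1:
        kept = X @ kept                          # Bob's conditional corrections
    if m0:
        kept = Z @ kept
    return kept

psi = np.array([0.6, 0.8j], dtype=complex)
out = teleport(psi, 1, 1)
assert abs(abs(np.vdot(psi, out)) - 1) < 1e-9    # state recovered on qubit 2
```

The conditional corrections at the end are the part the fault-tolerant demonstration performs in real time, with decoding, on logical rather than physical qubits.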

Our team was motivated to do this work by the US Government’s Intelligence Advanced Research Projects Activity (IARPA), which set a challenge to perform high fidelity teleportation with the goal of advancing the state of science in universal fault-tolerant quantum computing. IARPA further specified that the entanglement and teleportation protocols must also maintain fault-tolerance, a key property that keeps errors local and correctable.

These ambitious goals required developing highly complex systems, protocols, and other infrastructure to enable exquisite control and operation of quantum-mechanical hardware. We are proud to have accomplished these goals ahead of schedule, demonstrating the flexibility, performance, and power of Quantinuum’s Quantum Charge Coupled Device (QCCD) architecture.

Quantinuum’s demonstration marks the first time that an arbitrary quantum state has been teleported at the *logical level* (using a quantum error correcting code). This means that instead of teleporting the quantum state of a single physical qubit we have teleported the quantum information encoded in an entangled *set* of physical qubits, known as a logical qubit. In other words, the collective state of a bunch of qubits is teleported from one set of physical qubits to another set of physical qubits. This is, in a sense, a lot closer to what you see in Star Trek – they teleport the state of a big collection of atoms at once. Except for the small detail of coming up with a pile of matter with which to reconstruct a human body...

This is also the first demonstration of a fully fault-tolerant version of the state teleportation circuit using real-time quantum error correction (QEC), decoding mid-circuit measurement of syndromes and implementing corrections during the protocol. It is *critical* for computers to be able to catch and correct any errors that happen along the way, and this is not something other groups have managed to do in any robust sense. In addition, our team achieved the result with *high fidelity* (97.5% ± 0.2%), providing a powerful demonstration of the quality of our H2 quantum processor, Powered by Honeywell.

Our team also tried several variations of logical teleportation circuits, using both transversal gates and lattice surgery protocols, thanks to the flexibility of our QCCD architecture. This marks the first demonstration of lattice surgery performed on a QEC code.

Lattice surgery is a strategy for implementing logical gates that requires only 2D nearest-neighbor interactions, making it especially useful for architectures whose qubit locations are fixed, such as superconducting architectures. QCCD and other technologies that do not have fixed qubit positioning might employ this method, another method, or some mixture. We are fortunate that our QCCD architecture allows us to explore the use of different logical gating options so that we can optimize our choices for experimental realities.

While the teleportation demonstration is the big result, sometimes it is the behind-the-scenes technology advancements that make the big differences. The experiments in this paper were designed *at the logical level* using an internally developed logical-level programming language dubbed Simple Logical Representation (SLR). This is yet another marker of our system’s maturity – we are no longer programming at the physical level but have instead moved up one “layer of abstraction”. Someday, all quantum algorithms will need to be run on the *logical level* with rounds of quantum error correction. This is markedly different from most present experiments, which are run on the *physical level* without quantum error correction. It is also worth noting that these results were generated using the software stack available to any user of Quantinuum’s H-Series quantum computers, and these experiments were run alongside customer jobs – underlining that these results are commercial performance, not hero data on a bespoke system.

Ironically, a key element in this work is our ability to move our qubits through space the “normal” way - this capacity gives us all-to-all connectivity, which was essential for some of the QEC protocols used in the complex task of fault-tolerant logical teleportation. We recently demonstrated solutions to the sorting problem and wiring problem in a new 2D grid trap, which will be essential as we scale up our devices.


September 18, 2024

“Talking quantum circuits”

*“How can quantum structures and quantum computers contribute to the effectiveness of AI?”*

*In previous work we have made notable advances in answering this question, and this article is based on our most recent work in the new papers [arXiv:2406.17583, arXiv:2408.06061], and most notably the experiment in […].*

**This article is one of a series that we will be publishing alongside further advances – advances that are accelerated by access to the most powerful quantum computers available.**

Large Language Models (LLMs) such as ChatGPT are having an impact on society across many walks of life. However, as users have become more familiar with this new technology, they have also become increasingly aware of deep-seated and systemic problems that come with AI systems built around LLMs.

The primary problem with LLMs is that *nobody knows how they work* – as inscrutable “black boxes”, they aren’t “interpretable”, meaning we can’t reliably or efficiently control or predict their behavior. This is unacceptable in many situations. In addition, modern LLMs are incredibly expensive to build and run, consuming serious – and potentially unsustainable – amounts of power to train and use. This is why more and more organizations, governments, and regulators are insisting on solutions.

But how can we find these solutions, when we don’t fully understand what we are dealing with now?^{1}

At Quantinuum, we have been working on natural language processing (NLP) using quantum computers for some time now. We are excited to have recently carried out experiments [arXiv:2409.08777] which demonstrate not only how it is possible to train a model for a quantum computer in a scalable manner, but also how to do this in a way that is *interpretable* for us. Moreover, we have promising theoretical indications of the usefulness of quantum computers for interpretable NLP [arXiv:2408.06061].

In order to better understand why this could be the case, one needs to understand the ways in which meanings compose together throughout a story or narrative. Our work towards capturing this compositional structure in a new model of language, which we call DisCoCirc, is reported extensively in this previous blog post from 2023.

In new work referred to in this article, we embrace “compositional interpretability” as proposed in [arXiv:2406.17583] as a solution to the problems that plague current AI. In brief, compositional interpretability boils down to being able to assign a human friendly meaning, such as natural language, to the components of a model, and then being able to understand how they fit together^{2}.

A problem currently inherent to quantum machine learning is that of being able to train at scale. We avoid this by making use of “compositional generalization”. This means we train small, on classical computers, and then at test time evaluate much larger examples on a quantum computer. There now exist quantum computers which are impossible to simulate classically. To train models for such computers, it seems that compositional generalization currently provides the only credible path.

DisCoCirc is a circuit-based model of natural language that turns arbitrary text into “text circuits” [arXiv:1904.03478, arXiv:2301.10595, arXiv:2311.17892]. In converting text into text circuits, lines of text, which live in one dimension, become circuits, which live in two: one dimension for the entities of the text, the other for events in time.

To see how that works, consider the following story. In the beginning there is **Alex** and **Beau**. **Alex** meets **Beau**. Later, **Chris** shows up, and **Beau** marries **Chris**. **Alex** then kicks **Beau**.

The content of this story can be represented as the following circuit:

Such a text circuit represents how the ‘actors’ in it interact with each other, and how their states evolve by doing so. Initially, we know nothing about Alex and Beau. Once Alex meets Beau, we know something about Alex and Beau’s interaction, then Beau marries Chris, and then Alex kicks Beau, so we know quite a bit more about all three, and in particular, how they relate to each other.
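As a purely hypothetical illustration of the bookkeeping (our actual DisCoCirc tooling differs), a text circuit can be sketched as a wire per entity and a gate per event, applied in narrative order:

```python
# Hypothetical sketch of a text circuit as plain data (not the real
# DisCoCirc implementation): each entity gets a wire, and each event
# becomes a gate on the wires of the entities it involves.
story = [
    ("meets",   ["Alex", "Beau"]),
    ("marries", ["Beau", "Chris"]),
    ("kicks",   ["Alex", "Beau"]),
]

def text_circuit(events):
    wires, gates = [], []
    for verb, actors in events:
        for actor in actors:
            if actor not in wires:
                wires.append(actor)      # a new entity opens a new wire
        gates.append((verb, [wires.index(a) for a in actors]))
    return wires, gates

wires, gates = text_circuit(story)
assert wires == ["Alex", "Beau", "Chris"]
assert gates == [("meets", [0, 1]), ("marries", [1, 2]), ("kicks", [0, 1])]
```

Reading the gate list top to bottom recovers exactly the narrative above: each gate updates the states of the wires it touches, which is why later events tell us more about how the actors relate.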

Let’s now take those circuits to be quantum circuits.

In the last section we will elaborate on why this could be a very good choice. For now, it is enough to know that we simply follow the current paradigm of using vectors for meanings, in exactly the same way that this works in LLMs. Moreover, if we then also want to faithfully represent the compositional structure of language^{3}, we can rely on theorem 5.49 from our book Picturing Quantum Processes, which informally can be stated as follows:

*If the manner in which meanings of words (represented by vectors) compose obeys linguistic structure, then those vectors compose in exactly the same way as quantum systems compose. ^{4}*
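Schematically (our informal rendering, not the book’s exact statement), the theorem says that composite meanings live in the tensor product of the component meaning spaces, exactly as composite quantum systems do:

```latex
% Composite meanings compose as composite quantum systems do:
\text{meaning}(w_1) \in V_1, \quad \text{meaning}(w_2) \in V_2
\;\Longrightarrow\;
\text{meaning}(w_1 w_2) \in V_1 \otimes V_2 .
```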

In short, a quantum implementation enables us to embrace compositional interpretability, as defined in our recent paper [arXiv:2406.17583].

So, what have we done? And what does it mean?

We implemented a “question-answering” experiment on our Quantinuum quantum computers, for text circuits as described above. We know from our new paper [arXiv:2408.06061] that this is very hard to do on a classical computer: as the texts get bigger, it very quickly becomes unrealistic to even attempt, however powerful the classical computer might be. This is worth emphasizing. The experiment we have completed would scale exponentially using classical computers – to the point where the approach becomes intractable.

The experiment consisted of teaching (or training) the quantum computer to answer a question about a story, where both the story and question are presented as text-circuits. To test our model, we created longer stories in the same style as those used in training and questioned these. In our experiment, our stories were about people moving around, and we questioned the quantum computer about who was moving in the same direction at the end of the stories. A harder alternative one could imagine would be having a murder mystery story and then asking the computer who the murderer was.

And remember - the training in our experiment constitutes the assigning of quantum states and gates to words that occur in the text.

The major reason for our excitement is that the training of our circuits enjoys *compositional generalization*. That is, we can do the training on small-scale ordinary computers, and do the testing, or asking the important questions, on quantum computers that can operate in ways not possible classically. Figure 4 shows how, despite only being trained on stories with up to 8 actors, the test accuracy remains high, even for much longer stories involving up to 30 actors.

Training large circuits directly in quantum machine learning leads to difficulties which in many cases undo the potential advantage. Critically, compositional generalization allows us to bypass these issues.

We can compare the results of our experiment on a quantum computer to the success of a classical LLM, ChatGPT (GPT-4), when asked the same questions.

What we are considering here is a story about a collection of characters that walk in a number of different directions, and sometimes follow each other. These are just some initial test examples, but it does show that this kind of reasoning is not particularly easy for LLMs.

The input to ChatGPT was:

What we got from ChatGPT:

Can you see where ChatGPT went wrong?

ChatGPT’s score (in terms of accuracy) oscillated around 50% (equivalent to random guessing). Our text circuits consistently outperformed ChatGPT on these tasks. Future work in this area would involve looking at prompt engineering – for example how the phrasing of the instructions can affect the output, and therefore the overall score.

Of course, we note that ChatGPT and other LLMs will issue new versions that may or may not be marginally better at “question-answering” tasks, and we also note that our own work may become far more effective as quantum computers rapidly become more powerful.

We have now turned our attention to work that will show that using vectors to represent meaning and requiring compositional interpretability for natural language takes us mathematically natively into the quantum formalism. This does not mean that there doesn't exist an efficient classical method for solving specific tasks, and it may be hard to prove traditional hardness results whenever there is some machine learning involved. This could be something we might have to come to terms with, just as in classical machine learning.

At Quantinuum we possess the most powerful quantum computers currently available. Our recently published roadmap is going to deliver more computationally powerful quantum computers in the short and medium term, as we extend our lead and push towards universal, fault tolerant quantum computers by the end of the decade. We expect to show even better (and larger scale) results when implementing our work on those machines. In short, we foresee a period of rapid innovation as powerful quantum computers that cannot be classically simulated become more readily available. This will likely be disruptive, as more and more use cases, including ones that we might not be currently thinking about, come into play.

Interestingly and intriguingly, we are also pioneering the use of powerful quantum computers in a hybrid system that has been described as a ‘quantum supercomputer’, in which quantum computers, HPC, and AI work together in an integrated fashion. We look forward to using these systems to advance our work in language processing and to help solve the problems with LLMs that we highlighted at the start of this article.

^{1} And where do we go next, when we don’t even understand what we are dealing with now? On previous occasions in the history of science and technology, when efficient models without a clear interpretation have been developed, such as the Babylonian lunar theory or Ptolemy’s model of epicycles, these initially highly successful technologies vanished, making way for something else.

^{2} Note that our conception of compositionality is more general than the usual one adopted in linguistics, which is due to Frege. A discussion can be found in [arXiv:2110.05327].

^{3} For example, using pregroups here as linguistic structure, which are the cups and caps of PQP.

^{4} That is, using the tensor product of the corresponding vector spaces.


September 17, 2024

Technical perspective: By the end of the decade, we will deliver universal, fault-tolerant quantum computing

**By Dr. Harry Buhrman, Chief Scientist for Algorithms and Innovation, and Dr. Chris Langer, Fellow**

This week, we confirm what has been implied by the rapid pace of our recent technical progress as we reveal a major acceleration in our hardware road map. By the end of the decade, our accelerated hardware roadmap will deliver a fully fault-tolerant and universal quantum computer capable of executing millions of operations on hundreds of logical qubits.

The next major milestone on our accelerated roadmap is Quantinuum Helios™, Powered by Honeywell, a device that will definitively push beyond classical capabilities in 2025. That sets us on a path to our fifth-generation system, Quantinuum Apollo™, a machine that delivers scientific advantage and a commercial tipping point this decade.

We are committed to continually advancing the capabilities of our hardware over prior generations, and Apollo makes good on that promise. It will offer:

- thousands of physical qubits
- physical error rates less than 10^{-4}
- all of our most competitive features: all-to-all connectivity, low crosstalk, mid-circuit measurement and qubit re-use
- conditional logic
- real-time classical co-compute
- physical variable-angle one-qubit and two-qubit gates
- hundreds of logical qubits
- logical error rates better than 10^{-6}, with analysis based on recent literature estimating rates as low as 10^{-10}

By leveraging our all-to-all connectivity and low error rates, we expect to enjoy significant efficiency gains in terms of fault-tolerance, including single-shot error correction (which saves time) and high-rate and high-distance Quantum Error Correction (QEC) codes (which mean more logical qubits, with stronger error correction capabilities, can be made from a smaller number of physical qubits).

Studies of several efficient QEC codes already suggest we can enjoy logical error rates much lower than our target 10^{-6} – we may even be able to reach 10^{-10}, which enables exploration of even more complex problems of both industrial and scientific interest.

Error correcting code exploration is only just beginning – we anticipate discoveries of even more efficient codes. As new codes are developed, Apollo will be able to accommodate them, thanks to our flexible high-fidelity architecture. The bottom line is that Apollo promises fault-tolerant quantum advantage sooner, with fewer resources.

Like all our computers, Apollo is based on the quantum charged coupled device (QCCD) architecture. Here, each qubit’s information is stored in the atomic states of a single ion. Laser beams are applied to the qubits to perform operations such as gates, initialization, and measurement. The lasers are applied to individual qubits or co-located qubit pairs in dedicated operation zones. Qubits are held in place using electromagnetic fields generated by our ion trap chip. We move the qubits around in space by dynamically changing the voltages applied to the chip. Through an alternating sequence of qubit rearrangements via movement followed by quantum operations, arbitrary circuits with arbitrary connectivity can be executed.
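A highly simplified scheduling sketch (hypothetical, not Quantinuum’s compiler) captures the alternation described above: gates that share no qubits can be packed into the same transport-plus-gate round, up to the number of operation zones, and repeating rounds executes an arbitrary circuit with arbitrary connectivity.

```python
# Toy QCCD-style scheduler (hypothetical, for illustration only): pack
# two-qubit gates greedily into rounds, where each round is a qubit
# rearrangement (ion transport) followed by parallel gates in the
# available operation zones. Gates sharing a qubit must wait.
def schedule(circuit, n_zones=2):
    """Greedily pack two-qubit gates into transport + gate rounds."""
    rounds = []
    remaining = list(circuit)
    while remaining:
        busy, this_round = set(), []
        for gate in list(remaining):
            q1, q2 = gate
            if len(this_round) < n_zones and not ({q1, q2} & busy):
                this_round.append(gate)
                busy |= {q1, q2}
                remaining.remove(gate)
        rounds.append(("transport+gate", this_round))
    return rounds

circuit = [(0, 1), (2, 3), (0, 2), (1, 3)]
rounds = schedule(circuit)
assert rounds[0][1] == [(0, 1), (2, 3)]   # disjoint pairs run in parallel
assert len(rounds) == 2                   # the rest fit in a second round
```

Because transport can bring *any* two ions together, no gate ever waits for a chain of SWAPs; the only constraints in this sketch are zone count and qubit reuse within a round.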

The ion trap chip in Apollo will host a 2D array of trapping locations. It will be fabricated using standard CMOS processing technology and controlled using standard CMOS electronics. The 2D grid architecture enables fast and scalable qubit rearrangement and quantum operations – a critical competitive advantage. The Apollo architecture is scalable to the significantly larger systems we plan to deliver in the next decade.

Apollo’s scaling of very stable physical qubits and native high-fidelity gates, together with our advanced error correcting and fault tolerant techniques will establish a quantum computer that can perform tasks that do not run (efficiently) on any classical computer. We already had a first glimpse of this in our recent work sampling the output of random quantum circuits on H2, where we performed 100x better than competitors who performed the same task while using 30,000x less power than a classical supercomputer. But with Apollo we will travel into uncharted territory.

The flexibility to use either thousands of qubits for shorter computations (up to 10k gates) or hundreds of qubits for longer computations (from 1 million to 1 billion gates) makes Apollo a versatile machine with unprecedented quantum computational power. We expect the first application areas will be in scientific discovery, particularly the simulation of quantum systems. While this may sound academic, this is how all new material discovery begins, and its value should not be understated. This era will lead to discoveries in materials science, high-temperature superconductivity, complex magnetic systems, phase transitions, and high energy physics, among other things.

In general, Apollo will advance the field of physics to new heights while we start to see the first glimmers of distinct progress in chemistry and biology. For some of these applications, users will employ Apollo in a mode where it offers thousands of qubits for relatively short computations; e.g. exploring the magnetism of materials. At other times, users may want to employ significantly longer computations for applications like chemistry or topological data analysis.

But there is more on the horizon. Carefully crafted AI models that interact seamlessly with Apollo will be able to squeeze all the “quantum juice” out and generate data that was hitherto unavailable to mankind. We anticipate using this data to further the field of AI itself, as it can be used as training data.

The era of scientific (quantum) discovery and exploration will inevitably lead to commercial value. Apollo will be the centerpiece of this commercial tipping point where use-cases will build on the value of scientific discovery and support highly innovative commercially viable products.

Just as importantly, we will uncover applications that we are currently unaware of. As is always the case with disruptive new technology, Apollo will run currently unknown use-cases and applications that will make perfect sense once we see them. We are eager to co-develop these with our customers in our unique co-creation program.

Today, System Model H2 is our most advanced commercial quantum computer, providing 56 physical qubits with physical two-qubit gate errors less than 10^{-3}. System Model H2, like all our systems, is based on the QCCD architecture.

Starting from where we are today, our roadmap progresses through two additional machines prior to Apollo. The Quantinuum Helios™ system, which we are releasing in 2025, will offer around 100 physical qubits with two-qubit gate errors less than 5x10^{-4}. In addition to the expanded qubit count and lower error rates, Helios makes two departures from H2. First, Helios will use 137Ba+ qubits in contrast to the 171Yb+ qubits used in our H1 and H2 systems. This change enables lower two-qubit gate errors and simpler, lower-cost laser systems. Second, for the first time in a commercial system, Helios will use junction-based qubit routing. The result will be a “twice-as-good” system: Helios will offer roughly 2x more qubits with 2x lower two-qubit gate errors while operating more than 2x faster than our 56-qubit H2 system.

After Helios we will introduce Quantinuum Sol™, our first commercially available 2D-grid-based quantum computer. Sol will offer hundreds of physical qubits with two-qubit gate errors less than 2x10^{-4}, operating approximately 2x faster than Helios. As a fully 2D-grid architecture, Sol is the scalability launching point for the significant size increase planned for Apollo.

Thanks to Sol’s low error rates, users will be able to execute circuits with up to 10,000 quantum operations. The usefulness of Helios and Sol may be extended with a combination of quantum error detection (QED) and quantum error mitigation (QEM). For example, the [[k+2, k, 2]] Iceberg code is a lightweight QED code that encodes k logical qubits into k+2 physical qubits, using only 2 additional ancilla qubits. This low-overhead code is well-suited for Helios and Sol because it offers the non-Clifford variable-angle entangling ZZ-gate directly, without the overhead of magic state distillation. The errors Iceberg fails to detect are already ~10x lower than our physical errors, and by applying a modest run-time overhead to discard detected failures, the effective error in the computation can be further reduced. Combining QED with QEM, a ~10x reduction in the effective error may be possible while maintaining run-time overhead at modest levels and below that of full-blown QEC.
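The post-selection tradeoff described above can be illustrated with a simple toy model (a sketch of generic error detection with discarded shots, not the exact Iceberg-code analysis): flagged shots are thrown away, so the accepted shots carry only the undetected errors, at the cost of running extra shots.

```python
def postselect(p_detected: float, p_undetected: float):
    """Effective error and run-time overhead after discarding flagged shots.

    Toy model: a shot is discarded if a syndrome fires (prob. p_detected);
    an accepted shot is wrong only if an error slipped past detection
    (prob. p_undetected). Assumes the two events are exclusive.
    """
    acceptance = 1.0 - p_detected              # fraction of shots kept
    effective_error = p_undetected / acceptance
    overhead = 1.0 / acceptance                # shots run per shot kept
    return effective_error, overhead

# Illustrative numbers only: if 10% of shots are flagged and undetected
# errors are ~10x rarer than detected ones, the kept shots are ~10x
# cleaner for only ~11% more run time.
err, cost = postselect(p_detected=0.10, p_undetected=0.01)
print(err, cost)
```

The key design point is visible in the formula: overhead grows only as 1/(1 - p_detected), so as long as detection events stay modest, the error reduction comes far cheaper than the qubit and gate overhead of full QEC.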

Our new roadmap is an acceleration over what we were previously planning. The benefits of this are obvious: Apollo brings the commercial tipping point sooner than we previously thought possible. This acceleration is made possible by a set of recent breakthroughs.

First, we solved the “wiring problem”: we demonstrated that trap chip control is scalable using our novel center-to-left-right (C2LR) protocol, broadcasting shared control signals to multiple electrodes. This demonstration of qubit rearrangement in a 2D geometry marks the most advanced ion trap built, containing approximately 40 junctions. This trap was deployed to 3 different testbeds in 2 different cities and operated with 2 different dual-ion-species combinations, and all 3 deployments were a success. These demonstrations showed that the footprint of the most complex parts of the trap control stays constant as the number of qubits scales up. This gives us confidence that Sol, with approximately 100 junctions, will be a success.

Second, we continue to reduce our two-qubit physical gate errors. Today, H1 and H2 have two-qubit gate errors less than 1x10^{-3} across *all* pairs of qubits. This is the best in the industry and is a key ingredient in our record quantum volume of 2^{20}, over 1 million. Our systems are the most benchmarked in the industry, and we stand by our data, making it all publicly available. Recently, we observed an 8x10^{-4} two-qubit gate error in our Helios development test stand in 137Ba+, and we’ve seen even better error rates in other testbeds. We are well on the path to meeting the 5x10^{-4} spec in Helios next year.
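The connection between two-qubit gate error and quantum volume can be roughed out as follows (a simplified depolarizing-noise estimate, not the full heavy-output-probability benchmark): an n-qubit QV test circuit has n layers of about n//2 random two-qubit gates, so gate error p compounds over roughly n*(n//2) gates.

```python
def qv_circuit_fidelity(n: int, p: float) -> float:
    """Rough fidelity of an n-qubit quantum-volume test circuit.

    Simplified model: n layers, each with floor(n/2) two-qubit gates of
    error p; single-qubit errors, SPAM, and crosstalk are ignored.
    """
    two_qubit_gates = n * (n // 2)
    return (1.0 - p) ** two_qubit_gates

# At p = 1e-3 across all pairs, a 20-qubit QV circuit (the depth behind
# QV 2^{20}) still retains roughly 80% of its fidelity:
print(round(qv_circuit_fidelity(20, 1e-3), 3))
```

This is why sub-10^{-3} fidelity on *every* qubit pair matters: QV circuits draw random pairings, so a single weak pair would drag the whole benchmark down.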

Third, the all-to-all connectivity offered by our systems enables highly efficient QEC codes. In Microsoft’s recent demonstration, our H2 system with 56 physical qubits was used to generate 12 logical qubits at distance 4. This work demonstrated several experiments, including repeated rounds of error correction where the error in the final result was ~10x lower than the physical circuit baseline.

In conclusion, through a combination of advances in hardware readiness and QEC, we have line-of-sight to Apollo by the end of the decade: a fully fault-tolerant, quantum-advantaged machine. This will be a commercial tipping point, ushering in an era of scientific discovery in physics, materials, chemistry, and more. Along the way, users will have the opportunity to discover new enabling use cases through quantum error detection and mitigation in Helios and Sol.

Quantinuum has the best quantum computers today and is on the path to offering fault-tolerant useful quantum computation by the end of the decade.
