Building a quantum computer that offers advantages over classical computers is the goal of quantum computing groups worldwide. A competitive quantum computer must be “universal”: able to perform every operation already possible on a classical computer, plus the new operations specific to quantum computing. Of course, that’s just the beginning – it should also do all of this in a reasonable amount of time, deal effectively with noise from the environment, and perform computations to arbitrary accuracy.
This is a lot to get right, and over the years quantum computer scientists have described ways to solve these often-overlapping challenges. To deal with noise from the environment and achieve arbitrary accuracy, quantum computers need to be able to keep going even as noise accumulates on the quantum bits, or qubits, which hold the quantum information. Such fault tolerance may be achieved using quantum error correction, where ensembles of physical qubits are encoded into logical qubits, which are then used to counteract noise and perform computational operations called gates. Unfortunately, no single quantum error correction code plays well with the goal of universality, because every code lacks a complete universal set of fault-tolerant gates (the technical reason comes down to the way quantum gates are executed between logical qubits – the native gate set on error-corrected logical qubits is known to experts as the set of transversal gates, and it does not include all the gates needed for universal quantum computing).
The solution to this obstacle to universality is a magic state: a quantum state that supplies the missing gate when error-correcting codes are used. High-fidelity magic states are produced by a process of distillation, which purifies them from noisier magic states. Magic state distillation is widely recognized as one of the totemic challenges on the path towards universal, fault-tolerant quantum computing. Quantinuum’s scientists, in close collaboration with a team at Microsoft, set out to demonstrate the distillation process in real time, using physical qubits on a quantum computer, for the first time.
The results of this work are available in a new paper, Advances in compilation for quantum hardware -- A demonstration of magic state distillation and repeat until success protocols.
How does magic state distillation work? Imagine a factory that takes in many qubits in imperfect initial states at one end. Broadly speaking, the factory distills the imperfect states into an almost pure state with a smaller error probability by sending them through a well-defined process over and over. In this case, the process takes in a group of five qubits. It applies a quantum error correcting code that entangles these five qubits, with four used to test whether the fifth (target) qubit has been purified. If the process fails, the ensemble is discarded and the process repeats. If it succeeds, the newly distilled target qubit is kept and combined with four other successes to form a new ensemble, which then rejoins the process of continued purification. Each pass through this process increases the purity of the magic state, gradually approaching the quality required for universal, fault-tolerant quantum computing.
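To see how quickly the error is suppressed, here is a minimal Python sketch of the bookkeeping, tracking only the error rate of accepted outputs. The polynomial used – 35ε³, from the textbook 15-to-1 protocol – is an illustrative stand-in; the 5-qubit protocol demonstrated in the paper has its own polynomial and acceptance probability:

```python
def distillation_cascade(eps_in: float, rounds: int,
                         c: float = 35.0, k: int = 3) -> float:
    """Track the magic-state error rate through successive accepted
    rounds of distillation, assuming eps_out ~ c * eps_in**k per round.
    (c=35, k=3 is the textbook 15-to-1 polynomial, used here purely as
    an illustrative stand-in for the paper's 5-qubit protocol.)"""
    eps = eps_in
    for i in range(rounds):
        eps = c * eps ** k
        print(f"after round {i + 1}: eps ~ {eps:.2e}")
    return eps

distillation_cascade(1e-2, rounds=2)
# after round 1: eps ~ 3.50e-05
# after round 2: eps ~ 1.50e-12
```

Each accepted round drives the error down polynomially, which is why a short cascade of rounds suffices in principle – at the cost of discarding the ensembles that fail the check.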
Despite being the subject of theoretical exploration over decades, real-time magic state distillation had never been realized on a quantum computer. In typical pioneering style, the Quantinuum and Microsoft team decided to take on this challenge. But before they could get started, they recognized that their toolset would have to be significantly sharpened up.
At the heart of magic state distillation is a highly complex repeating process, which requires state-of-the-art protocols and control flow logic built on a best-in-class programming toolset. The research team turned to Quantum Intermediate Representation (QIR) to simplify and streamline the programming of this complex quantum computing process.
QIR is a quantum-specific code representation based on the popular, open-source LLVM intermediate representation from classical computing, with the addition of structures and protocols that support the maturation and modernization of quantum computing. QIR includes elements that are essential in classical computing but are yet to be standardized in quantum computing, such as the humble programming loop.
Loops, which often take forms like "for...next" or "do...while," are central to programming, allowing code to repeat instructions until a condition is met. In quantum computing this is a tough challenge, because loops require control flow logic and mid-circuit measurement, which are difficult to realize in a quantum computer but have been demonstrated in Quantinuum’s System Model H1-1, Powered by Honeywell. Loops are essential for realizing magic state distillation, and LLVM is known to be excellent at optimizing complex control flow, including loops. This made magic state distillation a natural choice for demonstrating a valuable application of QIR, and a great example of using a classical technique in a quantum context.
The team used Quantinuum’s H1-1 quantum computer – benefiting from industry-leading components such as mid-circuit measurement, qubit reuse and feed-forward – to make possible the quantum looping required for a magic state distillation protocol, becoming the first quantum computing team ever to run a real-time magic state distillation protocol on quantum hardware.
Building on this success, the team designed further experiments comparing four methods of implementing the looping required by a quantum protocol called a repeat-until-success (RUS) circuit. First, they hard-coded a loop directly into extended OpenQASM 2.0, a widely used quantum assembly language, which requires additional overhead to target advanced components on Quantinuum's very versatile H-Series quantum computers. Against this, they compared alternative methods for coding a loop in a high-level programming language: controlled recursion, directed both through OpenQASM and through QIR; and a native for loop made possible within QIR.
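The difference between these control-flow shapes is easy to see in ordinary Python. The `attempt_rus` stub below is hypothetical – on hardware it stands for a circuit fragment whose heralding mid-circuit measurement reports success – and the comparison only illustrates program structure: a native loop keeps a single copy of the body, while controlled recursion nests a fresh copy of the remaining attempts at every level, one plausible reading of why the recursive variants degraded as the loop limit grew.

```python
import random

def attempt_rus() -> bool:
    """Hypothetical stand-in for one RUS attempt: run the circuit
    fragment, measure the ancilla mid-circuit, report success."""
    return random.random() < 0.5

def rus_loop(max_tries: int) -> bool:
    # Native loop: one copy of the body plus a counter and a branch --
    # the shape a QIR/LLVM toolchain can express and optimize directly.
    for _ in range(max_tries):
        if attempt_rus():
            return True
    return False

def rus_recursive(tries_left: int) -> bool:
    # Controlled recursion: each allowed attempt nests another call, so
    # without true call support the program unrolls and grows with the
    # loop bound.
    if tries_left == 0:
        return False
    return attempt_rus() or rus_recursive(tries_left - 1)
```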
The results were clear-cut: the hard-coded OpenQASM 2.0 loop performed as well as the theoretical prediction, maintaining high-quality results across many loop iterations, as did the natively coded QIR for loop. The two recursive loops saw the quality of their results drop away quickly as the loop limit was raised. But in a head-to-head between hard-coded OpenQASM and QIR – which allows high-level source code in many prominent, familiar languages to be lowered to low-level machine code – QIR won hands-down on practicality.
Martin Roetteler, Director of Quantum Applications at Microsoft, shared: “This was a very exciting exploration of control flow logic on quantum hardware. In seeking to understand the capabilities of QIR to optimize programming structures on real hardware, we were rewarded with a clear answer, and an important demonstration of the capabilities of QIR.”
In follow-up work, the team is now preparing to run a logical magic state protocol on the H2-1 quantum computer with its 32 high-fidelity qubits, hoping to become the first group to achieve logical magic state distillation. The features and fidelity of H2 make it one of the best quantum computers currently capable of attempting such a major milestone on the journey towards fault tolerance, and the current work demonstrates that, with QIR, the necessary control flow logic is now available to achieve it.
The paper discussed in this post was authored by Natalie C. Brown, John P. Campora III, Cassandra Granade, Bettina Heim, Stefan Wernli, Ciaran Ryan-Anderson, Dominic Lucchetti, Adam Paetznick, Martin Roetteler, Krysta Svore and Alex Chernoguzov.
While it sounds like a gadget from Star Trek, teleportation is real – and it is happening at Quantinuum. In a new paper published in Science, our researchers moved a quantum state from one place to another without physically moving it through space – and they accomplished this feat with fault tolerance and excellent fidelity. This is an important milestone for the whole quantum computing community, and the latest example of Quantinuum achieving critical milestones years ahead of expectations.
While it seems exotic, teleportation is a critical piece of technology needed for full-scale fault-tolerant quantum computing, and it is used widely in algorithm and architecture design. Beyond being essential in its own right, teleportation has historically been used to demonstrate a high level of system maturity: the protocol requires multiple qubits, high-fidelity state preparation, single-qubit operations, entangling operations, mid-circuit measurement, and conditional operations, making it an excellent system-level benchmark.
Our team was motivated to do this work by the US government’s Intelligence Advanced Research Projects Activity (IARPA), which set a challenge to perform high-fidelity teleportation with the goal of advancing the state of science in universal fault-tolerant quantum computing. IARPA further specified that the entanglement and teleportation protocols must also maintain fault tolerance, a key property that keeps errors local and correctable.
These ambitious goals required developing highly complex systems, protocols, and other infrastructure to enable exquisite control and operation of quantum-mechanical hardware. We are proud to have accomplished these goals ahead of schedule, demonstrating the flexibility, performance, and power of Quantinuum’s Quantum Charge Coupled Device (QCCD) architecture.
Quantinuum’s demonstration marks the first time that an arbitrary quantum state has been teleported at the logical level (using a quantum error correcting code). This means that instead of teleporting the quantum state of a single physical qubit, we teleported the quantum information encoded in an entangled set of physical qubits, known as a logical qubit. In other words, the collective state of a group of qubits is teleported from one set of physical qubits to another. This is, in a sense, a lot closer to what you see in Star Trek – they teleport the state of a big collection of atoms at once. Except for the small detail of coming up with a pile of matter with which to reconstruct a human body...
This is also the first demonstration of a fully fault-tolerant version of the state teleportation circuit using real-time quantum error correction (QEC): syndromes were measured mid-circuit, decoded, and corrections applied during the protocol. It is critical for computers to be able to catch and correct errors as they happen, and this is not something other groups have managed to do in any robust sense. In addition, our team achieved the result with high fidelity (97.5%±0.2%), providing a powerful demonstration of the quality of our H2 quantum processor, Powered by Honeywell.
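To make the underlying protocol concrete, here is a minimal numpy sketch of the physical-level, three-qubit teleportation circuit: prepare a Bell pair, entangle the message qubit with one half, measure two qubits, and apply feed-forward corrections. The logical demonstration in the paper replaces each qubit below with an error-corrected logical qubit; this sketch simply enumerates all four measurement branches instead of sampling them.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, qubit, n=3):
    """Embed a single-qubit gate on `qubit` of an n-qubit register."""
    mats = [gate if q == qubit else I for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(control, target, n=3):
    """CNOT via the projector decomposition |0><0| (x) I + |1><1| (x) X."""
    P0 = np.diag([1, 0]).astype(complex)
    P1 = np.diag([0, 1]).astype(complex)
    def embed(ctrl_op, tgt_op):
        mats = [I] * n
        mats[control], mats[target] = ctrl_op, tgt_op
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    return embed(P0, I) + embed(P1, X)

a, b = 0.6, 0.8j                       # arbitrary normalized state to teleport
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
psi = np.kron(np.array([a, b]), bell)  # qubit 0 = message, qubits 1,2 = Bell pair

psi = cnot(0, 1) @ psi                 # entangle the message with Alice's half
psi = op(H, 0) @ psi                   # basis change before measurement

for m0 in (0, 1):                      # enumerate Alice's two measurement bits
    for m1 in (0, 1):
        idx = [4 * m0 + 2 * m1 + q2 for q2 in (0, 1)]   # project onto (m0, m1)
        branch = psi[idx] / np.linalg.norm(psi[idx])
        if m1: branch = X @ branch     # feed-forward corrections on Bob's qubit
        if m0: branch = Z @ branch
        assert np.allclose(branch, [a, b])   # |psi> arrives intact every branch
```

The mid-circuit measurement plus conditional X/Z corrections is exactly the feed-forward capability described above as a system-level benchmark.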
Our team also ran several variations of the logical teleportation circuit, using both transversal gates and lattice surgery protocols, thanks to the flexibility of our QCCD architecture. This marks the first demonstration of lattice surgery performed on a QEC code.
Lattice surgery is a strategy for implementing logical gates that requires only 2D nearest-neighbor interactions, making it especially useful for architectures whose qubit locations are fixed, such as superconducting architectures. QCCD and other technologies that do not have fixed qubit positioning might employ this method, another method, or some mixture. We are fortunate that our QCCD architecture allows us to explore the use of different logical gating options so that we can optimize our choices for experimental realities.
While the teleportation demonstration is the big result, sometimes it is the behind-the-scenes technology advancements that make the big differences. The experiments in this paper were designed at the logical level using an internally developed logical-level programming language dubbed Simple Logical Representation (SLR). This is yet another marker of our system’s maturity – we are no longer programming at the physical level but have instead moved up one “layer of abstraction”. Someday, all quantum algorithms will need to run at the logical level with rounds of quantum error correction – markedly different from most present-day experiments, which run at the physical level without quantum error correction. It is also worth noting that these results were generated using the software stack available to any user of Quantinuum’s H-Series quantum computers, and the experiments ran alongside customer jobs – underlining that these results reflect commercial performance, not hero data on a bespoke system.
Ironically, a key element in this work is our ability to move our qubits through space the “normal” way - this capacity gives us all-to-all connectivity, which was essential for some of the QEC protocols used in the complex task of fault-tolerant logical teleportation. We recently demonstrated solutions to the sorting problem and wiring problem in a new 2D grid trap, which will be essential as we scale up our devices.
“How can quantum structures and quantum computers contribute to the effectiveness of AI?”
In previous work we have made notable advances in answering this question, and this article is based on our most recent work in the new papers [arXiv:2406.17583, arXiv:2408.06061], and most notably the experiment in [arXiv:2409.08777].
This article is one of a series that we will be publishing alongside further advances – advances that are accelerated by access to the most powerful quantum computers available.
Large Language Models (LLMs) such as ChatGPT are having an impact on society across many walks of life. However, as users have become more familiar with this new technology, they have also become increasingly aware of deep-seated and systemic problems that come with AI systems built around LLMs.
The primary problem with LLMs is that nobody knows how they work: as inscrutable “black boxes” they aren’t “interpretable”, meaning we can’t reliably or efficiently control or predict their behavior. This is unacceptable in many situations. In addition, modern LLMs are incredibly expensive to build and run, costing serious – and potentially unsustainable – amounts of power to train and use. This is why more and more organizations, governments, and regulators are insisting on solutions.
But how can we find these solutions, when we don’t fully understand what we are dealing with now?1
At Quantinuum, we have been working on natural language processing (NLP) using quantum computers for some time now. We are excited to have recently carried out experiments [arXiv:2409.08777] which demonstrate not only how it is possible to train a model for a quantum computer in a scalable manner, but also how to do this in a way that is interpretable for us. Moreover, we have promising theoretical indications of the usefulness of quantum computers for interpretable NLP [arXiv:2408.06061].
In order to better understand why this could be the case, one needs to understand how meanings compose together throughout a story or narrative. Our work towards capturing this in a new model of language, which we call DisCoCirc, is reported extensively in this previous blog post from 2023.
In new work referred to in this article, we embrace “compositional interpretability” as proposed in [arXiv:2406.17583] as a solution to the problems that plague current AI. In brief, compositional interpretability boils down to being able to assign a human friendly meaning, such as natural language, to the components of a model, and then being able to understand how they fit together2.
A problem currently inherent to quantum machine learning is training at scale. We avoid this by making use of “compositional generalization”: we train small, on classical computers, and then at test time evaluate much larger examples on a quantum computer. There now exist quantum computers that are impossible to simulate classically; to train models for such computers, compositional generalization currently seems to provide the only credible path.
DisCoCirc is a circuit-based model for natural language that turns arbitrary text into “text circuits” [arXiv:1904.03478, arXiv:2301.10595, arXiv:2311.17892]. When we say that arbitrary text becomes text circuits, we mean that lines of text, which live in one dimension, are converted into circuits, which live in two dimensions: one dimension for the entities of the text, and one for the events in time.
To see how that works, consider the following story. In the beginning there is Alex and Beau. Alex meets Beau. Later, Chris shows up, and Beau marries Chris. Alex then kicks Beau.
The content of this story can be represented as the following circuit:
Such a text circuit represents how the ‘actors’ in it interact with each other, and how their states evolve as they do so. Initially, we know nothing about Alex and Beau. Once Alex meets Beau, we know something about their interaction; then Beau marries Chris, and then Alex kicks Beau, so we know quite a bit more about all three – in particular, how they relate to each other.
Let’s now take those circuits to be quantum circuits.
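As a toy illustration, the numpy sketch below allocates one wire per actor and one unitary per event and composes the story above into a single circuit. Both choices are illustrative assumptions – in the trained model the word gates are learned rather than random, and an actor need not be a single qubit.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(dim):
    """QR-based random unitary -- an illustrative stand-in for a trained
    word gate (in DisCoCirc the gates are learned, not random)."""
    m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(m)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def apply_gate(state, gate, wires, n):
    """Apply a 2-qubit `gate` to the given actor wires of an n-wire state."""
    t = state.reshape([2] * n)
    t = np.moveaxis(t, wires, (0, 1))                 # acted-on wires to front
    t = np.tensordot(gate.reshape(2, 2, 2, 2), t, axes=([2, 3], [0, 1]))
    t = np.moveaxis(t, (0, 1), wires)                 # restore wire order
    return t.reshape(-1)

ALEX, BEAU, CHRIS = 0, 1, 2                           # one wire per actor
state = np.zeros(2 ** 3, dtype=complex)
state[0] = 1.0                                        # blank initial states

meets, marries, kicks = (random_unitary(4) for _ in range(3))

state = apply_gate(state, meets,   (ALEX, BEAU),  3)  # Alex meets Beau
state = apply_gate(state, marries, (BEAU, CHRIS), 3)  # Beau marries Chris
state = apply_gate(state, kicks,   (ALEX, BEAU),  3)  # Alex kicks Beau
```

Note that each event only touches the wires of the actors involved, which is exactly the two-dimensional entities-versus-events structure described above.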
In the last section we will elaborate on why this could be a very good choice. For now, it is enough to know that we simply follow the current paradigm of using vectors for meanings, in exactly the same way this works in LLMs. Moreover, if we then also want to faithfully represent the compositional structure in language3, we can rely on theorem 5.49 from our book Picturing Quantum Processes, which can informally be stated as follows:
If the manner in which meanings of words (represented by vectors) compose obeys linguistic structure, then those vectors compose in exactly the same way as quantum systems compose.4
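In symbols – as an informal sketch in our own notation, not a verbatim statement of the theorem – assign each word w_i a vector |w_i⟩ and let the grammar dictate which wires are contracted; the composite meaning is then

```latex
\[
  \llbracket w_1\, w_2 \cdots w_n \rrbracket
  \;=\;
  \underbrace{(\text{cups/caps dictated by the grammar})}_{\text{linguistic structure}}
  \;\circ\;
  \big( |w_1\rangle \otimes |w_2\rangle \otimes \cdots \otimes |w_n\rangle \big),
\]
```

so the joint meaning space is the tensor product of the word spaces – exactly how quantum systems compose.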
In short, a quantum implementation enables us to embrace compositional interpretability, as defined in our recent paper [arXiv:2406.17583].
So, what have we done? And what does it mean?
We implemented a “question-answering” experiment on our Quantinuum quantum computers, for text circuits as described above. We know from our new paper [arXiv:2408.06061] that this is very hard to do on a classical computer: as the texts get bigger, the task quickly becomes unrealistic for any classical computer, however powerful it might be. This is worth emphasizing. The experiment we have completed would scale exponentially using classical computers – to the point where the approach becomes intractable.
The experiment consisted of teaching (or training) the quantum computer to answer questions about a story, where both the story and the question are presented as text circuits. To test our model, we created longer stories in the same style as those used in training and questioned those. In our experiment, the stories were about people moving around, and we questioned the quantum computer about who was moving in the same direction at the end of the story. A harder alternative one could imagine would be a murder mystery story, with the computer asked who the murderer was.
And remember: in our experiment, training consists of assigning quantum states and gates to the words that occur in the text.
The major reason for our excitement is that the training of our circuits enjoys compositional generalization. That is, we can do the training on small-scale ordinary computers, and do the testing – asking the important questions – on quantum computers that can operate in ways not possible classically. Figure 4 shows that, despite the model only being trained on stories with up to 8 actors, test accuracy remains high even for much longer stories involving up to 30 actors.
Training large circuits directly in quantum machine learning leads to difficulties which in many cases undo the potential advantage. Critically, compositional generalization allows us to bypass these issues.
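In code, the scheme looks roughly as follows, reusing `apply_gate` from the earlier sketch. The event format, the answer-qubit convention, and the training note are our illustrative assumptions, not the paper’s exact pipeline.

```python
# Reusing numpy and apply_gate from the earlier sketch.  The crux of
# compositional generalization: the *same* per-word gates appear in every
# story circuit, so they can be fitted on short, classically simulable
# stories and then reused, unchanged, in circuits for much longer stories
# run on a quantum computer.

def story_circuit(events, gates, n_actors):
    """Compose a story given as (word, actor_i, actor_j) events."""
    state = np.zeros(2 ** n_actors, dtype=complex)
    state[0] = 1.0
    for word, i, j in events:
        state = apply_gate(state, gates[word], (i, j), n_actors)
    return state

# Training (sketch): tune the entries of `gates` so that measuring a
# designated answer qubit of story_circuit(...) reproduces the labels on
# stories with up to 8 actors.  Testing: call story_circuit with the
# same gates on stories with up to 30 actors -- no new parameters.
```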
We can compare the results of our experiment on a quantum computer to the performance of a classical LLM, ChatGPT (GPT-4), asked the same questions.
What we are considering here is a story about a collection of characters who walk in a number of different directions and sometimes follow each other. These are just some initial test examples, but they do show that this kind of reasoning is not particularly easy for LLMs.
The input to ChatGPT was:
What we got from ChatGPT:
Can you see where ChatGPT went wrong?
ChatGPT’s accuracy oscillated around 50%, equivalent to random guessing; our text circuits consistently outperformed it on these tasks. Future work in this area would involve looking at prompt engineering – for example, how the phrasing of the instructions affects the output, and therefore the overall score.
Of course, we note that ChatGPT and other LLMs will issue new versions that may or may not be marginally better at question-answering tasks, and we also note that our own work may become far more effective as quantum computers rapidly become more powerful.
We have now turned our attention to work showing that using vectors to represent meaning, while requiring compositional interpretability for natural language, takes us natively into the quantum formalism. This does not mean that no efficient classical method exists for specific tasks, and it may be hard to prove traditional hardness results whenever machine learning is involved. This may be something we have to come to terms with, just as in classical machine learning.
At Quantinuum we possess the most powerful quantum computers currently available, and our recently published roadmap will deliver still more computationally powerful machines in the short and medium term, as we extend our lead and push towards universal, fault-tolerant quantum computers by the end of the decade. We expect to show even better (and larger-scale) results when implementing our work on those machines. In short, we foresee a period of rapid innovation as powerful quantum computers that cannot be classically simulated become more readily available. This will likely be disruptive, as more and more use cases, including ones we might not currently be thinking about, come into play.
Interestingly and intriguingly, we are also pioneering the use of powerful quantum computers in a hybrid system that has been described as a ‘quantum supercomputer’, in which quantum computers, HPC and AI work together in an integrated fashion. We look forward to using these systems to advance our work in language processing and to help solve the problems with LLMs that we highlighted at the start of this article.
1 And where do we go next, when we don’t even understand what we are dealing with now? On previous occasions in the history of science and technology, when efficient models without a clear interpretation have been developed, such as the Babylonian lunar theory or Ptolemy’s model of epicycles, these initially highly successful technologies vanished, making way for something else.
2 Note that our conception of compositionality is more general than the usual one adopted in linguistics, which is due to Frege. A discussion can be found in [arXiv:2110.05327].
3 For example, using pregroups as the linguistic structure here; these correspond to the cups and caps of PQP.
4 That is, using the tensor product of the corresponding vector spaces.
By Dr. Harry Buhrman, Chief Scientist for Algorithms and Innovation, and Dr. Chris Langer, Fellow
This week, we confirm what has been implied by the rapid pace of our recent technical progress as we reveal a major acceleration in our hardware roadmap. By the end of the decade, our accelerated roadmap will deliver a fully fault-tolerant and universal quantum computer capable of executing millions of operations on hundreds of logical qubits.
The next major milestone on our accelerated roadmap is Quantinuum Helios™, Powered by Honeywell, a device that will definitively push beyond classical capabilities in 2025. That sets us on a path to our fifth-generation system, Quantinuum Apollo™, a machine that delivers scientific advantage and a commercial tipping point this decade.
We are committed to continually advancing the capabilities of our hardware over prior generations, and Apollo makes good on that promise.
By leveraging our all-to-all connectivity and low error rates, we expect to enjoy significant efficiency gains in terms of fault-tolerance, including single-shot error correction (which saves time) and high-rate and high-distance Quantum Error Correction (QEC) codes (which mean more logical qubits, with stronger error correction capabilities, can be made from a smaller number of physical qubits).
Studies of several efficient QEC codes already suggest we can enjoy logical error rates much lower than our target of 10⁻⁶ – we may even be able to reach 10⁻¹⁰ – which enables exploration of even more complex problems of both industrial and scientific interest.
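For a rough feel of how code distance buys down logical error, here is the textbook scaling heuristic in Python. The threshold p_th and prefactor A are illustrative placeholders, not numbers from our roadmap or from any particular code:

```python
def logical_error_rate(p_phys, d, p_th=1e-2, A=0.1):
    """Heuristic p_L ~ A * (p_phys / p_th) ** ((d + 1) // 2) for a
    distance-d code.  A and p_th are illustrative assumptions; real
    values depend on the code family and the noise model."""
    return A * (p_phys / p_th) ** ((d + 1) // 2)

for d in (3, 5, 7, 9, 11):
    print(f"d={d:2d}: p_L ~ {logical_error_rate(5e-4, d):.1e}")
# d= 3: p_L ~ 2.5e-04  ...  d=11: p_L ~ 1.6e-09  (illustrative numbers only)
```

Under these assumed numbers, modest increases in distance sweep the logical error through the 10⁻⁶ to 10⁻¹⁰ range, which is the qualitative point: suppression is exponential in distance once physical errors sit below threshold.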
Error correcting code exploration is only just beginning – we anticipate discoveries of even more efficient codes. As new codes are developed, Apollo will be able to accommodate them, thanks to our flexible high-fidelity architecture. The bottom line is that Apollo promises fault-tolerant quantum advantage sooner, with fewer resources.
Like all our computers, Apollo is based on the quantum charge-coupled device (QCCD) architecture. Here, each qubit’s information is stored in the atomic states of a single ion. Laser beams are applied to the qubits to perform operations such as gates, initialization, and measurement. The lasers are applied to individual qubits or co-located qubit pairs in dedicated operation zones. Qubits are held in place using electromagnetic fields generated by our ion trap chip. We move the qubits around in space by dynamically changing the voltages applied to the chip. Through an alternating sequence of qubit rearrangements and quantum operations, arbitrary circuits with arbitrary connectivity can be executed.
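That execution model can be caricatured in a few lines of Python. Everything below – the zone count, the gate label, the example circuit – is hypothetical, and no physics is simulated; the sketch only shows how alternating transport and gating yields all-to-all connectivity:

```python
def run_qccd(layers, n_zones=2):
    """Caricature of QCCD execution: alternate transport and gate phases.
    `layers` is a list of gate layers; each layer is a list of qubit
    pairs, at most one pair per operation zone (hypothetical model)."""
    for t, layer in enumerate(layers):
        assert len(layer) <= n_zones, "more pairs than operation zones"
        moves = ", ".join(f"({a},{b})->zone{z}" for z, (a, b) in enumerate(layer))
        gates = ", ".join(f"ZZ({a},{b})" for a, b in layer)
        print(f"step {t}: transport {moves}; then apply {gates}")

# Any pair can be brought together by transport, so connectivity is all-to-all:
run_qccd([[(0, 3), (1, 2)], [(0, 2)]])
```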
The ion trap chip in Apollo will host a 2D array of trapping locations. It will be fabricated using standard CMOS processing technology and controlled using standard CMOS electronics. The 2D grid architecture enables fast and scalable qubit rearrangement and quantum operations – a critical competitive advantage. The Apollo architecture is scalable to the significantly larger systems we plan to deliver in the next decade.
Apollo’s scaling of very stable physical qubits and native high-fidelity gates, together with our advanced error-correcting and fault-tolerance techniques, will establish a quantum computer that can perform tasks that do not run (efficiently) on any classical computer. We already had a first glimpse of this in our recent work sampling the output of random quantum circuits on H2, where we performed 100x better than competitors on the same task while using 30,000x less power than a classical supercomputer. But with Apollo we will travel into uncharted territory.
The flexibility to use either thousands of qubits for shorter computations (up to 10k gates) or hundreds of qubits for longer computations (from 1 million to 1 billion gates) makes Apollo a versatile machine with unprecedented quantum computational power. We expect the first application areas to be in scientific discovery, particularly the simulation of quantum systems. While this may sound academic, this is how all new material discovery begins, and its value should not be understated. This era will lead to discoveries in materials science, high-temperature superconductivity, complex magnetic systems, phase transitions, and high-energy physics, among other things.
In general, Apollo will advance the field of physics to new heights while we start to see the first glimmers of distinct progress in chemistry and biology. For some of these applications, users will employ Apollo in a mode where it offers thousands of qubits for relatively short computations; e.g. exploring the magnetism of materials. At other times, users may want to employ significantly longer computations for applications like chemistry or topological data analysis.
But there is more on the horizon. Carefully crafted AI models that interact seamlessly with Apollo will be able to squeeze all the “quantum juice” out and generate data that was hitherto unavailable to mankind. We anticipate using this data to further the field of AI itself, as it can be used as training data.
The era of scientific (quantum) discovery and exploration will inevitably lead to commercial value. Apollo will be the centerpiece of this commercial tipping point where use-cases will build on the value of scientific discovery and support highly innovative commercially viable products.
Very interestingly, we will uncover applications that we are currently unaware of. As is always the case with disruptive new technology, Apollo will run currently unknown use-cases and applications that will make perfect sense once we see them. We are eager to co-develop these with our customers in our unique co-creation program.
Today, System Model H2 is our most advanced commercial quantum computer, providing 56 physical qubits with physical two-qubit gate errors below 10⁻³. System Model H2, like all our systems, is based on the QCCD architecture.
Starting from where we are today, our roadmap progresses through two additional machines prior to Apollo. The Quantinuum Helios™ system, which we are releasing in 2025, will offer around 100 physical qubits with two-qubit gate errors less than 5×10⁻⁴. In addition to expanded qubit count and better errors, Helios makes two departures from H2. First, Helios will use ¹³⁷Ba⁺ qubits in contrast to the ¹⁷¹Yb⁺ qubits used in our H1 and H2 systems. This change enables lower two-qubit gate errors and less complex laser systems with lower cost. Second, for the first time in a commercial system, Helios will use junction-based qubit routing. The result will be a “twice-as-good" system: Helios will offer roughly 2x more qubits with 2x lower two-qubit gate errors while operating more than 2x faster than our 56-qubit H2 system.
After Helios we will introduce Quantinuum Sol™, our first commercially available 2D-grid-based quantum computer. Sol will offer hundreds of physical qubits with two-qubit gate errors below 2×10⁻⁴, operating approximately 2x faster than Helios. As a fully 2D-grid architecture, Sol is the scalability launching point for the significant size increase planned for Apollo.
Thanks to Sol’s low error rates, users will be able to execute circuits with up to 10,000 quantum operations. The usefulness of Helios and Sol may be extended with a combination of quantum error detection (QED) and quantum error mitigation (QEM). For example, the [[k+2, k, 2]] Iceberg code is a lightweight QED code that encodes k logical qubits into k+2 physical qubits, using only 2 additional ancilla qubits. This low-overhead code is well-suited to Helios and Sol because it offers a non-Clifford, variable-angle entangling ZZ-gate directly, without the overhead of magic state distillation. The errors Iceberg fails to detect are already ~10x lower than our physical errors, and by applying a modest run-time overhead to discard detected failures, the effective error in the computation can be further reduced. Combining QED with QEM, a ~10x reduction in the effective error may be possible while keeping run-time overhead modest and below that of full-blown QEC.
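The overhead bookkeeping is simple enough to write down directly; the helper below just restates the [[k+2, k, 2]] parameters and the two syndrome ancillas described above:

```python
def iceberg_overhead(k):
    """[[n, k, d]] = [[k+2, k, 2]] Iceberg code: k logical qubits in
    k+2 data qubits, plus 2 ancillas for the detection checks."""
    n_data = k + 2
    n_total = n_data + 2
    return n_data, n_total, k / n_total

for k in (4, 10, 30):
    n_data, n_total, rate = iceberg_overhead(k)
    print(f"k={k:2d}: {n_data} data + 2 ancilla = {n_total} physical "
          f"(rate {rate:.2f})")
# k= 4:  6 data + 2 ancilla =  8 physical (rate 0.50)
# k=30: 32 data + 2 ancilla = 34 physical (rate 0.88)
```

The rate approaches 1 as k grows, which is what makes the code such a low-overhead option compared with full distance-scaling QEC (distance 2 means errors are detected and discarded, not corrected).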
Our new roadmap is an acceleration over what we were previously planning. The benefits of this are obvious: Apollo brings the commercial tipping point sooner than we previously thought possible. This acceleration is made possible by a set of recent breakthroughs.
First, we solved the “wiring problem”: we demonstrated that trap-chip control is scalable using our novel center-to-left-right (C2LR) protocol, broadcasting shared control signals to multiple electrodes. This demonstration of qubit rearrangement in a 2D geometry marks the most advanced ion trap built, containing approximately 40 junctions. The trap was deployed to 3 different testbeds in 2 different cities, operated with 2 different dual-ion-species combinations, and all 3 cases were a success. These demonstrations showed that the footprint of the most complex parts of trap control stays constant as the number of qubits scales up, giving us confidence that Sol, with approximately 100 junctions, will be a success.
Second, we continue to reduce our two-qubit physical gate errors. Today, H1 and H2 have two-qubit gate errors below 1×10⁻³ across all pairs of qubits. This is the best in the industry and a key ingredient in our record >2 million quantum volume. Our systems are the most benchmarked in the industry, and we stand by our data, making it all publicly available. Recently, we observed an 8×10⁻⁴ two-qubit gate error in our Helios development test stand in ¹³⁷Ba⁺, and we have seen even better error rates in other testbeds. We are well on the path to meeting the 5×10⁻⁴ spec in Helios next year.
Third, the all-to-all connectivity offered by our systems enables highly efficient QEC codes. In Microsoft’s recent demonstration, our H2 system with 56 physical qubits was used to generate 12 logical qubits at distance 4. This work demonstrated several experiments, including repeated rounds of error correction where the error in the final result was ~10x lower than the physical circuit baseline.
In conclusion, through a combination of advances in hardware readiness and QEC, we have line-of-sight to Apollo by the end of the decade: a fully fault-tolerant, quantum-advantaged machine. This will be a commercial tipping point, ushering in an era of scientific discovery in physics, materials, chemistry, and more. Along the way, users will have the opportunity to discover new enabling use cases through quantum error detection and mitigation on Helios and Sol.
Quantinuum has the best quantum computers today and is on the path to offering fault-tolerant useful quantum computation by the end of the decade.