Blog

Discover how we are pushing the boundaries in the world of quantum computing

November 5, 2025
Introducing Helios: The Most Accurate Quantum Computer in the World
Figure 1: A rendering of the Quantinuum Helios system deployed at a customer site. 

We’re pleased to introduce Helios, a technological marvel redefining the possible. 

Building on its predecessor H2, which has already achieved quantum advantage, Helios nearly doubles the qubit count and surpasses H2’s industry-leading fidelity, pushing further into the quantum advantage regime than any system before it. With unprecedented capability across its full stack, Helios is the most powerful quantum computer in the world. 

“Helios is a true marvel—a seamless fusion of hardware and software, creating a platform for discovery unlike any other.” – Dr. Rajeeb Hazra, CEO 

Helios’ groundbreaking design and advanced software stack bring quantum programming closer than ever to the ease and flexibility of classical computing—positioning Helios to accelerate commercial adoption. Even before its public debut, Helios had already demonstrated its capabilities as the world’s first enterprise-grade quantum computer. During a two-month early access program, select partners including SoftBank Corp. and JPMorgan Chase conducted commercially relevant research. We also leveraged Helios to perform large-scale simulations in high-temperature superconductivity and quantum magnetism—both with clear pathways to real-world industry applications.

Helios is now available to all customers through our cloud service and on-premise offering, including an option to integrate with NVIDIA GB200 for applications targeting specific end markets.     

A Stellar Quantum Computer 
“You would need to harvest every star in the universe to power a classical machine that could do the same calculations we did with Helios."
- Dr. Anthony Ransford, Helios Lead Architect
Figure 2: Random Circuit Sampling (RCS) results on Helios. Running the same calculation classically in the same amount of time would require the power of all the stars in the visible universe.

As we detailed in a benchmarking paper, Helios sets a new standard for quantum computing performance with the highest fidelity ever released to the market. It features 98 fully connected physical qubits with single-qubit gate fidelity of 99.9975% and two-qubit gate fidelity of 99.921% across all qubit pairs—making it the most accurate commercial quantum computer in the world.  

Our fidelity shines in system-level benchmarks, such as Random Circuit Sampling (RCS), famously used by Google to demonstrate quantum supremacy when it performed an RCS task that would take a classical computer “10 septillion years” to replicate. Now, RCS serves as both a benchmark and the minimum standard for serious competitors in the market. Frequently missed in this conversation, however, is the importance of fidelity, or accuracy. That's why, when benchmarking Helios using RCS, we report the fidelity achieved by Helios on circuits of varying complexity (with complexity quantified by power requirements for classical simulation).
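To make the scoring concrete: RCS results are typically graded with the linear cross-entropy benchmark, F = 2^n · ⟨p_ideal(x)⟩ − 1, averaged over the bitstrings the device actually produced, so a fully depolarized sampler scores near 0 while higher-fidelity samplers score higher. A toy sketch (our illustration, not Quantinuum’s benchmarking code):

```python
import random

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmark: F = 2^n * <p_ideal(x)> - 1,
    averaged over the bitstrings actually sampled from the device."""
    mean_p = sum(ideal_probs[x] for x in samples) / len(samples)
    return (2 ** n_qubits) * mean_p - 1

# Toy 2-qubit example with a made-up "ideal" output distribution.
probs = {"00": 0.40, "01": 0.10, "10": 0.15, "11": 0.35}

# A noiseless device samples from the ideal distribution and scores well
# above zero; a fully depolarized one samples uniformly and scores near 0.
ideal_samples = random.choices(list(probs), weights=list(probs.values()), k=100_000)
noise_samples = random.choices(list(probs), k=100_000)

print(round(linear_xeb(probs, ideal_samples, 2), 2))
print(round(linear_xeb(probs, noise_samples, 2), 2))
```

For Porter-Thomas-distributed outputs of deep random circuits, the ideal-sampler score approaches 1, which is why XEB doubles as a fidelity estimate.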

Our results show a classical supercomputer would require more power than the Sun—or, in fact, the combined power of all stars in the visible universe—to complete the same task in the same amount of time. In contrast, Helios achieved it using roughly the power of a single data center rack. 

Like its predecessors, H1 and H2, Helios is designed to improve fidelity and overall system performance over time while sustaining competitive leadership through the launch of its successor.

Qubits at a Crossroads
Figure 3: The Helios chip, which generates tiny electromagnetic fields to trap single atomic ions hovering above the chip, which are then used for computation. The Helios chip contains the world’s first commercial ion junction – enabling a huge jump in architectural design and opening the door to true scaling.
"When I first saw the rotatable ion storage ring with a junction and gating legs sketched on a napkin, I loved the idea for its simplicity and efficiency. Seeing it finally realized after all of the team’s hard work has been truly incredible." 
- Dr. John Gaebler, Fellow and Chief Scientist, Quantinuum

The Helios ion trap uses tiny currents to generate electromagnetic fields that hold single atomic ions (qubits) hovering above the trap for computation. We introduced a first-of-its-kind “junction”, which acts like a traffic intersection for qubits, enabling efficient routing and improved reliability. Not only is this the first commercial implementation of this engineering triumph; it also allows our QCCD (quantum charge-coupled device) architecture to scale, with future systems featuring hundreds of junctions arranged like a city street grid.

Illustration: The Helios QPU. Ions rotate through the ring storage to the cache and logic zones for gating. Image adapted from benchmarking paper.

Whereas predecessor systems routed qubits using “physical swaps,” requiring sequential sorting, cooling, and gating that prevented parallel operations, the Helios QPU instead resembles a classical architecture with dedicated memory, cache, and computational zones. Like a spinning hard drive, the Helios QPU rotates qubits through ring storage (memory), passes them through the junction into the cache, moves them to logic zones for gating, and moves them to leg storage while the next batch is processed. Sorting can now be done in parallel with cooling operations, resulting in a processor that is faster and less error-prone. This parallelism will become a hallmark of Quantinuum’s future generations, enabling faster operating speeds.

Animation: This triumph of engineering demonstrates exquisite control over some of nature’s smallest particles in a way the world has never seen; one colleague likened the ions to a “little marching band.”

Quantinuum’s QCCD provides full all-to-all connectivity, giving the Helios QPU significant advantages over “fixed qubit” architectures, such as those used in superconducting systems. Its ability to physically move qubits around and entangle any qubit with any other qubit enables algorithms and error-correcting codes that are functionally impossible for fixed qubit architectures. 

Image: A real image of 98 individual barium atoms (atomic ions) used for computation inside Quantinuum’s Helios quantum computer.

We made another “tiny” but significant change: we switched our qubits from ytterbium to barium. Whereas ytterbium largely relied on ultraviolet lasers that are expensive and hard on other components, barium can be manipulated with lasers in the visible part of the spectrum, where mature industrial technology exists, providing a more affordable, reliable and scalable commercial solution.

Barium also naturally allows the quantum computer to detect and remove a certain type of error, known as leakage, at the atomic level. By addressing this error directly, programmers can enhance the performance of their computation.

Delivered on Time – in Real Time

As announced earlier this year, Helios launched with a completely new stack equipped with a new software environment that makes quantum programming feel as intuitive as classical development. 

Our new stack also features a real-time engine that massively improves our capability. With a real-time control system, we are evolving from static, pre-planned circuits to dynamic quantum programs that respond to results on the fly. We can now, for the first time on a quantum computer, interleave GPU-accelerated classical and quantum computations in a single program. 

Our real-time engine also gives us dynamic transport: routing qubits as the moment demands reduces time to solution and diminishes the impact of memory errors.

Programmers can now use our new quantum programming language, Guppy, to write dynamic circuits that were previously impossible. By combining Guppy with our real-time engine, developers can leverage arbitrary control flow driven by quantum measurements, as well as full classical computation—including loops, higher-order functions, early exits, and dynamic qubit allocation. Far from being mere conveniences, these capabilities are essential stepping stones toward achieving fault-tolerant quantum computing at scale—putting us decisively ahead of the competition.
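The flavor of measurement-driven control flow can be sketched with a toy repeat-until-success loop in plain Python (this is our hypothetical stand-in, not Guppy syntax or Quantinuum’s stack): both the loop depth and the number of qubits allocated are decided at run time by measurement outcomes, so no static, pre-planned circuit could express it.

```python
import random

def fresh_qubit():
    """Toy stand-in for dynamically allocating a qubit and applying a
    Hadamard: returns the probability of measuring 1 (an ideal 50/50)."""
    return 0.5

def measure(p_one):
    """Toy mid-circuit measurement: returns 0 or 1."""
    return 1 if random.random() < p_one else 0

def repeat_until_success(max_tries=100):
    """Measurement-driven control flow: keep allocating and measuring
    fresh qubits until the desired outcome (1) is seen. Circuit depth and
    qubit count are determined on the fly, not compiled in advance."""
    for attempt in range(1, max_tries + 1):
        q = fresh_qubit()        # dynamic qubit allocation
        if measure(q) == 1:      # branch on a measurement result
            return attempt
    return None

print(repeat_until_success())
```

On real hardware this pattern underlies primitives like magic-state distillation, where you retry until a measurement heralds success.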

Fully compatible with industry standards like QIR and tools such as NVIDIA CUDA-Q, Helios bridges classical and quantum computing more seamlessly than ever, making hybrid quantum-classical development simple, natural, and accessible, and establishing Helios as the most programmable, general-purpose quantum computer ever built.  

The Most Logical Path to Fault Tolerance

While everyone else is promising fault-tolerance, we’re delivering it. We are the only company to demonstrate a fully universal fault-tolerant gate set, we’ve demonstrated more codes than anyone else, and our logical fidelities are the best in class.

Now, with 98 physical qubits, we’ve been able to make 94 logical qubits, fully entangled in one of the largest GHZ states ever recorded. We did this with better than break-even fidelity, meaning they outperform physical qubits running the same algorithm. Built on our Iceberg code, published last year in Nature Physics, these logical qubits achieve the industry’s highest encoding efficiency, needing only two ancilla qubits per code block, or roughly a 1:1 physical-to-logical qubit ratio.
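As a quick sanity check on those numbers: the Iceberg code is a [[k+2, k, 2]] construction, so k logical qubits live on k+2 data qubits, with two ancillas per code block for checks. A few lines of Python (our arithmetic, assuming that parameterization) reproduce the 94-from-98 figure:

```python
def iceberg_physical(k):
    """Physical qubits needed for k logical qubits under the [[k+2, k, 2]]
    Iceberg code: k + 2 data qubits plus 2 ancilla qubits per code block."""
    return (k + 2) + 2

logical = 94
physical = iceberg_physical(logical)
print(physical)                      # 98 physical qubits
print(round(logical / physical, 2))  # ~0.96, i.e. roughly 1:1
```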

With 50 error-detected logical qubits, Helios achieved better than break-even performance, running the largest encoded simulation of quantum magnetism to date—an exceptional example of how users can leverage efficient encodings. This range and flexibility let users tailor the encoding rate to their application: fewer logical qubits deliver higher fidelity for less complex tasks, while larger sets enable more complex simulations.

Helios also produced 48 fully error-corrected logical qubits at a remarkable 2:1 encoding rate, a ratio thought impossible just a few years ago. This high encoding rate stands in stark contrast to other notable demonstrations from industry peers; one prominent demonstration would need a whopping 4,800 physical qubits to make 48 logical qubits. Our 2:1 encoding rate was achieved through a clever technique called code concatenation, a breakthrough that supports single-shot error correction, transversal logic, and full parallelization—all at 99.99% state preparation and measurement fidelity. 

To extend this performance at scale, all future Quantinuum systems—starting with Helios—will integrate real-time decoding using NVIDIA Grace Hopper GPUs, treating decoding as a dynamic computational process rather than a static lookup. Errors can be corrected as computations run without slowing the logical clock rate. Combined with Guppy, NVIDIA CUDA-Q, and NVQLink, this infrastructure forms the foundation for fault-tolerant, real-time quantum computation, delivering immediate quantum advantage in the near term and a clear path to scalable error-corrected computing. 

We remain the only company to perform a fully universal fault-tolerant gate set, with more error-correcting codes and higher logical fidelities than any other company.

Helios is ready to drive practical, commercial quantum applications across industries. Its unprecedented fidelity, scalability, and programmability give users the tools to tackle problems that were previously out of reach. This is just the beginning, and we look forward to seeing what users and companies will achieve with it. 

Read the Helios data sheet

Dive deeper into Helios performance specs

October 30, 2025
Scalable Quantum Error Detection

Typically, Quantum Error Detection (QED) is viewed as a short-term solution—a non-scalable, stop-gap until full fault tolerance is achieved at scale.

That’s just changed, thanks to a serendipitous discovery made by our team. Now, QED can be used in a much wider context than previously thought. Our team made this discovery while studying the contact process, which describes things like how diseases spread or how water permeates porous materials. In particular, our team was studying the quantum contact process (QCP), a problem they had tackled before, which helps physicists understand things like phase transitions. In the process (pun intended), they came across what senior advanced physicist, Eli Chertkov, described as “a surprising result.”

While examining the problem, the team realized that they could convert detected errors due to noisy hardware into random resets, a key part of the QCP, thus avoiding the exponentially costly overhead of post-selection normally expected in QED.

To understand this better, the team developed a new protocol in which the encoded, or logical, quantum circuit adapts to the noise generated by the quantum computer. They quickly realized that this method could be used to explore other classes of random circuits similar to the ones they were already studying.

The team put it all together on System Model H2 to run a complex simulation, and were surprised to find that they were able to achieve near break-even results, where the logically encoded circuit performed as well as its physical analog, thanks to their clever application of QED. Ultimately, this new protocol will allow QED codes to be used in a scalable way, saving considerable computational resources compared to full quantum error correction (QEC).

Researchers at the crossroads of quantum information, quantum simulation, and many-body physics will take interest in this protocol and use it as a springboard for inventing new use cases for QED.

Stay tuned for more; our team always has new tricks up its sleeve.

Learn more about System Model H2 with this video:

October 23, 2025
Mapping the Hunt for Quantum Advantage

By Konstantinos Meichanetzidis

When will quantum computers outperform classical ones?

This question has hovered over the field for decades, shaping billion-dollar investments and driving scientific debate.

The question has more meaning in context, as the answer depends on the problem at hand. We already have estimates of the quantum computing resources needed for Shor’s algorithm, which has a superpolynomial advantage for integer factoring over the best-known classical methods, threatening cryptographic protocols. Quantum simulation allows one to glean insights into exotic materials and chemical processes that classical machines struggle to capture, especially when strong correlations are present. But even within these examples, estimates change surprisingly often, carving years off expected timelines. And outside these famous cases, the map to quantum advantage is surprisingly hazy.

Researchers at Quantinuum have taken a fresh step toward drawing this map. In a new theoretical framework, Harry Buhrman, Niklas Galke, and Konstantinos Meichanetzidis introduce the concept of “queasy instances” (quantum easy) – problem instances that are comparatively easy for quantum computers but appear difficult for classical ones.

From Problem Classes to Problem Instances

Traditionally, computer scientists classify problems according to their worst-case difficulty. Consider the problem of Boolean satisfiability, or SAT, where one is given a set of variables (each can be assigned a 0 or a 1) and a set of constraints and must decide whether there exists a variable assignment that satisfies all the constraints. SAT is a canonical NP-complete problem, and so in the worst case, both classical and quantum algorithms are expected to perform badly, which means that the runtime scales exponentially with the number of variables. On the other hand, factoring is believed to be easier for quantum computers than for classical ones. But real-world computing doesn’t deal only in worst cases. Some instances of SAT are trivial; others are nightmares. The same is true for optimization problems in finance, chemistry, or logistics. What if quantum computers have an advantage not across all instances, but only for specific “pockets” of hard instances? This could be very valuable, but worst-case analysis is oblivious to this and declares that there is no quantum advantage.

To make that idea precise, the researchers turned to a tool from theoretical computer science: Kolmogorov complexity. This is a way of measuring how “regular” a string of bits is, based on the length of the shortest program that generates it. A simple string like 0000000000 can be described by a tiny program (“print ten zeros”), while the description of a program that generates a random string exhibiting no pattern is as long as the string itself. From there, the notion of instance complexity was developed: instead of asking “how hard is it to describe this string?”, we ask “how hard is it to solve this particular problem instance (represented by a string)?” For a given SAT formula, for example, its polynomial-time instance complexity is the size of the smallest program that runs in polynomial time and decides whether the formula is satisfiable. This smallest program must be consistently answering all other instances, and it is also allowed to declare “I don’t know”.
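Kolmogorov complexity itself is uncomputable, but a compressor gives a crude upper bound on description length, which is enough to illustrate the regular-versus-random contrast (a standard proxy, not part of the paper):

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: the size in bytes of a
    compressed description of the data. True Kolmogorov complexity is
    uncomputable; compression only bounds it from above."""
    return len(zlib.compress(data))

regular = b"0" * 1000                                # "print 1000 zeros"
random.seed(0)
irregular = bytes(random.randrange(256) for _ in range(1000))  # patternless

print(description_length(regular))    # tiny: the pattern compresses away
print(description_length(irregular))  # roughly as long as the data itself
```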

In their new work, the team extends this idea into the quantum realm by defining polynomial-time quantum instance complexity as the size of the shortest quantum program that solves a given instance and runs in polynomial time. This makes it possible to directly compare quantum and classical effort, in terms of program description length, on the very same problem instance. If the quantum description is significantly shorter than the classical one, that problem instance is one the researchers call “queasy”: quantum-easy and classically hard. These queasy instances are the precise places where quantum computers offer a provable advantage – and one that may be overlooked under a worst-case analysis.

Why “Queasy”?

The playful name captures the imbalance between classical and quantum effort. A queasy instance is one that makes classical algorithms struggle: the shortest efficient programs that decide it are long and unwieldy, while a quantum computer can handle the same instance with a much simpler, shorter program. In other words, these instances make classical computers “queasy,” while quantum ones find them easy. The key point of these definitions lies in demonstrating that they yield reasonable results for well-known optimisation problems.

By carefully analysing a mapping from the problem of integer factoring to SAT (which is possible because factoring is inside NP and SAT is NP-complete), the researchers prove that there exist infinitely many queasy SAT instances. SAT is one of the most central and well-studied problems in computer science, with numerous real-world applications. The significant realisation that this theoretical framework highlights is that SAT is not expected to yield a blanket quantum advantage, but within it lie islands of queasiness – special cases where quantum algorithms decisively win.

Algorithmic Utility

Finding a queasy instance is exciting in itself, but there is more to this story. Surprisingly, within the new framework it is demonstrated that when a quantum algorithm solves a queasy instance, it does much more than solve that single case. Because the program that solves it is so compact, the same program can provably solve an exponentially large set of other instances, as well. Interestingly, the size of this set depends exponentially on the queasiness of the instance!

Think of it like discovering a special shortcut through a maze. Once you’ve found the trick, it doesn’t just solve that one path, but reveals a pattern that helps you solve many other similarly built mazes, too (even if not optimally). This property is called algorithmic utility, and it means that queasy instances are not isolated curiosities. Each one can open a doorway to a whole corridor with other doors, behind which quantum advantage might lie.

A North Star for the Field

Queasy instances are more than a mathematical curiosity; this is a new framework that provides a language for quantum advantage. Even though the quantities defined in the paper are theoretical, involving Turing machines and viewing programs as abstract bitstrings, they can be approximated in practice by taking an experimental and engineering approach. This work serves as a foundation for pursuing quantum advantage by targeting problem instances and proving that in principle this can be a fruitful endeavour.

The researchers see a parallel with the rise of machine learning. The idea of neural networks existed for decades, along with small-scale analogue and digital implementations, but only when GPUs enabled large-scale trial and error did they explode into practical use. Quantum computing, they suggest, is on the cusp of its own heuristic era. “Quristics” will be prominent in finding queasy instances: instances whose structure classical methods struggle with but quantum algorithms can exploit, eventually arriving at solutions to typical real-world problems. After all, quantum computing is well-suited for small-data, big-compute problems, and the framework quantifies exactly that: instance complexity captures both an instance’s size and the compute required to solve it.

Most importantly, queasy instances shift the conversation. Instead of asking the broad question of when quantum computers will surpass classical ones, we can now rigorously ask where they do. The queasy framework provides a language and a compass for navigating the rugged and jagged computational landscape, pointing researchers, engineers, and industries toward quantum advantage.

September 15, 2025
Quantum World Congress 2025

From September 16th – 18th, Quantum World Congress (QWC) brought together visionaries, policymakers, researchers, investors, and students from across the globe to discuss the future of quantum computing in Tysons, Virginia.

Quantinuum is forging the path to universal, fully fault-tolerant quantum computing with our integrated full stack. With our quantum experts on site, we showcased the latest on Quantinuum Systems, the world’s highest-performing commercially available quantum computers; our new software stack, featuring the key additions of Guppy and Selene; our path to error correction; and more.

Highlights from QWC

Dr. Patty Lee Named the Industry Pioneer in Quantum

The Quantum Leadership Awards celebrate visionaries transforming quantum science into global impact. This year at QWC, Dr. Patty Lee, our Chief Scientist for Hardware Technology Development, was named the Industry Pioneer in Quantum! This honor celebrates her more than two decades of leadership in quantum computing and her pivotal role advancing the world’s leading trapped-ion systems. Watch the Award Ceremony here.

Keynote with Quantinuum's CEO, Dr. Rajeeb Hazra

At QWC 2024, Quantinuum’s President & CEO, Dr. Rajeeb “Raj” Hazra, took the stage to showcase our commitment to advancing quantum technologies through the unveiling of our roadmap to universal, fully fault-tolerant quantum computing by the end of this decade. This year at QWC 2025, Raj shared the progress we’ve made over the last year in advancing quantum computing on both commercial and technical fronts and exciting insights on what’s to come from Quantinuum. Access the full session here.

Panel Session: Policy Priorities for Responsible Quantum and AI

As part of the Track Sessions on Government & Security, Quantinuum’s Director of Government Relations, Ryan McKenney, discussed “Policy Priorities for Responsible Quantum and AI” with Jim Cook from Actions to Impact Strategies and Paul Stimers from Quantum Industry Coalition.

Fireside Chat: Establishing a Pro-Innovation Regulatory Framework

During the Track Session on Industry Advancement, Quantinuum’s Chief Legal Officer, Kaniah Konkoly-Thege, and Director of Government Relations, Ryan McKenney, discussed the importance of “Establishing a Pro-Innovation Regulatory Framework”.

September 15, 2025
Quantum gravity in the lab

In the world of physics, ideas can lie dormant for decades before revealing their true power. What begins as a quiet paper in an academic journal can eventually reshape our understanding of the universe itself.

In 1993, nestled deep in the halls of Yale University, physicist Subir Sachdev and his graduate student Jinwu Ye stumbled upon such an idea. Their work, originally aimed at unraveling the mysteries of “spin fluids”, would go on to ignite one of the most surprising and profound connections in modern physics—a bridge between the strange behavior of quantum materials and the warped spacetime of black holes.

Two decades after the paper was published, it would be pulled into the orbit of a radically different domain: quantum gravity. Thanks to work by renowned physicist Alexei Kitaev in 2015, the model found new life as a testing ground for the mind-bending theory of holography—the idea that the universe we live in might be a projection, from a lower-dimensional reality.

Holography is an exotic approach to understanding reality where scientists use holograms to describe higher dimensional systems in one less dimension. So, if our world is 3+1 dimensional (3 spatial directions plus time), there exists a 2+1, or 3-dimensional description of it. In the words of Leonard Susskind, a pioneer in quantum holography, "the three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface."  

The “SYK” model, as it is known today, is now considered a quintessential framework for studying strongly correlated quantum phenomena, which occur in everything from superconductors to strange metals—and even in black holes. In fact, the SYK model has also been used to study one of physics’ true final frontiers, quantum gravity, with the authors of the paper calling it “a paradigmatic model for quantum gravity in the lab.”  

The SYK model involves Majorana fermions, a type of particle that is its own antiparticle. A key feature of the model is that these fermions are all-to-all connected, leading to strong correlations. This connectivity makes the model particularly challenging to simulate on classical computers, where such correlations are difficult to capture. Our quantum computers, however, natively support all-to-all connectivity, making them a natural fit for studying the SYK model.

Now, 10 years after Kitaev’s watershed lectures, we’ve made new progress in studying the SYK model. In a new paper, we’ve completed the largest ever SYK study on a quantum computer. By exploiting our system’s native high fidelity and all-to-all connectivity, as well as our scientific team’s deep expertise across many disciplines, we were able to study the SYK model at a scale three times larger than the previous best experimental attempt.

While this work does not exceed classical techniques, it is very close to the classical state-of-the-art. The biggest ever classical study was done on 64 fermions, while our recent result, run on our smallest processor (System Model H1), included 24 fermions. Modelling 24 fermions costs us only 12 qubits (plus one ancilla) making it clear that we can quickly scale these studies: our System Model H2 supports 56 qubits (or ~100 fermions), and Helios, which is coming online this year, will have over 90 qubits (or ~180 fermions).
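Those fermion counts follow from the standard trick of packing two Majorana fermions into each qubit (a minimal sketch of the arithmetic, assuming that standard encoding):

```python
def majorana_fermions(n_qubits):
    """Standard fermion-to-qubit encodings pack two Majorana fermions
    into each qubit, so capacity grows linearly with qubit count."""
    return 2 * n_qubits

print(majorana_fermions(12))  # 24 fermions on the 12-qubit H1 run
print(majorana_fermions(90))  # ~180 fermions on 90+ Helios-class qubits
```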

However, working with the SYK model takes more than just qubits. The SYK model has a complex Hamiltonian that is difficult to work with when encoded on a computer—quantum or classical. Studying the real-time dynamics of the SYK model means first representing the initial state on the qubits, then evolving it properly in time according to an intricate set of rules that determine the outcome. This means deep circuits (many circuit operations), which demand very high fidelity, or else an error will occur before the computation finishes.

Our cross-disciplinary team worked to ensure that we could pull off such a large simulation on a relatively small quantum processor, laying the groundwork for quantum advantage in this field.

First, the team adopted a randomized quantum algorithm called TETRIS to run the simulation. By using random sampling, among other methods, the TETRIS algorithm allows one to compute the time evolution of a system without the pernicious discretization errors or sizable overheads that plague other approaches. TETRIS is particularly suited to simulating the SYK model because, with a high level of disorder in the material, simulating the SYK Hamiltonian means averaging over many random Hamiltonians. With TETRIS, one generates random circuits to compute evolution (even for a deterministic Hamiltonian). Therefore, when applying TETRIS to SYK, for every shot one can generate a random instance of the Hamiltonian and a corresponding random TETRIS circuit at the same time. This simple approach requires fewer gates per shot, meaning users can run more shots, naturally mitigating noise.
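The per-shot randomization can be sketched as follows: each shot draws a fresh SYK_4 instance, i.e. an independent Gaussian coupling for every quadruple of Majorana fermions. This is our illustration, not the paper’s code, and the variance choice 3!·J²/N³ is the standard SYK convention rather than a detail quoted from the paper:

```python
import itertools
import math
import random

def sample_syk_couplings(n_fermions, J=1.0, rng=random):
    """Draw one random SYK_4 Hamiltonian instance: an independent Gaussian
    coupling J_ijkl for every quadruple of Majorana fermions, using the
    standard variance choice var = 3! * J^2 / N^3."""
    sigma = math.sqrt(6.0 * J**2 / n_fermions**3)
    return {quad: rng.gauss(0.0, sigma)
            for quad in itertools.combinations(range(n_fermions), 4)}

couplings = sample_syk_couplings(24)  # the 24-fermion scale of the H1 run
print(len(couplings))                 # C(24, 4) = 10626 coupling terms
```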

In addition, the team “sparsified” the SYK model, which means “pruning” the fermion interactions to reduce the complexity while still maintaining its crucial features. By combining sparsification and the TETRIS algorithm, the team was able to significantly reduce the circuit complexity, allowing it to be run on our machine with high fidelity.

They didn’t stop there. The team also proposed two new noise mitigation techniques, ensuring that they could run circuits deep enough without devolving entirely into noise. The two techniques both worked quite well, and the team was able to show that their algorithm, combined with the noise mitigation, performed significantly better and delivered more accurate results. The perfect agreement between the circuit results and the true theoretical results is a remarkable feat coming from a co-design effort between algorithms and hardware.

As we scale to larger systems, we come closer than ever to realizing quantum gravity in the lab, and thus, answering some of science’s biggest questions.

September 9, 2025
Preparation is everything

At Quantinuum, we pay attention to every detail. From quantum gates to teleportation, we work hard every day to ensure our quantum computers operate as effectively as possible. This means not only building the most advanced hardware and software, but that we constantly innovate new ways to make the most of our systems.

A key step in any computation is preparing the initial state of the qubits. Like lining up dominoes, you first need a special setup to get meaningful results. This process, known as state preparation or “state prep,” is an open field of research that can mean the difference between realizing the next breakthrough or falling short. Done ineffectively, state prep can carry steep computational costs, scaling exponentially with the qubit number.

Recently, our algorithm teams have been tackling this challenge from all angles. We’ve published three new papers on state prep, covering state prep for chemistry, materials, and fault tolerance.

In the first paper, our team tackled the issue of preparing states for quantum chemistry. Representing chemical systems on gate-based quantum computers is a tricky task; partly because you often want to prepare multiconfigurational states, which are very complex. Preparing states like this can cost a lot of resources, so our team worked to ensure we can do it without breaking the (quantum) bank.

To do this, our team investigated two different state prep methods. The first uses Givens rotations, implemented so as to minimize computational cost. The second exploits the sparsity of the molecular wavefunction to maximize efficiency.
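As a toy illustration (not the paper’s implementation), a Givens rotation coherently mixes two basis amplitudes by an angle θ; chains of such rotations can build up multiconfigurational superpositions from a single reference configuration. A minimal NumPy sketch, with an illustrative angle:

```python
import numpy as np

def givens(theta: float) -> np.ndarray:
    """2x2 Givens rotation mixing two basis amplitudes by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

# Toy example: start in the reference configuration |10> (amplitudes of
# |10> and |01>) and rotate some amplitude into |01>.
state = np.array([1.0, 0.0])
theta = np.pi / 6  # illustrative mixing angle
mixed = givens(theta) @ state

print(mixed)  # [cos(theta), sin(theta)]
```

Because a Givens rotation is unitary, the result stays normalized, which is what makes long chains of them well-behaved building blocks for state prep circuits.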

Once the team perfected the two methods, they implemented them in InQuanto to explore the benefits across a range of applications, including calculating the ground and excited states of a strongly correlated molecule (twisted C_2 H_4). The results showed that the “sparse state preparation” scheme performed especially well, requiring fewer gates and shorter runtimes than alternative methods.

In the second paper, our team focused on state prep for materials simulation. Generally, it’s much easier for computers to simulate materials at zero temperature, which is, of course, unrealistic. What matters far more to most scientists is what happens when a material is not at zero temperature. There are two such regimes: the material can sit steadily at a given temperature, which scientists call thermal equilibrium, or it can be undergoing some change, known as being out of equilibrium. Both are much harder for classical computers to handle.

In this paper, our team looked to solve an outstanding problem: there is no standard protocol for preparing thermal states. In this work, our team only targeted equilibrium states but, interestingly, they used an out-of-equilibrium protocol to do the work. By slowly and gently evolving from a simple state that we know how to prepare, they were able to prepare the desired thermal states in a way that was remarkably insensitive to noise.
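The paper targets thermal (mixed) states, but the core idea of “slowly and gently evolving from a simple state” can be sketched with a simpler pure-state analogue: a toy adiabatic sweep on a single qubit, slowly interpolating from an easy-to-prepare Hamiltonian to the target one. All parameters below are illustrative, not from the paper:

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H0 = -X  # start Hamiltonian: ground state |+> is easy to prepare
H1 = -Z  # target Hamiltonian: ground state |0>
state = np.array([1, 1], dtype=complex) / np.sqrt(2)  # ground state of H0

# Slowly ramp H(s) = (1-s)*H0 + s*H1, evolving step by step
steps, dt = 400, 0.1
for k in range(steps):
    s = (k + 1) / steps
    H = (1 - s) * H0 + s * H1
    # exact propagator exp(-i H dt) via eigendecomposition
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T
    state = U @ state

target = np.array([1, 0], dtype=complex)  # ground state of H1
fidelity = abs(np.vdot(target, state)) ** 2
print(f"fidelity with target ground state: {fidelity:.4f}")
```

If the ramp is slow compared to the inverse energy gap, the state tracks the instantaneous ground state and the final fidelity comes out close to 1; rushing the ramp degrades it, which is the intuition behind evolving “slowly and gently.”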

Ultimately, this work could prove crucial for studying materials like superconductors. After all, no practical superconductor will ever be used at zero temperature. In fact, we want to use them at room temperature – and approaches like this are what will allow us to perform the necessary studies to one day get us there.

Finally, as we advance toward the fault-tolerant era, we encounter a new set of challenges: making computations fault-tolerant at every step can be an expensive venture, eating up qubits and gates. In the third paper, our team made fault-tolerant state preparation—the critical first step in any fault-tolerant algorithm—roughly twice as efficient. With our new “flag at origin” technique, gate counts are significantly reduced, bringing fault-tolerant computation closer to an everyday reality.

The method our researchers developed is highly modular: in the past, performing optimized state prep like this required solving one big, expensive optimization problem. In this new work, we’ve figured out how to break that problem into a set of much smaller ones. This means that, for the first time, developers can prepare fault-tolerant states for much larger error correction codes, a crucial step forward in the early-fault-tolerant era.

On top of this, our new method is highly general: it applies to almost any QEC code one can imagine. Normally, fault-tolerant state prep techniques must be anchored to a single code (or a family of codes), so that switching to a different code requires a new state prep method. Now, thanks to our team’s work, developers have a single, general-purpose, fault-tolerant state prep method that can be widely applied and ported between different error correction codes. Like the modularity, this is a huge advance for the whole ecosystem, and it is quite timely given our recent advances into true fault tolerance.

This generality isn’t just about different codes; it also applies to the states being prepared: while other methods are optimized for preparing only the |0⟩ state, this method works for a wide variety of states needed to set up a fault-tolerant computation. This “state diversity” is especially valuable when working with the best codes, those that give you many logical qubits per physical qubit. This new approach to fault-tolerant state prep will likely be the method used for fault-tolerant computations across the industry, and if not, it will inform new approaches moving forward.

From the initial state preparation to the final readout, we are ensuring that not only is our hardware the best, but that every single operation is as close to perfect as we can get it.