Quantum Volume Testing: Setting the Steady Pace to Higher Performing Devices

May 11, 2022

When it comes to completing the statistical tests and other steps necessary for calculating quantum volume, few people have as much experience as Dr. Charlie Baldwin.

Baldwin, a lead physicist at Quantinuum, and his team have performed the tests numerous times on three different H-Series quantum computers, which have set six industry records for measured quantum volume since 2020.

Quantum volume is a benchmark developed by IBM in 2019 to measure the overall performance of a quantum computer regardless of the hardware technology. (Quantinuum builds trapped-ion systems.)

Baldwin’s experience with quantum volume prompted him to share what he’s learned and suggest ways to improve the benchmark in a peer-reviewed paper published this week in Quantum.

“We’ve learned a lot by running these tests and believe there are ways to make quantum volume an even stronger benchmark,” Baldwin said.

We sat down with Baldwin to discuss quantum volume, the paper, and the team’s findings.

How is quantum volume measured? What tests do you run?

Quantum volume is measured by running many randomly constructed circuits on a quantum computer and comparing the outputs to a classical simulation. The circuits are built from random gates with random connectivity so as not to favor any one architecture. We follow the construction proposed by IBM to build the circuits.
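For a concrete picture, here is a minimal sketch of that construction in Python (our own illustration of the published recipe, not Quantinuum's production code): a width-n circuit has n layers, and each layer applies Haar-random two-qubit unitaries across a random pairing of the qubits.

```python
import numpy as np
from scipy.stats import unitary_group

def qv_circuit(n_qubits, rng):
    """One quantum volume circuit: depth equals width, and each layer pairs
    the qubits under a random permutation, applying a Haar-random two-qubit
    unitary to each pair (with an odd qubit count, one qubit idles)."""
    layers = []
    for _ in range(n_qubits):
        perm = rng.permutation(n_qubits)
        pairs = [(int(perm[i]), int(perm[i + 1]))
                 for i in range(0, n_qubits - 1, 2)]
        layers.append([(a, b, unitary_group.rvs(4)) for a, b in pairs])
    return layers

rng = np.random.default_rng(seed=7)
circuit = qv_circuit(4, rng)  # a width-4, depth-4 circuit for the QV = 2^4 test
```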

What does quantum volume measure? Why is it important?

In some sense, quantum volume only measures your ability to run the specific set of random quantum volume circuits. That probably doesn’t sound very useful if you have some other application in mind for a quantum computer, but quantum volume is sensitive to many aspects that we believe are key to building more powerful devices.

Quantum computers are often built from the ground up. Different parts—for example, single- and two-qubit gates—have been developed independently over decades of academic research. When these parts are put together in a large quantum circuit, there are often other errors that creep in and can degrade the overall performance. That’s what makes full-system tests like quantum volume so important: they’re sensitive to these errors.

Increasing quantum volume requires adding more qubits while simultaneously decreasing errors. Our quantum volume results demonstrate all the amazing progress Quantinuum has made at upgrading our trapped-ion systems to include more qubits and identifying and mitigating errors so that users can expect high-fidelity performance on many other algorithms.

You’ve been running quantum volume tests since 2020. What is your biggest takeaway?

I think there are a couple of things I’ve learned. First, quantum volume isn’t an easy test to run on current machines. While it doesn’t necessarily require a lot of qubits, it does have fairly demanding error requirements. That’s also clear when comparing progress in quantum volume tests across different platforms, which researchers at Los Alamos National Lab did in a recent paper.

Second, I’m always impressed by the continuous and sustained performance progress that our hardware team achieves. And that the progress is actually measurable by using the quantum volume benchmark.

The hardware team has been able to push down many different error sources in the last year while also running customer jobs, and the quantum volume measurements bear this out. For example, H1-2 launched in Fall 2021 with QV=128, meaning it passed the test on 7-qubit circuits (quantum volume is 2^n, where n is the largest circuit width and depth that passes). Since then, the team has implemented many performance upgrades, reaching QV=4096 (12-qubit circuits) about eight months later while also running commercial jobs.

What are the key findings from your paper?

The paper covers four small findings that, when put together, we believe give a clearer view of the quantum volume test.

First, we explored how compiling the quantum volume circuits scales with qubit number, and we also proposed using arbitrary-angle gates to improve performance—an optimization that many companies are currently exploring.

Second, we studied how quantum volume circuits behave without errors to better relate circuit results to ideal performance.

Third, we ran many numerical simulations to see how the quantum volume test behaves with errors, and constructed a method to efficiently estimate performance in larger future systems; a toy version of that kind of error-model estimate is sketched below.

Finally, and I think most importantly, we explored what it takes to meet the quantum volume threshold and what passing it implies about the ability of the quantum computer, especially compared to the requirements for quantum error correction.
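To give a feel for the error-model simulations mentioned in the third finding, here is the kind of back-of-the-envelope model one can write down (a global depolarizing assumption of ours, far cruder than the simulations in the paper): with probability equal to the circuit fidelity you sample the ideal distribution, and otherwise you sample uniformly, which lands in the heavy set only half the time.

```python
import numpy as np

H_IDEAL = (1 + np.log(2)) / 2   # asymptotic ideal heavy-output probability, ~0.85

def hop_estimate(circuit_fidelity):
    """Toy HOP model: ideal output with probability F, uniform otherwise."""
    return circuit_fidelity * H_IDEAL + (1 - circuit_fidelity) * 0.5

# Circuit fidelity needed to clear the 2/3 threshold under this toy model
f_min = (2 / 3 - 0.5) / (H_IDEAL - 0.5)
print(f"minimum circuit fidelity: {f_min:.2f}")   # ~0.48
```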

What does it take to “pass” the quantum volume threshold?

Passing the threshold for quantum volume is defined by the results of a statistical test on the output of the circuits, called the heavy output test. The result of the heavy output test—called the heavy output probability or HOP—must clear the 2/3 threshold even after accounting for its uncertainty bar.
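Concretely, the heavy output test for a single circuit looks roughly like this (a minimal sketch; the real test averages the HOP over many random circuits before applying the statistical threshold):

```python
import numpy as np

def heavy_set(ideal_probs):
    """Heavy outputs: bitstrings whose ideal probability exceeds the median."""
    median = np.median(ideal_probs)
    return {b for b, p in enumerate(ideal_probs) if p > median}

def heavy_output_probability(counts, ideal_probs):
    """Fraction of measured shots that land in the heavy set."""
    heavy = heavy_set(ideal_probs)
    shots = sum(counts.values())
    return sum(c for b, c in counts.items() if b in heavy) / shots

# Toy example: a 2-qubit "circuit" with a known ideal distribution
ideal = np.array([0.05, 0.15, 0.45, 0.35])      # from classical simulation
counts = {0: 3, 1: 10, 2: 52, 3: 35}            # measured shots, keyed by bitstring
print(heavy_output_probability(counts, ideal))  # 0.87
```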

Originally, IBM constructed a method to estimate that uncertainty based on some assumptions about the distribution and number of samples. They acknowledged that this construction was likely too conservative, meaning it made much larger uncertainty estimates than necessary.

We were able to verify this with simulations, and we proposed a different method that constructs much tighter uncertainty estimates, which we also validated numerically. The method allows us to run the test with many fewer circuits while maintaining the same confidence in the returned estimate.
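We don't reproduce the paper's exact construction here, but a percentile bootstrap illustrates the general idea of a tighter, data-driven lower confidence bound on the mean HOP (the numbers below are made up for the example, and the paper's actual method may differ):

```python
import numpy as np

def hop_lower_bound(hops, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap lower confidence bound on the mean HOP,
    resampling over the per-circuit heavy-output probabilities."""
    hops = np.asarray(hops)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(hops), size=(n_resamples, len(hops)))
    return np.quantile(hops[idx].mean(axis=1), alpha)

# Synthetic per-circuit HOPs for 100 circuits
hops = np.random.default_rng(1).normal(0.72, 0.05, size=100).clip(0, 1)
print(hop_lower_bound(hops) > 2 / 3)   # True: the test passes
```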

How do you think the quantum volume test can be improved?

Quantum volume has been criticized for a variety of reasons, but I think there’s still a lot to like about the test. Unlike some other full-system tests, quantum volume has a well-defined procedure, requires challenging circuits, and sets reasonable fidelity requirements.

However, it still has some room for improvement. As machines start to scale up, runtime will become an important dimension to probe. IBM has proposed a metric for measuring the runtime of quantum volume tests (CLOPS). We agree that the duration of the computation is important, but there should also be tests that balance runtime with fidelity, sometimes called ‘time-to-solution.’

Another aspect that could be improved is filling the gap between when quantum volume is no longer feasible to run—at around 30 qubits—and larger machines. There’s recent work in this area that will be interesting to compare to quantum volume tests.

You presented these findings to IBM researchers who first proposed the benchmark. How was that experience?

It was great to talk to the experts at IBM. They have so much knowledge and experience on running and testing quantum computers. I’ve learned a lot from their previous work and publications.

There is a lot of debate about quantum volume and how long it will be a useful benchmark. What are your thoughts?

The current iteration of quantum volume definitely has an expiration date. It’s limited by our ability to classically simulate the system, so reaching the point where quantum volume can no longer be run is actually a goal of quantum computing development. In the meantime, quantum volume is a good measuring stick for early development.

Building a large-scale quantum computer is an incredibly challenging task. Like any large project, you break the task up into milestones that you can reach in a reasonable amount of time.

It’s like training for a marathon. You wouldn’t start by trying to run the full distance on Day 1; you’d build up the distance you run every day at a steady pace. The quantum volume test has been setting our pace of development as we steadily work toward our goal of building ever higher-performing devices.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

March 9, 2026
APS Global Physics Summit 2026

Every year, APS Global Physics Summit brings together scientific community members from around the world across all disciplines of physics.

Join Quantinuum at this year’s conference, taking place in our backyard, Denver, Colorado, from March 15th – 20th, where we will showcase how our quantum hardware, software, and partnerships are helping define the next era of high-performance and quantum computing.

Find our team at booth #1020 and join our sessions below to discover how we’re advancing quantum technologies and building the bridge between HPC and quantum.

Monday, March 16th

Programmable quantum matter at the frontier of classical computation
Speaker: Andrew Potter
Time: 10:12 – 10:48 am

Benchmarking a 98-qubit trapped-ion quantum computer
Speaker: Charles Baldwin
Time: 12:36 – 12:48 pm

High-Fidelity Quantum Operations in the Helios Barium-Ion Processor
Speaker: Anthony Ransford
Time: 4:18 – 4:30 pm

Generative AI Model for Quantum State Preparation
Speaker: Jem Guhit
Time: 4:42 – 4:54 pm

Quantum digital simulations of holographic models using Quantinuum Systems
Speaker: Enrico Rinaldi
Time: 5:54 – 6:30 pm

Tuesday, March 17th

Software-Enabled Innovations that Drive Robust Commercial Operation on Quantinuum Helios
Speaker: Caroline Figgatt
Time: 8:00 – 8:12 am

Improving Clock Speed in the Quantinuum Helios Quantum Computer
Speaker: Adam Reed
Time: 8:12 – 8:24 am

Less Quantum, More Advantage: An End-to-End Quantum Algorithm for the Jones Polynomial
Speaker: Konstantinos Meichanetzidis
Time: 8:48 – 9:00 am

Quantum Operation Pipelining in the Quantinuum Helios Processor
Speaker: Colin Kennedy
Time: 9:00 - 9:12 am

Directly estimating the fidelity of measurement-based quantum computation
Speaker: David Stephen
Time: 9:12 - 9:24 am

Logical algorithms in a quantum error-detecting code on a trapped-ion quantum processor
Speaker: Matthew DeCross
Time: 9:36 - 9:48 am

Separate and efficient characterization of SPAM errors in the presence of leakage
Speaker: Leigh Norris
Time: 10:00 - 10:12 am

Logical benchmarking on a trapped-ion quantum processor
Speaker: Andrew Guo
Time: 12:00 - 12:12 pm

Modelling Actinides Chemistry with Trapped Ion Quantum Computers
Speaker: Carlo Alberto Gaggioli
Time: 3:30 - 3:42 pm

Wednesday, March 18th

Digital quantum magnetism at the frontier of classical simulation
Speaker: Michael Foss-Feig
Time: 8:36 - 9:12 am

Shorter width truncated Taylor series for Hamiltonian dynamics simulations
Speaker: Michelle Wynne Sze
Time: 9:24 - 9:36 am

Quantum-Accelerated DFT+DMFT for Correlated Subspaces in Hemoglobin
Speaker: Juan Pedersen 
Time: 9:48 - 10:00 am

Simple logical quantum computation with concatenated symplectic double codes
Speaker: Noah Berthusen
Time: 12:48 - 1:00 pm

When is enough enough? Efficient estimation of quantum properties by stopping early
Speaker: Oliver Hart
Time: 12:48 - 1:00 pm

High-Level Programming of the Quantinuum Helios Processor
Speaker: John Campora
Time: 1:48 - 2:24 pm

Error detection without post-selection in adaptive quantum circuits 
Speaker: Eli Chertkov
Time: 4:42 - 4:54 pm

Thursday, March 19th

Below Threshold Logical Quantum Computation at Quantinuum
Speaker: Shival Dasu
Time: 8:00 - 8:36 am

Performing optimal phase measurements with a universal quantum processor
Speaker: Ross Hutson
Time: 8:36 - 8:48 am

Benchmarking with leakage heralded measurements on the Quantinuum Helios processor
Speaker: Victor Colussi
Time: 10:00 am

High-throughput bidirectional microwave-to-optical transduction assessed with a practical quantum capacity
Speaker: Maxwell Urmey
Time: 12:00 - 12:36 pm

Fast quantum state preparation via AI-based Graph Decimation
Speaker: Matteo Puviani
Time: 5:54 - 6:06 pm

Friday, March 20th

2D Tensor Network Methods for Simulation of Spin Models on Quantum Computers
Speaker: Reza Haghshenas
Time: 8:36 - 8:48 am

High-Performance Computing Simulations for Optical Multidimensional Coherent Spectroscopy Studies of Strained Silicon-Vacancy Centers in Diamond
Speaker: Imran Bashir
Time: 10:36 - 10:48 am

High-Performance Statevector Simulation for TKET and Selene with NVIDIA cuStateVec
Speaker: Fabian Finger
Time: 12:36 - 12:48 pm

Part 1: Logic gates on High-rate Quantum LDPC codes using ion trap devices
Speaker: Elijah Durso-Sabina
Time: 12:48 - 1:00 pm

Driving Quantum Computing Forward: QEC, Hardware, and Applications with Quantinuum
Speaker: Natalie Brown
Time: 1:12 - 1:48 pm

A new QCCD computer and new applications
Speaker: Anthony Ransford
Time: 2:24 - 3:00 pm

*All times in MT

March 4, 2026
Skinny Logic: Quantum Codes Go on a Diet

In our latest paper, we’ve taken a big step toward large scale fault-tolerant quantum computing, squeezing up to 94 error-detected qubits (and 48 error-corrected qubits) out of just 98 physical qubits, a low-fat encoding that cuts overhead to the bone. With 64 of our logical qubits, we were able to simulate quantum magnetism at a scale that can be exceedingly difficult for classical computers.

The "holy grail" of quantum computing is universal fault-tolerance: the ability to correct errors faster than they occur during any computation. To realize this, we aim to create “logical qubits,” which are groups of entangled physical qubits that share quantum information in a way that protects it. Better protection leads to lower “logical” error rate and greater ability to solve complex problems.

However, it’s never that easy. An unofficial law of physics is “there’s no such thing as a free lunch”. Creating high quality, low error-rate logical qubits often costs many physical qubits, thus reducing the size of calculations you can run, despite your new, lower-than-ever error rates.

With our latest paper, we are thrilled to announce that we have hit a key milestone on the Quantinuum roadmap: an ultra-efficient method for creating logical qubits, extracting a whopping 48 error-corrected and 64 error-detected logical qubits out of just 98 physical qubits. Our logical qubits boasted better than “break-even” fidelity, beating their physical counterparts with lower error rates on several different fronts. And that isn’t the end of the story: we used our 64 error-detected logical qubits in a large-scale quantum magnetism simulation, laying the groundwork for future studies of exotic interactions in materials.

Stacking Wins

To get this world-leading result, we employed a neat trick: ‘nesting’ super efficient quantum error-detecting codes together to make a new, ultra-efficient error-correcting code. Dr. DeCross, a primary author on the paper, said this nesting is like “braiding together ropes made out of ropes made out of ropes”. Physicists call this ‘code concatenation’, and you can think of it as adding layers of protection on top of each other.

To begin, we took the now-famous ‘iceberg code’, a quantum error detection code that gives an almost 1:1 ratio of physical qubits to logical qubits. The iceberg code only detects errors, however: instead of actually correcting errors, it lets you throw out runs where an error was detected. To make a code that could both detect and correct errors, we concatenated two iceberg codes together, giving a code that can correct small errors while still boasting a world-record 2:1 physical-to-logical ratio (physicists call this a “high encoding rate”).
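To put the encoding rates in numbers, here is a quick illustration (the iceberg code's standard parameters are [[n, n-2, 2]], but the paper's exact block sizes and concatenation details aren't reproduced here, and the 49-qubit block below is our assumption):

```python
def iceberg_rate(n_physical):
    """Encoding rate of an iceberg code [[n, n-2, 2]]: n physical qubits
    carry n-2 logical qubits and can detect any single-qubit error."""
    return (n_physical - 2) / n_physical

print(f"{iceberg_rate(49):.2f}")   # ~0.96 for a hypothetical 49-qubit block
print(f"{94 / 98:.2f}")            # error-detected rate implied by the paper
print(f"{48 / 98:.2f}")            # error-corrected rate, the ~2:1 ratio
```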

The team then benchmarked the logical qubits, checking large system-scale operations and comparing them to their physical counterparts. There is a crucial hurdle to clear here: oftentimes, researchers end up with logical qubits that perform *worse* than their physical counterparts. It’s critical that logical qubits actually beat physical ones, after all – that is the whole point!

Thanks to some clever circuit design and our natively high fidelities, the new logical qubits outperformed their physical counterparts in every test we performed, sometimes by a factor of 10 to 100.

Computing Logically

Of course, the whole point is to use our logical qubits for something useful, the ultimate measure of functionality. With 64 error-detected qubits, we performed a simulation of quantum magnetism, a crucial milestone that validates our roadmap.

The team took extra care to perform the simulation in three dimensions to best reflect the real world (studies like this are often restricted to 1D or 2D to make them easier). Problems like this are incredibly important for expanding our understanding of materials, but they are also incredibly hard, as their complexity scales quickly. To make qubits interact as if they were in a 3D material while they are trapped in 2D inside the computer, we used our all-to-all connectivity, a feature that results from our movable qubits.
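As a sketch of what all-to-all connectivity buys you: the nearest-neighbor couplings of a 3D lattice map onto arbitrary qubit pairs, each of which can be entangled directly. (A 4x4x4 lattice happens to use 64 sites, matching the qubit count above, though we're assuming that geometry purely for illustration.)

```python
from itertools import product

def cubic_lattice_edges(L):
    """Nearest-neighbor pairs of an L x L x L cubic lattice, with each site
    flattened to a single qubit index; with all-to-all connectivity, every
    such pair can be entangled directly, regardless of physical layout."""
    index = lambda x, y, z: x + L * (y + L * z)
    edges = []
    for x, y, z in product(range(L), repeat=3):
        if x + 1 < L: edges.append((index(x, y, z), index(x + 1, y, z)))
        if y + 1 < L: edges.append((index(x, y, z), index(x, y + 1, z)))
        if z + 1 < L: edges.append((index(x, y, z), index(x, y, z + 1)))
    return edges

print(len(cubic_lattice_edges(4)))  # 144 couplings among 64 qubits
```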

Maximizing Entanglement

Breaking the encoding rate record and performing a world-leading logical simulation wasn’t enough for the team. For their final feat, the team generated 94 error-detected logical qubits and entangled them all in a special state called a “GHZ” state (also known as a ‘cat’ state, alluding to Schrödinger’s cat). GHZ states are often used by experts as a simple benchmark for showcasing quantum computing’s unique capacity to use entanglement across many qubits. Our best 94-logical-qubit GHZ state boasted a fidelity of 94.9%, crushing its un-encoded counterpart.
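The preparation itself can be as simple as the textbook construction below (illustrative only; the paper prepares the state on encoded logical qubits, so the actual circuit is adapted to the code). A useful fact for reading the result: any GHZ fidelity above 50% certifies genuine multipartite entanglement across all the qubits, so 94.9% clears that bar comfortably.

```python
def ghz_circuit(n_qubits):
    """Textbook GHZ preparation: H on qubit 0, then a chain of CNOTs
    spreading the superposition to every other qubit."""
    return [("H", 0)] + [("CNOT", i, i + 1) for i in range(n_qubits - 1)]

circuit = ghz_circuit(94)
print(len(circuit))   # 94 gates: 1 Hadamard + 93 CNOTs
```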

Logical Qubits Are the New Normal

Taken together, these results show that we can suppress errors more effectively than ever before, proving that Helios is capable of delivering complex, high-fidelity operations that were previously thought to be years away. While the magnetism simulation was only error-detected, it showcases our ability to protect universal computations with partially fault-tolerant methods. On top of that, the team also demonstrated key error-corrected primitives on Helios at scale.

All of this has real-world implications for the quantum ecosystem: we are working to package these iceberg codes into QCorrect, an upcoming tool that will help developers automatically improve the performance of their own applications.

This is just the beginning: we are officially entering the era of large-scale logical computing. The path to fault-tolerance is no longer just theoretical—it is being built, gate by gate, on Helios.

March 2, 2026
Hybrid quantum–HPC computing with trapped ions is here

Japan has made bold, strategic investments in both high-performance computing (HPC) and quantum technologies. As these capabilities mature, an important question arises for policymakers and research leaders: how do we move from building advanced machines to demonstrating meaningful, integrated use?

Last year, Quantinuum installed its Reimei quantum computer at a world-class facility in Japan operated by RIKEN, the country’s largest comprehensive research institution. The system was integrated with Japan’s famed supercomputer Fugaku, one of the most powerful in the world, as part of an ambitious national project commissioned by the New Energy and Industrial Technology Development Organization (NEDO), the national research and development entity under the Ministry of Economy, Trade and Industry.

Now, for the first time, a full scientific workflow has been executed across Fugaku and Reimei, our trapped-ion quantum computer. This marks a transition from infrastructure development to practical deployment.

Quantum Biology

In this first foray into hybrid HPC-quantum computation, the team explored chemical reactions that occur inside biomolecules such as proteins. Reactions of this type are found throughout biology, from enzyme functions to drug interactions.

Simulating such reactions accurately is extremely challenging. The region where the chemical reaction occurs—the “active site”—requires very high precision, because subtle electronic effects determine the outcome. At the same time, this active site is embedded within a much larger molecular environment that must also be represented, though typically at a lower level of detail.

To address this complexity, computational chemistry has long relied on layered approaches, in which different parts of a system are treated with different methods. In our work, we extended this concept into the hybrid computing era by combining classical supercomputing with quantum computing.

Shifting the Paradigm

While the long-term goal of quantum computing is to outperform classical approaches alone, the purpose of this project was to demonstrate a fully functional hybrid system working as an end-to-end platform for real scientific applications. We believe it is not enough to develop hardware in isolation – we must also build workflows where classical and quantum resources create a whole that is greater than the sum of its parts. This is a crucial step for our industry: large-scale national investments in quantum computing must ultimately show how the technology can be embedded within existing research infrastructure.

In this work, the supercomputer Fugaku handled geometry optimization and baseline electronic structure calculations. The quantum computer Reimei was used to enhance the treatment of the most difficult electronic interactions in the active site, those that are known to challenge conventional approximate methods. The entire process was coordinated through Quantinuum’s workflow system Tierkreis, which allows jobs to move efficiently between machines.

Hybrid Computation is Now an Operational Reality

With this infrastructure in place, we are now poised to truly leverage the power of quantum computing. In this instance, the researchers designed the algorithm to specifically exploit the strengths of both the quantum and the classical hardware.

First, the classical computer constructs an approximate description of the molecular system. Then, the quantum computer is used to model the detailed quantum mechanics that the classical computer can’t handle. Together, these steps improve accuracy, extending the utility of the classical system.
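Schematically, that division of labor looks like the sketch below. Every function name here is a placeholder of our own (this is not the Tierkreis API or the actual project code); the stub bodies exist only to make the control flow runnable.

```python
def classical_geometry_optimization(molecule):
    return {"geometry": molecule}                 # placeholder: runs on the supercomputer

def classical_electronic_structure(geometry):
    return {"baseline_energy": -1.0, **geometry}  # placeholder mean-field calculation

def select_active_space(mean_field):
    return {"orbitals": 4, "electrons": 4}        # the strongly correlated active site

def quantum_active_space_correction(active_space):
    return -0.1                                   # placeholder: runs on the quantum computer

def run_hybrid_workflow(molecule):
    """End-to-end hybrid workflow: classical preprocessing, quantum solve
    of the hardest part, then classical embedding of the correction."""
    geometry = classical_geometry_optimization(molecule)
    mean_field = classical_electronic_structure(geometry)
    active = select_active_space(mean_field)
    return mean_field["baseline_energy"] + quantum_active_space_correction(active)

print(run_hybrid_workflow("toy molecule"))        # -1.1
```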

A Path to Hybrid Advantage

Accurate simulation of biomolecular reactions remains one of the major challenges in biochemistry. Although the present study uses simplified systems to focus on methodology, it lays the groundwork for future applications in drug design, enzyme engineering, and photoactive biological systems.

While fully fault-tolerant, large-scale quantum computers are still under development, hybrid approaches allow today’s quantum hardware to augment powerful classical systems, such as Fugaku, to explore meaningful applications. As quantum technology matures, the same workflows can scale accordingly.

High-performance computing centers worldwide are actively exploring how quantum devices might integrate into their ecosystems. By demonstrating coordinated job scheduling, direct hardware access, and workflow orchestration across heterogeneous architectures, this work offers a concrete example of how such integration can be achieved.

As quantum hardware matures, we believe the algorithms and workflows developed here can be extended to increasingly realistic and industrially relevant problems. For Japan’s research ecosystem, this first application milestone signals that hybrid quantum–supercomputing is moving from ambition to implementation.
