Quantum Volume Testing: Setting the Steady Pace to Higher Performing Devices

May 11, 2022

When it comes to completing the statistical tests and other steps necessary for calculating quantum volume, few people have as much experience as Dr. Charlie Baldwin.

Baldwin, a lead physicist at Quantinuum, and his team have performed the tests numerous times on three different H-series computers, which have set six industry records for measured quantum volume since 2020.

Quantum volume is a benchmark developed by IBM in 2019 to measure the overall performance of a quantum computer regardless of the hardware technology. (Quantinuum builds trapped-ion systems.)

Baldwin’s experience with quantum volume prompted him to share what he’s learned and suggest ways to improve the benchmark in a peer-reviewed paper published this week in Quantum.

“We’ve learned a lot by running these tests and believe there are ways to make quantum volume an even stronger benchmark,” Baldwin said.

We sat down with Baldwin to discuss quantum volume, the paper, and the team’s findings.

How is quantum volume measured? What tests do you run?

Quantum volume is measured by running many randomly constructed circuits on a quantum computer and comparing the outputs to a classical simulation. The circuits are chosen to require random gates and random connectivity so as not to favor any one architecture. We follow the construction proposed by IBM to build the circuits.
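The construction described above can be sketched in a few lines of NumPy: each round applies a random qubit permutation and random two-qubit unitaries on the resulting pairs, and a statevector simulation gives the ideal output distribution. This is our own illustrative sketch (function names are ours, and the random unitaries are Haar-random U(4) rather than strictly SU(4)), not production benchmarking code.

```python
# Sketch of one quantum-volume circuit on n qubits, following the IBM
# construction described in the text: n rounds, each with a random qubit
# permutation and random two-qubit unitaries on neighboring pairs.
import numpy as np

rng = np.random.default_rng(0)

def random_su4():
    """Haar-random 4x4 unitary via QR of a complex Gaussian matrix
    (phases fixed so the distribution is uniform; determinant not forced to 1)."""
    z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_two_qubit(state, u, q0, q1, n):
    """Apply a 4x4 unitary u to qubits q0, q1 of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, (q0, q1), (0, 1))        # bring targets to front
    state = (u @ state.reshape(4, -1)).reshape([2, 2] + [2] * (n - 2))
    state = np.moveaxis(state, (0, 1), (q0, q1))        # restore qubit order
    return state.reshape(-1)

def qv_circuit_probs(n):
    """Ideal output distribution of one n-qubit, depth-n QV circuit."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for _ in range(n):                       # n rounds
        perm = rng.permutation(n)            # random connectivity
        for i in range(0, n - 1, 2):         # two-qubit gate on each pair
            state = apply_two_qubit(state, random_su4(), perm[i], perm[i + 1], n)
    return np.abs(state) ** 2

probs = qv_circuit_probs(4)
print(probs.sum())   # ≈ 1.0: the simulated circuit is unitary
```

On hardware, the same circuits are run and measured, and the measured bitstrings are compared against distributions like `probs` computed classically.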

What does quantum volume measure? Why is it important?

In some sense, quantum volume only measures your ability to run the specific set of random quantum volume circuits. That probably doesn’t sound very useful if you have some other application in mind for a quantum computer, but quantum volume is sensitive to many aspects that we believe are key to building more powerful devices.  

Quantum computers are often built from the ground up. Different parts—for example, single- and two-qubit gates—have been developed independently over decades of academic research. When these parts are put together in a large quantum circuit, there’re often other errors that creep in and can degrade the overall performance. That’s what makes full-system tests like quantum volume so important; they’re sensitive to these errors.  

Increasing quantum volume requires adding more qubits while simultaneously decreasing errors. Our quantum volume results demonstrate all the amazing progress Quantinuum has made at upgrading our trapped-ion systems to include more qubits and identifying and mitigating errors so that users can expect high-fidelity performance on many other algorithms.

You’ve been running quantum volume tests since 2020.  What is your biggest takeaway?

I think there’re a couple of things I’ve learned. First, quantum volume isn’t an easy test to run on current machines. While it doesn’t necessarily require a lot of qubits, it does have fairly demanding error requirements. That’s also clear when comparing progress in quantum volume tests across different platforms, which researchers at Los Alamos National Lab did in a recent paper.  

Second, I’m always impressed by the continuous and sustained performance progress that our hardware team achieves. And that the progress is actually measurable by using the quantum volume benchmark.  

The hardware team has been able to push down many different error sources in the last year while also running customer jobs, and the quantum volume measurements bear this out. For example, H1-2 launched in Fall 2021 with QV=128 (quantum volume is reported as 2^n, so that corresponds to passing the test on seven qubits). Since then, the team has implemented many performance upgrades, reaching QV=4096, a 12-qubit test, about eight months later while also running commercial jobs.  

What are the key findings from your paper?

The paper presents four findings that, taken together, we believe give a clearer view of the quantum volume test.  

First, we explored how compiling the quantum volume circuits scales with qubit number, and we also proposed using arbitrary-angle gates to improve performance, an optimization that many companies are currently exploring.  

Second, we studied how quantum volume circuits behave without errors to better relate circuit results to ideal performance.

Third, we ran many numerical simulations to see how the quantum volume test behaved with errors and constructed a method to efficiently estimate performance in larger future systems.  
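The flavor of such scaling estimates can be sketched with a toy error model. The assumptions here are ours, not the paper's: each n-qubit QV circuit has roughly n²/2 two-qubit gates, the ideal heavy-output probability approaches (1 + ln 2)/2 ≈ 0.85 for large circuits (a known asymptotic from the original IBM proposal), and errors push the output toward the uniform distribution, where the HOP is 0.5.

```python
# Toy depolarizing-style model of how the heavy-output probability (HOP)
# decays with two-qubit gate error. All modeling choices are illustrative.
import math

IDEAL_HOP = (1 + math.log(2)) / 2   # asymptotic ideal HOP, ~0.85
THRESHOLD = 2 / 3

def estimated_hop(n_qubits, two_qubit_error):
    gates = (n_qubits // 2) * n_qubits           # ~n^2/2 two-qubit gates
    fidelity = (1 - two_qubit_error) ** gates    # chance the circuit ran cleanly
    return fidelity * IDEAL_HOP + (1 - fidelity) * 0.5

for n in (8, 12, 16):
    # Bisect for the largest two-qubit error rate keeping the mean HOP > 2/3.
    lo, hi = 0.0, 0.1
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if estimated_hop(n, mid) > THRESHOLD else (lo, mid)
    print(f"n={n:2d}: need two-qubit error below about {lo:.4f}")
```

Even this crude model shows the squeeze the text describes: the tolerable error rate shrinks roughly as 1/n², which is why increasing quantum volume demands simultaneously adding qubits and reducing errors.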

Finally, and I think most importantly, we explored what it takes to meet the quantum volume threshold and what passing it implies about the ability of the quantum computer, especially compared to the requirements for quantum error correction.

What does it take to “pass” the quantum volume threshold?

Passing the threshold for quantum volume is defined by the results of a statistical test on the output of the circuits called the heavy output test. The result of the heavy output test (called the heavy output probability, or HOP) must clear the 2/3 threshold even after its uncertainty bar is taken into account.  
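A minimal sketch of that procedure: the "heavy" outputs of a circuit are the bitstrings whose ideal probability exceeds the median of the ideal distribution, the HOP is the fraction of device shots landing on heavy outputs, and the pass criterion requires the mean HOP minus its uncertainty bar (two standard errors here) to clear 2/3. Function names and sample numbers below are illustrative, not the exact estimator used in production tests.

```python
# Sketch of the heavy-output test: heavy set, HOP, and pass criterion.
import numpy as np

def heavy_set(ideal_probs):
    """Indices of heavy outputs: ideal probability above the median."""
    return np.flatnonzero(ideal_probs > np.median(ideal_probs))

def heavy_output_probability(shots, ideal_probs):
    """Fraction of measured shots (as integer bitstring indices) in the heavy set."""
    heavy = set(heavy_set(ideal_probs).tolist())
    return sum(1 for s in shots if s in heavy) / len(shots)

def passes_threshold(hops, z=2.0, threshold=2 / 3):
    """Pass if the mean HOP minus z standard errors clears the threshold."""
    hops = np.asarray(hops, dtype=float)
    stderr = hops.std(ddof=1) / np.sqrt(len(hops))
    return hops.mean() - z * stderr > threshold

# Toy example: 100 circuits whose HOPs cluster near the ~0.85 ideal value.
rng = np.random.default_rng(1)
hops = rng.normal(0.85, 0.03, size=100)
print(passes_threshold(hops))
```

The tension in the test lives in `passes_threshold`: a noisier device drags the mean HOP toward 0.5, and a wider uncertainty bar makes the 2/3 threshold harder to clear even at a fixed mean.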

Originally, IBM constructed a method to estimate that uncertainty based on some assumptions about the distribution and number of samples. They acknowledged that this construction was likely too conservative, meaning it made much larger uncertainty estimates than necessary.

We verified this with simulations and proposed a different method that constructs much tighter uncertainty estimates, which we also validated numerically. The method allows us to run the test with many fewer circuits while maintaining the same confidence in the returned estimate.
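To see why tighter uncertainty bars translate into fewer circuits: if the bar has width z·σ/√n_c over n_c circuits, clearing the threshold requires n_c > (z·σ/(mean HOP − 2/3))², so any reduction in the estimated σ cuts the required circuit count quadratically. A sketch with illustrative numbers (ours, not the paper's):

```python
# Required circuit count as a function of the uncertainty estimate's width.
import math

def circuits_needed(mean_hop, sigma, z=2.0, threshold=2 / 3):
    """Smallest n_c with mean_hop - z*sigma/sqrt(n_c) > threshold."""
    margin = mean_hop - threshold
    if margin <= 0:
        raise ValueError("mean HOP must exceed the threshold")
    return math.floor((z * sigma / margin) ** 2) + 1

# A device whose mean HOP sits modestly above the threshold:
print(circuits_needed(0.70, sigma=0.50))   # conservative spread estimate
print(circuits_needed(0.70, sigma=0.25))   # halved estimate: ~4x fewer circuits
```

Since every extra circuit costs machine time that could serve customer jobs, a tighter (but still valid) uncertainty construction directly shortens the benchmark run.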

How do you think the quantum volume test can be improved?

Quantum volume has been criticized for a variety of reasons, but I think there’s still a lot to like about the test. Unlike some other full-system tests, quantum volume has a well-defined procedure, requires challenging circuits, and sets reasonable fidelity requirements.  

However, it still has some room for improvement. As machines start to scale up, runtime will become an important dimension to probe. IBM has proposed a metric (CLOPS) for measuring the runtime of quantum volume tests. We agree that the duration of the computation is important, but there should also be tests that balance runtime with fidelity, sometimes called "time-to-solution."

Another aspect that could be improved is filling the gap between the point where quantum volume stops being feasible (classical simulation of the circuits breaks down at around 30 qubits) and larger machines. There's recent work in this area that will be interesting to compare to quantum volume tests.

You presented these findings to IBM researchers who first proposed the benchmark.  How was that experience?

It was great to talk to the experts at IBM. They have so much knowledge and experience on running and testing quantum computers. I’ve learned a lot from their previous work and publications.  

There is a lot of debate about quantum volume and how long it will be a useful benchmark.  What are your thoughts?

The current iteration of quantum volume definitely has an expiration date. It's limited by our ability to classically simulate the system, so outgrowing the test is actually a goal of quantum computing development. In the meantime, quantum volume is a good measuring stick for early development.  

Building a large-scale quantum computer is an incredibly challenging task. Like any large project, you break the task up into milestones that you can reach in a reasonable amount of time.  

It's like if you want to run a marathon. You wouldn’t start your training by trying to run a marathon on Day 1. You’d build up the distance you run every day at a steady pace. The quantum volume test has been setting our pace of development to steadily reach our goal of building ever higher performing devices.
