

Quantinuum’s H-Series team has hit the ground running in 2023, achieving a new performance milestone. The H1-1 trapped ion quantum computer has achieved a Quantum Volume (QV) of 32,768 (2¹⁵), the highest in the industry to date.

The team previously increased the QV to 8,192 (2¹³) on the System Model H1 in September, less than six months ago. The next goal was a QV of 16,384 (2¹⁴). Continuous improvements to the H1-1’s controls and subsystems advanced the system enough to reach 2¹⁴ as expected, and then to go one step further and reach a QV of 2¹⁵.
The Quantum Volume test is a full-system benchmark that produces a single-number measure of a quantum computer’s general capability. The benchmark takes into account qubit number, fidelity, connectivity, and other quantities important in building useful devices. While other measures such as gate fidelity and qubit count are significant and worth tracking, neither is as comprehensive as Quantum Volume, which better represents the operational ability of a quantum computer.
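To make the test concrete, here is a minimal sketch of the ideal half of a QV evaluation – square random circuits and their “heavy” outputs – written as a NumPy statevector simulation. It assumes SciPy’s unitary_group for random SU(4) gates and is illustrative only, not Quantinuum’s benchmarking code.

```python
# A minimal sketch of the ideal side of a Quantum Volume circuit: square
# circuits (depth = qubit count) of random SU(4) gates on randomly paired
# qubits. Illustrative only; not Quantinuum's production benchmark code.
import numpy as np
from scipy.stats import unitary_group

def apply_two_qubit(state, gate, q0, q1, n):
    """Apply a 4x4 unitary to qubits q0 and q1 of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, (q0, q1), (0, 1)).reshape(4, -1)
    psi = (gate @ psi).reshape([2, 2] + [2] * (n - 2))
    return np.moveaxis(psi, (0, 1), (q0, q1)).reshape(-1)

def heavy_output_probability(n, rng):
    """Ideal probability that one random square QV circuit emits a heavy output."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                            # start in |00...0>
    for _ in range(n):                        # depth equals width in the QV test
        perm = rng.permutation(n)             # random qubit pairing each round
        for i in range(0, n - 1, 2):
            gate = unitary_group.rvs(4, random_state=rng)
            state = apply_two_qubit(state, gate, perm[i], perm[i + 1], n)
    probs = np.abs(state) ** 2
    heavy = probs > np.median(probs)          # heavy outputs: above-median ideal probability
    return probs[heavy].sum()

rng = np.random.default_rng(0)
hops = [heavy_output_probability(6, rng) for _ in range(20)]
# Ideal random circuits give roughly 0.85; hardware passes QV 2^6 when its
# measured heavy-output frequency over many such circuits exceeds 2/3 with
# high confidence.
print(f"mean ideal heavy-output probability: {np.mean(hops):.3f}")
```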
Dr. Brian Neyenhuis, Director of Commercial Operations, credits reductions in the phase noise of the computer’s lasers as one key factor in the increase.
"We've had enough qubits for a while, but we've been continually pushing on reducing the error in our quantum operations, specifically the two-qubit gate error, to allow us to do these Quantum Volume measurements,” he said.
The Quantinuum team improved memory error and elements of the calibration process as well.
“It was a lot of little things that got us to the point where our two-qubit gate error and our memory error are both low enough that we can pass these Quantum Volume circuit tests,” he said.
The work of increasing Quantum Volume means improving all the subsystems and subcomponents of the machine individually and simultaneously, while ensuring all the systems continue to work well together. Such a complex task takes a high degree of orchestration across the Quantinuum team, with the benefits of the work passed on to H-Series users.
To illustrate what this 5-digit Quantum Volume milestone means for the H-Series, here are 5 perspectives that reflect Quantinuum teams and H-Series users.
Dr. Henrik Dreyer is Managing Director and Scientific Lead at Quantinuum’s office in Munich, Germany. In the context of his work, an improvement in Quantum Volume is important as it relates to gate fidelity.
“As application developers, the signal-to-noise ratio is what we're interested in,” Henrik said. “If the signal is small, I might run the circuits 10 times and only get one good shot. To recover the signal, I have to do a lot more shots and throw most of them away. Every shot takes time."
“The signal-to-noise ratio is sensitive to the gate fidelity. If you increase the gate fidelity by a little bit, the runtime of a given algorithm may go down drastically,” he said. “For a typical circuit, as the plot shows, even a relatively modest 0.16 percentage point improvement in fidelity could mean that it runs in less than half the time.”
To demonstrate this point, the Quantinuum team has been benchmarking the System Model H1 performance on circuits relevant for near-term applications. The graph below shows repeated benchmarking of the runtime of these circuits before and after the recent improvement in gate fidelity. The result of this moderate change in fidelity is a 3x change in runtime. The runtimes calculated below are based on the number of shots required to obtain accurate results from the benchmarking circuit – the example uses 430 arbitrary-angle two-qubit gates and an accuracy of 3%.
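A back-of-the-envelope model shows where that leverage comes from. The scaling assumptions below are ours, for illustration; they are not the exact methodology behind the plotted runtimes.

```python
# Back-of-the-envelope: how a 0.16 percentage point two-qubit gate fidelity
# gain changes the required shot count. The scaling models are assumptions
# for illustration, not the exact methodology behind the benchmark above.
gates = 430                    # arbitrary-angle two-qubit gates in the example circuit
f_after = 0.99795              # reported average two-qubit gate fidelity
f_before = f_after - 0.0016    # hypothetical fidelity before the 0.16 pp improvement

s_before = f_before ** gates   # crude circuit-level signal model: F^G
s_after = f_after ** gates

print(f"circuit-level signal: {s_before:.3f} -> {s_after:.3f}")
# If the required shot count scales like 1/signal, runtime roughly halves:
print(f"1/signal scaling:   {s_after / s_before:.2f}x fewer shots")
# If it scales like 1/signal^2 (fixed additive accuracy), the gain is larger:
print(f"1/signal^2 scaling: {(s_after / s_before) ** 2:.2f}x fewer shots")
# The observed ~3x runtime improvement sits between these two simple models.
```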

Dr. Natalie Brown and Dr. Ciaran Ryan-Anderson both work on quantum error correction at Quantinuum. They see the QV advance as an overall boost to this work.
“Hitting a Quantum Volume number like this means that you have low error rates, a lot of qubits, and very long circuits,” Natalie said. “And all three of those are wonderful things for quantum error correction. A higher Quantum Volume most certainly means we will be able to run quantum error correction better. Error correction is a critical ingredient to large-scale quantum computing. The earlier we can start exploring error correction on today’s small-scale hardware, the faster we’ll be able to demonstrate it at large-scale.”
Ciaran said that H1-1's low error rates allow scientists to make error correction better and start to explore decoding options.
“If you can have really low error rates, you can apply a lot of quantum operations, known as gates,” Ciaran said. "This makes quantum error correction easier because we can suppress the noise even further and potentially use fewer resources to do it, compared to other devices.”
“This accomplishment shows that gate improvements are getting translated to full-system circuits,” said Dr. Charlie Baldwin, a research scientist at Quantinuum.
Charlie specializes in quantum computing performance benchmarks, conducting research with the Quantum Economic Development Consortium (QED-C).
“Other benchmarking tests use easier circuits or incorporate other options like post-processing data. This can make it more difficult to determine what part improved,” he said. “With Quantum Volume, it’s clear that the performance improvements are from the hardware, which are the hardest and most significant improvements to make.”
“Quantum Volume is a well-established test. You really can’t cheat it,” said Charlie.
Dr. Ross Duncan, Head of Quantum Software, sees Quantum Volume measurements as a good way to show overall progress in the process of building a quantum computer.
“Quantum Volume has merit, compared to any other measure, because it gives a clear answer,” he said.
“This latest increase reveals the extent of combined improvements in the hardware in recent months and means researchers and developers can expect to run deeper circuits with greater success.”
Quantinuum’s business model is unique in that the H-Series systems are continuously upgraded through their product lifecycle. For users, this means they continually and immediately get access to the latest breakthroughs in performance. The reported improvements were not done on an internal testbed, but rather implemented on the H1-1 system which is commercially available and used extensively by users around the world.
“As soon as the improvements were implemented, users were benefiting from them,” said Dr. Jenni Strabley, Sr. Director of Offering Management. “We take our Quantum Volume measurement intermixed with customers’ jobs, so we know that the improvements we’re seeing are also being seen by our customers.”
Jenni went on to say, “Continuously delivering increasingly better performance shows our commitment to our customers’ success with these early small-scale quantum computers as well as our commitment to accuracy and transparency. That’s how we accelerate quantum computing.”
This latest QV milestone demonstrates how the Quantinuum team continues to boost the performance of the System Model H1, making improvements to the two-qubit gate fidelity while maintaining high single-qubit fidelity, high SPAM fidelity, and low cross-talk.
The average single-qubit gate fidelity for these milestones was 99.9955(8)%, the average two-qubit gate fidelity was 99.795(7)% with fully connected qubits, and state preparation and measurement fidelity was 99.69(4)%.
For both tests, the Quantinuum team ran 100 circuits with 200 shots each, using standard QV optimization techniques to yield an average of 219.02 arbitrary angle two-qubit gates per circuit on the 2¹⁴ test, and 244.26 arbitrary angle two-qubit gates per circuit on the 2¹⁵ test.
The Quantinuum H1-1 successfully passed the Quantum Volume 16,384 benchmark, outputting heavy outcomes 69.88% of the time, and passed the 32,768 benchmark, outputting heavy outcomes 69.075% of the time. The heavy output frequency is a simple measure of how well the measured outputs from the quantum computer match the results from an ideal simulation. Both results are above the two-thirds passing threshold with high confidence. More details on the Quantum Volume test can be found here.
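As a rough illustration of “high confidence,” a simple binomial model already places the 2¹⁵ result several standard deviations above the two-thirds threshold; the official pass criterion uses a more careful confidence bound, so treat this only as a sanity check.

```python
# Rough significance check for the QV 2^15 result under a simple binomial
# model; the actual QV pass criterion uses a more careful confidence bound.
import math

shots = 100 * 200          # 100 circuits x 200 shots each
p_hat = 0.69075            # observed heavy-output frequency for the 2^15 test
threshold = 2 / 3          # QV passing threshold

sigma = math.sqrt(p_hat * (1 - p_hat) / shots)   # standard error of the mean
z = (p_hat - threshold) / sigma
print(f"standard error: {sigma:.4f}, z-score above 2/3: {z:.1f}")   # z is about 7
```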


Quantum Volume data and analysis code can be accessed on Quantinuum’s GitHub repository for quantum volume data. Contemporary benchmarking data can be accessed at Quantinuum’s GitHub repository for hardware specifications.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Typically, Quantum Error Detection (QED) is viewed as a short-term solution—a non-scalable, stop-gap until full fault tolerance is achieved at scale.
That’s just changed, thanks to a serendipitous discovery made by our team. Now, QED can be used in a much wider context than previously thought. Our team made this discovery while studying the contact process, which describes things like how diseases spread or how water permeates porous materials. In particular, our team was studying the quantum contact process (QCP), a problem they had tackled before, which helps physicists understand things like phase transitions. In the process (pun intended), they came across what senior advanced physicist, Eli Chertkov, described as “a surprising result.”
While examining the problem, the team realized that they could convert detected errors due to noisy hardware into random resets, a key part of the QCP, thus avoiding the exponentially costly overhead of post-selection normally expected in QED.
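A toy calculation shows why this matters. The rates below are hypothetical, and the reset conversion is schematic – it stands in for the protocol described here, not a faithful reproduction of it.

```python
# Toy comparison of post-selection versus converting detected errors into
# resets. Rates are hypothetical; this only illustrates the scaling argument.
detect_rate = 0.02     # hypothetical per-round probability a detector flags an error
rounds = 100           # error-detection rounds in a deep circuit
shots = 10_000

# Plain QED post-selection: discard every shot with any flagged round,
# so the kept fraction decays exponentially with circuit depth.
kept = (1 - detect_rate) ** rounds
print(f"post-selection keeps {kept:.1%} of shots ({int(shots * kept)} / {shots})")

# Reset conversion: a flagged error is recycled as one of the random resets
# the quantum contact process needs anyway, so no shots are discarded.
print(f"reset conversion keeps 100.0% of shots ({shots} / {shots})")
```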
To understand this better, the team developed a new protocol in which the encoded, or logical, quantum circuit adapts to the noise generated by the quantum computer. They quickly realized that this method could be used to explore other classes of random circuits similar to the ones they were already studying.
The team put it all together on System Model H2 to run a complex simulation, and were surprised to find that they were able to achieve near break-even results, where the logically encoded circuit performed as well as its physical analog, thanks to their clever application of QED. Ultimately, this new protocol will allow QED codes to be used in a scalable way, saving considerable computational resources compared to full quantum error correction (QEC).
Researchers at the crossroads of quantum information, quantum simulation, and many-body physics will take interest in this protocol and use it as a springboard for inventing new use cases for QED.
Stay tuned for more; our team always has new tricks up its sleeves.
Learn more about System Model H2 with this video:
By Konstantinos Meichanetzidis
When will quantum computers outperform classical ones?
This question has hovered over the field for decades, shaping billion-dollar investments and driving scientific debate.
The question has more meaning in context, as the answer depends on the problem at hand. We already have estimates of the quantum computing resources needed for Shor’s algorithm, which has a superpolynomial advantage for integer factoring over the best-known classical methods, threatening cryptographic protocols. Quantum simulation allows one to glean insights into exotic materials and chemical processes that classical machines struggle to capture, especially when strong correlations are present. But even within these examples, estimates change surprisingly often, carving years off expected timelines. And outside these famous cases, the map to quantum advantage is surprisingly hazy.
Researchers at Quantinuum have taken a fresh step toward drawing this map. In a new theoretical framework, Harry Buhrman, Niklas Galke, and Konstantinos Meichanetzidis introduce the concept of “queasy instances” (quantum easy) – problem instances that are comparatively easy for quantum computers but appear difficult for classical ones.

Traditionally, computer scientists classify problems according to their worst-case difficulty. Consider the problem of Boolean satisfiability, or SAT, where one is given a set of variables (each can be assigned a 0 or a 1) and a set of constraints and must decide whether there exists a variable assignment that satisfies all the constraints. SAT is a canonical NP-complete problem, and so in the worst case, both classical and quantum algorithms are expected to perform badly, which means that the runtime scales exponentially with the number of variables. On the other hand, factoring is believed to be easier for quantum computers than for classical ones. But real-world computing doesn’t deal only in worst cases. Some instances of SAT are trivial; others are nightmares. The same is true for optimization problems in finance, chemistry, or logistics. What if quantum computers have an advantage not across all instances, but only for specific “pockets” of hard instances? This could be very valuable, but worst-case analysis is oblivious to this and declares that there is no quantum advantage.
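To make the SAT definition above concrete, here is a minimal brute-force satisfiability check (our illustration, not code from the paper); its exhaustive loop over all 2ⁿ assignments is precisely the exponential worst-case scaling just described.

```python
# Minimal brute-force SAT check, just to make the definition concrete;
# real solvers are far more sophisticated.
from itertools import product

def is_satisfiable(n_vars, clauses):
    """Brute-force SAT: clauses are lists of literals; +i means variable i
    is true, -i means variable i is false (variables are numbered from 1)."""
    for assignment in product([False, True], repeat=n_vars):   # 2^n_vars cases
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (not x2 or x3): satisfiable, e.g. x2 = x3 = True
print(is_satisfiable(3, [[1, 2], [-1, 2], [-2, 3]]))   # True
```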
To make that idea precise, the researchers turned to a tool from theoretical computer science: Kolmogorov complexity. This is a way of measuring how “regular” a string of bits is, based on the length of the shortest program that generates it. A simple string like 0000000000 can be described by a tiny program (“print ten zeros”), while the shortest program that generates a random string with no pattern is as long as the string itself. From there, the notion of instance complexity was developed: instead of asking “how hard is it to describe this string?”, we ask “how hard is it to solve this particular problem instance (represented by a string)?” For a given SAT formula, for example, its polynomial-time instance complexity is the size of the smallest program that runs in polynomial time and decides whether the formula is satisfiable. This smallest program must also answer consistently on every other instance, though it is allowed to declare “I don’t know”.
In their new work, the team extends this idea into the quantum realm by defining polynomial-time quantum instance complexity as the size of the shortest quantum program that solves a given instance and runs in polynomial time. This makes it possible to directly compare quantum and classical effort, in terms of program description length, on the very same problem instance. If the quantum description is significantly shorter than the classical one, that problem instance is one the researchers call “queasy”: quantum-easy and classically hard. These queasy instances are the precise places where quantum computers offer a provable advantage – and one that may be overlooked under a worst-case analysis.
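Schematically, the two quantities compare as follows; the notation is our paraphrase, and the paper’s precise formulations may differ:

```latex
% Schematic definitions (our paraphrase; see the paper for precise forms).
% Classical polynomial-time instance complexity of instance x for problem A:
\[
  \mathrm{ic}^{\mathrm{poly}}(x : A) \;=\; \min\bigl\{\, |p| \;:\; p \text{ runs in polynomial time},\;
  p(x) = A(x),\; \forall y\; p(y) \in \{A(y), \bot\} \,\bigr\}
\]
% The quantum analogue minimises over polynomial-time quantum programs q:
\[
  \mathrm{qic}^{\mathrm{poly}}(x : A) \;=\; \min\bigl\{\, |q| \;:\; q \text{ is a polynomial-time
  quantum program with the same guarantees} \,\bigr\}
\]
% An instance is queasy when the gap between the two is large:
\[
  \mathrm{queasiness}(x) \;=\; \mathrm{ic}^{\mathrm{poly}}(x : A) - \mathrm{qic}^{\mathrm{poly}}(x : A) \;\gg\; 0
\]
```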
The playful name captures the imbalance between classical and quantum effort. A queasy instance is one that makes classical algorithms struggle, i.e. the shortest efficient programs that decide it are long and unwieldy, while a quantum computer can handle the same instance with a much simpler, faster, and shorter program. In other words, these instances make classical computers “queasy,” while quantum ones find them quantum-easy and solve them efficiently. The key test of these definitions is whether they yield reasonable results for well-known optimisation problems.
By carefully analysing a mapping from the problem of integer factoring to SAT (which is possible because factoring is inside NP and SAT is NP-complete), the researchers prove that there exist infinitely many queasy SAT instances. SAT is one of the most central and well-studied problems in computer science, with numerous real-world applications. The significant realisation highlighted by this theoretical framework is that SAT is not expected to yield a blanket quantum advantage; rather, within it lie islands of queasiness – special cases where quantum algorithms decisively win.

Finding a queasy instance is exciting in itself, but there is more to this story. Surprisingly, the new framework shows that when a quantum algorithm solves a queasy instance, it does much more than solve that single case. Because the program that solves it is so compact, the same program provably solves an exponentially large set of other instances as well. Interestingly, the size of this set depends exponentially on the queasiness of the instance!
Think of it like discovering a special shortcut through a maze. Once you’ve found the trick, it doesn’t just solve that one path, but reveals a pattern that helps you solve many other similarly built mazes, too (even if not optimally). This property is called algorithmic utility, and it means that queasy instances are not isolated curiosities. Each one can open a doorway to a whole corridor with other doors, behind which quantum advantage might lie.
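In schematic form (our paraphrase, with constants and technical conditions suppressed), a short quantum program that decides a queasy instance x also decides a set S of further instances whose size grows with the queasiness:

```latex
% Schematic form of algorithmic utility (our paraphrase; constants and
% technical conditions suppressed): the set S of instances decided by the
% same short program grows exponentially with the queasiness of x.
\[
  |S| \;\gtrsim\; 2^{\,\mathrm{queasiness}(x)}
\]
```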
Queasy instances are more than a mathematical curiosity; this is a new framework that provides a language for quantum advantage. Even though the quantities defined in the paper are theoretical, involving Turing machines and viewing programs as abstract bitstrings, they can be approximated in practice by taking an experimental and engineering approach. This work serves as a foundation for pursuing quantum advantage by targeting problem instances and proving that in principle this can be a fruitful endeavour.
The researchers see a parallel with the rise of machine learning. The idea of neural networks existed for decades, along with small-scale analogue and digital implementations, but only when GPUs enabled large-scale trial and error did they explode into practical use. Quantum computing, they suggest, is on the cusp of its own heuristic era. “Quristics” will be prominent in finding queasy instances – instances with the right structure, which classical methods struggle with but quantum algorithms can exploit – to eventually arrive at solutions to typical real-world problems. After all, quantum computing is well-suited for small-data big-compute problems, and this framework supplies the concepts to quantify that: instance complexity captures both the size of an instance and the amount of compute required to solve it.
Most importantly, queasy instances shift the conversation. Instead of asking the broad question of when quantum computers will surpass classical ones, we can now rigorously ask where they do. The queasy framework provides a language and a compass for navigating the rugged and jagged computational landscape, pointing researchers, engineers, and industries toward quantum advantage.
From September 16th – 18th, Quantum World Congress (QWC) brought together visionaries, policymakers, researchers, investors, and students from across the globe to discuss the future of quantum computing in Tysons, Virginia.
Quantinuum is forging the path to universal, fully fault-tolerant quantum computing with our integrated full-stack. With our quantum experts on site, we showcased the latest on Quantinuum Systems, the world’s highest-performing, commercially available quantum computers; our new software stack, featuring the key additions of Guppy and Selene; our path to error correction; and more.
Dr. Patty Lee Named the Industry Pioneer in Quantum
The Quantum Leadership Awards celebrate visionaries transforming quantum science into global impact. This year at QWC, Dr. Patty Lee, our Chief Scientist for Hardware Technology Development, was named the Industry Pioneer in Quantum! This honor celebrates her more than two decades of leadership in quantum computing and her pivotal role advancing the world’s leading trapped-ion systems. Watch the Award Ceremony here.
Keynote with Quantinuum's CEO, Dr. Rajeeb Hazra
At QWC 2024, Quantinuum’s President & CEO, Dr. Rajeeb “Raj” Hazra, took the stage to showcase our commitment to advancing quantum technologies through the unveiling of our roadmap to universal, fully fault-tolerant quantum computing by the end of this decade. This year at QWC 2025, Raj shared the progress we’ve made over the last year in advancing quantum computing on both commercial and technical fronts and exciting insights on what’s to come from Quantinuum. Access the full session here.
Panel Session: Policy Priorities for Responsible Quantum and AI
As part of the Track Sessions on Government & Security, Quantinuum’s Director of Government Relations, Ryan McKenney, discussed “Policy Priorities for Responsible Quantum and AI” with Jim Cook from Actions to Impact Strategies and Paul Stimers from Quantum Industry Coalition.
Fireside Chat: Establishing a Pro-Innovation Regulatory Framework
During the Track Session on Industry Advancement, Quantinuum’s Chief Legal Officer, Kaniah Konkoly-Thege, and Director of Government Relations, Ryan McKenney, discussed the importance of “Establishing a Pro-Innovation Regulatory Framework”.