

When thinking about changes in phases of matter, the first images that come to mind might be ice melting or water boiling. The critical point in these processes is located at the boundary between the two phases – the transition from solid to liquid or from liquid to gas.
Phase transitions like these get right to the heart of how large material systems behave, and they are at the frontier of research in condensed matter physics for their ability to provide insights into emergent phenomena like magnetism and topological order. In classical systems, phase transitions are generally driven by thermal fluctuations and occur at finite temperature. By contrast, quantum systems can exhibit phase transitions even at zero temperature; the residual fluctuations that drive such transitions are due to entanglement and are entirely quantum in origin.
Quantinuum researchers recently used the H1-1 quantum computer to computationally model a group of highly correlated quantum particles at a quantum critical point: the border of a transition between a paramagnetic state (characterized by weak magnetic attraction) and a ferromagnetic one (characterized by strong attraction).
Simulating such a transition on a classical computer is possible using tensor network methods, though it is difficult. However, generalizations of such physics to more complicated systems can pose serious problems to classical tensor network techniques, even when deployed on the most powerful supercomputers. On a quantum computer, on the other hand, such generalizations will likely only require modest increases in the number and quality of available qubits.
In a technical paper submitted to the arXiv, “Probing critical states of matter on a digital quantum computer,” the Quantinuum team demonstrated how the powerful components and high fidelity of the H-Series digital quantum computers can be harnessed to tackle a 128-site condensed matter physics problem, combining a quantum tensor network method with qubit reuse to make highly productive use of the 20-qubit H1-1 quantum computer.
Reza Haghshenas, Senior Advanced Physicist and lead author of the paper, said, “This is the kind of problem that appeals to condensed-matter physicists working with quantum computers, who are looking forward to revealing exotic aspects of strongly correlated systems that are still unknown to the classical realm. Digital quantum computers have the potential to become a versatile tool for working scientists, particularly in fields like condensed matter and particle physics, and may open entirely new directions in fundamental research.”

Tensor networks are mathematical frameworks whose structure enables them to represent and manipulate quantum states in an efficient manner. Originally associated with the mathematics of quantum mechanics, tensor network methods now crop up in many places, from machine learning to natural language processing, or indeed any model with a large number of interacting, high-dimensional mathematical objects.
The Quantinuum team described using a tensor network method, the multi-scale entanglement renormalization ansatz (MERA), to produce accurate estimates of the decay of ferromagnetic correlations and the ground-state energy of the system. MERA is particularly well suited to studying scale-invariant quantum states, such as ground states at continuous quantum phase transitions, where each layer of the mathematical model captures entanglement at a different length scale.
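To give a flavor of how qubit reuse lets a small machine stand in for a much larger lattice, the sketch below shows the basic measure-and-reset pattern in pytket, Quantinuum’s open-source SDK. This is our toy illustration of the idea, not the paper’s actual holographic MERA circuit; the gate pattern, angles, and sizes are all placeholders.

```python
# Toy sketch of qubit reuse: 4 physical qubits stand in for 8 lattice sites.
# Once a qubit's site has been read out, a mid-circuit measurement plus reset
# frees that qubit to represent a fresh site. (Illustrative only.)
from pytket import Circuit

N_PHYS, N_SITES = 4, 8
circ = Circuit(N_PHYS, N_SITES)   # 4 qubits, one classical bit per site

for site in range(N_SITES):
    q = site % N_PHYS             # physical qubit assigned to this site
    if site >= N_PHYS:
        circ.Measure(q, site - N_PHYS)  # read out the site this qubit held...
        circ.Reset(q)                   # ...and recycle the qubit
    circ.Ry(0.2, q)               # placeholder variational angle (half-turns)
    circ.CX(q, (q + 1) % N_PHYS)  # placeholder entangling step

for site in range(N_SITES - N_PHYS, N_SITES):
    circ.Measure(site % N_PHYS, site)   # read out the final window of sites
```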
“By calculating the critical state properties with MERA on a digital quantum computer like the H-Series, we have shown that research teams can program the connectivity and system interactions into the problem,” said Dave Hayes, Lead of the U.S. quantum theory team at Quantinuum and one of the paper’s authors. “So, it can, in principle, go out and simulate any system that you can dream of.”
In this experiment, the researchers wanted to accurately calculate the ground state of a quantum system at its critical point. The system is a quantum spin model: a collection of tiny quantum magnets that interact with one another and can point in different directions. In the paramagnetic phase, the individual magnets are randomly oriented and correlated with one another only over short distances. In the ferromagnetic phase, strong magnetic interactions cause these atomic magnetic moments to align spontaneously over macroscopic length scales.
In the computational model, the quantum magnets were arranged in one dimension, along a line. Describing the critical point of this quantum magnetism problem requires the particles along the line to be entangled with one another in a complex way, which is what makes such problems, especially their high-dimensional and non-equilibrium generalizations, so challenging for classical computers.
“That's as hard as it gets for these systems,” Dave explained. “So that's where we want to look for quantum advantage – because we want the problem to be as hard as possible on the classical computer, and then have a quantum computer solve it.”
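To make the setting concrete, here is a small classical stand-in: exact diagonalization of a transverse-field Ising chain, a canonical model with exactly this kind of paramagnet-to-ferromagnet critical point. The Hamiltonian and size below are our illustrative choices, and the brute-force approach needs memory exponential in the number of spins, which is precisely why the paper’s 128-site problem calls for other methods.

```python
import numpy as np
from functools import reduce

# Transverse-field Ising chain: H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i.
# The quantum critical point sits at h/J = 1.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    return reduce(np.kron, [op if j == i else I2 for j in range(n)])

def tfim(n, J=1.0, h=1.0):
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * site_op(Z, i, n) @ site_op(Z, i + 1, n)
    for i in range(n):
        H -= h * site_op(X, i, n)
    return H

n = 8                                      # 2^8 = 256 states: easy classically
_, vecs = np.linalg.eigh(tfim(n, h=1.0))   # h/J = 1: the critical point
gs = vecs[:, 0]                            # ground state
for r in range(1, n):                      # ferromagnetic correlations <Z_0 Z_r>
    print(r, gs @ site_op(Z, 0, n) @ site_op(Z, r, n) @ gs)
```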
To improve the results, the team used two error mitigation techniques: symmetry-based error heralding, which is made possible by the MERA structure, and zero-noise extrapolation, a method originally developed by researchers at IBM. The first involves enforcing a local symmetry in the model, so that errors that break the symmetry of the state can be detected. The second involves deliberately adding noise to the qubits to measure its impact, then extrapolating from those measurements to estimate the values that would be obtained in the absence of noise.
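The extrapolation step itself is simple to picture. Here is a minimal sketch with made-up numbers; the amplification factors, measured values, and linear fit are illustrative assumptions, not the paper’s settings.

```python
import numpy as np

# Zero-noise extrapolation: run the same circuit with the noise deliberately
# amplified by known factors, then extrapolate the results back to zero noise.
scales = np.array([1.0, 2.0, 3.0])        # noise amplification factors
measured = np.array([0.82, 0.68, 0.55])   # hypothetical expectation values
coeffs = np.polyfit(scales, measured, 1)  # fit a line in the noise scale
print(np.polyval(coeffs, 0.0))            # estimate at zero noise: ~0.95
```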
The Quantinuum team describes this sort of problem as a stepping-stone, one that allows researchers to explore quantum tensor network methods on today’s devices and compare them to simulations or analytical results produced on classical computers. It is a chance to learn how to tackle such problems well before quantum computers scale up and begin to offer solutions that cannot be achieved classically.
“Potentially, our biggest applications over the next couple of years will include studying solid-state systems, physics systems, many-body systems, and modeling them,” said Jenni Strabley, Senior Director of Offering Management at Quantinuum.
The team now looks forward to future work, exploring more complex MERA generalizations to compute the states of 2D and 3D many-body and condensed matter systems on a digital quantum computer – quantum states that are much more difficult to calculate classically.
The H-Series allows researchers to simulate a much broader range of systems than analog devices as well as to incorporate quantum error mitigation strategies, as demonstrated in the experiment. Plus, Quantinuum’s System Model H2 quantum computer, which was launched earlier this year, should scale this type of simulation beyond what is possible using classical computers.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Every year, the APS Global Physics Summit brings together members of the scientific community from around the world, spanning all disciplines of physics.
Join Quantinuum at this year’s conference, taking place in our backyard, Denver, Colorado, from March 15th – 20th, where we will showcase how our quantum hardware, software, and partnerships are helping define the next era of high-performance and quantum computing.
Find our team at booth #1020 and join our sessions below to discover how we’re advancing quantum technologies and building the bridge between HPC and quantum.
Programmable quantum matter at the frontier of classical computation
Speaker: Andrew Potter
Time: 10:12 - 10:48 am
Benchmarking a 98-qubit trapped-ion quantum computer
Speaker: Charles Baldwin
Time: 12:36 - 12:48 pm
High-Fidelity Quantum Operations in the Helios Barium-Ion Processor
Speaker: Anthony Ransford
Time: 4:18 - 4:30 pm
Generative AI Model for Quantum State Preparation
Speaker: Jem Guhit
Time: 4:42 - 4:54 pm
Quantum digital simulations of holographic models using Quantinuum Systems
Speaker: Enrico Rinaldi
Time: 5:54 - 6:30 pm
Software-Enabled Innovations that Drive Robust Commercial Operation on Quantinuum Helios
Speaker: Caroline Figgatt
Time: 8:00 - 8:12 am
Improving Clock Speed in the Quantinuum Helios Quantum Computer
Speaker: Adam Reed
Time: 8:12 - 8:24 am
Less Quantum, More Advantage: An End-to-End Quantum Algorithm for the Jones Polynomial
Speaker: Konstantinos Meichanetzidis
Time: 8:48 - 9:00 am
Quantum Operation Pipelining in the Quantinuum Helios Processor
Speaker: Colin Kennedy
Time: 9:00 - 9:12 am
Directly estimating the fidelity of measurement-based quantum computation
Speaker: David Stephen
Time: 9:12 - 9:24 am
Logical algorithms in a quantum error-detecting code on a trapped-ion quantum processor
Speaker: Matthew DeCross
Time: 9:36 - 9:48 am
Separate and efficient characterization of SPAM errors in the presence of leakage
Speaker: Leigh Norris
Time: 10:00 - 10:12 am
Logical benchmarking on a trapped-ion quantum processor
Speaker: Andrew Guo
Time: 12:00 - 12:12 pm
Modelling Actinides Chemistry with Trapped Ion Quantum Computers
Speaker: Carlo Alberto Gaggioli
Time: 3:30 - 3:42 pm
Digital quantum magnetism at the frontier of classical simulation
Speaker: Michael Foss-Feig
Time: 8:36 - 9:12 am
Shorter width truncated Taylor series for Hamiltonian dynamics simulations
Speaker: Michelle Wynne Sze
Time: 9:24 - 9:36 am
Quantum-Accelerated DFT+DMFT for Correlated Subspaces in Hemoglobin
Speaker: Juan Pedersen
Time: 9:48 - 10:00 am
Simple logical quantum computation with concatenated symplectic double codes
Speaker: Noah Berthusen
Time: 12:48 - 1:00 pm
When is enough enough? Efficient estimation of quantum properties by stopping early
Speaker: Oliver Hart
Time: 12:48 - 1:00 pm
High-Level Programming of the Quantinuum Helios Processor
Speaker: John Campora
Time: 1:48 - 2:24 pm
Error detection without post-selection in adaptive quantum circuits
Speaker: Eli Chertkov
Time: 4:42 - 4:54 pm
Below Threshold Logical Quantum Computation at Quantinuum
Speaker: Shival Dasu
Time: 8:00 - 8:36 am
Performing optimal phase measurements with a universal quantum processor
Speaker: Ross Hutson
Time: 8:36 - 8:48 am
Benchmarking with leakage heralded measurements on the Quantinuum Helios processor
Speaker: Victor Colussi
Time: 10:00 am
High-throughput bidirectional microwave-to-optical transduction assessed with a practical quantum capacity
Speaker: Maxwell Urmey
Time: 12:00 - 12:36 pm
Fast quantum state preparation via AI-based Graph Decimation
Speaker: Matteo Puviani
Time: 5:54 - 6:06 pm
2D Tensor Network Methods for Simulation of Spin Models on Quantum Computers
Speaker: Reza Haghshenas
Time: 8:36 - 8:48 am
High-Performance Computing Simulations for Optical Multidimensional Coherent Spectroscopy Studies of Strained Silicon-Vacancy Centers in Diamond
Speaker: Imran Bashir
Time: 10:36 - 10:48 am
High-Performance Statevector Simulation for TKET and Selene with NVIDIA cuStateVec
Speaker: Fabian Finger
Time: 12:36 - 12:48 pm
Part 1: Logic gates on High-rate Quantum LDPC codes using ion trap devices
Speaker: Elijah Durso-Sabina
Time: 12:48 - 1:00 pm
Driving Quantum Computing Forward: QEC, Hardware, and Applications with Quantinuum
Speaker: Natalie Brown
Time: 1:12 - 1:48 pm
A new QCCD computer and new applications
Speaker: Anthony Ransford
Time: 2:24 - 3:00 pm
*All times in MT
In our latest paper, we’ve taken a big step toward large scale fault-tolerant quantum computing, squeezing up to 94 error-detected qubits (and 48 error-corrected qubits) out of just 98 physical qubits, a low-fat encoding that cuts overhead to the bone. With 64 of our logical qubits, we were able to simulate quantum magnetism at a scale that can be exceedingly difficult for classical computers.
The "holy grail" of quantum computing is universal fault-tolerance: the ability to correct errors faster than they occur during any computation. To realize this, we aim to create “logical qubits,” which are groups of entangled physical qubits that share quantum information in a way that protects it. Better protection leads to lower “logical” error rate and greater ability to solve complex problems.
However, it’s never that easy. An unofficial law of physics is “there’s no such thing as a free lunch.” Creating high-quality, low-error-rate logical qubits often costs many physical qubits, reducing the size of the calculations you can run despite your new, lower-than-ever error rates.
With our latest paper, we are thrilled to announce that we have hit a key milestone on the Quantinuum roadmap: an ultra-efficient method for creating logical qubits, extracting a whopping 48 error-corrected and 64 error-detected logical qubits out of just 98 physical qubits. Our logical qubits boasted better than “break-even” fidelity, beating their physical counterparts with lower error rates on several different fronts. And still that isn’t the end of the story: we used our 64 error-detected logical qubits in a large-scale quantum magnetism simulation, laying the groundwork for future studies of exotic interactions in materials.
To get this world-leading result, we employed a neat trick: ‘nesting’ super efficient quantum error-detecting codes together to make a new, ultra-efficient error-correcting code. Dr. DeCross, a primary author on the paper, said this nesting is like “braiding together ropes made out of ropes made out of ropes”. Physicists call this ‘code concatenation’, and you can think of it as adding layers of protection on top of each other.
To begin, we took the now-famous ‘iceberg code’, a quantum error detection code that gives an almost 1:1 ratio of physical to logical qubits. The iceberg code only detects errors, however: instead of correcting errors, it lets you throw out runs in which an error was detected. To make a code that could both detect and correct errors, we concatenated two iceberg codes together, yielding a code that can correct small errors while still boasting a world-record 2:1 physical-to-logical ratio (physicists call this a “high encoding rate”).
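The bookkeeping behind that ratio is easy to check: when one code is nested inside another, the encoding rates multiply and the code distances at least multiply. The sketch below runs that arithmetic for generic [[k+2, k, 2]] iceberg parameters; the numbers are our illustrative choices, not the exact construction in the paper.

```python
def iceberg(k):
    """[[k+2, k, 2]] iceberg code: k logical qubits in k+2 physical ones."""
    return k + 2, k, 2

def concatenate(inner, outer):
    """Carry the outer code's physical qubits on inner-code logical qubits.
    Rates multiply; distances at least multiply (2 x 2 = 4, enough to
    correct any single error)."""
    n_i, k_i, d_i = inner
    n_o, k_o, d_o = outer
    assert n_o % k_i == 0, "outer physical qubits must tile into inner blocks"
    return n_i * (n_o // k_i), k_o, d_i * d_o

n, k, d = concatenate(iceberg(4), iceberg(6))
print(f"[[{n},{k},{d}]], rate = {k / n:.2f}")   # [[12,6,4]], rate = 0.50
```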
The team then benchmarked the logical qubits, checking large, system-scale operations and comparing them to their physical counterparts. This raises a crucial hurdle to clear: oftentimes, researchers end up with logical qubits that perform *worse* than their physical counterparts. It’s critical that logical qubits actually beat physical ones – that is, after all, the whole point!
Thanks to some clever circuit design and our natively high fidelities, the new logical qubits outperformed their physical counterparts in every test we performed, sometimes by a factor of 10 to 100.
Of course, the whole point is to use our logical qubits for something useful, the ultimate measure of functionality. With 64 error-detected qubits, we performed a simulation of quantum magnetism, a crucial milestone that validates our roadmap.
The team took extra care to perform their simulation in 3 dimensions to best reflect the real world (often, studies like this are done in 1D or 2D to make them easier). Problems like this are both incredibly important for expanding our understanding of materials and incredibly hard, as their complexity scales quickly. To make qubits interact as if they were in a 3D material while they are trapped in 2D inside the computer, we used our all-to-all connectivity, a feature that results from our movable qubits.
Breaking the encoding rate record and performing a world-leading logical simulation wasn’t enough for the team. For their final feat, the team generated 94 error-detected logical qubits, and entangled them all in a special state called a “GHZ” state (also known as a ‘cat’ state, alluding to Schrödinger’s cat). GHZ states are often used by experts as a simple benchmark for showcasing quantum computing’s unique capacity to use entanglement across many qubits. Our best 94-logical qubit GHZ state boasted a fidelity of 94.9%, crushing its un-encoded counterpart.
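For reference, the textbook GHZ construction is just one Hadamard followed by a chain of CNOTs. Below is a minimal pytket sketch at a toy size; the experiment ran the analogous circuit on 94 encoded logical qubits rather than bare physical ones.

```python
from pytket import Circuit

n = 8                  # toy size; scaling the pattern is just a longer chain
ghz = Circuit(n, n)
ghz.H(0)               # put the first qubit in superposition
for q in range(n - 1):
    ghz.CX(q, q + 1)   # spread the superposition down the chain
ghz.measure_all()      # ideally: all-0s or all-1s, each half the time
```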
Taken together, these results show that we can suppress errors more effectively than ever before, proving that Helios is capable of delivering complex, high-fidelity operations that were previously thought to be years away. While the magnetism simulation was only error-detected, it showcases our ability to protect universal computations with partially fault-tolerant methods. On top of that, the team also demonstrated key error-corrected primitives on Helios at scale.
All of this has real-world implications for the quantum ecosystem: we are working to package these iceberg codes into QCorrect, an upcoming tool that will help developers automatically improve the performance of their own applications.
This is just the beginning: we are officially entering the era of large-scale logical computing. The path to fault-tolerance is no longer just theoretical—it is being built, gate by gate, on Helios.
Japan has made bold, strategic investments in both high-performance computing (HPC) and quantum technologies. As these capabilities mature, an important question arises for policymakers and research leaders: how do we move from building advanced machines to demonstrating meaningful, integrated use?
Last year, Quantinuum installed its Reimei quantum computer at a world-class facility in Japan operated by RIKEN, the country’s largest comprehensive research institution. The system was integrated with Japan’s famed supercomputer Fugaku, one of the most powerful in the world, as part of an ambitious national project commissioned by the New Energy and Industrial Technology Development Organization (NEDO), the national research and development entity under the Ministry of Economy, Trade and Industry.
Now, for the first time, a full scientific workflow has been executed across Fugaku and Reimei, our trapped-ion quantum computer. This marks a transition from infrastructure development to practical deployment.
In this first foray into hybrid HPC-quantum computation, the team explored chemical reactions that occur inside biomolecules such as proteins. Reactions of this type are found throughout biology, from enzyme functions to drug interactions.
Simulating such reactions accurately is extremely challenging. The region where the chemical reaction occurs—the “active site”—requires very high precision, because subtle electronic effects determine the outcome. At the same time, this active site is embedded within a much larger molecular environment that must also be represented, though typically at a lower level of detail.
To address this complexity, computational chemistry has long relied on layered approaches, in which different parts of a system are treated with different methods. In our work, we extended this concept into the hybrid computing era by combining classical supercomputing with quantum computing.
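One standard way to make the layering concrete is a subtractive, ONIOM-style scheme (a generic illustration; the exact embedding used here may differ): treat the whole molecule cheaply, treat the active site expensively, and subtract the double-counted cheap treatment of the active site:

E_total ≈ E_low(full system) + E_high(active site) - E_low(active site)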
While the long-term goal of quantum computing is to outperform classical approaches alone, the purpose of this project was to demonstrate a fully functional hybrid system working as an end-to-end platform for real scientific applications. We believe it is not enough to develop hardware in isolation – we must also build workflows in which classical and quantum resources create a whole that is greater than the sum of its parts. We believe this is a crucial step for our industry; large-scale national investments in quantum computing must ultimately show how the technology can be embedded within existing research infrastructure.
In this work, the supercomputer Fugaku handled geometry optimization and baseline electronic structure calculations. The quantum computer Reimei was used to enhance the treatment of the most difficult electronic interactions in the active site, those that are known to challenge conventional approximate methods. The entire process was coordinated through Quantinuum’s workflow system Tierkreis, which allows jobs to move efficiently between machines.
With this infrastructure in place, we are now poised to truly leverage the power of quantum computing. In this instance, the researchers designed the algorithm to specifically exploit the strengths of both the quantum and the classical hardware.
First, the classical computer constructs an approximate description of the molecular system. Then, the quantum computer is used to model the detailed quantum mechanics that the classical computer can’t handle. Together, this improves accuracy, extending the utility of the classical system.
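Schematically, the loop looks like the sketch below. Every name and number here is a placeholder of ours – this is not Tierkreis’s actual API or the study’s data – and it only shows the division of labour between the classical and quantum resources.

```python
from dataclasses import dataclass, field

@dataclass
class MeanField:
    energy: float                 # baseline electronic energy (placeholder)
    active_orbitals: list = field(default_factory=list)  # hard correlations

def hpc_optimize_geometry(molecule: str) -> str:
    # stands in for a Fugaku job: relax the structure classically
    return molecule + " (relaxed)"

def hpc_baseline_structure(geometry: str) -> MeanField:
    # stands in for a Fugaku job: approximate treatment of the full system
    return MeanField(energy=-152.08, active_orbitals=[4, 5])

def qpu_active_space_correction(mf: MeanField) -> float:
    # stands in for a Reimei job: handle only the entangled active site
    return -0.017 * len(mf.active_orbitals)

def hybrid_energy(molecule: str) -> float:
    geometry = hpc_optimize_geometry(molecule)
    mf = hpc_baseline_structure(geometry)
    return mf.energy + qpu_active_space_correction(mf)

print(hybrid_energy("model active site"))  # classical baseline + quantum correction
```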
Accurate simulation of biomolecular reactions remains one of the major challenges in biochemistry. Although the present study uses simplified systems to focus on methodology, it lays the groundwork for future applications in drug design, enzyme engineering, and photoactive biological systems.
While fully fault-tolerant, large-scale quantum computers are still under development, hybrid approaches allow today’s quantum hardware to augment powerful classical systems, such as Fugaku, to explore meaningful applications. As quantum technology matures, the same workflows can scale accordingly.
High-performance computing centers worldwide are actively exploring how quantum devices might integrate into their ecosystems. By demonstrating coordinated job scheduling, direct hardware access, and workflow orchestration across heterogeneous architectures, this work offers a concrete example of how such integration can be achieved.
As quantum hardware matures, we believe the algorithms and workflows developed here can be extended to increasingly realistic and industrially relevant problems. For Japan’s research ecosystem, this first application milestone signals that hybrid quantum–supercomputing is moving from ambition to implementation.