For a quantum computer to be useful, it must be universal, have lots of qubits, and be able to detect and correct errors. The error correction must be done so well that the final calculation sees an error in fewer than one in a billion (or perhaps even one in a trillion) runs. Correcting errors on a quantum computer is tricky, and most current error correcting schemes are expensive for quantum computers to run.
We’ve teamed up with researchers at the University of Colorado to make error correction a little easier – bringing the era of quantum ‘fault tolerance’ closer to reality. Current approaches to error correction involve encoding the quantum information of one qubit into several entangled physical qubits, which together form a “logical” qubit. Most of the encoding schemes (called “codes”) in use today are relatively inefficient: they can only make one logical qubit out of a whole set of physical qubits. As we mentioned above, we want lots of error-corrected qubits in our machines, so this is highly suboptimal – a “low encoding rate” means you need many, many more physical qubits to build a machine with lots of error-corrected logical qubits.
Ideally, our computers will use “high-rate” codes (meaning you get more logical qubits per physical qubit), and researchers have identified promising schemes known as “non-local qLDPC codes”. Codes of this type have been discussed theoretically for years but had never been realized in practice – until now. In a new paper on the arXiv, the joint team has implemented a high-rate non-local qLDPC code on our H2 quantum processor, with impressive results.
The team used the code to create 4 error-protected (logical) qubits, then entangled them in a “GHZ state” with better fidelity than the same operation achieves on physical qubits – meaning that the error protection code improved the fidelity of a difficult entangling operation. The team chose to encode a GHZ state because it is widely used as a system-level benchmark, and preparing it with better-than-physical fidelity at the logical level is the mark of a highly mature system.
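For context, a GHZ state on 4 qubits is the maximally entangled superposition (|0000⟩ + |1111⟩)/√2. The short sketch below prepares it on bare physical qubits using pytket, the Python interface to Quantinuum’s TKET toolkit; it is purely illustrative and does not reproduce the paper’s logical-level, error-corrected construction.

```python
# A physical-qubit GHZ circuit in pytket (Quantinuum's TKET toolkit).
# The paper prepares the analogous state on 4 *logical* qubits inside
# a qLDPC code; that encoded construction is not reproduced here.
from pytket import Circuit

ghz = Circuit(4)
ghz.H(0)           # put qubit 0 into an equal superposition
ghz.CX(0, 1)       # spread the entanglement with a chain of CNOTs
ghz.CX(1, 2)
ghz.CX(2, 3)
ghz.measure_all()  # ideal outcomes: 0000 or 1111, each with probability 1/2

print(ghz.get_commands())
```

Benchmarking GHZ preparation is attractive precisely because a single error anywhere tends to break the all-zeros/all-ones correlation, so the measured fidelity is a sensitive probe of the whole system.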
It is worth noting that this remarkable accomplishment was achieved by a very small team, half of whom do not have specialized knowledge of the underlying physics of our processors. Our hardware and software stack is now so mature that advances can be made by “quantum programmers” who don’t need advanced quantum hardware knowledge and who can run their programs on a commercial machine in between commercial jobs. This puts us leaps and bounds ahead of the competition in terms of accessibility and reliability.
This paper marks the first time anyone has entangled 4 logical qubits with better fidelity than the physical analog. The work is strongly complementary to our recent announcement in partnership with Microsoft, in which we demonstrated logical fidelities better than physical fidelities on entangled Bell pairs and demonstrated multiple rounds of error correction. Together, these results with two different codes underscore how we are moving into the era of fault tolerance ahead of the competition.
The code used in this paper is particularly well suited to architectures like ours that can physically move qubits around. In practice, this means we can perform “non-local” gates and reconfigure the qubit layout. A particularly big advantage is that some of the critical operations amount to a simple relabeling of the individual qubits, which is virtually error-free.
The biggest advantage, however, is this code’s very high encoding rate. Unlike many codes in use today, it offers a high ratio of logical to physical qubits – in fact, the number of logical qubits grows in proportion to the number of physical qubits, which will allow our machines to scale much more quickly than with traditional codes that cap the number of logical qubits per code block. This is yet another proof point that our machines will scale effectively and quickly.
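To put the scaling claim in symbols: define the encoding rate as the number of logical qubits k obtained from n physical qubits. The comparison below is generic and does not use the specific code parameters from the paper.

```latex
r = \frac{k}{n}, \qquad
\underbrace{k = 1 \;\Rightarrow\; r = \tfrac{1}{n} \to 0}_{\text{one logical qubit per block}}, \qquad
\underbrace{k = r\,n,\; r > 0 \text{ fixed}}_{\text{constant-rate qLDPC family}}
```

With a constant-rate family, doubling the number of physical qubits doubles the number of logical qubits, whereas a k = 1 code needs an entirely new block of physical qubits for every additional logical qubit.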
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
At the heart of quantum computing’s promise lies the ability to solve problems that are fundamentally out of reach for classical computers. One of the most powerful ways to unlock that promise is through a novel approach we call Generative Quantum AI, or GenQAI. A key element of this approach is the Generative Quantum Eigensolver (GQE).
GenQAI is based on a simple but powerful idea: combine the unique capabilities of quantum hardware with the flexibility and intelligence of AI. By using quantum systems to generate data, and then using AI to learn from and guide the generation of more data, we can create a powerful feedback loop that enables breakthroughs in diverse fields.
Unlike classical systems, our quantum processing unit (QPU) produces data that is extremely difficult, if not impossible, to generate classically. That gives us a unique edge: we’re not just feeding an AI more text from the internet; we’re giving it new and valuable data that can’t be obtained anywhere else.
One of the most compelling challenges in quantum chemistry and materials science is computing the properties of a molecule’s ground state. For any given molecule or material, the ground state is its lowest-energy configuration. Knowing this state is essential for understanding molecular behavior and for designing new drugs and materials.
The problem is that accurately computing this state for anything but the simplest systems is incredibly complicated. You cannot do it by brute force – testing every possible state and measuring its energy – because the number of possible quantum states grows double-exponentially with system size, making brute force hopeless. What is needed is an intelligent way to search for the ground state energy and other molecular properties.
That’s where GQE comes in. GQE is a methodology that uses data from our quantum computers to train a transformer. The transformer then proposes promising trial quantum circuits – ones likely to prepare low-energy states. You can think of it as an AI-guided search engine for ground states. The novelty is that our transformer is trained from scratch on data generated by our own hardware.
Here's how it works:
1. The transformer proposes a batch of trial quantum circuits.
2. The QPU runs those circuits and measures the energy of the state each one prepares.
3. The measured energies are used to further train the transformer, steering it toward circuits that produce lower-energy states.
4. The loop repeats until the energy estimate converges.
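The following Python sketch mirrors the shape of that loop under heavy simplification: a categorical distribution over gate “tokens” stands in for the transformer, and a synthetic cost function stands in for the energies measured on the QPU. Every name in it is illustrative; it is not Quantinuum’s GQE implementation.

```python
# Toy generate-evaluate-retrain loop in the spirit of GQE.
# A categorical model over gate "tokens" stands in for the transformer,
# and a synthetic cost stands in for QPU energy measurements.
import numpy as np

rng = np.random.default_rng(7)
n_tokens, seq_len = 8, 6                        # gate vocabulary, circuit length
target = rng.integers(n_tokens, size=seq_len)   # pretend "ideal" circuit

def fake_energy(circuit):
    """Stand-in for running a circuit on the QPU and measuring its energy."""
    return float(np.sum(circuit != target)) + rng.normal(scale=0.1)

# "generator": independent token probabilities at each circuit position
probs = np.full((seq_len, n_tokens), 1.0 / n_tokens)

for step in range(30):
    # 1. generate a batch of trial circuits from the current model
    batch = np.array([[rng.choice(n_tokens, p=probs[i]) for i in range(seq_len)]
                      for _ in range(64)])
    # 2. "measure" their energies
    energies = np.array([fake_energy(c) for c in batch])
    # 3. retrain: reweight the model toward the lowest-energy circuits
    elite = batch[np.argsort(energies)[:8]]
    for i in range(seq_len):
        counts = np.bincount(elite[:, i], minlength=n_tokens) + 0.1
        probs[i] = 0.5 * probs[i] + 0.5 * counts / counts.sum()

best = batch[np.argmin(energies)]
print("best energy found:", fake_energy(best))
```

In the real workflow, the generator is a transformer rather than a simple categorical model, and the energies come from measurements on our quantum hardware rather than from a synthetic cost.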
To test the system, we tackled a benchmark problem: finding the ground state energy of the hydrogen molecule (H₂). This problem has a known solution, which lets us verify that the setup works as intended. Indeed, our GQE system successfully found the ground state to within chemical accuracy.
To our knowledge, we’re the first to solve this problem using a combination of a QPU and a transformer, marking the beginning of a new era in computational chemistry.
The idea of using a generative model guided by quantum measurements can be extended to a whole class of problems—from combinatorial optimization to materials discovery, and potentially, even drug design.
By combining quantum computing and AI, we can unlock the full power of both. Our quantum processors can generate rich data that was previously unobtainable, and an AI can learn from that data. Together, they can tackle problems neither could solve alone.
This is just the beginning. We’re already looking at applying GQE to more complex molecules – ones that can’t currently be handled with existing methods – and we’re exploring how the methodology could be extended to other real-world use cases. This opens many new doors in chemistry, and we are excited to see what comes next.
Last year, we joined forces with RIKEN, Japan's largest comprehensive research institution, to install our hardware at RIKEN’s campus in Wako, Saitama. The deployment is part of RIKEN’s project to build a quantum-HPC hybrid platform that combines high-performance computing systems, such as the supercomputer Fugaku, with Quantinuum’s quantum computers.
Today, a paper published in Physical Review Research marks the first of many breakthroughs coming from this international supercomputing partnership. The team from RIKEN and Quantinuum joined up with researchers from Keio University to show that quantum information can be delocalized (scrambled) using a quantum circuit modeled after periodically driven systems.
"Scrambling" of quantum information happens in many quantum systems, from those found in complex materials to black holes. Understanding information scrambling will help researchers better understand things like thermalization and chaos, both of which have wide reaching implications.
To visualize scrambling, imagine a set of particles (say, the bits in a memory) in which one particle holds a specific piece of information you want to know. As time marches on, that quantum information spreads out across the other particles, making it harder and harder to recover the original information from local (few-particle) measurements.
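As a rough numerical illustration of that picture, the sketch below uses NumPy to scramble a single stored bit with layers of random two-qubit gates and tracks how quickly a local measurement stops revealing it. It is a small classical toy simulation, not the circuit family studied in the paper.

```python
# Toy illustration of scrambling: one bit stored on qubit 0 becomes
# unreadable by local measurement after layers of random two-qubit gates.
import numpy as np

rng = np.random.default_rng(0)
n = 6  # number of qubits

def random_two_qubit_gate():
    """Haar-random 4x4 unitary, reshaped to act on two qubit axes."""
    z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, r = np.linalg.qr(z)
    q = q * (np.diag(r) / np.abs(np.diag(r)))   # fix the column phases
    return q.reshape(2, 2, 2, 2)

def apply_gate(psi, gate, i, j):
    """Apply a two-qubit gate to qubits i and j of an n-qubit state tensor."""
    psi = np.tensordot(gate, psi, axes=[[2, 3], [i, j]])
    return np.moveaxis(psi, [0, 1], [i, j])

def p_qubit0_one(psi):
    """Probability that measuring qubit 0 alone returns 1."""
    return float((np.abs(psi) ** 2).reshape(2, -1)[1].sum())

# Two initial product states that differ only in the bit stored on qubit 0.
psi1 = np.zeros((2,) * n, dtype=complex); psi1[(1,) + (0,) * (n - 1)] = 1.0
psi0 = np.zeros((2,) * n, dtype=complex); psi0[(0,) * n] = 1.0

for layer in range(8):
    for i in range(layer % 2, n - 1, 2):        # brick-wall gate pattern
        gate = random_two_qubit_gate()          # same gates applied to both states
        psi1 = apply_gate(psi1, gate, i, i + 1)
        psi0 = apply_gate(psi0, gate, i, i + 1)
    gap = p_qubit0_one(psi1) - p_qubit0_one(psi0)
    print(f"layer {layer + 1}: local distinguishability of the stored bit = {gap:+.3f}")
```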
While many classical techniques exist for studying complex scrambling dynamics, quantum computing has long been viewed as a promising tool for these studies, thanks to its inherently quantum nature and the ease with which it implements ingredients like entanglement. The joint team’s latest result bears that out, showing not only that scrambling states can be generated on a quantum computer, but that they behave as expected and are ripe for further study.
Thanks to this new understanding, we now know that the preparation, verification, and application of a scrambling state – a key resource in quantum information science – can be realized consistently on currently available quantum computers. Read the paper here, and read more about our partnership with RIKEN here.
In our increasingly connected, data-driven world, cybersecurity threats are more frequent and sophisticated than ever. To safeguard modern life, government and business leaders are turning to quantum randomness.
The term to know: quantum random number generators (QRNGs).
QRNGs exploit quantum mechanics to generate truly random numbers, providing the highest level of cryptographic security and supporting a wide range of security-critical applications.
Quantum technologies, including QRNGs, could protect up to $1 trillion in digital assets annually, according to a recent report by the World Economic Forum and Accenture.
The World Economic Forum report identifies five industry groups where QRNGs offer high business value and clear commercialization potential within the next few years.
In line with these trends, recent research by The Quantum Insider projects the quantum security market will grow from approximately $0.7 billion today to $10 billion by 2030.
Quantum randomness is already being deployed commercially.
Recognizing the value of QRNGs, the financial services sector is accelerating its path to commercialization.
Building on this momentum, we aim to broaden our cybersecurity portfolio with a certified randomness product in 2025.
The National Institute of Standards and Technology (NIST) sets the cryptographic standards used in the U.S. and many other countries.
This week, we announced that Quantum Origin has received NIST SP 800-90B Entropy Source validation, making it the first software QRNG validated for use in regulated industries.
This means Quantum Origin is now available for high-security cryptographic systems and integrates seamlessly with NIST-approved solutions without requiring recertification.
The NIST validation, combined with our peer-reviewed papers, further establishes Quantum Origin as the leading QRNG on the market.
It is paramount for governments, commercial enterprises, and critical infrastructure to stay ahead of evolving cybersecurity threats to maintain societal and economic security.
Quantinuum delivers the highest quality quantum randomness, enabling our customers to confront the most advanced cybersecurity challenges present today.