Quantinuum Introduces First Commercial Application for Quantum Computers

March 26, 2025

Few things are more important to the smooth functioning of our digital economies than trustworthy security. From finance to healthcare, from government to defense, quantum computers provide a means of building trust in a secure future.

Quantinuum and its partners JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory and the University of Texas used quantum computers to solve a known industry challenge, generating the “random seeds” that are essential for the cryptography behind all types of secure communication. As our partner and collaborator JPMorganChase explains in this blog post, true randomness is a scarce and valuable commodity.

This year, Quantinuum will introduce a new product based on this development, one that has long been anticipated but until now was thought to be some years away from reality.

It represents a major milestone for quantum computing that will reshape commercial technology and cybersecurity: solving a critical industry challenge by successfully generating certifiable randomness.

Building on the extraordinary computational capabilities of Quantinuum’s H2 System – the highest-performing quantum computer in the world – our team has implemented a groundbreaking approach that is ready-made for industrial adoption. Nature today published the results of a proof of concept carried out by JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas alongside Quantinuum. It lays out a new quantum path to enhanced security that can provide early benefits for applications in cryptography, fairness, and privacy.

By harnessing the powerful properties of quantum mechanics, we’ve shown how to generate the truly random seeds critical to secure electronic communication, establishing a practical use-case that was unattainable before the fidelity and scalability of the H2 quantum computer made it reliable. So reliable, in fact, that it is now possible to turn this into a commercial product.

Quantinuum will integrate quantum-generated certifiable randomness into our commercial portfolio later this year. Alongside Generative Quantum AI and our upcoming Helios system – capable of tackling problems a trillion times more computationally complex than H2 – Quantinuum is further cementing its leadership in the rapidly-advancing quantum computing industry.

This Matters Because Cybersecurity Matters

Cryptographic security, a bedrock of the modern economy, relies on two essential ingredients: standardized algorithms and reliable sources of randomness – the stronger the better. Non-deterministic physical processes, such as those governed by quantum mechanics, are ideal sources of randomness, offering near-total unpredictability and therefore, the highest cryptographic protection. Google, when it originally announced it had achieved quantum supremacy, speculated on the possibility of using the random circuit sampling (RCS) protocol for the commercial production of certifiable random numbers. RCS has been used ever since to demonstrate the performance of quantum computers, including a milestone achievement in June 2024 by Quantinuum and JPMorganChase, which demonstrated the first quantum computer to defy classical simulation. More recently, RCS was used again by Google for the launch of its Willow processor.

In today’s announcement, our joint team used the world’s highest-performing quantum and classical computers to generate certified randomness via RCS. The work was based on advanced research by Shih-Han Hung and Scott Aaronson of the University of Texas at Austin, who are co-authors on today’s paper.

Following a string of major advances in 2024 – solving the scaling challenge, breaking new records for reliability in partnership with Microsoft, and unveiling a hardware roadmap – today’s result proves that quantum technology can create tangible business value beyond what is available with classical supercomputers alone.

What follows is intended as a non-technical explainer of the results in today’s Nature paper.

Certified Randomness: The First Commercial Application for Quantum Computers

For security-sensitive applications, classical random number generation is unsuitable because it is not fundamentally random and there is a risk it can be “cracked”. The holy grail is randomness whose source is truly unpredictable, and Nature provides just the solution: quantum mechanics. Randomness is built into the bones of quantum mechanics, where determinism is thrown out the door and outcomes can be true coin flips.

At Quantinuum, we have a strong track record in developing methods for generating certifiable randomness using a quantum computer. In 2021, we introduced Quantum Origin to the market as a quantum-generated source of entropy targeted at hardening classically generated encryption keys, using well-known quantum technologies in ways that had not previously been possible.

In their theory paper, “Certified Randomness from Quantum Supremacy”, Hung and Aaronson ask the question: is it possible to repurpose RCS and use it to build an application that moves beyond those earlier quantum technologies and takes advantage of the power of a quantum computer running quantum circuits?

This inspired the collaboration team, led by JPMorganChase and Quantinuum, to draw up plans to execute the proposal using real-world technology. Here’s how it worked:

  • The team sent random circuits to Quantinuum’s H2, the world’s highest performing commercially available quantum computer.
  • The quantum computer executed each circuit and returned the corresponding sample. The response times were remarkably short, and it could be proven that the circuits could not have been simulated classically within those times, even using the best-known techniques and computing resources exceeding those of the world’s most powerful classical supercomputer.
  • The randomness of the returned sample was mathematically certified using Frontier, the world’s most powerful classical supercomputer, establishing that it achieved a “passing threshold” on a measure known as the “cross-entropy benchmark”. The better your quantum computer, the higher you can set the “passing threshold”. When the threshold is sufficiently high, “spoofing” the cross-entropy benchmark using only classical methods becomes computationally infeasible.
  • Therefore, if the samples are returned quickly and meet the high threshold, the team could be confident that they were generated by a quantum computer – and thus were truly random. (A simplified sketch of this verification loop in code follows.)
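To make the flow concrete, here is a minimal client-side sketch of that verification loop in Python. It is illustrative only: the helpers `respond` (the untrusted quantum server) and `ideal_probability` (the classically precomputed output probabilities, the part of the work that ran on Frontier) are hypothetical placeholders rather than the experiment’s actual interfaces, and the real protocol additionally feeds the certified samples through a randomness extractor, which is omitted here.

```python
import time
import numpy as np

def linear_xeb(circuit, samples, n_qubits, ideal_probability):
    """Linear cross-entropy benchmark: ~0 for uniform guessing, ~1 for an ideal device."""
    probs = np.array([ideal_probability(circuit, s) for s in samples])
    return (2 ** n_qubits) * probs.mean() - 1.0

def certify_randomness(circuits, respond, ideal_probability,
                       n_qubits, threshold, deadline_s):
    """Accept samples only if they arrive fast AND score above the XEB threshold."""
    certified = []
    for circuit in circuits:
        start = time.monotonic()
        samples = respond(circuit)            # query the untrusted quantum server
        elapsed = time.monotonic() - start
        if elapsed > deadline_s:
            raise RuntimeError("response too slow: classical spoofing not ruled out")
        if linear_xeb(circuit, samples, n_qubits, ideal_probability) < threshold:
            raise RuntimeError("XEB score below threshold: samples not certified")
        certified.extend(samples)             # raw bits, prior to extraction
    return certified
```

The two checks mirror the bullets above: the response-time deadline rules out classical simulation, and the XEB threshold certifies that the samples follow the quantum circuit’s output distribution.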

This confirmed that Quantinuum’s quantum computer not only cannot be matched by classical computers, but can also be used reliably to produce a certifiably random seed – without the need to build your own device, or even to trust the device you are accessing.

Looking ahead

The use of randomness in critical cybersecurity environments will gravitate towards quantum resources as the security demands of end users grow in the face of ongoing cyber threats.

The era of quantum utility offers the promise of radical new approaches to solving substantial and hard problems for businesses and governments.

Quantinuum’s H2 has now demonstrated practical value for cybersecurity vendors and customers alike, where deterministic sources of randomness may in time be overtaken by nature’s own non-deterministic source.

In 2025, we will launch our Helios device, capable of supporting at least 50 high-fidelity logical qubits – further extending our lead in the quantum computing sector. We thus continue our track record of disclosing our objectives and then meeting or surpassing them. This commitment is essential, as it builds faith and conviction among our partners and collaborators that empirical results such as those reported today can lead to successful commercial applications.

Helios, which is already in late-stage testing ahead of commercial availability later this year, brings higher fidelity, greater scale, and greater reliability. It promises to bring a wider set of hybrid quantum-supercomputing opportunities to our customers – making quantum computing more valuable and more accessible than ever before.

And in 2025 we look forward to adding yet another product, building out our cybersecurity portfolio with a quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

Blog
October 23, 2025
Mapping the Hunt for Quantum Advantage

By Konstantinos Meichanetzidis

When will quantum computers outperform classical ones?

This question has hovered over the field for decades, shaping billion-dollar investments and driving scientific debate.

The question has more meaning in context, as the answer depends on the problem at hand. We already have estimates of the quantum computing resources needed for Shor’s algorithm, which has a superpolynomial advantage for integer factoring over the best-known classical methods, threatening cryptographic protocols. Quantum simulation allows one to glean insights into exotic materials and chemical processes that classical machines struggle to capture, especially when strong correlations are present. But even within these examples, estimates change surprisingly often, carving years off expected timelines. And outside these famous cases, the map to quantum advantage is surprisingly hazy.

Researchers at Quantinuum have taken a fresh step toward drawing this map. In a new theoretical framework, Harry Buhrman, Niklas Galke, and Konstantinos Meichanetzidis introduce the concept of “queasy instances” (quantum easy) – problem instances that are comparatively easy for quantum computers but appear difficult for classical ones.

From Problem Classes to Problem Instances

Traditionally, computer scientists classify problems according to their worst-case difficulty. Consider the problem of Boolean satisfiability, or SAT, where one is given a set of variables (each can be assigned a 0 or a 1) and a set of constraints and must decide whether there exists a variable assignment that satisfies all the constraints. SAT is a canonical NP-complete problem, and so in the worst case, both classical and quantum algorithms are expected to perform badly, which means that the runtime scales exponentially with the number of variables. On the other hand, factoring is believed to be easier for quantum computers than for classical ones. But real-world computing doesn’t deal only in worst cases. Some instances of SAT are trivial; others are nightmares. The same is true for optimization problems in finance, chemistry, or logistics. What if quantum computers have an advantage not across all instances, but only for specific “pockets” of hard instances? This could be very valuable, but worst-case analysis is oblivious to this and declares that there is no quantum advantage.
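A tiny worked example (ours, not the paper’s) shows why worst-case SAT is exponential: the obvious decision procedure tries all 2^n assignments.

```python
from itertools import product

def brute_force_sat(n_vars, clauses):
    """clauses: list of clauses; each clause is a list of literals, where
    +i means variable x_i is true and -i means x_i is false (1-indexed).
    Tries all 2^n assignments, hence exponential worst-case runtime."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment      # satisfying assignment found
    return None                    # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2)  ->  x1 = x2 = True
print(brute_force_sat(2, [[1, 2], [-1, 2], [1, -2]]))
```

Clever solvers beat brute force on many instances, but on the hardest instances nothing substantially better is known – which is exactly why instance-by-instance analysis is interesting.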

To make that idea precise, the researchers turned to a tool from theoretical computer science: Kolmogorov complexity. This is a way of measuring how “regular” a string of bits is, based on the length of the shortest program that generates it. A simple string like 0000000000 can be described by a tiny program (“print ten zeros”), while a random string exhibiting no pattern admits no description shorter than the string itself. From there, the notion of instance complexity was developed: instead of asking “how hard is it to describe this string?”, we ask “how hard is it to solve this particular problem instance (represented by a string)?” For a given SAT formula, for example, its polynomial-time instance complexity is the size of the smallest program that runs in polynomial time and decides whether the formula is satisfiable. This smallest program must also answer consistently on every other instance, though it is allowed to declare “I don’t know”.
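In symbols (our notation, for illustration; the paper’s definitions carry more fine print), with a universal machine U the two quantities read:

```latex
K(x) = \min\{\, |p| : U(p) = x \,\}, \qquad
\mathrm{ic}^{\mathrm{poly}}(x : A) = \min\{\, |p| :\ p \text{ runs in polynomial time},\
p \text{ never contradicts } A,\ p(x) \in \{0,1\} \,\}
```

Here p may answer “I don’t know” on other instances, but whenever it does answer it must agree with A, and on x itself it must commit to the correct answer.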

In their new work, the team extends this idea into the quantum realm by defining polynomial-time quantum instance complexity as the size of the shortest quantum program that solves a given instance and runs in polynomial time. This makes it possible to directly compare quantum and classical effort, in terms of program description length, on the very same problem instance. If the quantum description is significantly shorter than the classical one, that problem instance is one the researchers call “queasy”: quantum-easy and classically hard. These queasy instances are the precise places where quantum computers offer a provable advantage – and one that may be overlooked under a worst-case analysis.
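Side by side (again in our notation), the quantum analogue and the resulting “queasiness” of an instance x of a problem A can be written:

```latex
\mathrm{qic}^{\mathrm{poly}}(x : A) = \min\{\, |p| :\ p \text{ is a polynomial-time quantum program consistent with } A,\
p \text{ decides } x \,\}, \qquad
\zeta(x) = \mathrm{ic}^{\mathrm{poly}}(x : A) - \mathrm{qic}^{\mathrm{poly}}(x : A)
```

An instance is queasy precisely when ζ(x) is large: short to describe quantumly, long to describe classically.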

Why “Queasy”?

The playful name captures the imbalance between classical and quantum effort. A queasy instance is one that makes classical algorithms struggle – the shortest efficient programs that decide it are long and unwieldy – while a quantum computer can handle the same instance with a much simpler, faster, and shorter program. In other words, these instances make classical computers “queasy,” while quantum ones find them easy. The test of these definitions lies in demonstrating that they yield reasonable results for well-known optimisation problems.

By carefully analysing a mapping from the problem of integer factoring to SAT (which is possible because factoring is in NP and SAT is NP-complete), the researchers prove that there exist infinitely many queasy SAT instances. SAT is one of the most central and well-studied problems in computer science, with numerous real-world applications. The significant realisation this theoretical framework highlights is that SAT is not expected to yield a blanket quantum advantage, but within it lie islands of queasiness – special cases where quantum algorithms decisively win.

Algorithmic Utility

Finding a queasy instance is exciting in itself, but there is more to this story. Surprisingly, within the new framework it is demonstrated that when a quantum algorithm solves a queasy instance, it does much more than solve that single case. Because the program that solves it is so compact, the same program can provably solve an exponentially large set of other instances, as well. Interestingly, the size of this set depends exponentially on the queasiness of the instance!
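In rough symbols (our paraphrase of the statement above, not the paper’s exact theorem): a short quantum program p that decides a queasy instance x also correctly decides a set S_p of further instances whose size grows exponentially with the queasiness,

```latex
|S_p| \;\geq\; 2^{\,\Omega(\zeta(x))}.
```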

Think of it like discovering a special shortcut through a maze. Once you’ve found the trick, it doesn’t just solve that one path, but reveals a pattern that helps you solve many other similarly built mazes, too (even if not optimally). This property is called algorithmic utility, and it means that queasy instances are not isolated curiosities. Each one can open a doorway to a whole corridor with other doors, behind which quantum advantage might lie.

A North Star for the Field

Queasy instances are more than a mathematical curiosity; this is a new framework that provides a language for quantum advantage. Even though the quantities defined in the paper are theoretical, involving Turing machines and viewing programs as abstract bitstrings, they can be approximated in practice by taking an experimental and engineering approach. This work serves as a foundation for pursuing quantum advantage by targeting problem instances and proving that in principle this can be a fruitful endeavour.

The researchers see a parallel with the rise of machine learning. The idea of neural networks existed for decades, along with small-scale analogue and digital implementations, but only when GPUs enabled large-scale trial and error did they explode into practical use. Quantum computing, they suggest, is on the cusp of its own heuristic era. “Quristics” will be prominent in finding queasy instances – instances with the right structure for classical methods to struggle but quantum algorithms to exploit – on the way to solving typical real-world problems. After all, quantum computing is well suited to small-data, big-compute problems, and this framework quantifies exactly that: instance complexity captures both the size of such problems and the amount of compute required to solve them.

Most importantly, queasy instances shift the conversation. Instead of asking the broad question of when quantum computers will surpass classical ones, we can now rigorously ask where they do. The queasy framework provides a language and a compass for navigating the rugged and jagged computational landscape, pointing researchers, engineers, and industries toward quantum advantage.

Blog
September 15, 2025
Quantum World Congress 2025

From September 16th – 18th, Quantum World Congress (QWC) brought together visionaries, policymakers, researchers, investors, and students from across the globe to discuss the future of quantum computing in Tysons, Virginia.

Quantinuum is forging the path to universal, fully fault-tolerant quantum computing with our integrated full stack. With our quantum experts on site, we showcased the latest on Quantinuum Systems, the world’s highest-performing, commercially available quantum computers, our new software stack featuring the key additions of Guppy and Selene, our path to error correction, and more.

Highlights from QWC

Dr. Patty Lee Named the Industry Pioneer in Quantum

The Quantum Leadership Awards celebrate visionaries transforming quantum science into global impact. This year at QWC, Dr. Patty Lee, our Chief Scientist for Hardware Technology Development, was named the Industry Pioneer in Quantum! This honor celebrates her more than two decades of leadership in quantum computing and her pivotal role advancing the world’s leading trapped-ion systems. Watch the Award Ceremony here.

Keynote with Quantinuum's CEO, Dr. Rajeeb Hazra

At QWC 2024, Quantinuum’s President & CEO, Dr. Rajeeb “Raj” Hazra, took the stage to showcase our commitment to advancing quantum technologies through the unveiling of our roadmap to universal, fully fault-tolerant quantum computing by the end of this decade. This year at QWC 2025, Raj shared the progress we’ve made over the last year in advancing quantum computing on both commercial and technical fronts and exciting insights on what’s to come from Quantinuum. Access the full session here.

Panel Session: Policy Priorities for Responsible Quantum and AI

As part of the Track Sessions on Government & Security, Quantinuum’s Director of Government Relations, Ryan McKenney, discussed “Policy Priorities for Responsible Quantum and AI” with Jim Cook from Actions to Impact Strategies and Paul Stimers from Quantum Industry Coalition.

Fireside Chat: Establishing a Pro-Innovation Regulatory Framework

During the Track Session on Industry Advancement, Quantinuum’s Chief Legal Officer, Kaniah Konkoly-Thege, and Director of Government Relations, Ryan McKenney, discussed the importance of “Establishing a Pro-Innovation Regulatory Framework”.

Blog
September 15, 2025
Quantum gravity in the lab

In the world of physics, ideas can lie dormant for decades before revealing their true power. What begins as a quiet paper in an academic journal can eventually reshape our understanding of the universe itself.

In 1993, nestled deep in the halls of Yale University, physicist Subir Sachdev and his graduate student Jinwu Ye stumbled upon such an idea. Their work, originally aimed at unraveling the mysteries of “spin fluids”, would go on to ignite one of the most surprising and profound connections in modern physics—a bridge between the strange behavior of quantum materials and the warped spacetime of black holes.

Two decades after the paper was published, it would be pulled into the orbit of a radically different domain: quantum gravity. Thanks to work by renowned physicist Alexei Kitaev in 2015, the model found new life as a testing ground for the mind-bending theory of holography – the idea that the universe we live in might be a projection from a lower-dimensional reality.

Holography is an exotic approach to understanding reality where scientists use holograms to describe higher dimensional systems in one less dimension. So, if our world is 3+1 dimensional (3 spatial directions plus time), there exists a 2+1, or 3-dimensional description of it. In the words of Leonard Susskind, a pioneer in quantum holography, "the three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface."  

The “SYK” model, as it is known today, is now considered a quintessential framework for studying strongly correlated quantum phenomena, which occur in everything from superconductors to strange metals – and even in black holes. In fact, the SYK model has also been used to study one of physics’ true final frontiers, quantum gravity, with the authors of the paper calling it “a paradigmatic model for quantum gravity in the lab.”

The SYK model involves Majorana fermions, a type of particle that is its own antiparticle. A key feature of the model is that these fermions are all-to-all connected, leading to strong correlations. This connectivity makes the model particularly challenging to simulate on classical computers, where such correlations are difficult to capture. Our quantum computers, however, natively support all-to-all connectivity, making them a natural fit for studying the SYK model.
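For reference, the standard (q = 4) SYK Hamiltonian couples every quadruple of the N Majorana fermions χ_i with independent Gaussian random couplings (normalization conventions vary; this is the textbook form, not a quotation from the new paper):

```latex
H_{\mathrm{SYK}} = \sum_{1 \le i < j < k < l \le N} J_{ijkl}\, \chi_i \chi_j \chi_k \chi_l,
\qquad \overline{J_{ijkl}} = 0, \qquad \overline{J_{ijkl}^{\,2}} = \frac{3!\, J^2}{N^3}.
```

Every term touches four fermions chosen from anywhere in the system, which is the all-to-all connectivity referred to above.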

Now, 10 years after Kitaev’s watershed lectures, we’ve made new progress in studying the SYK model. In a new paper, we’ve completed the largest ever SYK study on a quantum computer. By exploiting our system’s native high fidelity and all-to-all connectivity, as well as our scientific team’s deep expertise across many disciplines, we were able to study the SYK model at a scale three times larger than the previous best experimental attempt.

While this work does not exceed classical techniques, it is very close to the classical state of the art. The biggest classical study to date was done on 64 fermions, while our recent result, run on our smallest processor (System Model H1), included 24 fermions. Modelling 24 fermions costs us only 12 qubits (plus one ancilla), making it clear that we can quickly scale these studies: our System Model H2 supports 56 qubits (or ~100 fermions), and Helios, which is coming online this year, will have over 90 qubits (or ~180 fermions).

However, working with the SYK model takes more than just qubits. The SYK model has a complex Hamiltonian that is difficult to work with when encoded on a computer—quantum or classical. Studying the real-time dynamics of the SYK model means first representing the initial state on the qubits, then evolving it properly in time according to an intricate set of rules that determine the outcome. This means deep circuits (many circuit operations), which demand very high fidelity, or else an error will occur before the computation finishes.
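A back-of-envelope estimate (our illustration; the numbers are not from the paper) shows why depth is so punishing: if each gate succeeds with fidelity f and a circuit contains G gates, the whole run is error-free with probability roughly

```latex
P_{\text{success}} \approx f^{\,G}, \qquad \text{e.g.}\quad 0.999^{1000} \approx e^{-1} \approx 0.37 .
```

Even 99.9% gate fidelity leaves only about a one-in-three chance that a 1,000-gate circuit finishes without a single fault, which is why the circuit-shortening techniques described next matter so much.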

Our cross-disciplinary team worked to ensure that we could pull off such a large simulation on a relatively small quantum processor, laying the groundwork for quantum advantage in this field.

First, the team adopted a randomized quantum algorithm called TETRIS to run the simulation. By using random sampling, among other methods, the TETRIS algorithm allows one to compute the time evolution of a system without the pernicious discretization errors or sizable overheads that plague other approaches. TETRIS is particularly suited to simulating the SYK model: because the material is highly disordered, simulating the SYK Hamiltonian means averaging over many random Hamiltonians. With TETRIS, one generates random circuits to compute evolution (even for a deterministic Hamiltonian), so when applying TETRIS to SYK, each shot can draw a random instance of the Hamiltonian and generate a random circuit for it at the same time. This simple approach reduces the gate count required per shot, meaning users can run more shots, naturally mitigating noise.
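To illustrate the idea (and only the idea; this is a qDRIFT-style sketch in the same spirit, not Quantinuum’s TETRIS implementation), the snippet below draws a fresh random SYK instance and a fresh random evolution schedule together, once per shot. The function names and normalizations are our own.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

def random_syk_terms(n_majoranas, j_scale=1.0):
    """One disorder instance: a Gaussian coupling for each quadruple i<j<k<l."""
    return [(quad, rng.normal(0.0, j_scale))
            for quad in combinations(range(n_majoranas), 4)]

def random_evolution_schedule(terms, t, n_steps):
    """qDRIFT-style schedule: at each step, sample one term with probability
    proportional to |coupling| and rotate by a fixed small angle lam*t/n_steps.
    In expectation this reproduces evolution under the full Hamiltonian."""
    weights = np.array([abs(c) for _, c in terms])
    lam = weights.sum()
    picks = rng.choice(len(terms), size=n_steps, p=weights / lam)
    return [(terms[i][0], np.sign(terms[i][1]) * lam * t / n_steps) for i in picks]

# One "shot": new random Hamiltonian and new random circuit recipe together.
terms = random_syk_terms(n_majoranas=8)
schedule = random_evolution_schedule(terms, t=1.0, n_steps=200)
print(len(schedule), schedule[0])   # 200 (quadruple, rotation-angle) entries
```

Each (quadruple, angle) entry would compile to one short Majorana-string rotation on hardware, so the per-shot circuit stays shallow while the shot-averaged result converges to the disorder-averaged dynamics.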

In addition, the team “sparsified” the SYK model, which means “pruning” the fermion interactions to reduce the complexity while still maintaining its crucial features. By combining sparsification and the TETRIS algorithm, the team was able to significantly reduce the circuit complexity, allowing it to be run on our machine with high fidelity.

They didn’t stop there. The team also proposed two new noise mitigation techniques, ensuring that they could run circuits deep enough without devolving entirely into noise. The two techniques both worked quite well, and the team was able to show that their algorithm, combined with the noise mitigation, performed significantly better and delivered more accurate results. The perfect agreement between the circuit results and the true theoretical results is a remarkable feat coming from a co-design effort between algorithms and hardware.

As we scale to larger systems, we come closer than ever to realizing quantum gravity in the lab, and thus, answering some of science’s biggest questions.
