Blog

Discover how we are pushing the boundaries in the world of quantum computing

technical, partnership
March 27, 2025
Quantinuum and Google DeepMind Unveil the Reality of the Symbiotic Relationship Between Quantum and AI

The marriage of AI and quantum computing is going to have a widespread and meaningful impact in many aspects of our lives, combining the strengths of both fields to tackle complex problems.

Quantum and AI are the ideal partners. At Quantinuum, we are developing tools to accelerate AI with quantum computers, and quantum computers with AI. According to recent independent analysis, our quantum computers are the world’s most powerful, enabling state-of-the-art approaches like Generative Quantum AI (Gen QAI), where we train classical AI models with data generated from a quantum computer.

We harness AI methods to accelerate the development and performance of our full quantum computing stack as opposed to simply theorizing from the sidelines. A paper in Nature Machine Intelligence reveals the results of a recent collaboration between Quantinuum and Google DeepMind to tackle the hard problem of quantum compilation.

The work shows a classical AI model supporting quantum computing by demonstrating its potential for quantum circuit optimization. An AI approach like this could lead to more effective control at the hardware level, to a richer suite of middleware tools for quantum circuit compilation, error mitigation and correction, and even to novel high-level quantum software primitives and quantum algorithms.

An AI power-up for circuit optimization

The joint Quantinuum-Google DeepMind team of researchers tackled one of quantum computing’s most pressing challenges: minimizing the number of highly expensive but essential T-gates required for universal quantum computation. This is especially important in the fault-tolerant regime, which is becoming increasingly relevant as quantum error correction protocols are explored on rapidly developing quantum hardware. The team adapted AlphaTensor, Google DeepMind’s reinforcement learning AI system for algorithm discovery, originally introduced to improve the efficiency of linear algebra computations. The result is AlphaTensor-Quantum, which takes a quantum circuit as input and returns a new one with exactly the same functionality but fewer T-gates!

AlphaTensor-Quantum outperformed current state-of-the-art optimization methods and matched the best human-designed solutions across a thoroughly curated set of circuits, chosen for their prevalence in applications ranging from quantum arithmetic to quantum chemistry. This breakthrough shows the potential for AI to automate the process of finding the most efficient quantum circuit, and it is the first time such an AI model has been applied to the problem of T-count reduction at this scale.
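
To make the optimization target concrete, here is a minimal numpy sketch of the acceptance test any T-count optimizer must satisfy: the rewritten circuit has to implement the same unitary, up to a global phase, with fewer T-gates. This is an illustration only; the single-qubit gate set and the hand-picked rewrite below are assumptions of the sketch, not AlphaTensor-Quantum’s learned rewrites.

```python
import numpy as np

# Single-qubit gate matrices; T is the expensive gate in the fault-tolerant regime,
# while S and H are "cheap" Clifford gates.
T = np.diag([1, np.exp(1j * np.pi / 4)])
S = np.diag([1, 1j])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
GATES = {"T": T, "S": S, "H": H}

def unitary(sequence):
    """Multiply the gates of a single-qubit circuit (leftmost gate acts first)."""
    u = np.eye(2, dtype=complex)
    for name in sequence:
        u = GATES[name] @ u
    return u

def t_count(sequence):
    return sum(1 for name in sequence if name == "T")

def equivalent(u, v, tol=1e-9):
    """True if u and v are equal up to a global phase."""
    overlap = np.vdot(u, v)  # equals e^{i*phase} * ||u||^2 when v = e^{i*phase} * u
    return abs(overlap) > tol and np.allclose(v, (overlap / abs(overlap)) * u, atol=tol)

# A redundant circuit and a candidate rewrite: two consecutive T gates equal one S gate.
original  = ["H", "T", "T", "H"]
optimized = ["H", "S", "H"]

assert equivalent(unitary(original), unitary(optimized))
print(t_count(original), "->", t_count(optimized))   # 2 -> 0: accept the rewrite
```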

A quantum power-up for machine learning

The symbiotic relationship between quantum and AI works both ways. When AI and quantum computing work together, quantum computers could dramatically accelerate machine learning algorithms, whether by the development and application of natively quantum algorithms, or by offering quantum-generated training data that can be used to train a classical AI model.

Our recent announcement about Generative Quantum AI (Gen QAI) spells out our commitment to unlocking the value of the data generated by our H2 quantum computer. This value arises from the world-leading fidelity and computational power of our System Model H2, which is impossible to simulate exactly on any classical computer; the data it generates, which we can use to train AI, is therefore inaccessible by any other means. Quantinuum’s Chief Scientist for Algorithms and Innovation, Prof. Harry Buhrman, has likened accessing the first truly quantum-generated training data to the invention of the modern microscope in the seventeenth century, which revealed an entirely new world of tiny organisms thriving unseen within a single drop of water.

Recently, we announced a wide-ranging partnership with NVIDIA. It charts a course to commercial-scale applications arising from the partnership between high-performance classical computers, powerful AI systems, and quantum computers that breach the boundaries of what was previously possible. Our President & CEO, Dr. Raj Hazra, spoke to CNBC recently about our partnership. Watch the video here.

As we prepare for the next stage of quantum processor development, with the launch of our Helios system in 2025, we’re excited to see how AI can help write more efficient code for quantum computers – and how our quantum processors, the most powerful in the world, can provide a backend for AI computations.

As in any truly symbiotic relationship, the addition of AI to quantum computing equally benefits both sides of the equation.

To read more about Quantinuum and Google DeepMind’s collaboration, please read the scientific paper here.

partnership
All
technical
All
March 26, 2025
Quantinuum Introduces First Commercial Application for Quantum Computers

Few things are more important to the smooth functioning of our digital economies than trustworthy security. From finance to healthcare, from government to defense, quantum computers provide a means of building trust in a secure future.

Quantinuum and its partners JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory and the University of Texas used quantum computers to solve a known industry challenge, generating the “random seeds” that are essential for the cryptography behind all types of secure communication. As our partner and collaborator JPMorganChase explains in this blog post, true randomness is a scarce and valuable commodity.

This year, Quantinuum will introduce a new product based on this development, one that has long been anticipated but until now was thought to be some years away from reality.

It represents a major milestone for quantum computing that will reshape commercial technology and cybersecurity: Solving a critical industry challenge by successfully generating certifiable randomness.

Building on the extraordinary computational capabilities of Quantinuum’s H2 System – the highest-performing quantum computer in the world – our team has implemented a groundbreaking approach that is ready-made for industrial adoption. Nature today reported the results of a proof of concept with JPMorganChase, Oak Ridge National Laboratory, Argonne National Laboratory, and the University of Texas alongside Quantinuum. It lays out a new quantum path to enhanced security that can provide early benefits for applications in cryptography, fairness, and privacy.

By harnessing the powerful properties of quantum mechanics, we’ve shown how to generate the truly random seeds critical to secure electronic communication, establishing a practical use-case that was unattainable before the fidelity and scalability of the H2 quantum computer made it reliable. So reliable, in fact, that it is now possible to turn this into a commercial product.

Quantinuum will integrate quantum-generated certifiable randomness into our commercial portfolio later this year. Alongside Generative Quantum AI and our upcoming Helios system – capable of tackling problems a trillion times more computationally complex than H2 – Quantinuum is further cementing its leadership in the rapidly-advancing quantum computing industry.

This Matters Because Cybersecurity Matters

Cryptographic security, a bedrock of the modern economy, relies on two essential ingredients: standardized algorithms and reliable sources of randomness – the stronger the better. Non-deterministic physical processes, such as those governed by quantum mechanics, are ideal sources of randomness, offering near-total unpredictability and therefore the highest cryptographic protection. Google, when it originally announced it had achieved quantum supremacy, speculated on the possibility of using the random circuit sampling (RCS) protocol for the commercial production of certifiable random numbers. RCS has been used ever since to demonstrate the performance of quantum computers, including a milestone achievement in June 2024 by Quantinuum and JPMorganChase, demonstrating the first quantum computer to defy classical simulation. More recently, RCS was used again by Google for the launch of its Willow processor.

In today’s announcement, our joint team used the world’s highest-performing quantum and classical computers to generate certified randomness via RCS. The work was based on advanced research by Shih-Han Hung and Scott Aaronson of the University of Texas at Austin, who are co-authors on today’s paper.

Following a string of major advances in 2024, including solving the scaling challenge, breaking new records for reliability in partnership with Microsoft, and unveiling a hardware roadmap, today’s announcement proves that quantum technology is capable of creating tangible business value beyond what is available with classical supercomputers alone.

What follows is intended as a non-technical explainer of the results in today’s Nature paper.

Certified Randomness: The First Commercial Application for Quantum Computers

For security-sensitive applications, classical random number generation is unsuitable because it is not fundamentally random and there is a risk it can be “cracked”. The holy grail is randomness whose source is truly unpredictable, and Nature provides just the solution: quantum mechanics. Randomness is built into the bones of quantum mechanics, where determinism is thrown out the door and outcomes can be true coin flips.

At Quantinuum, we have a strong track record in developing methods for generating certifiable randomness using a quantum computer. In 2021, we introduced Quantum Origin to the market as a quantum-generated source of entropy targeted at hardening classically generated encryption keys, using well-known quantum technologies that it had not previously been possible to use.

In their theory paper, “Certified Randomness from Quantum Supremacy”, Hung and Aaronson ask the question: is it possible to repurpose RCS and use it to build an application that moves beyond existing quantum technologies and takes advantage of the power of a quantum computer running quantum circuits?

This was the inspiration for the collaboration team led by JPMorganChase and Quantinuum to draw up plans to execute the proposal using real-world technology. Here’s how it worked:

  • The team sent random circuits to Quantinuum’s H2, the world’s highest performing commercially available quantum computer.
  • The quantum computer executed each circuit and returned the corresponding sample. The response times were remarkably short, and it could be proven that the circuits could not have been simulated classically within those times, even using the best-known techniques on computing resources greater than those available in the world’s most powerful classical supercomputer.
  • The randomness of the returned sample was mathematically certified using Frontier, the world’s most powerful classical supercomputer, establishing that it achieved a “passing threshold” on a measure known as the “cross-entropy benchmark”. The better your quantum computer, the higher you can set the “passing threshold”. When the threshold is sufficiently high, “spoofing” the cross-entropy benchmark using only classical methods becomes inefficient.
  • Therefore, if the samples are returned quickly and meet the high threshold, the team could be confident that they were generated by a quantum computer – and thus be truly random.

This confirmed that Quantinuum’s quantum computer not only cannot be matched by classical computers, but can also be used reliably to produce a certifiably random seed, without the need to build your own device or even to trust the device you are accessing.
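
As a rough illustration of the certification logic in the bullet points above, the per-round acceptance test can be sketched as follows. This is not the analysis from the Nature paper: the linear cross-entropy formula is standard, but the function names, threshold, and time bound are placeholders of the sketch.

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark score for a batch of sampled bitstrings.

    `ideal_probs` maps each bitstring to its probability under noiseless simulation
    of the submitted random circuit -- the classically expensive step performed on
    the supercomputer after the samples are returned.
    """
    mean_p = np.mean([ideal_probs[s] for s in samples])
    return (2 ** n_qubits) * mean_p - 1.0

def accept_round(samples, ideal_probs, n_qubits,
                 response_time_s, time_bound_s, xeb_threshold):
    """A round contributes certified randomness only if the sampler was both fast
    (ruling out classical spoofing within the allotted time) and accurate
    (cross-entropy score above the passing threshold)."""
    fast_enough = response_time_s <= time_bound_s
    accurate_enough = linear_xeb(samples, ideal_probs, n_qubits) >= xeb_threshold
    return fast_enough and accurate_enough
```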

Looking ahead

The use of randomness in critical cybersecurity environments will gravitate towards quantum resources, as the security demands of end users grow in the face of ongoing cyber threats.

The era of quantum utility offers the promise of radical new approaches to solving substantial and hard problems for businesses and governments.

Quantinuum’s H2 has now demonstrated practical value for cybersecurity vendors and customers alike, where today’s sources of encryption randomness may in time be overtaken by nature’s own source of randomness.

In 2025, we will launch our Helios device, capable of supporting at least 50 high-fidelity logical qubits – and further extending our lead in the quantum computing sector. We thus continue our track record of disclosing our objectives and then meeting or surpassing them. This commitment is essential, as it gives our partners and collaborators confidence that empirical results such as those reported today can lead to successful commercial applications.

Helios, already in its late testing phase ahead of commercial availability later this year, brings higher fidelity, greater scale, and greater reliability. It promises to bring a wider set of hybrid quantum-supercomputing opportunities to our customers – making quantum computing more valuable and more accessible than ever before.

And in 2025 we look forward to adding yet another product, building out our cybersecurity portfolio with a quantum source of certifiably random seeds for a wide range of customers who require this foundational element to protect their businesses and organizations.

technical
March 25, 2025
Untangling the Mysteries of Knots with Quantum Computers

One of the greatest privileges of working directly with the world’s most powerful quantum computer at Quantinuum is building meaningful experiments that convert theory into practice. The privilege becomes even more compelling when considering that our current quantum processor – our H2 system – will soon be enhanced by Helios, a quantum computer potentially a stunning trillion times more powerful, and due for launch in just a few months. The moment has now arrived when we can build a timeline for applications that quantum computing professionals have anticipated for decades and which are experimentally supported.

Quantinuum’s applied algorithms team has released an end-to-end implementation of a quantum algorithm to solve a central problem in knot theory. Along with an efficiently verifiable benchmark for quantum processors, it allows for concrete resource estimates for quantum advantage in the near term. The research team included Quantinuum researchers Enrico Rinaldi, Chris Self, Eli Chertkov, Matthew DeCross, David Hayes, Brian Neyenhuis, and Marcello Benedetti, as well as Tuomas Laakkonen of the Massachusetts Institute of Technology. In this article, Konstantinos Meichanetzidis, a team leader from Quantinuum’s AI group who led the project, writes about the problem being addressed and how the team, adopting an aggressively practical mindset, quantified the resources required for quantum advantage:

Knot theory is a field of mathematics within ‘low-dimensional topology’, with a rich history stemming from a wild idea proposed by Lord Kelvin, who conjectured that chemical elements are different knots formed by vortices in the aether. Of course, we know today that the aether theory was falsified by the Michelson-Morley experiment, but mathematicians have been classifying, tabulating, and studying knots ever since. Regarding applications, the pure mathematics of knots can find its way into cryptography, but knot theory is also intrinsically related to many aspects of the natural sciences. For example, it naturally shows up in certain spin models in statistical mechanics when one studies thermodynamic quantities, and the magnetohydrodynamical properties of knotted magnetic fields on the surface of the sun are an important indicator of solar activity, to name a few examples. Remarkably, physical properties of knots are important in understanding the stability of macromolecular structures. This is highlighted by the work of Cozzarelli and Sumners in the 1980s on the topology of DNA, particularly how it forms knots and supercoils. Their interdisciplinary research helped explain how enzymes untangle and manage DNA topology, which is crucial for replication and transcription, laying the foundation for using mathematical models to predict and manipulate DNA behavior, with broad implications in drug development and synthetic biology. Serendipitously, this work was carried out during the same decade in which Richard Feynman, David Deutsch, and Yuri Manin formed the first ideas for a quantum computer.

Most importantly for our context, knot theory has fundamental connections to quantum computation, originally outlined by Witten’s work in topological quantum field theory, which concerns spacetimes without any notion of distance, only shape. In fact, this connection formed the very motivation for attempting to build topological quantum computers, in which anyons – exotic quasiparticles that live in two-dimensional materials – are braided to perform quantum gates. The relation between knot theory and quantum physics is one of the most beautiful and bizarre facts you have never heard of.

The fundamental problem in knot theory is distinguishing knots, or more generally, links. To this end, mathematicians have defined link invariants, which serve as ‘fingerprints’ of a link. As there are many equivalent representations of the same link, an invariant, by definition, is the same for all of them. If the invariant is different for two links then they are not equivalent. The specific invariant our team focused on is the Jones polynomial.

Four equivalent representations of the trefoil knot, the simplest non-trivial knot.
They all have the same Jones polynomial, as it is an invariant.
These knots have different Jones polynomials, so they are not equivalent.

The mind-blowing fact here is that any quantum computation corresponds to evaluating the Jones polynomial of some link, as shown by the works of Freedman, Larsen, Kitaev, Wang, Shor, Arad, and Aharonov. It reveals that this abstract mathematical problem is truly quantum native. In particular, the problem our team tackled was estimating the value of the Jones polynomial at the 5th root of unity. This is a well-studied case due to its relation to the infamous Fibonacci anyons, whose braiding is capable of universal quantum computation.

Building and improving on the work of Shor, Aharonov, Landau, Jones, and Kauffman, our team developed an efficient quantum algorithm that works end-to-end. That is, given a link, it outputs a highly optimized quantum circuit that is readily executable on our processors and estimates the desired quantity. Furthermore, our team designed problem-tailored error detection and error mitigation strategies to achieve higher accuracy.
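
One standard primitive in this family of algorithms, going back to the Aharonov-Jones-Landau approach, is the Hadamard test, which estimates matrix elements of the unitaries obtained from braid-group representations. The sketch below samples the test’s ancilla statistics directly in numpy rather than building the circuit; the unitary and state are toy stand-ins, not the optimized circuits our pipeline produces.

```python
import numpy as np

def hadamard_test_real(U, psi, shots=20_000, seed=0):
    """Estimate Re<psi|U|psi> from the ancilla of a Hadamard test.

    In the actual circuit the ancilla is prepared in |+>, a controlled-U is applied,
    and the ancilla is measured in the X basis; P(outcome 0) = (1 + Re<psi|U|psi>) / 2.
    Here we sample that distribution directly instead of simulating the gates.
    """
    rng = np.random.default_rng(seed)
    p0 = (1 + np.real(np.vdot(psi, U @ psi))) / 2
    outcomes_zero = rng.random(shots) < p0
    return 2 * outcomes_zero.mean() - 1

# Toy example at the 5th root of unity, the evaluation point considered in this work.
theta = 2 * np.pi / 5
U = np.diag([1.0, np.exp(1j * theta)])   # stand-in for a braid-group representative
psi = np.array([1.0, 1.0]) / np.sqrt(2)
print(hadamard_test_real(U, psi))        # ~ (1 + cos(theta)) / 2, about 0.65
```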

Demonstration of the quantum algorithm on the H2 quantum computer for estimating the value of Jones polynomial of a link with ~100 crossings. The raw signal (orange) can be amplified (green) with error detection, and corrected via a problem-tailored error mitigation method (purple), bringing the experimental estimate closer to the actual value (blue).

In addition to providing a full pipeline for solving this problem, a major aspect of this work was to use the fact that the Jones polynomial is an invariant to introduce a benchmark for noisy quantum computers. Most importantly, this benchmark is efficiently verifiable, a rare property since for most applications, exponentially costly classical computations are necessary for verification. Given a link whose Jones polynomial is known, the benchmark constructs a large set of topologically equivalent links of varying sizes. In turn, these result in a set of circuits of varying numbers of qubits and gates, all of which should return the same answer. Thus, one can characterize the effect of noise present in a given quantum computer by quantifying the deviation of its output from the known result.
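
In pseudocode, the benchmark boils down to comparing many circuit estimates against a single known invariant value; a minimal sketch, with hypothetical data and function names, might look like this.

```python
def invariant_benchmark(results, known_value):
    """`results` holds (n_qubits, n_two_qubit_gates, estimate) for circuits built from
    links that are all topologically equivalent to a reference link whose Jones
    polynomial value `known_value` is known. A noiseless machine would return
    `known_value` every time, so the deviation versus circuit size characterizes
    the noise of the device under test."""
    return [(nq, ng, abs(est - known_value)) for nq, ng, est in results]

# Hypothetical readings: on a noisy device the deviation tends to grow with circuit size.
readings = [(8, 120, 0.41), (16, 480, 0.43), (28, 1500, 0.47)]
for nq, ng, dev in invariant_benchmark(readings, known_value=0.40):
    print(f"{nq} qubits, {ng} two-qubit gates: deviation {dev:.3f}")
```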

The benchmark introduced in this work allows one to identify the link sizes for which there is exponential quantum advantage, in terms of time to solution, over state-of-the-art classical methods. These resource estimates indicate that our next processor, Helios, with 96 qubits and at least 99.95% two-qubit gate fidelity, is extremely close to meeting these requirements. Furthermore, Quantinuum’s hardware roadmap includes even more powerful machines that will come online by the end of the decade. Notably, an advantage in energy consumption emerges for even smaller link sizes. Meanwhile, our teams aim to continue reducing errors through improvements in both hardware and software, thereby moving deeper into quantum advantage territory.

Rigorous resource estimation of our quantum algorithm pinpoints the exponential quantum advantage, quantified in terms of time-to-solution: the time necessary for the classical state of the art to reach the same error as that achieved by the quantum computer. The advantage crossover happens at large link sizes, requiring circuits with ~85 qubits and ~8.5k two-qubit gates, assuming 99.99% two-qubit gate fidelity and 30 ms per circuit layer. The classical algorithms are assumed to run on the Frontier supercomputer.

The importance of this work, indeed its uniqueness in the quantum computing sector, is its practical end-to-end approach. The advantage-hunting strategies introduced here are transferable to other “quantum-easy, classically-hard” problems. Our team’s efforts motivate shifting the focus toward specific problem instances rather than broad problem classes, promoting an engineering-oriented approach to identifying quantum advantage. This involves first carefully considering how quantum advantage should be defined and quantified, thereby setting a high standard for quantum advantage in scientific and mathematical domains, and thus instilling confidence in our customers and partners.


partnership
March 20, 2025
Initiating Impact Today: Combining the World’s Most Powerful in Quantum and Classical Compute

Quantinuum and NVIDIA, world leaders in their respective sectors, are combining forces to fast-track commercially scalable quantum supercomputers, further bolstering the announcement Quantinuum made earlier this year about the exciting new potential in Generative Quantum AI. 

Make no mistake about it, the global quantum race is on. With over $2 billion raised by companies in 2024 alone, and over 150 new startups in the past five years, quantum computing is no longer restricted to ‘the lab’.  

The United Nations proclaimed 2025 as the International Year of Quantum Science and Technology (IYQ), and as we march toward the end of the first quarter, the old maxim that quantum computing is still a decade (or two, or three) away is no longer relevant in today’s world. Governments, commercial enterprises and scientific organizations all stand to benefit from quantum computers, led by those built by Quantinuum.

That is because, amid the flurry of headlines and social media chatter filled with aspirational statements of future ambitions shared by those in the heat of this race, we at Quantinuum continue to lead by example. We demonstrate what that future looks like today, rather than relying solely on slide deck presentations.

Our quantum computers are the most powerful systems in the world. Our H2 system, the only quantum computer that cannot be classically simulated, is years ahead of any other system being developed today. In the coming months, we’ll introduce our customers to Helios, a trillion times more powerful than H2, further extending our lead beyond where the competition is still only planning to be. 

At Quantinuum, we have been convinced for years that the impact of quantum computers on the real world will happen earlier than anticipated. We have also long known that this impact will come when powerful quantum computers and powerful classical systems work together.

This sort of hybrid ‘supercomputer’ has been referenced a few times in the past few months, and there is, rightly, a sense of excitement about what such an accelerated quantum supercomputer could achieve.

The Power of Hybrid Quantum and Classical Compute

In a revolutionary move on March 18th, 2025, at the GTC AI conference, NVIDIA announced the opening of a world-class accelerated quantum research center with Quantinuum selected as a key founding collaborator to work on projects with NVIDIA at the center. 

With details shared in an accompanying press statement and blog post, the NVIDIA Accelerated Quantum Research Center (NVAQC) being built in Boston, Massachusetts, will integrate quantum computers with AI supercomputers to ultimately explore how to build accelerated quantum supercomputers capable of solving some of the world’s most challenging problems. The center will begin operations later this year.

As shared in Quantinuum’s accompanying statement, the center will draw on the NVIDIA CUDA-Q platform, alongside an NVIDIA GB200 NVL72 system containing 576 NVIDIA Blackwell GPUs dedicated to quantum research.

The Role of CUDA-Q in Quantum-Classical Integration  

Integrating quantum and classical hardware relies on a platform that allows researchers and developers to quickly shift context between these two disparate computing paradigms within a single application. The NVIDIA CUDA-Q platform will be the entry point for researchers to exploit the NVAQC’s quantum-classical integration.

In 2022, Quantinuum became the first company to bring CUDA-Q to its quantum systems, establishing a pioneering collaboration that continues today. Users of CUDA-Q are currently offered access to Quantinuum’s System H1 QPU and emulator for 90 days.

Quantinuum’s future systems will continue to support the CUDA-Q platform. Furthermore, Quantinuum and NVIDIA are committed to evolving and improving tools for quantum-classical integration that take advantage of the latest hardware features, for example on our upcoming Helios generation.

The GenQAI Moment

A few weeks ago, we disclosed high-level details about an AI system that we refer to as Generative Quantum AI, or GenQAI. We highlighted a timeline, between now and the end of this year, for the first commercial systems that can accelerate both existing AI and quantum computers.

At a high level, an AI system such as GenQAI will be enhanced by access to information that has not previously been accessible: information generated by a quantum computer that cannot be simulated. This information and its effect can be likened to a powerful microscope that brings accuracy and detail to already powerful LLMs, bridging the gap from today’s impressive accomplishments toward truly impactful outcomes in areas such as biology and healthcare, materials discovery, and optimization.

Through the integration of the most powerful in quantum and classical systems, and by enabling tighter integration of AI with quantum computing, the NVAQC will be an enabler for the realization of the accelerated quantum supercomputer needed for GenQAI products and their rapid deployment and exploitation.

Innovating our Roadmap

The NVAQC will foster the tools and innovations needed for fully fault-tolerant quantum computing and will be an enabler of the roadmap Quantinuum released last year.

With each new generation of our quantum computing hardware and accompanying stack, we continue to scale compute capabilities through more powerful hardware and advanced features, accelerating the timeline for practical applications. To achieve these advances, we integrate the best CPU and GPU technologies alongside our quantum innovations. Our long-standing collaboration with NVIDIA drives these advancements forward and will be further enriched by the NVAQC. 

Here are a couple of examples: 

In quantum error correction, error syndromes detected by measuring "ancilla" qubits are sent to a "decoder." The decoder analyzes this information to determine whether any corrections are needed. These complex algorithms must be processed quickly and with low latency, requiring advanced CPU and GPU power to calculate and apply corrections that keep logical qubits error-free. Quantinuum has been collaborating with NVIDIA on the development of customized GPU-based decoders that can be coupled with our upcoming Helios system.
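
The decoders being co-developed with NVIDIA are sophisticated, GPU-accelerated algorithms for large codes, but the shape of the problem shows up already in a toy lookup-table decoder for the three-qubit bit-flip code (an illustrative example, not Quantinuum’s decoder): a syndrome measured on ancilla qubits maps to a correction, and that map must be evaluated within the error-correction cycle.

```python
from typing import Dict, Optional, Tuple

# Three-qubit bit-flip code: ancilla 0 measures the Z0*Z1 parity, ancilla 1 the Z1*Z2 parity.
SYNDROME_TABLE: Dict[Tuple[int, int], Optional[int]] = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # data qubit 0 flipped
    (1, 1): 1,     # data qubit 1 flipped
    (0, 1): 2,     # data qubit 2 flipped
}

def decode(syndrome: Tuple[int, int]) -> Optional[int]:
    """Return the index of the data qubit needing an X correction, or None."""
    return SYNDROME_TABLE[syndrome]

assert decode((1, 1)) == 1
```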

In our application space, we recently announced the integration of InQuanto v4.0, the latest version of Quantinuum’s cutting edge computational chemistry platform, with NVIDIA cuQuantum SDK to enable previously inaccessible tensor-network-based methods for large-scale and high-precision quantum chemistry simulations.

Our work with NVIDIA underscores the partnership between quantum computers and classical processors, accelerating the path toward scaled quantum computers. These systems offer error-corrected qubits for operations that accelerate scientific discovery across a wide range of fields, including drug discovery and delivery, financial market applications, and essential condensed matter physics, such as high-temperature superconductivity.

We look forward to sharing details with our partners and bringing meaningful scientific discovery to generate economic growth and sustainable development for all of humankind.

technical
March 18, 2025
Setting the Benchmark: Independent Study Ranks Quantinuum #1 in Performance

By Dr. Chris Langer

In the rapidly advancing world of quantum computing, to be a leader means not just keeping pace with innovation but driving it forward. It means setting new standards that shape the future of quantum computing performance. A recent independent study comparing 19 quantum processing units (QPUs) on the market today has validated what we’ve long known to be true: Quantinuum’s systems are the undisputed leaders in performance.

The Benchmarking Study

A comprehensive study conducted by a joint team from the Jülich Supercomputing Centre, AIDAS, RWTH Aachen University, and Purdue University compared QPUs from leading companies like IBM, Rigetti, and IonQ, evaluating how well each executed the Quantum Approximate Optimization Algorithm (QAOA), a widely used algorithm that provides a system-level measure of performance. After thorough examination, the study concluded that:

“...the performance of quantinuum H1-1 and H2-1 is superior to that of the other QPUs.”

Quantinuum emerged as the clear leader, particularly in full connectivity, the most critical category for solving real-world optimization problems. Full connectivity is a huge comparative advantage, offering more computational power and more flexibility in both error correction and algorithmic design. Our dominance in full connectivity—unattainable for platforms with natively limited connectivity—underscores why we are the partner of choice in quantum computing.

Leading Across the Board

We take benchmarking seriously at Quantinuum. We lead in nearly every industry benchmark, from best-in-class gate fidelities to a 4000x lead in quantum volume, delivering top performance to our customers.

Our Quantum Charge-Coupled Device (QCCD) architecture has been the foundation of our success, delivering consistent performance gains year over year. Unlike other architectures, QCCD offers all-to-all connectivity, world-record fidelities, and advanced features like real-time decoding. Altogether, it’s clear we have superior performance metrics across the board.

While many claim to be the best, we have the data to prove it. This table breaks down industry benchmarks, using the leading commercial spec for each quantum computing architecture.

TABLE 1. Leading commercial spec for each listed architecture or demonstrated capabilities on commercial hardware.

These metrics are the key to our success. They demonstrate why Quantinuum is the only company delivering meaningful results to customers at a scale beyond classical simulation limits.

Our progress builds upon a series of Quantinuum’s technology breakthroughs, including the creation of the most reliable and highest-quality logical qubits, as well as solving the key scalability challenge associated with ion-trap quantum computers — culminating in a commercial system with greater than 99.9% two-qubit gate fidelity.

From our groundbreaking progress with System Model H2 to advances in quantum teleportation and solving the wiring problem, we’re taking major steps to tackle the challenges our whole industry faces, like execution speed and circuit depth. Advancements in parallel gate execution, faster ion transport, and high-rate quantum error correction (QEC) are just a few ways we’re maintaining our lead far ahead of the competition.

This commitment to excellence ensures that we not only meet but exceed expectations, setting the bar for reliability, innovation, and transformative quantum solutions. 

Onward and Upward

To bring it back to the opening message: to be a leader means not just keeping pace with innovation but driving it forward. It means setting new standards that shape the future of quantum computing performance.

We are just months away from launching Quantinuum’s next generation system, Helios, which will be one trillion times more powerful than H2. By 2027, Quantinuum will launch the industry’s first 100-logical-qubit system, featuring best-in-class error rates, and we are on track to deliver fault-tolerant computation on hundreds of logical qubits by the end of the decade. 

The evidence speaks for itself: Quantinuum is setting the standard in quantum computing. Our unrivaled specs, proven performance, and commitment to innovation make us the partner of choice for those serious about unlocking value with quantum computing. Quantinuum is committed to doing the hard work required to continue setting the standard and delivering on our promises. This is Quantinuum. This is leadership.

Dr. Chris Langer is a Fellow, a key inventor and architect for the Quantinuum hardware, and serves as an advisor to the CEO.

_______________________________________

Citations from Benchmarking Table
1 Quantinuum. System Model H2. Quantinuum, https://www.quantinuum.com/products-solutions/quantinuum-systems/system-model-h2
2 IBM. Quantum Services & Resources. IBM Quantum, https://quantum.ibm.com/services/resources
3 Quantinuum. System Model H1. Quantinuum, https://www.quantinuum.com/products-solutions/quantinuum-systems/system-model-h1
4 Google Quantum AI. Willow Spec Sheet. Google, https://quantumai.google/static/site-assets/downloads/willow-spec-sheet.pdf
5 Sales Rodriguez, P., et al. "Experimental demonstration of logical magic state distillation." arXiv, 19 Dec 2024, https://arxiv.org/pdf/2412.15165
6 Quantinuum. H1 Product Data Sheet. Quantinuum, https://docs.quantinuum.com/systems/data_sheets/Quantinuum%20H1%20Product%20Data%20Sheet.pdf
7 Google Quantum AI. Willow Spec Sheet. Google, https://quantumai.google/static/site-assets/downloads/willow-spec-sheet.pdf
8 Sales Rodriguez, P., et al. "Experimental demonstration of logical magic state distillation." arXiv, 19 Dec 2024, https://arxiv.org/pdf/2412.15165
9 Quantinuum. H2 Product Data Sheet. Quantinuum, https://docs.quantinuum.com/systems/data_sheets/Quantinuum%20H2%20Product%20Data%20Sheet.pdf
10 Google Quantum AI. Willow Spec Sheet. Google, https://quantumai.google/static/site-assets/downloads/willow-spec-sheet.pdf
11 Sales Rodriguez, P., et al. "Experimental demonstration of logical magic state distillation." arXiv, 19 Dec 2024, https://arxiv.org/pdf/2412.15165
12 Moses, S. A., et al. "A Race-Track Trapped-Ion Quantum Processor." Physical Review X, vol. 13, no. 4, 2023, https://journals.aps.org/prx/pdf/10.1103/PhysRevX.13.041052
13 Google Quantum AI and Collaborators. "Quantum Error Correction Below the Surface Code Threshold." Nature, vol. 638, 2024, https://www.nature.com/articles/s41586-024-08449-y
14 Bluvstein, Dolev, et al. "Logical Quantum Processor Based on Reconfigurable Atom Arrays." Nature, vol. 626, 2023, https://www.nature.com/articles/s41586-023-06927-3
15 DeCross, Matthew, et al. "The Computational Power of Random Quantum Circuits in Arbitrary Geometries." arXiv, 21 June 2024, https://arxiv.org/pdf/2406.02501
16 Montanez-Barrera, J. A., et al. "Evaluating the Performance of Quantum Process Units at Large Width and Depth." arXiv, 10 Feb. 2025, https://arxiv.org/pdf/2502.06471
17 Evered, Simon J., et al. "High-Fidelity Parallel Entangling Gates on a Neutral-Atom Quantum Computer." Nature, vol. 622, 2023, https://www.nature.com/articles/s41586-023-06481-y
18 Ryan-Anderson, C., et al. "Realization of Real-Time Fault-Tolerant Quantum Error Correction." Physical Review X, vol. 11, no. 4, 2021, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.11.041058
19 Carrera Vazquez, Almudena, et al. "Scaling Quantum Computing with Dynamic Circuits." arXiv, 27 Feb. 2024, https://arxiv.org/html/2402.17833v1
20 Moses, S. A., et al. "A Race-Track Trapped-Ion Quantum Processor." arXiv, 16 May 2023, https://arxiv.org/pdf/2305.03828
21 Garcia Almeida, D., Ferris, K., Kanazawa, N., Johnson, B., Davis, R. "New fractional gates reduce circuit depth for utility-scale workloads." IBM Quantum Blog, IBM, 18 Nov. 2024, https://www.ibm.com/quantum/blog/fractional-gates
22 Ryan-Anderson, C., et al. "Realization of Real-Time Fault-Tolerant Quantum Error Correction." arXiv, 15 July 2021, https://arxiv.org/pdf/2107.07505
23 Google Quantum AI and Collaborators. “Quantum error correction below the surface code threshold.” arXiv, 24 Aug. 2024, https://arxiv.org/pdf/2408.13687v1
events
March 16, 2025
APS Global Physics Summit 2025

The 2025 Joint March Meeting and April Meeting — referred to as the APS Global Physics Summit — is the largest physics research conference in the world, uniting 14,000 scientific community members across all disciplines of physics.  

The Quantinuum team is looking forward to participating in this year’s conference to showcase our latest advancements in quantum technology. Find us throughout the week at the below sessions and visit us at Booth 1001.

Join these sessions to discover how Quantinuum is advancing quantum computing

T11: Quantum Error Correction
Speaker: Natalie Brown
Date: Sunday, March 16th
Time: 8:00 – 8:12am
Location: Anaheim Convention Center, 261B (Level 2)

The computational power of random quantum circuits in arbitrary geometries
Session MAR-F34: Near-Term Quantum Resource Reduction and Random Circuits

Speaker: Matthew DeCross
Date: Tuesday, March 18th
Time: 8:00 – 8:12am
Location: Anaheim Convention Center, 256A (Level 2)

Topological Order from Measurements and Feed-Forward on a Trapped Ion Quantum Computer
Session MAR-F14: Realizing Topological States on Quantum Hardware

Speaker: Henrik Dreyer
Date: Tuesday, March 18th
Time: 9:12 – 9:48am
Location: Anaheim Convention Center, 158 (Level 1)

Trotter error time scaling separation via commutant decomposition
Session MAR-F34: Near-Term Quantum Resource Reduction and Random Circuits
Speaker: Yi-Hsiang Chen (Quantinuum)
Date: Tuesday, March 18th
Time: 10:00 – 10:12am
Location: Anaheim Convention Center, 256A (Level 2)

Squared overlap calculations with linear combination of unitaries
Session MAR-J35: Circuit Optimization and Compilation

Speaker: Michelle Wynne Sze
Date: Tuesday, March 18th
Time: 4:36 – 4:48pm
Location: Anaheim Convention Center, 256B (Level 2)

High-precision quantum phase estimation on a trapped-ion quantum computer
Session MAR-L16: Quantum Simulation for Quantum Chemistry

Speaker: Andrew Tranter
Date: Wednesday, March 19th
Time: 9:48 – 10:00am
Location: Anaheim Convention Center, 160 (Level 1)

Robustness of near-thermal dynamics on digital quantum computers
Session MAR-L16: Quantum Simulation for Quantum Chemistry

Speaker: Eli Chertkov
Date: Wednesday, March 19th
Time: 10:12 – 10:24am
Location: Anaheim Convention Center, 160 (Level 1)

Floquet prethermalization on a digital quantum computer
Session MAR-Q09: Quantum Simulation of Condensed Matter Physics

Speaker: Reza Haghshenas
Date: Thursday, March 20th
Time: 10:00 – 10:12am
Location: Anaheim Convention Center, 204C (Level 2)

Teleportation of a Logical Qubit on a Trapped-ion Quantum Computer
Session MAR-S11: Advances in QEC Experiments

Speaker: Ciaran Ryan-Anderson
Date: Thursday, March 20th
Time: 11:30am – 12:06pm
Location: Anaheim Convention Center, 155 (Level 1)

*All times in Pacific Standard Time