Quantum Computers Will Make AI Better

Quantum computers will drive AI to new heights, enabling better accuracy, better performance, and scalable, sustainable growth.

January 22, 2025
Today’s LLMs are often impressive by past standards – but they are far from perfect

Quietly and determinedly since 2019, we’ve been working on Generative Quantum AI. Our early focus on building natively quantum systems for machine learning has benefitted from, and been accelerated by, access to the world’s most powerful quantum computers – machines that cannot be classically simulated.

Our work additionally benefits from close proximity to our Helios-generation quantum computer, built in Colorado, USA. Helios is 1 trillion times more powerful than our H2 system, which is already significantly more advanced than any other quantum computer available.

While tools like ChatGPT have already made a profound impact on society, a critical limitation to their broader industrial and enterprise use has become clear. Classical large language models (LLMs) are computational behemoths, prohibitively huge and expensive to train, and prone to errors that damage their credibility.

Training models like ChatGPT requires processing vast datasets with billions, even trillions, of parameters. This demands immense computational power, often spread across thousands of GPUs or specialized hardware accelerators. The environmental cost is staggering—simply training GPT-3, for instance, consumed nearly 1,300 megawatt-hours of electricity, equivalent to the annual energy use of 130 average U.S. homes.

This doesn’t account for the ongoing operational costs of running these models, which remain high with every query. 

Despite these challenges, the push to develop ever-larger models shows no signs of slowing down.

Enter quantum computing. Quantum technology offers a more sustainable, efficient, and high-performance solution—one that will fundamentally reshape AI, dramatically lowering costs and increasing scalability, while overcoming the limitations of today's classical systems. 

Quantum Natural Language Processing: A New Frontier

At Quantinuum we have been maniacally focused on “rebuilding” machine learning (ML) techniques for Natural Language Processing (NLP) using quantum computers. 

Our research team has worked on translating key innovations in natural language processing — such as word embeddings, recurrent neural networks, and transformers — into the quantum realm. The ultimate goal is not merely to port existing classical techniques onto quantum computers but to reimagine these methods in ways that take full advantage of the unique features of quantum computers.

We have a deep bench working on this. Our Head of AI, Dr. Steve Clark, previously spent 14 years as a faculty member at Oxford and Cambridge, and over 4 years as a Senior Staff Research Scientist at DeepMind in London. He works closely with Dr. Konstantinos Meichanetzidis, who is our Head of Scientific Product Development and who has been working for years at the intersection of quantum many-body physics, quantum computing, theoretical computer science, and artificial intelligence.

A critical element of the team’s approach to this project is avoiding the temptation to simply “copy-paste”, i.e. taking the math from a classical version and directly implementing that on a quantum computer. 

This is motivated by the fact that quantum systems are fundamentally different from classical systems: their ability to leverage quantum phenomena like entanglement and interference ultimately changes the rules of computation. By ensuring these new models are properly mapped onto the quantum architecture, we are best poised to benefit from quantum computing’s unique advantages. 

These advantages are not so far in the future as we once imagined – partially driven by our accelerating pace of development in hardware and quantum error correction.

Making computers “talk” – a short history

The ultimate problem of making a computer understand a human language isn’t unlike trying to learn a new language yourself – you must hear/read/speak lots of examples, memorize lots of rules and their exceptions, memorize words and their meanings, and so on. However, it’s more complicated than that when the “brain” is a computer. Computers naturally speak their native languages very well, where everything from machine code to Python has a meaningful structure and set of rules. 

In contrast, “natural” (human) language is very different from the strict compliance of computer languages: things like idioms confound any sense of structure, humor and poetry play with semantics in creative ways, and the language itself is always evolving. Still, people have been considering this problem since the 1950s (Turing’s original “test” of intelligence involves the automated interpretation and generation of natural language).

Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction of machine learning algorithms for language processing. 

Initial ML approaches were largely “statistical”: by analyzing large amounts of text data, one can identify patterns and probabilities. There were notable successes in translation (like translating French into English), and the birth of the web led to more innovations in learning from and handling big data.

What many consider “modern” NLP was born in the late 2000s, when expanded compute power and larger datasets enabled the practical use of neural networks. Being mathematical models, neural networks are “built” out of the tools of mathematics – specifically, linear algebra and calculus.

Building a neural network, then, means finding ways to manipulate language using the tools of linear algebra and calculus. This means representing words and sentences as vectors and matrices, developing tools to manipulate them, and so on. This is precisely the path that researchers in classical NLP have been following for the past 15 years, and the path that our team is now speedrunning in the quantum case.
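
To make this concrete, here is a minimal sketch of the vector view of words. The words and numbers are purely illustrative and not taken from any trained model:

```python
import numpy as np

# Toy word embeddings: in practice these are learned from large corpora
# (e.g. by Word2Vec or GloVe); the numbers below are illustrative only.
embeddings = {
    "king":  np.array([0.8, 0.65, 0.1]),
    "queen": np.array([0.78, 0.7, 0.15]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(u, v):
    """Measure how 'close' two word vectors point in the embedding space."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower: unrelated words
```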

Quantum Word Embeddings: A Complex Twist

The first major breakthrough in neural NLP came roughly a decade ago, when vector representations of words were developed, using the frameworks known as Word2Vec and GloVe (Global Vectors for Word Representation). In a recent paper, our team, including Carys Harvey and Douglas Brown, demonstrated how to do this in quantum NLP models – with a crucial twist. Instead of embedding words as real-valued vectors (as in the classical case), the team built the model to work with complex-valued vectors.

In quantum mechanics, the state of a physical system is represented by a vector residing in a complex vector space, called a Hilbert space. By embedding words as complex vectors, we are able to map language into parameterized quantum circuits, and ultimately the qubits in our processor. This is a major advance that was largely underappreciated by the AI community but is now rapidly gaining interest.

Using complex-valued word embeddings for QNLP means that from the bottom-up we are working with something fundamentally different. This different “geometry” may provide advantage in any number of areas: natural language has a rich probabilistic and hierarchical structure that may very well benefit from the richer representation of complex numbers.
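
To make the contrast concrete, here is a minimal sketch of our own (not code from the paper) showing how a complex-valued word vector can be normalized and read as the amplitudes of a quantum state over two qubits:

```python
import numpy as np

# Illustrative complex-valued embedding for a single word (4 amplitudes = 2 qubits).
# In the QNLP setting these amplitudes would be prepared by a parameterized
# quantum circuit rather than written down directly.
word_vector = np.array([0.3 + 0.4j, 0.1 - 0.2j, 0.5 + 0.0j, -0.2 + 0.6j])

# Quantum states are unit vectors in a complex Hilbert space.
state = word_vector / np.linalg.norm(word_vector)

# The squared magnitudes form a probability distribution over basis states,
# which is what repeated measurement of the qubits would sample from.
probabilities = np.abs(state) ** 2
print(probabilities, probabilities.sum())  # the probabilities sum to 1.0
```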

The Quantum Recurrent Neural Network (RNN)

Another breakthrough comes from the development of quantum recurrent neural networks (RNNs). RNNs are commonly used in classical NLP to handle tasks such as text classification and language modeling. 

Our team, including Dr. Wenduan Xu, Douglas Brown, and Dr. Gabriel Matos, implemented a quantum version of the RNN using parameterized quantum circuits (PQCs). PQCs allow for hybrid quantum-classical computation, where quantum circuits process information and classical computers optimize the parameters controlling the quantum system.
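
The division of labor in that hybrid loop can be sketched in a few lines. The snippet below is purely illustrative: a toy one-parameter "circuit" is simulated classically and tuned by a classical optimizer; it is not Quantinuum's model or training code.

```python
import numpy as np

def pqc_expectation(theta):
    """Expectation of Z after RY(theta) on |0> -- a one-parameter stand-in for a PQC."""
    return np.cos(theta)

def loss(theta, target=-1.0):
    """Toy objective: steer the measured expectation toward a target value."""
    return (pqc_expectation(theta) - target) ** 2

theta, lr, eps = 0.1, 0.2, 1e-4
for step in range(500):
    # Classical optimizer: finite-difference gradient of the quantum circuit's output.
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(theta, pqc_expectation(theta))  # theta approaches pi, expectation approaches -1
```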

In a recent experiment, the team used their quantum RNN to perform a standard NLP task: classifying movie reviews from Rotten Tomatoes as positive or negative. Remarkably, the quantum RNN performed as well as classical RNNs, GRUs, and LSTMs, using only four qubits. This result is notable for two reasons: it shows that quantum models can achieve competitive performance using a much smaller vector space, and it demonstrates the potential for significant energy savings in the future of AI.

In a similar experiment, our team partnered with Amgen to use PQCs for peptide classification, which is a standard task in computational biology. Working on the Quantinuum System Model H1, the joint team performed sequence classification (used in the design of therapeutic proteins), and they found competitive performance with classical baselines of a similar scale. This work was our first proof-of-concept application of near-term quantum computing to a task critical to the design of therapeutic proteins, and helped us to elucidate the route toward larger-scale applications in this and related fields, in line with our hardware development roadmap.

Quantum Transformers - The Next Big Leap

Transformers, the architecture behind models like GPT-3, have revolutionized NLP by enabling massive parallelism and state-of-the-art performance in tasks such as language modeling and translation. However, transformers are designed to take advantage of the parallelism provided by GPUs, something quantum computers do not yet do in the same way.

In response, our team, including Nikhil Khatri and Dr. Gabriel Matos, introduced “Quixer”, a quantum transformer model tailored specifically for quantum architectures. 

By using quantum algorithmic primitives, Quixer is optimized for quantum hardware, making it highly qubit efficient. In a recent study, the team applied Quixer to a realistic language modeling task and achieved results competitive with classical transformer models trained on the same data. 
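
One of the algorithmic primitives commonly cited in this setting is the linear combination of unitaries (LCU). The sketch below shows only the underlying linear algebra; on hardware the combination is implemented with ancilla qubits and post-selection, and this is not the Quixer implementation itself.

```python
import numpy as np

# Two single-qubit unitaries to combine.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A weighted sum of unitaries is generally not unitary itself.
a = np.array([0.6, 0.4])
A = a[0] * X + a[1] * Z

psi = np.array([1, 0], dtype=complex)   # start in |0>
out = A @ psi
out /= np.linalg.norm(out)              # the renormalized "success" branch of LCU
print(out)
```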

This is an incredible milestone in and of itself.

This paper also marks the first quantum machine learning model applied to language on a realistic rather than toy dataset. 

This is a truly exciting advance for anyone interested in the union of quantum computing and artificial intelligence – and one that risks being lost in the growing ‘noise’ from a quantum computing sector in which organizations seeking to raise capital often highlight trivial, duplicative advances.

Quantum Tensor Networks: A Scalable Approach

Carys Harvey and Richie Yeung from Quantinuum in the UK worked with a broader team that explored the use of quantum tensor networks for NLP. Tensor networks are mathematical structures that efficiently represent high-dimensional data, and they have found applications in everything from quantum physics to image recognition. In the context of NLP, tensor networks can be used to perform tasks like sequence classification, where the goal is to classify sequences of words or symbols based on their meaning.

The team performed experiments on our System Model H1, finding comparable performance to classical baselines. This marked the first time a scalable NLP model was run on quantum hardware – a remarkable advance. 

The tree-like structure of quantum tensor models lends itself incredibly well to specific features inherent to our architecture such as mid-circuit measurement and qubit re-use, allowing us to squeeze big problems onto few qubits.

Since quantum theory is inherently described by tensor networks, this is another example of how fundamentally different quantum machine learning approaches can look – again, there is a sort of “intuitive” mapping of the tensor networks used to describe the NLP problem onto the tensor networks used to describe the operation of our quantum processors.
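
As a rough illustration of the tree-like contraction pattern, here is a toy sketch with random tensors; it is not the trained models from the paper, and the bond dimension and tensors are assumptions for illustration only:

```python
import numpy as np

d = 4                                   # bond/embedding dimension (illustrative)
rng = np.random.default_rng(0)

words = [rng.normal(size=d) for _ in range(4)]   # embeddings for a 4-word sentence
merge = rng.normal(size=(d, d, d))               # shared "merge" tensor for each tree node
readout = rng.normal(size=(d, 2))                # maps the root vector to 2 class scores

# Contract the tree bottom-up: pairs of words merge, then the two results merge.
left  = np.einsum("i,j,ijk->k", words[0], words[1], merge)
right = np.einsum("i,j,ijk->k", words[2], words[3], merge)
root  = np.einsum("i,j,ijk->k", left, right, merge)

scores = root @ readout
print(scores.argmax())                  # predicted class (0 or 1)
```

On quantum hardware, the same tree can be walked node by node, measuring and re-using qubits as branches are consumed, which is why mid-circuit measurement and qubit re-use matter so much here.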

What we’ve learned so far

While it is still very early days, we have good indications that running AI on quantum hardware will be more energy efficient. 

We recently published a result in “random circuit sampling”, a task used to compare quantum to classical computers. We beat the classical supercomputer in time to solution as well as energy use – our quantum computer used 30,000x less energy to complete the task than Frontier, the classical supercomputer we compared against.

We may see, as our quantum AI models grow in power and size, that there is a similar scaling in energy use: it’s generally more efficient to use ~100 qubits than it is to use ~10^18 classical bits.

Another major insight so far is that quantum models tend to require significantly fewer parameters to train than their classical counterparts. In classical machine learning, particularly in large neural networks, the number of parameters can grow into the billions, leading to massive computational demands. 

Quantum models, by contrast, leverage the unique properties of quantum mechanics to achieve comparable performance with a much smaller number of parameters. This could drastically reduce the energy and computational resources required to run these models.
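
A back-of-the-envelope comparison shows why the counts diverge so quickly. The numbers below are illustrative assumptions, not measured results from our experiments:

```python
# Rough parameter counts (illustrative only).

def lstm_params(d, h):
    """A standard LSTM layer: 4 gates, each with input, recurrent, and bias weights."""
    return 4 * (d * h + h * h + h)

def pqc_params(qubits, layers, rotations_per_qubit=3):
    """A hardware-efficient PQC ansatz: a few rotation angles per qubit per layer."""
    return qubits * layers * rotations_per_qubit

print(lstm_params(d=128, h=128))        # ~131,000 parameters for one modest LSTM layer
print(pqc_params(qubits=4, layers=5))   # 60 parameters for a small quantum circuit
```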

The Path Ahead

As quantum computing hardware continues to improve, quantum AI models may increasingly complement or even replace classical systems. By leveraging quantum superposition, entanglement, and interference, these models offer the potential for significant reductions in both computational cost and energy consumption. With fewer parameters required, quantum models could make AI more sustainable, tackling one of the biggest challenges facing the industry today.

The work being done by Quantinuum reflects the start of the next chapter in AI, and one that is transformative. As quantum computing matures, its integration with AI has the potential to unlock entirely new approaches that are not only more efficient and performant but can also handle the full complexities of natural language. The fact that Quantinuum’s quantum computers are the most advanced in the world, and cannot be simulated classically, gives us a unique glimpse into that future.

The future of AI now looks very much to be quantum, and Quantinuum’s GenQAI system will usher in the era in which our work will have meaningful societal impact.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

March 20, 2025
Initiating Impact Today: Combining the World’s Most Powerful in Quantum and Classical Compute

Quantinuum and NVIDIA, world leaders in their respective sectors, are combining forces to fast-track commercially scalable quantum supercomputers, further bolstering the announcement Quantinuum made earlier this year about the exciting new potential in Generative Quantum AI. 

Make no mistake about it, the global quantum race is on. With over $2 billion raised by companies in 2024 alone, and over 150 new startups in the past five years, quantum computing is no longer restricted to ‘the lab’.  

The United Nations proclaimed 2025 as the International Year of Quantum Science and Technology (IYQ), and as we march toward the end of the first quarter, the old maxim that quantum computing is still a decade (or two, or three) away is no longer relevant in today’s world. Governments, commercial enterprises and scientific organizations all stand to benefit from quantum computers, led by those built by Quantinuum.

That is because, amid the flurry of headlines and social media chatter filled with aspirational statements of future ambitions shared by those in the heat of this race, we at Quantinuum continue to lead by example. We demonstrate what that future looks like today, rather than relying solely on slide deck presentations.

Our quantum computers are the most powerful systems in the world. Our H2 system, the only quantum computer that cannot be classically simulated, is years ahead of any other system being developed today. In the coming months, we’ll introduce our customers to Helios, a trillion times more powerful than H2, further extending our lead beyond where the competition is still only planning to be. 

At Quantinuum, we have been convinced for years that the impact of quantum computers on the real world will happen earlier than anticipated. However, we have also known that this impact will come when powerful quantum computers and powerful classical systems work together.

This sort of hybrid ‘supercomputer’ has been referenced a few times in the past few months, and there is, rightly, a sense of excitement about what such an accelerated quantum supercomputer could achieve.

The Power of Hybrid Quantum and Classical Compute

In a revolutionary move on March 18th, 2025, at the GTC AI conference, NVIDIA announced the opening of a world-class accelerated quantum research center with Quantinuum selected as a key founding collaborator to work on projects with NVIDIA at the center. 

With details shared in an accompanying press statement and blog post, the NVIDIA Accelerated Quantum Research Center (NVAQC) being built in Boston, Massachusetts, will integrate quantum computers with AI supercomputers to ultimately explore how to build accelerated quantum supercomputers capable of solving some of the world’s most challenging problems. The center will begin operations later this year.

As shared in Quantinuum’s accompanying statement, the center will draw on the NVIDIA CUDA-Q platform, alongside an NVIDIA GB200 NVL72 system containing 576 NVIDIA Blackwell GPUs dedicated to quantum research.

The Role of CUDA-Q in Quantum-Classical Integration  

Integrating quantum and classical hardware relies on a platform that allows researchers and developers to quickly shift context between these two disparate computing paradigms within a single application. The NVIDIA CUDA-Q platform will be the entry point for researchers to exploit the NVAQC’s quantum-classical integration.

In 2022, Quantinuum became the first company to bring CUDA-Q to its quantum systems, establishing a pioneering collaboration that continues today. Users of CUDA-Q are currently offered access to Quantinuum’s System Model H1 QPU and emulator for 90 days.
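
For orientation, a minimal CUDA-Q program looks roughly like the sketch below, based on publicly documented CUDA-Q Python usage; exact target names, credential setup, and API details may differ from what the NVAQC ultimately exposes.

```python
import cudaq

@cudaq.kernel
def bell():
    # Prepare and measure a two-qubit Bell state.
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)

# Target Quantinuum hardware in emulation mode (account credentials may be required);
# omit this line to run on the default local simulator instead.
cudaq.set_target("quantinuum", emulate=True)

result = cudaq.sample(bell, shots_count=1000)
print(result)
```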

Quantinuum’s future systems will continue to support the CUDA-Q platform. Furthermore, Quantinuum and NVIDIA are committed to evolving and improving tools for quantum-classical integration to take advantage of the latest hardware features, for example on our upcoming Helios generation.

The GenQAI Moment

A few weeks ago, we disclosed high-level details about an AI system that we refer to as Generative Quantum AI, or GenQAI. We highlighted a timeline between now and the end of this year for the first commercial systems that can accelerate both existing AI and quantum computers.

At a high level, an AI system such as GenQAI will be enhanced by access to information that has not previously been accessible: information generated by a quantum computer that cannot be classically simulated. This information and its effect can be likened to a powerful microscope that brings accuracy and detail to already powerful LLMs, bridging the gap from today’s impressive accomplishments towards truly impactful outcomes in areas such as biology and healthcare, materials discovery, and optimization.

Through the integration of the most powerful in quantum and classical systems, and by enabling tighter integration of AI with quantum computing, the NVAQC will be an enabler for the realization of the accelerated quantum supercomputer needed for GenQAI products and their rapid deployment and exploitation.

Innovating our Roadmap

The NVAQC will foster the tools and innovations needed for fully fault-tolerant quantum computing and will be an enabler of the roadmap Quantinuum released last year.

With each new generation of our quantum computing hardware and accompanying stack, we continue to scale compute capabilities through more powerful hardware and advanced features, accelerating the timeline for practical applications. To achieve these advances, we integrate the best CPU and GPU technologies alongside our quantum innovations. Our long-standing collaboration with NVIDIA drives these advancements forward and will be further enriched by the NVAQC. 

Here are a couple of examples: 

In quantum error correction, error syndromes detected by measuring “ancilla” qubits are sent to a “decoder.” The decoder analyzes this information to determine if any corrections are needed. These complex algorithms must be processed quickly and with low latency, requiring advanced CPU and GPU power to calculate and apply corrections that keep logical qubits error-free. Quantinuum has been collaborating with NVIDIA on the development of customized GPU-based decoders that can be coupled with our upcoming Helios system.
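
To illustrate what a decoder does at the smallest possible scale, here is a lookup-table decoder for the 3-qubit bit-flip code. This is a toy sketch; the GPU-based decoders referenced above handle far larger codes under strict latency budgets.

```python
# Syndrome = outcomes of the two ancilla-mediated parity checks (Z1Z2, Z2Z3).
# Each syndrome points to the single data qubit most likely to have flipped.
SYNDROME_TABLE = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # flip on data qubit 0
    (1, 1): 1,      # flip on data qubit 1
    (0, 1): 2,      # flip on data qubit 2
}

def decode(syndrome):
    """Return the index of the data qubit to correct with an X gate, or None."""
    return SYNDROME_TABLE[syndrome]

print(decode((1, 1)))   # -> 1: apply an X correction to data qubit 1
```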

In our application space, we recently announced the integration of InQuanto v4.0, the latest version of Quantinuum’s cutting-edge computational chemistry platform, with the NVIDIA cuQuantum SDK to enable previously inaccessible tensor-network-based methods for large-scale and high-precision quantum chemistry simulations.

Our work with NVIDIA underscores the partnership between quantum computers and classical processors that will maximize the pace toward scaled quantum computers. These systems offer error-corrected qubits for operations that accelerate scientific discovery across a wide range of fields, including drug discovery and delivery, financial market applications, and essential condensed matter physics such as high-temperature superconductivity.

We look forward to sharing details with our partners and bringing meaningful scientific discovery to generate economic growth and sustainable development for all of humankind.

March 18, 2025
Setting the Benchmark: Independent Study Ranks Quantinuum #1 in Performance

By Dr. Chris Langer

In the rapidly advancing world of quantum computing, to be a leader means not just keeping pace with innovation but driving it forward. It means setting new standards that shape the future of quantum computing performance. A recent independent study comparing 19 quantum processing units (QPUs) on the market today has validated what we’ve long known to be true: Quantinuum’s systems are the undisputed leaders in performance.

The Benchmarking Study

A comprehensive study conducted by a joint team from the Jülich Supercomputing Centre, AIDAS, RWTH Aachen University, and Purdue University compared QPUs from leading companies like IBM, Rigetti, and IonQ, evaluating how well each executed the Quantum Approximate Optimization Algorithm (QAOA), a widely used algorithm that provides a system-level measure of performance. After thorough examination, the study concluded that:

“...the performance of quantinuum H1-1 and H2-1 is superior to that of the other QPUs.”

Quantinuum emerged as the clear leader, particularly in full connectivity, the most critical category for solving real-world optimization problems. Full connectivity is a huge comparative advantage, offering more computational power and more flexibility in both error correction and algorithmic design. Our dominance in full connectivity—unattainable for platforms with natively limited connectivity—underscores why we are the partner of choice in quantum computing.
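
For readers unfamiliar with the benchmark, the sketch below simulates depth-1 QAOA for MaxCut on a three-node graph using a plain statevector. It is only meant to illustrate what the algorithm asks a QPU to do; the study cited above ran far larger instances on real hardware.

```python
import numpy as np
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]        # a triangle graph
n, dim = 3, 2 ** 3

# Diagonal of the MaxCut cost operator: C(z) = number of edges cut by bitstring z.
bits = list(product([0, 1], repeat=n))
cost = np.array([sum(z[i] != z[j] for i, j in edges) for z in bits], dtype=float)

def qaoa_expectation(gamma, beta):
    state = np.ones(dim, dtype=complex) / np.sqrt(dim)   # uniform superposition |+...+>
    state = np.exp(-1j * gamma * cost) * state           # cost-phase layer exp(-i*gamma*C)
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],   # mixer exp(-i*beta*X) per qubit
                   [-1j * np.sin(beta), np.cos(beta)]])
    for q in range(n):
        state = np.tensordot(rx, state.reshape([2] * n), axes=([1], [q]))
        state = np.moveaxis(state, 0, q).reshape(dim)
    return float(np.real(np.conj(state) @ (cost * state)))

# Scan a small grid of angles; good angles give a higher expected cut value.
best = max((qaoa_expectation(g, b), g, b)
           for g in np.linspace(0, np.pi, 21) for b in np.linspace(0, np.pi, 21))
print(best)
```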

Leading Across the Board

We take benchmarking seriously at Quantinuum. We lead in nearly every industry benchmark, from best-in-class gate fidelities to a 4000x lead in quantum volume, delivering top performance to our customers.

Our quantum charge-coupled device (QCCD) architecture has been the foundation of our success, delivering consistent performance gains year-over-year. Unlike other architectures, QCCD offers all-to-all connectivity, world-record fidelities, and advanced features like real-time decoding. Altogether, it’s clear we have superior performance metrics across the board.

While many claim to be the best, we have the data to prove it. This table breaks down industry benchmarks, using the leading commercial spec for each quantum computing architecture.

TABLE 1. Leading commercial spec for each listed architecture, or demonstrated capabilities on commercial hardware.

These metrics are the key to our success. They demonstrate why Quantinuum is the only company delivering meaningful results to customers at a scale beyond classical simulation limits.

Our progress builds upon a series of Quantinuum’s technology breakthroughs, including the creation of the most reliable and highest-quality logical qubits, as well as solving the key scalability challenge associated with ion-trap quantum computers — culminating in a commercial system with greater than 99.9% two-qubit gate fidelity.

From our groundbreaking progress with System Model H2 to advances in quantum teleportation and solving the wiring problem, we’re taking major steps to tackle the challenges our whole industry faces, like execution speed and circuit depth. Advancements in parallel gate execution, faster ion transport, and high-rate quantum error correction (QEC) are just a few ways we’re maintaining our lead far ahead of the competition.

This commitment to excellence ensures that we not only meet but exceed expectations, setting the bar for reliability, innovation, and transformative quantum solutions. 

Onward and Upward

To bring it back to the opening message: to be a leader means not just keeping pace with innovation but driving it forward. It means setting new standards that shape the future of quantum computing performance.

We are just months away from launching Quantinuum’s next-generation system, Helios, which will be one trillion times more powerful than H2. By 2027, Quantinuum will launch the industry’s first 100-logical-qubit system, featuring best-in-class error rates, and we are on track to deliver fault-tolerant computation on hundreds of logical qubits by the end of the decade.

The evidence speaks for itself: Quantinuum is setting the standard in quantum computing. Our unrivaled specs, proven performance, and commitment to innovation make us the partner of choice for those serious about unlocking value with quantum computing. Quantinuum is committed to doing the hard work required to continue setting the standard and delivering on our promises. This is Quantinuum. This is leadership.

Dr. Chris Langer is a Fellow, a key inventor and architect for the Quantinuum hardware, and serves as an advisor to the CEO.

_______________________________________

Citations from Benchmarking Table
1 Quantinuum. System Model H2. Quantinuum, https://www.quantinuum.com/products-solutions/quantinuum-systems/system-model-h2
2 IBM. Quantum Services & Resources. IBM Quantum, https://quantum.ibm.com/services/resources
3 Quantinuum. System Model H1. Quantinuum, https://www.quantinuum.com/products-solutions/quantinuum-systems/system-model-h1
4 Google Quantum AI. Willow Spec Sheet. Google, https://quantumai.google/static/site-assets/downloads/willow-spec-sheet.pdf
5 Sales Rodriguez, P., et al. "Experimental demonstration of logical magic state distillation." arXiv, 19 Dec 2024, https://arxiv.org/pdf/2412.15165
6 Quantinuum. H1 Product Data Sheet. Quantinuum, https://docs.quantinuum.com/systems/data_sheets/Quantinuum%20H1%20Product%20Data%20Sheet.pdf
7 Google Quantum AI. Willow Spec Sheet. Google, https://quantumai.google/static/site-assets/downloads/willow-spec-sheet.pdf
8 Sales Rodriguez, P., et al. "Experimental demonstration of logical magic state distillation." arXiv, 19 Dec 2024, https://arxiv.org/pdf/2412.15165
9 Quantinuum. H2 Product Data Sheet. Quantinuum, https://docs.quantinuum.com/systems/data_sheets/Quantinuum%20H2%20Product%20Data%20Sheet.pdf
10 Google Quantum AI. Willow Spec Sheet. Google, https://quantumai.google/static/site-assets/downloads/willow-spec-sheet.pdf
11 Sales Rodriguez, P., et al. "Experimental demonstration of logical magic state distillation." arXiv, 19 Dec 2024, https://arxiv.org/pdf/2412.15165
12 Moses, S. A., et al. "A Race-Track Trapped-Ion Quantum Processor." Physical Review X, vol. 13, no. 4, 2023, https://journals.aps.org/prx/pdf/10.1103/PhysRevX.13.041052
13 Google Quantum AI and Collaborators. "Quantum Error Correction Below the Surface Code Threshold." Nature, vol. 638, 2024, https://www.nature.com/articles/s41586-024-08449-y
14 Bluvstein, Dolev, et al. "Logical Quantum Processor Based on Reconfigurable Atom Arrays." Nature, vol. 626, 2023, https://www.nature.com/articles/s41586-023-06927-3
15 DeCross, Matthew, et al. "The Computational Power of Random Quantum Circuits in Arbitrary Geometries." arXiv, 21 June 2024, https://arxiv.org/pdf/2406.02501
16 Montanez-Barrera, J. A., et al. "Evaluating the Performance of Quantum Process Units at Large Width and Depth." arXiv, 10 Feb. 2025, https://arxiv.org/pdf/2502.06471
17 Evered, Simon J., et al. "High-Fidelity Parallel Entangling Gates on a Neutral-Atom Quantum Computer." Nature, vol. 622, 2023, https://www.nature.com/articles/s41586-023-06481-y
18 Ryan-Anderson, C., et al. "Realization of Real-Time Fault-Tolerant Quantum Error Correction." Physical Review X, vol. 11, no. 4, 2021, https://journals.aps.org/prx/abstract/10.1103/PhysRevX.11.041058
19 Carrera Vazquez, Almudena, et al. "Scaling Quantum Computing with Dynamic Circuits." arXiv, 27 Feb. 2024, https://arxiv.org/html/2402.17833v1
20 Moses, S. A., et al. "A Race-Track Trapped-Ion Quantum Processor." arXiv, 16 May 2023, https://arxiv.org/pdf/2305.03828
21 Garcia Almeida, D., Ferris, K., Kanazawa, N., Johnson, B., Davis, R. "New fractional gates reduce circuit depth for utility-scale workloads." IBM Quantum Blog, IBM, 18 Nov. 2024, https://www.ibm.com/quantum/blog/fractional-gates
22 Ryan-Anderson, C., et al. "Realization of Real-Time Fault-Tolerant Quantum Error Correction." arXiv, 15 July 2021, https://arxiv.org/pdf/2107.07505
23 Google Quantum AI and Collaborators. “Quantum error correction below the surface code threshold.” arXiv, 24 Aug. 2024, https://arxiv.org/pdf/2408.13687v1
March 16, 2025
APS Global Physics Summit 2025

The 2025 Joint March Meeting and April Meeting — referred to as the APS Global Physics Summit — is the largest physics research conference in the world, uniting 14,000 scientific community members across all disciplines of physics.  

The Quantinuum team is looking forward to participating in this year’s conference to showcase our latest advancements in quantum technology. Find us throughout the week at the sessions below, and visit us at Booth 1001.

Join these sessions to discover how Quantinuum is advancing quantum computing

T11: Quantum Error Correction
Speaker: Natalie Brown
Date: Sunday, March 16th
Time: 8:00 – 8:12am
Location: Anaheim Convention Center, 261B (Level 2)

The computational power of random quantum circuits in arbitrary geometries
Session MAR-F34: Near-Term Quantum Resource Reduction and Random Circuits

Speaker: Matthew DeCross
Date: Tuesday, March 18th
Time: 8:00 – 8:12am
Location: Anaheim Convention Center, 256A (Level 2)

Topological Order from Measurements and Feed-Forward on a Trapped Ion Quantum Computer
Session MAR-F14: Realizing Topological States on Quantum Hardware

Speaker: Henrik Dreyer
Date: Tuesday, March 18th
Time: 9:12 – 9:48am
Location: Anaheim Convention Center, 158 (Level 1)

Trotter error time scaling separation via commutant decomposition
Session MAR-F34: Near-Term Quantum Resource Reduction and Random Circuits
Speaker: Yi-Hsiang Chen (Quantinuum)
Date: Tuesday, March 18th
Time: 10:00 – 10:12am
Location: Anaheim Convention Center, 256A (Level 2)

Squared overlap calculations with linear combination of unitaries
Session MAR-J35: Circuit Optimization and Compilation

Speaker: Michelle Wynne Sze
Date: Tuesday, March 18th
Time: 4:36 – 4:48pm
Location: Anaheim Convention Center, 256B (Level 2)

High-precision quantum phase estimation on a trapped-ion quantum computer
Session MAR-L16: Quantum Simulation for Quantum Chemistry

Speaker: Andrew Tranter
Date: Wednesday, March 19th
Time: 9:48 – 10:00am
Location: Anaheim Convention Center, 160 (Level 1)

Robustness of near-thermal dynamics on digital quantum computers
Session MAR-L16: Quantum Simulation for Quantum Chemistry

Speaker: Eli Chertkov
Date: Wednesday, March 19th
Time: 10:12 – 10:24am
Location: Anaheim Convention Center, 160 (Level 1)

Floquet prethermalization on a digital quantum computer
Session MAR-Q09: Quantum Simulation of Condensed Matter Physics

Speaker: Reza Haghshenas
Date: Thursday, March 20th
Time: 10:00 – 10:12am
Location: Anaheim Convention Center, 204C (Level 2)

Teleportation of a Logical Qubit on a Trapped-ion Quantum Computer
Session MAR-S11: Advances in QEC Experiments

Speaker: Ciaran Ryan-Anderson
Date: Thursday, March 20th
Time: 11:30 – 12:06pm
Location: Anaheim Convention Center, 155 (Level 1)

*All times in Pacific Standard Time
