If we are to create ‘next-gen’ AI that takes full advantage of the power of quantum computers, we need to start with quantum-native transformers. Today, Quantinuum demonstrates concrete progress on that path, advancing from theoretical models to real quantum deployment.
The future of AI won't be built on yesterday’s tech. If we're serious about creating next-generation AI that unlocks the full promise of quantum computing, then we must build quantum-native models—designed for quantum, from the ground up.
Around this time last year, we introduced Quixer, a state-of-the-art quantum-native transformer. Today, we’re thrilled to announce a major milestone: one year on, Quixer is now running natively on quantum hardware.
This marks a turning point for the industry: realizing quantum-native AI opens a world of possibilities.
Classical transformers revolutionized AI. They power everything from ChatGPT to real-time translation, computer vision, drug discovery, and algorithmic trading. Now, Quixer sets the stage for a similar leap in quantum-native computation. Because quantum computers differ fundamentally from classical computers, we expect a whole host of new, valuable applications to emerge.
Achieving that future requires models that are efficient, scalable, and actually run on today’s quantum hardware.
That’s what we’ve built.
Until Quixer, quantum transformers were the result of a brute force “copy-paste” approach: taking the math from a classical model and putting it onto a quantum circuit. However, this approach does not account for the considerable differences between quantum and classical architectures, leading to substantial resource requirements.
Quixer is different: it’s not a translation, it’s an innovation.
With Quixer, our team introduced an explicitly quantum transformer, built from the ground up using quantum algorithmic primitives. Because Quixer is tailored for quantum circuits, it's more resource efficient than most competing approaches.
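To give a flavor of what “quantum algorithmic primitives” means here, the sketch below illustrates a linear combination of unitaries (LCU), one of the primitives Quixer is reported to build on, in a plain statevector simulation. This is a minimal, hand-rolled illustration of the primitive, not Quixer itself: the circuit size, the parameters, and the way the unitaries are combined are simplified assumptions for this sketch.

```python
import numpy as np

# Single-qubit rotations used as toy "token" unitaries.
def rz(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def rx(theta):
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    return np.array([[c, s], [s, c]])

# Hypothetical per-token parameters (learned in a real model).
token_unitaries = [rz(0.3) @ rx(1.1), rz(-0.7) @ rx(0.4), rz(1.9) @ rx(-0.2)]
coeffs = np.array([0.5, 0.3, 0.2])   # mixing weights, also learnable

# Linear combination of unitaries: A = sum_i c_i U_i.
# On hardware this is realized with ancilla qubits, a PREPARE/SELECT pair
# and post-selection; here we simply apply A to a statevector directly.
A = sum(c * U for c, U in zip(coeffs, token_unitaries))

psi_in = np.array([1.0, 0.0], dtype=complex)   # |0> input state
psi_out = A @ psi_in
psi_out /= np.linalg.norm(psi_out)             # renormalize, mimicking post-selection

print("Output state after the LCU block:", psi_out)
```

Loosely speaking, the learned coefficients mix the contributions of the per-token unitaries, which is what makes a primitive like LCU a natural building block for attention-style aggregation; Quixer’s actual construction composes such primitives at far larger scale than this toy.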
As quantum computing advances toward fault tolerance, Quixer is built to scale with it.
We’ve already deployed Quixer on real-world data: genomic sequence analysis, a high-impact classification task in biotech. We're happy to report that its performance is already approaching that of classical models, even in this first implementation.
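For readers wondering what genomic sequence analysis looks like as a machine-learning task, the snippet below shows one common way to turn raw DNA strings into fixed-length token sequences (k-mer tokenization) that a sequence model, quantum or classical, can consume. It is purely illustrative; the value of k, the vocabulary, and the padding scheme are arbitrary choices for this sketch and do not describe the preprocessing used in our experiments.

```python
# Illustrative k-mer tokenization of DNA reads for a sequence classifier.
from itertools import product

K = 3
VOCAB = {"".join(kmer): i + 1 for i, kmer in enumerate(product("ACGT", repeat=K))}
PAD_ID = 0

def tokenize(read: str, max_len: int = 16) -> list[int]:
    """Split a DNA read into overlapping k-mers and map them to integer ids."""
    kmers = [read[i:i + K] for i in range(len(read) - K + 1)]
    ids = [VOCAB[k] for k in kmers if k in VOCAB][:max_len]
    return ids + [PAD_ID] * (max_len - len(ids))   # pad to a fixed length

print(tokenize("ACGTTGCA"))   # token ids ready for a sequence model
```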
This is just the beginning.
Looking ahead, we’ll explore using Quixer anywhere classical transformers have proven useful: language modeling, image classification, quantum chemistry, and beyond. More excitingly, we expect quantum-specific use cases to emerge that are impossible on classical hardware.
This milestone isn’t just about one model. It’s a signal that the quantum AI era has begun, and that Quantinuum is leading the charge with real results, not empty hype.
Stay tuned. The revolution is only getting started.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Our team is participating in ISC High Performance 2025 (ISC 2025) from June 10-13 in Hamburg, Germany!
As quantum computing accelerates, so does the urgency to integrate its capabilities into today’s high-performance computing (HPC) and AI environments. At ISC 2025, meet the Quantinuum team to learn how the highest performing quantum systems on the market, combined with advanced software and powerful collaborations, are helping organizations take the next step in their compute strategy.
Quantinuum is leading the industry across every major vector: performance, hybrid integration, scientific innovation, global collaboration and ease of access.
From June 10–13, in Hamburg, Germany, visit us at Booth B40 in the Exhibition Hall or attend one of our technical talks to explore how our quantum technologies are pushing the boundaries of what’s possible across HPC.
Throughout ISC, our team will present on the most important topics in HPC and quantum computing integration—from near-term hybrid use cases to hardware innovations and future roadmaps.
Multicore World Networking Event
H1 x CUDA-Q Demonstration
HPC Solutions Forum
Whether you're exploring hybrid solutions today or planning for large-scale quantum deployment tomorrow, ISC 2025 is the place to begin the conversation.
We look forward to seeing you in Hamburg!
Quantinuum has once again raised the bar—setting a record in teleportation, and advancing our leadership in the race toward universal fault-tolerant quantum computing.
Last year, we published a paper in Science demonstrating the first-ever fault-tolerant teleportation of a logical qubit. At the time, we outlined how crucial teleportation is for realizing large-scale fault-tolerant quantum computers. Given the degree of system performance and capability required to run the protocol (multiple qubits, high-fidelity state preparation, entangling operations, mid-circuit measurement, and more), teleportation is widely recognized as an excellent measure of system maturity.
Today we’re building on last year’s breakthrough, having recently achieved a record logical teleportation fidelity of 99.82% – up from 97.5% in last year’s result. What’s more, our logical qubit teleportation fidelity now exceeds our physical qubit teleportation fidelity, passing the break-even point that establishes our H2 system as the gold standard for complex quantum operations.
This progress reflects the strength and flexibility of our Quantum Charge Coupled Device (QCCD) architecture. The architecture’s native high fidelity enables us to perform highly complex demonstrations like this, which no one else has yet matched. Further, our ability to perform conditional logic and real-time decoding was crucial for implementing the Steane error correction code used in this work, and our all-to-all connectivity was essential for performing the high-fidelity transversal gates that drove the protocol.
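To make “real-time decoding” slightly more concrete, here is a toy classical decoder for the Steane [[7,1,3]] code: it computes an error syndrome from the code’s parity checks and looks up which qubit to correct. This is only a schematic of the classical side of decoding, under the simplifying assumption of a single-qubit error; the decoding performed on the H2 system runs live alongside the quantum circuit.

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code; the Steane code
# uses it for both its X-type and Z-type stabilizers.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def decode(error: np.ndarray) -> int | None:
    """Return the index of the single flipped qubit implied by the syndrome."""
    syndrome = H @ error % 2
    if not syndrome.any():
        return None                      # trivial syndrome: nothing to correct
    # For the Hamming code the syndrome bits spell out the (1-based) position.
    position = int(syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2])
    return position - 1                  # convert to a 0-based qubit index

# Example: a single error on qubit 4 is located correctly.
error = np.zeros(7, dtype=int)
error[4] = 1
print("correct qubit:", decode(error))   # -> 4
```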
Teleportation schemes like this allow us to “trade space for time,” meaning that we can do quantum error correction more quickly, reducing our time to solution. Additionally, teleportation enables long-range communication during logical computation, which translates to higher connectivity in logical algorithms, improving computational power.
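For readers who want to see the mechanics, the sketch below walks through the textbook single-qubit teleportation protocol in a plain statevector simulation: entangle, measure mid-circuit, then apply conditional Pauli corrections. It is a pedagogical illustration of the bare protocol, not the fault-tolerant logical-qubit version run on H2.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def op(gate, target, n=3):
    """Embed a single-qubit gate acting on `target` into an n-qubit operator."""
    mats = [gate if q == target else I for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(control, target, n=3):
    """Controlled-X written as a sum of projectors on the control qubit."""
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    return op(P0, control, n) + op(P1, control, n) @ op(X, target, n)

# Unknown state to teleport on qubit 0; qubits 1 and 2 start in |0>.
alpha, beta = 0.6, 0.8j
psi = np.kron(np.array([alpha, beta]), np.kron([1, 0], [1, 0])).astype(complex)

# Create a Bell pair on qubits 1 and 2, then rotate qubits 0 and 1 into the Bell basis.
psi = cnot(1, 2) @ op(H, 1) @ psi
psi = op(H, 0) @ cnot(0, 1) @ psi

# Mid-circuit measurement of qubits 0 and 1 (sampled from the Born rule).
probs = np.abs(psi) ** 2
probs /= probs.sum()
outcome = np.random.choice(8, p=probs)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
mask = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)])
psi = np.where(mask, psi, 0)
psi /= np.linalg.norm(psi)

# Conditional corrections: this is where real-time classical logic enters.
if m1:
    psi = op(X, 2) @ psi
if m0:
    psi = op(Z, 2) @ psi

# Qubit 2 now carries the original state (up to global phase).
received = np.array([psi[m0 * 4 + m1 * 2], psi[m0 * 4 + m1 * 2 + 1]])
print("teleported amplitudes:", received)   # ~ [0.6, 0.8j]
```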
This demonstration underscores our ongoing commitment to reducing logical error rates, which is critical for realizing the promise of quantum computing. Quantinuum continues to lead in quantum hardware performance, algorithms, and error correction, and we’ll extend that leadership with the launch of our next-generation system, Helios, in just a matter of months.
Today we announce the next generation of λambeq, Quantinuum’s quantum natural language processing (QNLP) package.
Incorporating recent developments in both quantum NLP and quantum hardware, λambeq Gen II allows users not only to model the semantics of natural language (in terms of vectors and tensors), but to convert linguistic structures and meaning directly into quantum circuits for real quantum hardware.
Five years ago, our team reported the first realization of Quantum Natural Language Processing (QNLP). That work showed a direct correspondence between the meanings of words and quantum states, and between grammatical structures and quantum entanglement. As the article put it: “Language is effectively quantum native”.
Our team ran an NLP task on quantum hardware and shared the data and code via a GitHub repository, attracting the interest of a then-nascent quantum NLP community that has since grown around successive releases of λambeq. The λambeq package itself followed 18 months later, accompanied by a research paper on the arXiv.
λambeq: an open-source Python library that turns sentences into quantum circuits and feeds them to quantum computers using variational quantum circuit (VQC) methods. Initial release: October 2021 (arXiv:2110.04236).
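For a taste of what that pipeline looks like in code, the sketch below follows the classic λambeq workflow: parse a sentence into a string diagram, then map it to a parameterized quantum circuit. The class names (BobcatParser, IQPAnsatz, AtomicType) are those of earlier λambeq releases and may differ in Gen II, so treat this as an indicative sketch rather than the current API.

```python
# Indicative λambeq-style pipeline: sentence -> diagram -> parameterized circuit.
# Class names follow earlier λambeq releases and may have changed in Gen II.
from lambeq import AtomicType, BobcatParser, IQPAnsatz

parser = BobcatParser()                                   # statistical CCG parser
diagram = parser.sentence2diagram("Alice prepares entangled states")

# Choose how many qubits represent each grammatical type.
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1}, n_layers=2)
circuit = ansatz(diagram)                                 # parameterized quantum circuit

circuit.draw()   # the free symbols in this circuit are the trainable VQC parameters
```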
From that moment onwards, anyone could play around with QNLP on the then freely available quantum hardware. Our λambeq software has been downloaded over 50,000 times, and the user community is supported by an active Discord page, where practitioners can interact with each other and with our development team.
To demonstrate that QNLP was possible, even on the hardware available in 2021, we focused exclusively on small, noisy quantum computers. Our motivation was to produce exploratory findings, looking for a potential quantum advantage for natural language processing on quantum hardware. Our original scientific work, published in 2016, detailed a quadratic speedup over classical computers in certain circumstances, and we are strongly convinced there is far more potential than that paper indicated.
That first realization of QNLP marked a shift away from brute-force machine learning, which has now taken the world by storm in the shape of large language models (LLMs) running on algorithms called “transformers”.
Instead of the transformer approach, we encoded linguistic structure using a compositional theory of meaning. With deep roots in computational linguistics, our approach was inspired by research into compositional linguistic algorithms and their resemblance to quantum primitives such as quantum teleportation. As we continued our work, it became clear that our approach reduced training requirements by relying on a natural relationship between linguistic structure and quantum structure, offering near-term QNLP in practice.
We haven’t sat still, and neither have the teams working in the field of quantum hardware. Quantinuum’s stack now performs at a level we only dreamed of in 2020. While we look forward to continued progress on the hardware front, we are getting ahead of these future developments by shifting the focus in our algorithms and software packages, to ensure we and λambeq’s users are ready to chase far more ambitious goals!
We moved away from the compositional theory of meaning that was the focus of our early experiments, called DisCoCat, to a new mathematical foundation called DisCoCirc. This enabled us to explore the relationship between text generation and text circuits, concluding that “text circuits are generative for text”.
Formally speaking, DisCoCirc embraces substantially more of the compositional structure present in language than DisCoCat does, and that pays off in many ways.
Today, our users can benefit from these recent developments with the release of λambeq Gen II. Our open-source tools have always benefited from the attention and feedback of our users. Please give λambeq Gen II a try; we look forward to hearing what you think.
Enjoy!