BY HARRY BUHRMAN
Quantum computing continues to push the boundaries of what is computationally possible. A new study by Marcello Benedetti, Harry Buhrman, and Jordi Weggemans introduces Complement Sampling, a problem that highlights a dramatic separation between quantum and classical sample complexity. This work provides a robust demonstration of quantum advantage in a way that is not only provable but also feasible on near-term quantum devices.
Imagine a universe of N = 2^n elements, from which a subset S of size K is drawn uniformly at random. The challenge is to sample from the complement S̅ without explicitly knowing S, having access only to samples from S. Classically, solving this problem requires roughly K samples, as the best a classical algorithm can do is guess at random after observing only some of the elements of S.
To better understand this, consider a small example. Suppose N = 8, meaning our universe consists of the numbers {0,1,2,3,4,5,6,7}. If a subset S of size K = 4 is drawn at random—say {1,3,5,7}—the goal is to sample from the complement S̅, which consists of {0,2,4,6}. A classical algorithm would need to collect and verify enough samples from S before it could infer what S̅ might be. However, a quantum algorithm can use a single superposition state over S (a quantum sample) to instantly generate a sample from S̅, eliminating the need for iterative searching.
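To make the classical baseline concrete, here is a minimal Python sketch (our illustration, not taken from the paper) of the toy instance above: a classical player who has seen only one sample from S must guess an unseen element and hope it lands in S̅.

```python
import random

# Toy instance from the text: N = 8, S = {1, 3, 5, 7}, complement = {0, 2, 4, 6}.
N = 8
S = {1, 3, 5, 7}
complement = set(range(N)) - S

def classical_guess(seen, N):
    """Avoid the samples already seen, then guess uniformly among the rest."""
    return random.choice([x for x in range(N) if x not in seen])

# Estimate the success probability after observing a single sample from S.
trials = 100_000
wins = sum(
    classical_guess({random.choice(sorted(S))}, N) in complement
    for _ in range(trials)
)
print(f"success rate with one classical sample: {wins / trials:.3f}")  # about 4/7 ~ 0.57
# Only after collecting most of S can the classical player reliably hit the complement.
```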
Quantum advantage is often discussed in terms of computational speedups, such as those achieved by Shor’s algorithm for factoring large numbers. However, quantum resources provide advantages beyond time efficiency—they also affect how data is accessed, stored, and processed.
Complement Sampling fits into the category of sample complexity problems, where the goal is to minimize the number of samples needed to solve a problem. The authors prove that their quantum approach not only outperforms classical methods but does so in a way that is provable, robust to how the subset is generated, and feasible on near-term quantum hardware.
At its core, the quantum approach to Complement Sampling relies on the ability to perform a perfect swap between a subset S and its complement S̅. The method draws inspiration from a construction by Aaronson, Atia, and Susskind, which links state distinguishability to state swapping. Applied to a single quantum sample of S, this swap turns the state into a quantum sample of S̅, which can then be measured to obtain an element of the complement.
This is made possible by quantum interference and superposition, allowing a quantum computer to manipulate distributions in ways that classical systems fundamentally cannot.
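To illustrate the flavor of that swap, here is a small NumPy sketch (our illustration, assuming the special case K = N/2, not the paper's general circuit-level construction): reflecting the quantum sample |S⟩ about the uniform superposition maps it exactly onto the complement state |S̅⟩, so a single measurement returns an element of S̅.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3
N = 2 ** n                       # universe size N = 2^n
K = N // 2                       # special case K = N/2

# Draw a random subset S of size K and build the quantum sample
# |S> = (1/sqrt(K)) * sum over x in S of |x>.
S = rng.choice(N, size=K, replace=False)
psi_S = np.zeros(N)
psi_S[S] = 1.0 / np.sqrt(K)

# Reflection about the uniform superposition |u>: R = 2|u><u| - I.
u = np.full(N, 1.0 / np.sqrt(N))
R = 2.0 * np.outer(u, u) - np.eye(N)

psi_out = R @ psi_S              # for K = N/2 this is exactly |S-bar>

# Measuring psi_out in the computational basis only ever returns
# elements of the complement of S.
probs = psi_out ** 2
support = np.flatnonzero(probs > 1e-12)
print("S =", sorted(S.tolist()))
print("measurement support =", support.tolist())
assert set(support.tolist()) == set(range(N)) - set(S.tolist())
```

The key point is that the reflection is a fixed, input-independent operation: it depends only on N, so no knowledge of S is needed to apply it.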
A crucial aspect of this work is its robustness. The authors prove that even for subsets generated using strong pseudorandom permutations, the problem remains hard for classical algorithms. This means that classical computers cannot efficiently solve Complement Sampling even with structured input distributions—an important consideration for real-world applications.
This robustness suggests potential applications in cryptography, where generating samples from complements could be useful in privacy-preserving protocols and quantum-secure verification methods.
Unlike some quantum advantage demonstrations that are difficult to verify classically (such as random circuit sampling experiments), Complement Sampling is designed to be verifiable. The authors propose an interactive game in which a quantum player and a classical player each receive samples from a hidden subset S and must output an element of its complement.
While the classical player must resort to random guessing, the quantum player can leverage the swap algorithm to succeed with near certainty. Running such an experiment on NISQ hardware could serve as a practical demonstration of quantum advantage in a sample complexity setting.
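As a rough sketch of how such a game could be scored (the round structure and parameters here are illustrative, not the paper's exact protocol), the simulation below pits a classical guesser against a quantum player modeled by the same K = N/2 swap as above: the quantum player wins essentially every round, while the classical player does only modestly better than a blind 50/50 guess.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantum_player(S, N):
    """Model the quantum player for K = N/2: reflect the quantum sample |S>
    about the uniform superposition and measure the result."""
    psi = np.zeros(N)
    psi[list(S)] = 1.0 / np.sqrt(len(S))
    u = np.full(N, 1.0 / np.sqrt(N))
    psi = 2.0 * u * (u @ psi) - psi           # apply R = 2|u><u| - I
    p = psi ** 2
    return rng.choice(N, p=p / p.sum())       # measurement outcome

def classical_player(samples, N):
    """Guess uniformly among the elements not seen so far."""
    seen = set(samples)
    return rng.choice([x for x in range(N) if x not in seen])

n, rounds, t = 4, 2_000, 2                    # N = 16, two classical samples per round
N = 2 ** n
q_wins = c_wins = 0
for _ in range(rounds):
    S = set(rng.choice(N, size=N // 2, replace=False).tolist())
    comp = set(range(N)) - S
    q_wins += quantum_player(S, N) in comp
    c_wins += classical_player(rng.choice(sorted(S), size=t), N) in comp
print(f"quantum success rate:   {q_wins / rounds:.3f}")   # ~1.000
print(f"classical success rate: {c_wins / rounds:.3f}")   # only modestly above 1/2
```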
This research raises exciting new questions, and with its blend of theoretical depth and experimental feasibility, Complement Sampling provides a compelling new frontier for demonstrating the power of quantum computing.
Complement Sampling represents one of the cleanest demonstrations of quantum advantage in a practical, verifiable, and NISQ-friendly setting. By leveraging quantum information processing in ways that classical computers fundamentally cannot, this work strengthens the case for near-term quantum technologies and their impact on computational complexity, cryptography, and beyond.
For those interested in the full details, the paper provides rigorous proofs, circuit designs, and further insights into the nature of quantum sample complexity. As quantum computing continues to evolve, Complement Sampling may serve as a cornerstone for future experimental demonstrations of quantum supremacy.
We have commenced work on the experiment – watch this space!
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
If we are to create ‘next-gen’ AI that takes full advantage of the power of quantum computers, we need to start with quantum-native transformers. Today, Quantinuum is once again demonstrating concrete progress, advancing from theoretical models to real quantum deployment.
The future of AI won't be built on yesterday’s tech. If we're serious about creating next-generation AI that unlocks the full promise of quantum computing, then we must build quantum-native models—designed for quantum, from the ground up.
Around this time last year, we introduced Quixer, a state-of-the-art quantum-native transformer. Today, we’re thrilled to announce a major milestone: one year on, Quixer is now running natively on quantum hardware.
This marks a turning point for the industry: realizing quantum-native AI opens a world of possibilities.
Classical transformers revolutionized AI. They power everything from ChatGPT to real-time translation, computer vision, drug discovery, and algorithmic trading. Now, Quixer sets the stage for a similar leap — but for quantum-native computation. Because quantum computers differ fundamentally from classical computers, we expect a whole new host of valuable applications to emerge.
Achieving that future requires models that are efficient, scalable, and actually run on today’s quantum hardware.
That’s what we’ve built.
Until Quixer, quantum transformers were the result of a brute force “copy-paste” approach: taking the math from a classical model and putting it onto a quantum circuit. However, this approach does not account for the considerable differences between quantum and classical architectures, leading to substantial resource requirements.
Quixer is different: it’s not a translation – it's an innovation.
With Quixer, our team introduced an explicitly quantum transformer, built from the ground up using quantum algorithmic primitives. Because Quixer is tailored for quantum circuits, it's more resource efficient than most competing approaches.
As quantum computing advances toward fault tolerance, Quixer is built to scale with it.
We’ve already deployed Quixer on real-world data: genomic sequence analysis, a high-impact classification task in biotech. We're happy to report that its performance is already approaching that of classical models, even in this first implementation.
This is just the beginning.
Looking ahead, we’ll explore using Quixer anywhere classical transformers have proven useful, such as language modeling, image classification, quantum chemistry, and beyond. More excitingly, we expect quantum-specific use cases to emerge that are simply impossible on classical hardware.
This milestone isn’t just about one model. It’s a signal that the quantum AI era has begun, and that Quantinuum is leading the charge with real results, not empty hype.
Stay tuned. The revolution is only getting started.
Our team is participating in ISC High Performance 2025 (ISC 2025) from June 10-13 in Hamburg, Germany!
As quantum computing accelerates, so does the urgency to integrate its capabilities into today’s high-performance computing (HPC) and AI environments. At ISC 2025, meet the Quantinuum team to learn how the highest performing quantum systems on the market, combined with advanced software and powerful collaborations, are helping organizations take the next step in their compute strategy.
Quantinuum is leading the industry across every major vector: performance, hybrid integration, scientific innovation, global collaboration and ease of access.
From June 10–13, in Hamburg, Germany, visit us at Booth B40 in the Exhibition Hall or attend one of our technical talks to explore how our quantum technologies are pushing the boundaries of what’s possible across HPC.
Throughout ISC, our team will present on the most important topics in HPC and quantum computing integration—from near-term hybrid use cases to hardware innovations and future roadmaps.
Multicore World Networking Event
H1 x CUDA-Q Demonstration
HPC Solutions Forum
Whether you're exploring hybrid solutions today or planning for large-scale quantum deployment tomorrow, ISC 2025 is the place to begin the conversation.
We look forward to seeing you in Hamburg!
Quantinuum has once again raised the bar—setting a record in teleportation, and advancing our leadership in the race toward universal fault-tolerant quantum computing.
Last year, we published a paper in Science demonstrating the first-ever fault-tolerant teleportation of a logical qubit. At the time, we outlined how crucial teleportation is to realizing large-scale fault-tolerant quantum computers. Given the high degree of system performance and capability required to run the protocol (multiple qubits, high-fidelity state preparation, entangling operations, mid-circuit measurement, and more), teleportation is recognized as an excellent measure of system maturity.
Today we’re building on last year’s breakthrough, having recently achieved a record logical teleportation fidelity of 99.82% – up from 97.5% in last year’s result. What’s more, our logical qubit teleportation fidelity now exceeds our physical qubit teleportation fidelity, passing the break-even point that establishes our H2 system as the gold standard for complex quantum operations.
This progress reflects the strength and flexibility of our Quantum Charge Coupled Device (QCCD) architecture. The native high fidelity of our QCCD architecture enables us to perform highly complex demonstrations like this, which no one else has yet matched. Further, our ability to perform conditional logic and real-time decoding was crucial for implementing the Steane error correction code used in this work, and our all-to-all connectivity was essential for performing the high-fidelity transversal gates that drove the protocol.
Teleportation schemes like this allow us to “trade space for time,” meaning that we can do quantum error correction more quickly, reducing our time to solution. Additionally, teleportation enables long-range communication during logical computation, which translates to higher connectivity in logical algorithms, improving computational power.
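For readers who want to see the mechanics, below is a minimal NumPy statevector sketch of the textbook single-qubit teleportation protocol: a Bell pair, a Bell-basis measurement via CNOT and Hadamard, two mid-circuit measurement bits, and classically conditioned Pauli corrections. This is the bare physical protocol for intuition only, not the fault-tolerant logical-qubit teleportation run on H2.

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-qubit gates (qubit 0 is the leftmost tensor factor).
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def op(gate, qubit):
    """Embed a single-qubit gate on `qubit` of a 3-qubit register."""
    mats = [I, I, I]
    mats[qubit] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def cnot(control, target):
    """CNOT on a 3-qubit register, built from projectors."""
    P0 = np.array([[1, 0], [0, 0]], dtype=complex)
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)
    a = [I, I, I]; a[control] = P0
    b = [I, I, I]; b[control] = P1; b[target] = X
    return (np.kron(np.kron(a[0], a[1]), a[2])
            + np.kron(np.kron(b[0], b[1]), b[2]))

# Random state to teleport on qubit 0; qubits 1 and 2 start in |00>.
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amps / np.linalg.norm(amps)
state = np.kron(psi, np.array([1, 0, 0, 0], dtype=complex))

# Entangle qubits 1-2 into a Bell pair, then run the teleportation circuit.
state = cnot(1, 2) @ op(H, 1) @ state
state = op(H, 0) @ cnot(0, 1) @ state

# Mid-circuit measurement of qubits 0 and 1 in the computational basis.
probs = np.abs(state.reshape(2, 2, 2)) ** 2
p01 = probs.sum(axis=2).ravel()                      # joint distribution of (m0, m1)
m0, m1 = divmod(rng.choice(4, p=p01 / p01.sum()), 2)
collapsed = state.reshape(2, 2, 2)[m0, m1, :]
collapsed = collapsed / np.linalg.norm(collapsed)

# Classically conditioned Pauli corrections recover psi on qubit 2.
if m1: collapsed = X @ collapsed
if m0: collapsed = Z @ collapsed
fidelity = abs(np.vdot(psi, collapsed)) ** 2
print(f"measurements (m0, m1) = ({m0}, {m1}), teleportation fidelity = {fidelity:.6f}")
```

The fault-tolerant version replaces each physical qubit with an encoded logical qubit and each gate, measurement, and correction with its fault-tolerant counterpart, which is why teleportation is such a demanding benchmark of overall system maturity.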
This demonstration underscores our ongoing commitment to reducing logical error rates, which is critical for realizing the promise of quantum computing. Quantinuum continues to lead in quantum hardware performance, algorithms, and error correction, and we will extend that leadership with the launch of our next-generation system, Helios, in just a matter of months.