

By Kevin Jackson for Quantinuum
The world is a lot smaller than it was in the previous century – or even in the previous decade.
Customers are now accustomed to a wide variety of products that can be delivered from distributors all over the globe. While this is a great opportunity for suppliers, it also presents a challenge in the form of supply chain, logistics, routing, and optimization.
How can distribution companies continue to serve the needs of their customers in the most efficient and effective way possible? This may seem like a simple question, but it becomes a complex computational problem when trying to account for all the variables that can occur within a distribution network.
What’s more, classical computers cannot adequately perform this optimization in real-world scenarios: with so many variables in play, the calculation simply runs too slowly.
That said, new work in quantum computing has shown promise in applications within the optimization field. To that end, we interviewed Quantinuum’s Megan Kohagen and Dr. Mattia Fiorentini to better understand how quantum computing could help optimize logistics and supply chains.
Kohagen and Fiorentini are participating in a panel about quantum computing at Manifest: The Future of Logistics conference this week in Las Vegas, Nevada.
When it comes to optimization, it is all about maximizing or minimizing an objective. A good example is a company that delivers goods and products but owns a limited number of trucks. To improve efficiency and minimize costs, the company needs to maximize the number of items its trucks carry and identify the shortest routes between deliveries.
“You have all these constraints, you have your objective, and you’ve got to make decisions,” said Kohagen, an optimization researcher. “The decisions end up being things like how many goods you are going to send between your distribution centers and your stores? Each of these optimization problems, even if you consider them separately, are hard problems. The technical term is that they’re (non-deterministic polynomial)-hard because you’re dealing with discrete things. For example, I can’t send half a T-shirt to my customer. I can only operate with whole integers.”
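To make the discrete nature of these decisions concrete, here is a minimal Python sketch of a shipment-planning problem. All of the distribution centers, costs, supplies, and demands are invented for illustration; the point is that whole-unit decisions force a combinatorial search, which a brute-force loop can handle at toy scale but which explodes as lanes and quantities are added.

```python
# Toy shipment-planning problem (all names and numbers are hypothetical).
# Each distribution center ships whole units to each store; we brute-force
# every integer plan and keep the cheapest one that meets demand.
from itertools import product

costs = {("DC1", "StoreA"): 4, ("DC1", "StoreB"): 6,
         ("DC2", "StoreA"): 5, ("DC2", "StoreB"): 3}   # cost per unit shipped
supply = {"DC1": 5, "DC2": 5}                          # units available
demand = {"StoreA": 4, "StoreB": 4}                    # units required

lanes = list(costs)
best_cost, best_plan = float("inf"), None
for quantities in product(range(6), repeat=len(lanes)):      # whole units only
    plan = dict(zip(lanes, quantities))
    sent = {dc: sum(q for (d, _), q in plan.items() if d == dc) for dc in supply}
    received = {s: sum(q for (_, st), q in plan.items() if st == s) for s in demand}
    if all(sent[dc] <= supply[dc] for dc in supply) and \
       all(received[s] >= demand[s] for s in demand):
        cost = sum(costs[lane] * q for lane, q in plan.items())
        if cost < best_cost:
            best_cost, best_plan = cost, plan

print(best_plan, best_cost)   # cheapest feasible whole-unit plan
```

With just two centers, two stores, and six possible quantities per lane, the loop already checks 1,296 candidate plans; every additional lane multiplies that count again, which is the combinatorial explosion Kohagen describes.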
Fiorentini expands on this: “In logistics, we cannot leave anyone behind. If we need to deliver medicine, we cannot decide ‘the villages with less than 1,000 people – we don’t supply them. There are too many, and not enough people live there’. That’s not an option in today’s world.”
Today’s computers struggle to solve these NP-hard optimization problems because of the number of ever-changing variables. Consider the much-studied Traveling Salesperson Problem, which is often used to illustrate the complexity of managing logistics, routing, and supply chains.
This is a theoretical problem where a machine is tasked with finding the shortest route between an identified list of cities that a “salesperson” must visit before returning to the point of origin. This problem is simple enough with only a few cities, but it becomes exponentially harder as more locations are added, and other factors such as multiple salespeople, weather conditions, and unforeseen events arise.
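That combinatorial explosion is easy to see in a toy version of the problem. The sketch below (the city coordinates are invented) simply enumerates every possible route from a depot: with five cities there are only 24 orderings to compare, but with 15 cities there are already more than 87 billion.

```python
# Brute-force Traveling Salesperson sketch with made-up coordinates.
# Checking every ordering works for a handful of cities but grows factorially.
from itertools import permutations
from math import dist

cities = {"A": (0, 0), "B": (2, 1), "C": (5, 2), "D": (1, 4), "E": (3, 3)}

def route_length(order):
    # Distance of leaving city A, visiting `order` in sequence, and returning to A.
    stops = ("A",) + order + ("A",)
    return sum(dist(cities[a], cities[b]) for a, b in zip(stops, stops[1:]))

others = [c for c in cities if c != "A"]
best = min(permutations(others), key=route_length)   # examines all (n-1)! orderings
print(best, round(route_length(best), 2))
```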
Classical computers can solve this theoretical problem for a single salesperson traveling to thousands of cities. But that idealized scenario is not realistic, and it is the messier real-world versions of the problem where classical computers begin to struggle.
“The Traveling Salesperson Problem is not very representative of what happens in the real world,” Kohagen said. “For example, with online ordering so prevalent, a retailer has orders coming in constantly. They must determine how to efficiently retrieve those items from the warehouse, pack them into the trucks, and then transport them to the customers.”
Today, the reality of an extended supply chain or distribution network is beyond what the best classical computer can solve. Quantum computers harness unique properties of quantum physics that allow them to explore an enormous number of candidate answers at once and concentrate the probability of the computation’s output on the best options.
“Classical is a great technology, but it doesn’t cut it here,” said Fiorentini, who develops and tests quantum algorithms for optimization. “Quantum is the best alternative to classical computing that we have.”
Optimization problems have long been viewed as “killer applications” for quantum computing, and research conducted by Fiorentini, Kohagen, and others has begun to bear that out.
Fiorentini believes it is time for decision makers to explore and invest in quantum-enabled solutions for optimization problems. “There are two decisions here for decision makers,” he said. “We either give up on the problem and say, ‘we’ll just do the best we can with a classical solution,’ or we start allocating a budget for really developing quantum technology.”
Quantum computing is expanding rapidly and is poised to disrupt markets such as optimization. A similar situation is the power sector, which is experiencing major disruptions due to innovations in renewable energy resources, energy storage, and regulatory reform.
Every technology has a tipping point, and all signs indicate that quantum computing is moving rapidly toward real-world applications in optimization.
“There are a lot of algorithms being developed for optimization right now,” said Kohagen. “If you really want to advance your business with quantum methods for logistics or supply chain, this is the moment to start. Decision makers must act quickly. Those that seize the opportunity before others will have a major advantage over those who lag.”
“As quantum computers continue to scale in computational power, they’ll be able to handle increasingly complex calculations to deliver more robust and optimized supply chain solutions,” said Tony Uttley, President and COO of Quantinuum.
“We’re excited by the acceleration of our System Model H1 technologies, Powered by Honeywell. Measured in terms of qubit number as well as quantum volume, we’re meeting our commitment to increase performance by a factor of 10X each year,” he said. “Alongside other revolutionary advances such as real-time error correction, we look forward to supporting the commercialization of quantum applications that will change the way logistical challenges are met. In fact, within the coming few months we’ll be sharing more exciting news regarding our latest technological achievements.”
Want to learn about our work to develop quantum-enabled optimization solutions for companies? Contact our experts.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Quantinuum is focused on redefining what’s possible in hybrid quantum–classical computing by integrating its best-in-class systems with high-performance NVIDIA accelerated computing to create powerful new architectures that can solve the world’s most pressing challenges.
The launch of Helios, Powered by Honeywell, the world’s most accurate quantum computer, marks a major milestone in quantum computing. Helios is now available to all customers through the cloud or on-premise deployment, launched with a go-to-market offering that seamlessly pairs Helios with the NVIDIA Grace Blackwell platform, targeting specific end markets such as drug discovery, finance, materials science, and advanced AI research.
We are also working with NVIDIA to adopt NVIDIA NVQLink, an open system architecture, as a standard for advancing hybrid quantum-classical supercomputing. Using this technology with Quantinuum Guppy and the NVIDIA CUDA-Q platform, Quantinuum has implemented NVIDIA accelerated computing across Helios and future systems to perform real-time decoding for quantum error correction.
In an industry-first demonstration, an NVIDIA GPU-based decoder integrated in the Helios control engine improved the logical fidelity of quantum operations by more than 3% — a notable gain given Helios’ already exceptionally low error rate. These results demonstrate how integration with NVIDIA accelerated computing through NVQLink can directly enhance the accuracy and scalability of quantum computation.

This unique collaboration spans the full Quantinuum technology stack. Quantinuum’s next-generation software development environment allows users to interleave quantum and GPU-accelerated classical computations in a single workflow. Developers can build hybrid applications using tools such as NVIDIA CUDA-Q, NVIDIA CUDA-QX, and Quantinuum’s Guppy, to make advanced quantum programming accessible to a broad community of innovators.
The collaboration also reaches into applied research through the NVIDIA Accelerated Quantum Computing Research Center (NVAQC), where an NVIDIA GB200 NVL72 supercomputer can be paired with Quantinuum’s Helios to further drive hybrid quantum-GPU research, including the development of breakthrough quantum-enhanced AI applications.
A recent achievement illustrates this potential: the ADAPT-GQE framework, a transformer-based Generative Quantum AI (GenQAI) approach, uses a generative AI model to efficiently synthesize circuits that prepare the ground state of a chemical system on a quantum computer. Developed by Quantinuum, NVIDIA, and a pharmaceutical industry leader, and leveraging NVIDIA CUDA-Q with GPU-accelerated methods, ADAPT-GQE achieved a 234x speed-up in generating training data for complex molecules. The team used the framework to explore imipramine, a molecule crucial to pharmaceutical development. The transformer was trained on imipramine conformers and synthesized ground-state circuits orders of magnitude faster than ADAPT-VQE, and the circuit produced by the transformer was run on Helios to prepare the ground state using InQuanto, Quantinuum's computational chemistry platform.
From collaborating on hardware and software integrations to GenQAI applications, the collaboration between Quantinuum and NVIDIA is building the bridge between classical and quantum computing and creating a future where AI becomes more expansive through quantum computing, and quantum computing becomes more powerful through AI.
By Dr. Noah Berthusen
The earliest works on quantum error correction showed that by combining many noisy physical qubits into a complex entangled state called a "logical qubit," quantum information can be made to survive for arbitrarily long times. QEC researchers devote much effort to hunting for codes that function well as "quantum memories," as such long-lived logical qubits are called. Many promising code families have been found, but this is only half of the story.
Being able to keep a qubit around for a long time is one thing, but to realize the theoretical advantages of quantum computing we need to run quantum circuits. To make sure noise doesn't ruin the computation, those circuits need to be run on the logical qubits of our code. This is often much more challenging than performing gates on the physical qubits of our device, because these "logical gates" can require many physical operations to implement. What's more, it is often not immediately obvious which logical gates a code even has, so converting a physical circuit into a logical circuit can be rather difficult.
Some codes, like the famous surface code, are good quantum memories and also have easy logical gates. The drawback is that the ratio of physical qubits to logical qubits (the "encoding rate") is low, and so many physical qubits are required to implement large logical algorithms. High-rate codes that are good quantum memories have also been found, but computing on them is much more difficult. The holy grail of QEC, so to speak, would be a high-rate code that is a good quantum memory and also has easy logical gates. Here, we make progress on that front by developing a new code with those properties.
A recent work from Quantinuum QEC researchers introduced genon codes. The underlying construction method for these codes, called the "symplectic double cover," also provided a way to obtain logical gates that are well suited to Quantinuum's QCCD architecture. Namely, these "SWAP-transversal" gates are performed by applying single-qubit operations and relabeling the physical qubits of the device. Thanks to the all-to-all connectivity facilitated through qubit movement on the QCCD architecture, this relabeling can be done in software essentially for free. Combined with extremely high-fidelity single-qubit operations (error rates of ~1.2 x 10⁻⁵), the resulting logical gates are similarly high fidelity.
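As a rough illustration of the relabeling idea (a conceptual sketch, not Quantinuum's actual compiler or control software), the snippet below keeps a map from logical wires to physical ions; a SWAP then becomes a bookkeeping update rather than a physical operation, and later single-qubit gates are simply routed to wherever each wire now lives.

```python
# Conceptual sketch: SWAP-transversal gates as software relabeling.
class QubitMap:
    def __init__(self, n):
        # wire i currently lives on ion wire_to_ion[i]
        self.wire_to_ion = list(range(n))

    def swap(self, i, j):
        # The SWAP is recorded as a relabeling; no physical gate is applied.
        self.wire_to_ion[i], self.wire_to_ion[j] = self.wire_to_ion[j], self.wire_to_ion[i]

    def physical_target(self, wire):
        # Subsequent single-qubit gates are applied to the ion the wire maps to.
        return self.wire_to_ion[wire]

qmap = QubitMap(4)
qmap.swap(0, 2)
print(qmap.physical_target(0))  # -> 2: gates on wire 0 now act on ion 2
```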
Given the promise of these codes, we take them a step further in our new paper. We combine the symplectic double codes with the [[4,2,2]] Iceberg code using a procedure called "code concatenation." A concatenated code is a bit like a set of nesting dolls: an outer code contains codes within it, and those codes may in turn contain codes of their own. More technically, in a concatenated code the logical qubits of one code act as the physical qubits of another code.
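As a rough illustration of how the qubit counts compose (a simplified sketch; the construction in the paper is more involved, and the outer-code parameters below are made up), each pair of the outer code's physical qubits can be encoded into one [[4,2,2]] Iceberg block, since the Iceberg code supplies two logical qubits per block of four physical qubits:

```python
# Simplified concatenation bookkeeping (outer-code parameters are invented).
def concatenate(outer, inner):
    n_out, k_out = outer            # outer code: physical qubits, logical qubits
    n_in, k_in = inner              # inner code: physical qubits, logical qubits
    blocks = -(-n_out // k_in)      # inner blocks needed (ceiling division)
    n_total = blocks * n_in         # physical qubits of the concatenated code
    return n_total, k_out           # its logical qubits are those of the outer code

# Hypothetical [[30, 8]] outer code concatenated with the [[4, 2]] Iceberg code:
n, k = concatenate((30, 8), (4, 2))
print(n, k, f"rate = {k / n:.2f}")  # 60 physical qubits encoding 8 logical qubits
```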
The new codes, which we call "concatenated symplectic double codes," were designed so that they have many of these easily implementable SWAP-transversal gates. Central to their construction, we show how the concatenation method allows us to "upgrade" logical gates in terms of their ease of implementation; this procedure may provide insights for constructing other codes with convenient logical gates. Notably, the SWAP-transversal gate set on these codes is so powerful that only two additional operations (logical T and S) are needed for universal computation. Furthermore, these codes have many logical qubits, and we present numerical evidence suggesting that they are also good quantum memories.
Concatenated symplectic double codes have one of the easiest logical computation schemes, and we didn’t have to sacrifice rate to achieve it. Looking forward in our roadmap, we are targeting hundreds of logical qubits at a ~1 x 10⁻⁸ logical error rate by 2029. These codes put us in a prime position to leverage the best characteristics of our hardware and create a device that can achieve real commercial advantage.
Every year, the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC) brings together the global supercomputing community to explore the technologies driving the future of computing.
Join Quantinuum at this year’s conference, taking place November 16th – 21st in St. Louis, Missouri, where we will showcase how our quantum hardware, software, and partnerships are helping define the next era of high-performance and quantum computing.
The Quantinuum team will be on-site at booth #4432 to showcase how we’re building the bridge between HPC and quantum.
On Tuesday and Wednesday, our quantum computing experts will host daily tutorials at our booth on Helios, our next-generation hardware platform, Nexus, our all-in-one quantum computing platform, and Hybrid Workflows, featuring the integration of NVIDIA CUDA-Q with Quantinuum Systems.
Join our team as they share insights on the opportunities and challenges of quantum integration within the HPC ecosystem:
Panel Session: The Quantum Era of HPC: Roadmaps, Challenges and Opportunities in Navigating the Integration Frontier
November 19th | 10:30 am – 12:00 pm CST
During this panel session, Kentaro Yamamoto from Quantinuum will join experts from Lawrence Berkeley National Laboratory, IBM, QuEra, RIKEN, and Pawsey Supercomputing Research Centre to explore how quantum and classical systems are being brought together to accelerate scientific discovery and industrial innovation.
BoF Session: Bridging the Gap: Making Quantum-Classical Hybridization Work in HPC
November 19th | 5:15 – 6:45pm CST
Quantum-classical hybrid computing is moving from theory to reality, yet no clear roadmap exists for how best to integrate quantum processing units (QPUs) into established HPC environments. In this Birds of a Feather discussion, co-led by Quantinuum’s Grahame Vittorini and representatives from BCS, DOE, EPCC, Inria, ORNL, NVIDIA, and RIKEN, we hope to bring together a global community of HPC practitioners, system architects, quantum computing specialists, and workflow researchers, including participants in the Workflow Community Initiative, to assess the state of hybrid integration and identify practical steps toward scalable, impactful deployment.