Among its other research efforts, the Global Technology Applied Research (GTAR) Center at JPMorgan Chase is experimenting with quantum algorithms for constrained optimization to perform Natural Language Processing (NLP) tasks such as document summarization, addressing application points across the firm.
Marco Pistoia, Ph.D., Managing Director, Distinguished Engineer, and Head of Global Technology Applied Research, recently led the research effort around a constrained version of the Quantum Approximate Optimization Algorithm (QAOA) that can extract and summarize the most important information from legal documents and contracts. The work was recently published in the Nature Portfolio journal Scientific Reports (“Constrained Quantum Optimization for Extractive Summarization on a Trapped-ion Quantum Computer”) and has been described as the “largest demonstration to date of constrained optimization on a gate-based quantum computer.”
JPMorgan Chase was one of the early-access users of the Quantinuum H1-1 system when it was upgraded from 12 qubits with 3 parallel gating zones to 20 qubits with 5 parallel gating zones. The research team at JPMorgan Chase found that the 20-qubit machine returned results significantly better than random guessing, without any error mitigation, despite the circuit depth exceeding 100 two-qubit gates. The circuits used were deeper than any quantum optimization circuits previously executed for any problem. “With 20 qubits, we could summarize bigger documents and the results were excellent,” Pistoia said. “We saw a difference, both in terms of the number of qubits and the quality of qubits.”
JPMorgan Chase has been working with Quantinuum’s quantum hardware since 2020 (pre-merger) and Pistoia has seen the evolution of the machine over time, as companies raced to add qubits. “It was clear early on that the number of qubits doesn't matter,” he said. “In the short term, we need computers whose qubits are reliable and give us the results that we expect based on the reference values.”
Jenni Strabley, Sr. Director of Offering Management for Quantinuum, stated, “Quality counts when it comes to quantum computers. We know our users, like JPMC, expect that every time they use our H-Series quantum computers, they get the same, repeatable, high-quality performance. Quality isn’t typically part of the day-to-day conversation around quantum computers, but it needs to be for users like Marco and his team to progress in their research.”
More broadly, the researchers claimed that “this demonstration is a testament to the overall progress of quantum computing hardware. Our successful execution of complex circuits for constrained optimization depended heavily on all-to-all connectivity, as the circuit depth would have significantly increased if the circuit had to be compiled to a nearest-neighbor architecture.”
The objective of the experiment was to produce a condensed text summary by selecting sentences verbatim from the original text. The specific goal was to maximize the centrality and minimize the redundancy of the sentences in the summary and do so with a limited number of sentences.
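As an illustration only (the exact objective and coefficients are detailed in the published paper), the selection task can be phrased as a constrained binary optimization: assign a 0/1 variable to each sentence, reward centrality, penalize pairwise similarity, and require a fixed summary length. The sketch below uses hypothetical, made-up scores (`centrality`, `similarity`) purely to make the structure concrete.

```python
# Illustrative sketch of the extractive-summarization objective described above.
# The centrality and similarity values are invented for this example; the exact
# formulation used in the paper may differ.
import numpy as np

centrality = np.array([0.9, 0.4, 0.8, 0.3])          # per-sentence importance
similarity = np.array([[0.0, 0.1, 0.7, 0.0],          # pairwise redundancy
                       [0.1, 0.0, 0.2, 0.1],
                       [0.7, 0.2, 0.0, 0.3],
                       [0.0, 0.1, 0.3, 0.0]])

def summary_score(x, redundancy_weight=1.0):
    """x is a 0/1 vector: x[i] = 1 if sentence i is included in the summary."""
    x = np.asarray(x)
    reward = centrality @ x                                   # total centrality
    penalty = redundancy_weight * (x @ similarity @ x) / 2.0  # total redundancy
    return reward - penalty

# The constraint: exactly k sentences make up the summary.
k = 2
candidate = [1, 0, 1, 0]
assert sum(candidate) == k
print(summary_score(candidate))
```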
The JPMorgan Chase researchers used all 20 qubits of the H1-1 and executed circuits with two-qubit gate depths of up to 159 and two-qubit gate counts of up to 765. The team used IBM’s Qiskit for circuit manipulation and noiseless simulation. For the hardware experiments, they used Quantinuum’s TKET to optimize the circuits for H1-1’s native gate set. They also ran the quantum circuits in an emulator of the H1-1 device.
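For readers curious what such a toolchain looks like in practice, here is a minimal sketch of a Qiskit-to-pytket-to-H-Series flow using the pytket-qiskit and pytket-quantinuum extensions. It is illustrative only, with a toy stand-in circuit rather than the team's actual constrained-QAOA ansatz, and running it against the H1-1 emulator requires Quantinuum account credentials.

```python
# Illustrative sketch: build in Qiskit, check noiselessly, compile with pytket
# for the H1-1 emulator. Not the JPMorgan Chase team's actual pipeline.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector
from pytket.extensions.qiskit import qiskit_to_tk            # pip install pytket-qiskit
from pytket.extensions.quantinuum import QuantinuumBackend   # pip install pytket-quantinuum

# Toy stand-in circuit for the ansatz described in the paper.
qc = QuantumCircuit(4)
qc.h(range(4))
for i in range(3):
    qc.cx(i, i + 1)

# Noiseless check in Qiskit before targeting hardware.
ideal_probs = Statevector(qc).probabilities_dict()

# Convert to pytket and compile to the H-Series native gate set,
# targeting the H1-1 emulator (requires Quantinuum credentials).
qc.measure_all()
tk_circ = qiskit_to_tk(qc)
backend = QuantinuumBackend(device_name="H1-1E")
compiled = backend.get_compiled_circuit(tk_circ, optimisation_level=2)
handle = backend.process_circuit(compiled, n_shots=100)
counts = backend.get_result(handle).get_counts()
```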
The JPMorgan Chase research team tested three algorithms: L-VQE, QAOA, and XY-QAOA. L-VQE was easy to execute on the hardware, but good parameters were difficult to find. For the other two algorithms, good parameters were easier to find, but the circuits were more expensive to execute. XY-QAOA provided the best results.
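For context, the reason XY-QAOA can respect the summary-length constraint natively is its XY ("ring") mixer, which only moves excitations between qubits and therefore never changes how many sentences are selected. Below is a generic sketch of one XY ring-mixer layer in Qiskit; it shows the standard construction, not the exact circuits executed in the study.

```python
# Generic XY ring-mixer layer: each RXX+RYY pair on the same two qubits
# implements exp(-i*beta*(XX+YY)), which swaps amplitude between neighbors
# without changing the total number of 1s (i.e., the summary length).
from qiskit import QuantumCircuit

def xy_ring_mixer(n_qubits: int, beta: float) -> QuantumCircuit:
    qc = QuantumCircuit(n_qubits)
    for i in range(n_qubits):
        j = (i + 1) % n_qubits      # ring connectivity
        qc.rxx(2 * beta, i, j)
        qc.ryy(2 * beta, i, j)
    return qc

layer = xy_ring_mixer(4, 0.3)
print(layer)
```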
Dr. Pistoia noted that constrained optimization problems such as extractive summarization are ubiquitous in banks, so finding high-quality solutions to them can positively impact customers across all lines of business. The optimization algorithm built for this experiment can also be applied in other industries (e.g., transportation), because the underlying mathematical problem is often the same.
Even with the quality of the results from this extractive summarization work, the NLP algorithm is not ready to roll out just yet. “Quantum computers are not yet that powerful, but we're getting closer,” Pistoia said. “These results demonstrate how algorithm and hardware progress is bringing the prospect of quantum advantage closer, which can be leveraged across many industries.”
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
Wherever you’re sitting right now, you’re probably surrounded by the fruits of modern semiconductor technology. Chips aren't only in your laptops and cell phones – they're in your car, your doorbell, your thermostat, and even your toaster. Importantly, semiconductor-based chips are also at the heart of most quantum computers.
While quantum computing holds transformative potential, it faces two major challenges: first, achieving low-error operations (say, one error in a billion), and second, scaling systems to enough qubits to address complex, real-world problems (say, on the order of a million). Quantinuum is proud to lead the industry in providing the lowest error rates in the business, but some continue to question whether our chosen modality, trapped-ion technology, can scale to meet these ambitious goals.
Why the doubt? Well, early demonstrations of trapped-ion quantum computers relied on bulky, expensive laser sources, large glass optics, and sizeable ion traps assembled by hand. By comparison, other modalities, such as semiconductor and superconductor qubits, resemble conventional computer chips. However, our quantum-charge-coupled device (QCCD) architecture shares the same path to scaling: at their core, our quantum computers are also chip-based. By leveraging modern microfabrication techniques, we can scale effectively while maintaining the advantage of low error rates that trapped ions provide.
Fortunately, we are at a point in history where QCCD quantum computing is already more compact compared to the early days. Traditional oversized laser sources have already been replaced by tiny diode lasers based on semiconductor chips, and our ion traps have already evolved from bulky, hand-assembled objects to traps fabricated on silicon wafers. The biggest remaining challenge lies in the control and manipulation of laser light.
For this next stage in our journey, we have turned to Infineon. Infineon not only builds some of the world’s leading classical computer chips, but they also bring in-house expertise in ion-trap quantum computing. Together, we are developing a chip with integrated photonics, bringing the control and manipulation of light fully onto our chips. This innovation drastically reduces system complexity and paves the way for serious scaling.
Since beginning work with Infineon, our pace of innovation has accelerated. Their expertise in fabricating waveguides, building grating couplers, and optimizing deposition processes for ultra-low optical loss gives us a significant advantage. In fact, Infineon has already developed deposition processes with the lowest optical losses in the world—a critical capability for building high-performance photonic systems.
Their impressive suite of failure-analysis tools – electron microscopes, secondary ion mass spectrometry (SIMS), focused ion beam (FIB) systems, atomic force microscopes (AFMs), and Kelvin probes – allows us to diagnose and correct failures in days rather than weeks. Some of these tools are in-line, meaning analysis can be performed without removing devices from the cleanroom environment, minimizing contamination risk and further accelerating development.
Together, we are demonstrating that QCCD quantum computing is fundamentally a semiconductor technology – just like conventional computers. While it may seem like a world away, quantum computing is now closer to home than ever.
As organizations assess the impact of quantum computing on cryptography, many focus on algorithm migration and timelines. But preparing for PQC requires a broader view—one that includes not just new algorithms, but also the quality of the inputs that support them, including randomness.
That’s why Quantinuum joined with partners Thales, Keyfactor, and IBM Consulting to form the QSafe 360 Alliance, a collaboration focused on helping organizations build crypto-agile security architectures that are ready for the quantum era. Together, we’ve released a whitepaper—Digital Trust & Cybersecurity After Quantum Computing—to offer practical guidance on post-quantum readiness, from discovery and planning to deployment.
The history of cryptography offers clear examples of what happens when randomness fails, and how long those issues can go unnoticed. The Polynonce attack, first disclosed in 2023, exploited weak randomness in Bitcoin transaction signatures and enabled the theft of at least $25 million across 773 wallets. The vulnerability persisted undetected for nine years. The Randstorm disclosure, published in 2022, revealed that biased key generation in widely used Bitcoin wallet libraries exposed millions of wallets created across a window of more than a decade (2011–2022). In both cases, the cryptographic algorithms functioned as designed; it was the randomness beneath them that silently failed, leaving companies vulnerable for many years.
Post-quantum cryptography (PQC) algorithms are being designed to resist attacks from quantum computers. But they still depend on random values to generate key material. That means any implementation of PQC inherits the same reliance on randomness—but without a way to prove its quality, that layer remains a potential vulnerability.
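As a simple reminder of that dependency, every key-generation call, classical or post-quantum, ultimately consumes random bytes from the platform's entropy source. The sketch below uses the widely available `cryptography` package for a classical elliptic-curve example purely as an illustration; PQC implementations follow the same pattern.

```python
# Key generation bottoms out in random bytes drawn from the OS CSPRNG.
# Classical example for illustration; PQC key generation consumes entropy
# from the platform in the same way.
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # draws from the OS entropy source
public_key = private_key.public_key()
```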
As security teams run cryptographic inventories, develop crypto-agility plans, or build software bills of materials (SBOMs) for PQC migration, it’s important to include randomness in that scope. No matter how strong the algorithm, poor randomness can undermine its security from the start.
Quantum Origin takes a fundamentally different approach to randomness quality to deliver proven randomness which improves key generation, algorithms, and the entire security stack. It leverages strong seeded randomness extractors—mathematical algorithms that transform even weak local entropy into provably secure output. These extractors are uniquely powered by a Quantum Seed, which is generated once by Quantinuum's quantum computers using quantum processes verified through Bell tests.
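To make the idea of a seeded extractor concrete, here is a toy Toeplitz-hashing extractor, a textbook construction in which a fixed seed defines a universal hash that condenses weak input bits into a shorter, near-uniform output. This is only an illustration of the concept; it is not Quantum Origin's implementation, extractor family, or parameters.

```python
# Toy seeded randomness extractor (Toeplitz hashing), for illustration only.
import numpy as np

def toeplitz_extract(weak_bits, seed_bits, m):
    """Condense n weak bits into m near-uniform bits using a seed of n + m - 1 bits."""
    n = len(weak_bits)
    assert len(seed_bits) == n + m - 1, "seed must contain n + m - 1 bits"
    # T[i, j] = seed_bits[i - j + n - 1]: constant along diagonals, i.e. a Toeplitz matrix.
    T = np.array([[seed_bits[i - j + n - 1] for j in range(n)] for i in range(m)],
                 dtype=np.uint8)
    return (T @ np.asarray(weak_bits, dtype=np.uint8)) % 2

rng = np.random.default_rng(0)
weak = rng.integers(0, 2, size=256)            # stand-in for weak local entropy
seed = rng.integers(0, 2, size=256 + 128 - 1)  # stand-in for the quantum-generated seed
print(toeplitz_extract(weak, seed, 128))
```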
This one-time quantum generation enables Quantum Origin as a software-only solution designed for maximum flexibility. It works with existing infrastructure—on cloud systems, on-premises environments, air-gapped networks, and embedded platforms—without requiring special hardware or a network connection. It's also validated to NIST SP 800-90B standards (Entropy Source Validation #E214). This approach strengthens today’s deployments of AES, RSA, ECC, and other algorithms, and lays a secure foundation for implementing the NIST PQC algorithms.
The QSafe 360 Alliance whitepaper outlines the path to post-quantum readiness, emphasizing crypto-agility as a guiding principle: the ability to adapt cryptographic systems without major disruption, from randomness to key generation to algorithmic strength.
For security architects, CISOs, and cryptographic engineering teams building their post-quantum transition strategies, randomness is not a peripheral concern. It is a starting point.
The QSafe 360 Alliance whitepaper offers valuable guidance on structuring a comprehensive PQC journey. As you explore that framework, consider how proven randomness—available today—will help strengthen your security posture from the ground up.
Our quantum algorithms team has been hard at work exploring solutions to continually optimize our system’s performance. Recently, they’ve invented a novel technique, called the Quantum Paldus Transform (QPT), that can offer significant resource savings in future applications.
The transform takes complex representations and makes them simple by mapping them into a different “basis.” This is like looking at a cube from one angle, then rotating it and seeing just a square instead. Transformations like this save resources: the more complex your problem looks, the more expensive it is to represent and manipulate on qubits.
While it might sound like magic, transforms are a commonly used tool in science and engineering. Transforms simplify problems by reshaping them into something that is easier to deal with, or that provides a new perspective on the situation. For example, sound engineers use Fourier transforms every day to look at complex musical pieces in terms of their frequency components. Electrical engineers use Laplace transforms; people who work in image processing use the Abel transform; physicists use the Legendre transform, and so on.
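As a small, purely illustrative example of that idea, a signal that looks complicated as a time series can collapse to just a couple of numbers in the frequency basis:

```python
# A messy-looking time series is just two spikes in the frequency basis --
# the same "change of perspective" that makes transforms so useful.
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
dominant = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(dominant))   # ~[50.0, 120.0] Hz: the whole signal captured in two components
```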
In a new paper outlining the necessary tools to implement the QPT, Dr. Nathan Fitzpatrick and Mr. Jędrzej Burkat explain how the QPT will be widely applicable in quantum computing simulations, spanning areas like molecular chemistry, materials science, and semiconductor physics. The paper also describes how the algorithm can lead to significant resource savings by offering quantum programmers a more efficient way of representing problems on qubits.
The efficiency of the QPT stems from its use of one of the most profound findings in the field of physics: that symmetries drive the properties of a system.
While the average person can “appreciate” symmetry, for example in design or aesthetics, physicists understand symmetry as a much more profound element present in the fabric of reality. Symmetries are like the universe’s DNA; they lead to conservation laws, which are the most immutable truths we know.
Back in the 1910s, when women were largely prohibited from practicing physics, one of the great mathematicians of the century, Emmy Noether, turned her attention to the field when she was asked to help Einstein with his work. In her attempt to solve a problem Einstein had encountered, Dr. Noether realized that the most powerful and fundamental laws of physics, such as “energy can neither be created nor destroyed,” are in fact the consequence of a deep simplicity – symmetry – hiding behind the curtains of reality. Dr. Noether’s theorem would have a profound effect on the trajectory of physics.
Among the many direct consequences of Noether’s theorem is a longstanding tradition amongst physicists of treating symmetry thoughtfully. Because of its role in the fabric of our universe, carefully considering the symmetries of a system often leads to invaluable insights.
Many of the systems we are interested in simulating with quantum computers are, at their heart, systems of electrons. Whether we are looking at how electrons move in a paired dance inside superconductors, or how they form orbitals and bonds in a chemical system, the motion of electrons is at the core.
Seven years after Noether published her blockbuster results, Wolfgang Pauli made waves when he published the work describing his exclusion principle, which relies heavily on symmetry to explain basic tenets of quantum theory. Pauli’s principle has enormous consequences: for starters, it explains why the objects we interact with every day are solid even though atoms are mostly empty space, and it underpins the rules of bonds, orbitals, and all of chemistry, among other things.
It is this symmetry, coupled with a deep respect for its consequences, that led our team at Quantinuum to the discovery published today.
In their work, they considered the act of designing quantum algorithms, and how one’s design choices may lead to efficiency or inefficiency.
When you design quantum algorithms, there are many choices you can make that affect the final result. Extensive work goes into optimizing each individual step of an algorithm: a cyclical process of finding improvements to subroutines and then bringing everything back together. The significant cost and time this requires is a limiting factor in optimizing many algorithms of interest.
This is again where symmetry comes into play. The authors realized that by better exploiting the deepest symmetries of the problem, they could make the entire edifice more efficient, from state preparation to readout. Over the course of a few years, a team led by Dr. Fitzpatrick and his colleague Jędrzej Burkat slowly polished their approach into a full algorithm for performing the QPT.
The QPT functions by using Pauli’s symmetry to discard unimportant details and strip the problem down to its bare essentials. Starting with a Paldus transform allows the algorithm designer to enjoy knock-on effects throughout the entire structure, making it overall more efficient to run.
“It’s amazing to think how something we discovered one hundred years ago is making quantum computing easier and more efficient,” said Dr. Nathan Fitzpatrick.
Ultimately, this innovation will lead to more efficient quantum simulation. Projects we believed to still be many years out can now be realized in the near term.
The discovery of the Quantum Paldus Transform is a powerful reminder that enduring ideas—like symmetry—continue to shape the frontiers of science. By reaching back into the fundamental principles laid down by pioneers like Noether and Pauli, and combining them with modern quantum algorithm design, Dr. Fitzpatrick and Mr. Burkat have uncovered a tool with the potential to reshape how we approach quantum computation.
As quantum technologies continue their crossover from theoretical promise to practical implementation, innovations like this will be key in unlocking their full potential.