Quantinuum Launches the Most Benchmarked Quantum Computer in the World and Publishes All the Data

New H2-1 shows strong performance across 15 benchmarks while expanding to 32 qubits and reaching a new Quantum Volume record of 65,536

May 9, 2023

Quantinuum’s new H2-1 quantum computer proves that trapped-ion architecture, which is well-known for achieving outstanding qubit quality and gate fidelity, is also built for scale – and Quantinuum’s benchmarking team has the data to prove it. 

The bottom line: the new System Model H2 surpasses the H1 in complexity and qubit capacity while maintaining all the capabilities and fidelities of the previous generation – an astounding accomplishment when developing successive generations of quantum systems.

The newest entry in the H-Series is starting off with 32 qubits whereas H1 started with 10. H1 underwent several upgrades, ultimately reaching a 20-qubit capacity, and H2 is poised to pick up the torch and run with it. Staying true to the ultimate goal of increasing performance, H2 does not simply increase the qubit count but has already achieved a higher Quantum Volume than any other quantum computer ever built: 2^16, or 65,536. 

Most importantly for the growing number of industrial and academic research institutions using the H-Series, benchmarking data shows that none of these hardware changes reduced the high performance levels achieved by the System Model H1. That’s a key challenge in scaling quantum computers – preserving performance while adding qubits. The error rate on fully connected circuits is comparable to the H1, even with the significant increase in qubit count. Indeed, H2 exceeds H1 on multiple performance metrics: single-qubit gate error, two-qubit gate error, measurement crosstalk, and SPAM. 

Key to the engineering advances made in the second-generation H-Series quantum computer are reductions in the physical resources required per qubit. To get the most out of the quantum charge-coupled device (QCCD) architecture, which the H-Series is built on, the hardware team at Quantinuum introduced a series of component innovations to eliminate some performance limitations of the first generation in areas such as ion loading, voltage sources, and the delivery of high-precision radio signals to control and manipulate ions.

The research paper, “A Race Track Trapped-Ion Quantum Processor,” details all of these engineering advances, and exactly what impacts they have on the computing performance of the machine. The paper includes results from component and system-level benchmarking tests that document the new machine’s capabilities at launch. These benchmarking metrics, combined with the company’s advances in topological qubits, represent a new phase of quantum computing.

Advancing Beyond Classical Simulation

In addition to the expanded capabilities, the new design provides operational efficiencies and a clear growth path.

At launch, H2’s operations can still be emulated classically. However, Quantinuum released H2 at a small percentage of its full capacity. This new machine has the ability to upgrade to more qubits and gate zones, pushing it past the level where classical computers can hope to keep up.

Increased Efficiency in New Trap Design

This new generation quantum processor represents the first major trap upgrade in the H-Series. One of the most significant changes is the new oval (or racetrack) shape of the ion trap itself, which allows for a more efficient use of space and electrical control signals. 

One key engineering challenge presented by this new design was the ability to route signals beneath the top metal layer of the trap. The hardware team addressed this by using radiofrequency (RF) tunnels. These tunnels allow inner and outer voltage electrodes to be implemented without being directly connected on the top surface of the trap, which is the key to making truly two-dimensional traps that will greatly increase the computational speed of these machines. 

The new trap also features voltage “broadcasting,” which saves control signals by tying multiple DC electrodes within the trap to the same external signal. This is accomplished in “conveyor belt” regions on each side of the trap where ions are stored, improving electrode control efficiency by requiring only three voltage signals for 20 wells on each side of the trap.

The other significant new component of H2 is the magneto-optical trap (MOT), which replaces the effusive atomic oven that H1 used. The MOT reduces the startup time for H2 by pre-cooling the neutral atoms before they are loaded into the trap, which will be crucial for very large machines that use large numbers of qubits. 

Industry-leading Results from 15 Benchmarking Tests

Quantinuum has always valued transparency and supported its performance claims with publicly available data. 

To quantify the impact of these hardware and design improvements, Quantinuum ran 15 tests that measured component operations, overall system performance and application performance. The complete results from the tests are included in the new research paper. 

The hardware team ran four system-level benchmark tests that included more complex, multi-qubit circuits to give a broader picture of overall performance. These tests were:

  • Mirror benchmarking: A scalable way to benchmark arbitrary quantum circuits.
  • Quantum volume: A popular system-level test with a well-established construction that is comparable across gate-based quantum computers.
  • Random circuit sampling: A computational task of sampling the output distributions of random quantum circuits.
  • Entanglement certification in Greenberger-Horne-Zeilinger (GHZ) states: A demanding test of qubit coherence that is widely measured and reported across a variety of quantum hardware.
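Of these, the quantum volume test has a particularly concrete pass criterion: "heavy" outputs are the bitstrings whose ideal probability exceeds the median of the ideal output distribution, and a circuit size passes when the measured heavy-output fraction convincingly exceeds 2/3. A minimal sketch of that bookkeeping (the helper name and the toy distributions below are ours, purely illustrative):

```python
from statistics import median

def heavy_output_probability(ideal_probs, counts):
    """Fraction of measured shots landing on 'heavy' bitstrings,
    i.e. those whose ideal probability exceeds the median."""
    med = median(ideal_probs.values())
    heavy = {s for s, p in ideal_probs.items() if p > med}
    shots = sum(counts.values())
    return sum(c for s, c in counts.items() if s in heavy) / shots

# Illustrative 2-qubit model circuit: ideal distribution vs. device counts
ideal = {"00": 0.50, "01": 0.30, "10": 0.15, "11": 0.05}
measured = {"00": 40, "01": 30, "10": 20, "11": 10}
print(heavy_output_probability(ideal, measured))  # 0.7, above the 2/3 bar
```

In the full protocol this is repeated over many random model circuits at each width/depth, with a statistical confidence requirement on the 2/3 threshold.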

H2 showed state-of-the-art performance on each of these system-level tests, but the results of the GHZ test were particularly impressive. The verification of the globally entangled GHZ state requires a relatively high fidelity, which becomes harder and harder to achieve with larger numbers of qubits. 

With H2’s 32 qubits and precision control of the environment in the ion trap, Quantinuum researchers were able to achieve an entangled state of 32 qubits with a fidelity of 82.0(7)%, setting a new world record.
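For context, GHZ fidelity is conventionally estimated from the populations of the all-zeros and all-ones states plus the contrast of a parity oscillation obtained by rotating every qubit before readout; a fidelity above 50% certifies genuine multipartite entanglement. A sketch with illustrative inputs (not the experiment's raw data):

```python
def ghz_fidelity(p_all_zeros, p_all_ones, parity_contrast):
    """Standard GHZ fidelity estimator:
    F = (P(00...0) + P(11...1)) / 2 + C / 2,
    where C is the parity-oscillation contrast (the coherence term)."""
    return (p_all_zeros + p_all_ones) / 2 + parity_contrast / 2

# Illustrative values: near-equal extremal populations plus strong coherence
print(ghz_fidelity(0.42, 0.42, 0.80))  # ~0.82, above the 0.5 threshold
```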

In addition to the system level tests, the Quantinuum hardware team ran these component benchmark tests:

  • SPAM experiment
  • Single-qubit gate randomized benchmarking
  • Two-qubit gate randomized benchmarking
  • Two-qubit SU(4) gate randomized benchmarking
  • Two-qubit parameterized gate randomized benchmarking
  • Measurement/reset crosstalk benchmarking 
  • Interleaved transport randomized benchmarking

The paper includes results from those tests as well as results from these application benchmarks:

  • Hamiltonian simulation
  • Quantum Approximate Optimization Algorithm 
  • Error correction: repetition code
  • Holographic quantum dynamics simulation 

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

Blog
December 5, 2024
Quantum computing is accelerating

Particle accelerator projects like the Large Hadron Collider (LHC) don’t just smash particles - they also power the invention of some of the world’s most impactful technologies. A favorite example is the world wide web, which was developed for particle physics experiments at CERN.

Tech designed to unlock the mysteries of the universe has brutally exacting requirements – and it is this boundary pushing, plus billion-dollar budgets, that has led to so much innovation. 

For example, X-rays are used in accelerators to measure the chemical composition of the accelerator products and to monitor radiation. The understanding developed to create those technologies was then applied to help us build better CT scanners, reducing the x-ray dosage while improving the image quality. 

Stories like this are common in accelerator physics, or High Energy Physics (HEP). Scientists and engineers working in HEP have been early adopters and/or key drivers of innovations in advanced cancer treatments (using proton beams), machine learning techniques, robots, new materials, cryogenics, data handling and analysis, and more. 

A key strand of HEP research aims to make accelerators simpler and cheaper, and one piece of infrastructure ripe for improvement is their computing environments. 

CERN itself has said: “CERN is one of the most highly demanding computing environments in the research world... From software development, to data processing and storage, networks, support for the LHC and non-LHC experimental programme, automation and controls, as well as services for the accelerator complex and for the whole laboratory and its users, computing is at the heart of CERN’s infrastructure.” 

With annual data generated by accelerators in excess of exabytes (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the HEP community is interested in quantum computing, which offers real solutions to some of their hardest problems. 

As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable.”

The HEP community’s interest in quantum computing is growing. In recent years, their scientists have been looking carefully at how quantum computing could help them, publishing a number of papers discussing the challenges and requirements for quantum technology to make a dent (here’s one example, and here’s the arXiv version). 

In the past few months, what was previously theoretical is becoming a reality. Several groups published results using quantum machines to tackle something called “Lattice Gauge Theory”, which is a type of math used to describe a broad range of phenomena in HEP (and beyond). Two papers came from academic groups using quantum simulators, one using trapped ions and one using neutral atoms. Another group, including scientists from Google, tackled Lattice Gauge Theory using a superconducting quantum computer. Taken together, these papers indicate a growing interest in using quantum computing for High Energy Physics, beyond simple one-dimensional systems which are more easily accessible with classical methods such as tensor networks.

We have been working with DESY, one of the world’s leading accelerator centers, to help make quantum computing useful for their work. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center that operates, develops, and constructs particle accelerators, and is part of the worldwide computer network used to store and analyze the enormous flood of data that is produced by the LHC in Geneva.  

Our first publication from this partnership describes a quantum machine learning technique for untangling data from the LHC, finding that in some cases the quantum approach was indeed superior to the classical approach. More recently, we used Quantinuum System Model H1 to tackle Lattice Gauge Theory (LGT), as it’s a favorite contender for quantum advantage in HEP.

Lattice Gauge Theories are one approach to solving what are more broadly referred to as “quantum many-body problems”. Quantum many-body problems lie at the border of our knowledge in many different fields: the electronic structure problem, which impacts chemistry and pharmaceuticals; the quest to understand and engineer new material properties, such as light-harvesting materials; basic research such as high energy physics, which aims to understand the fundamental constituents of the universe; and condensed matter physics, where our understanding of phenomena like high-temperature superconductivity is still incomplete.

The difficulty in solving problems like this – analytically or computationally – is that the problem complexity grows exponentially with the size of the system. For example, there are 36 possible configurations of two six-faced dice (1 and 1, 1 and 2, 1 and 3, and so on), while for ten dice there are more than sixty million configurations.

Quantum computing may be very well-suited to tackling problems like this, due to a quantum processor’s similar information density scaling – with the addition of a single qubit to a QPU, the information the system contains doubles. Our 56-qubit System Model H2, for example, can hold quantum states that require 128*(2^56) bits worth of information to describe (with double-precision numbers) on a classical supercomputer, which is more information than the biggest supercomputer in the world can hold in memory.
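The arithmetic behind both comparisons is easy to check; the sketch below assumes 16 bytes per amplitude, matching the 128-bit double-precision complex numbers mentioned above:

```python
# Classical configurations of n six-faced dice grow as 6**n ...
assert 6 ** 2 == 36             # two dice
assert 6 ** 10 > 60_000_000     # ten dice: over sixty million

# ... while a statevector of n qubits needs 2**n complex amplitudes,
# each 128 bits (16 bytes) in double precision.
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

# 56 qubits -> 2**60 bytes, roughly 1.15 exabytes
print(statevector_bytes(56) / 1e18)
```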

The joint team made significant progress in approaching the Lattice Gauge Theory corresponding to Quantum Electrodynamics, the theory of light and matter. For the first time, they were able to study the full wavefunction of a two-dimensional confining system with gauge fields and dynamical matter fields on a quantum processor. They were also able to visualize the confining string and the string-breaking phenomenon at the level of the wavefunction, across a range of interaction strengths.

The team approached the problem starting with the definition of the Hamiltonian using the InQuanto software package, and utilized the reusable protocols of InQuanto to compute both projective measurements and expectation values. InQuanto allowed the easy integration of measurement reduction techniques and scalable error mitigation techniques. Moreover, the emulator and hardware experiments were orchestrated by the Nexus online platform.

In one section of the study, a circuit with 24 qubits and more than 250 two-qubit gates was reduced to a width of 15 qubits thanks to the automatic qubit-reuse and mid-circuit-measurement compilation implemented in TKET.

This work paves the way towards using quantum computers to study lattice gauge theories in higher dimensions, with the goal of one day simulating the full three-dimensional Quantum Chromodynamics theory underlying the nuclear sector of the Standard Model of particle physics. Being able to simulate full 3D quantum chromodynamics will undoubtedly unlock many of Nature’s mysteries, from the Big Bang to the interior of neutron stars, and is likely to lead to applications we haven’t yet dreamed of. 

Blog
November 21, 2024
InQuanto Integrates NVIDIA cuQuantum for Native GPU Support and Prepares for the Era of Quantum Supercomputing

Chemistry plays a central role in the modern global economy, as it has for centuries. From Antoine Lavoisier to Alessandro Volta, Marie Curie to Venkatraman Ramakrishnan, pioneering chemists drove progress in fields such as combustion, electrochemistry, and biochemistry. They contributed to our mastery of critical 21st century materials such as biodegradable plastics, semiconductors, and life-saving pharmaceuticals. 

Advances in high-performance computing (HPC) and AI have brought fundamental and industrial science ever more within the scope of methods like data science and predictive analysis. In modern chemistry, it has become routine for research to be aided by computational models run in silico. Yet, due to their intrinsically quantum mechanical nature, “strongly correlated” chemical systems – those involving strongly interacting electrons or highly interdependent molecular behaviors – prove extremely hard to accurately simulate using classical computers alone. Quantum computers running quantum algorithms are designed to meet this need. Strongly correlated systems turn up in potential applications such as smart materials, high-temperature superconductors, next-generation electronic devices, batteries and fuel cells, revealing the economic potential of extending our understanding of these systems, and the motivation to apply quantum computing to computational chemistry. 

For senior business and research leaders driving value creation and scientific discovery, a critical question is how the introduction of quantum computers will affect the trajectory of computational approaches to fundamental and industrial science.

Introducing InQuanto v4.0

This is the exciting context for our announcement of InQuanto v4.0, the latest iteration of our computational chemistry platform for quantum computers. Developed over many years in close partnership with computational chemists and materials scientists, InQuanto has become an essential tool for teams using the most advanced methods for simulating molecular and material systems. InQuanto v4.0 is packed with powerful updates, including the capability to incorporate NVIDIA’s tensor network methods for large-scale classical simulations supported by graphical processing units (GPUs). 

When researching chemistry on quantum computers, we use classical HPC to perform tasks such as benchmarking, and for classical pre- and post-processing with computational chemistry methods such as density functional theory. This powerful hybrid quantum-classical combination with InQuanto accelerated our work with partners such as BMW Group, Airbus, and Honeywell. Global businesses and national governments alike are gearing up for the use of such hybrid “quantum supercomputers” to become standard practice. 

In a recent technical blog post, we explored the rapid development and deployment of InQuanto for research and enterprise users, offering insights for combining quantum and high-performance classical methods with only a few lines of code. Here, we provide a higher-level overview of the value InQuanto brings to fundamental and industrial research teams. 

InQuanto v4.0 – under the hood

InQuanto v4.0 is the most powerful version to date of our advanced quantum computational chemistry platform. It supports our users in applying quantum and classical computing methods to problems in chemistry and, increasingly, adjacent fields such as condensed matter physics.

Like previous versions of InQuanto, this one offers state-of-the-art algorithms, methods, and error handling techniques out of the box. Quantum error correction and detection have enabled rapid progress in quantum computing, such as groundbreaking demonstrations in partnership with Microsoft, in April and September 2024, of highly reliable “logical qubits”. Qubits are the core information-carrying components of a quantum computer; by encoding a logical qubit across an ensemble of physical qubits, errors can be detected and corrected, allowing more complex problems to be tackled while still producing accurate results. InQuanto continues to offer leading-edge quantum error detection protocols as standard and supports users in exploring the potential of algorithms for fault-tolerant machines.

InQuanto v4.0 also marks the significant step of introducing native support for tensor networks using GPUs to accelerate simulations. In 2022, Quantinuum and NVIDIA teamed up on one of the quantum computing industry’s earliest quantum-classical collaborations. InQuanto v4.0 introduces classical tensor network methods via an interface with NVIDIA's cuQuantum SDK. Interfacing with cuQuantum enables the simulation of many quantum circuits via the use of GPUs for applications in chemistry that were previously inaccessible, particularly those with larger numbers of qubits.

“Hybrid quantum-classical supercomputing is accelerating quantum computational chemistry research. With Quantinuum’s InQuanto v4.0 platform and NVIDIA’s cuQuantum SDK, InQuanto users now have access to unique tensor-network-based methods, enabling large-scale and high-precision quantum chemistry simulations” - Tim Costa, Senior Director of HPC and Quantum Computing at NVIDIA

We are also responding to our users’ needs for more robust, enterprise-grade management of applications and data, by incorporating InQuanto into Quantinuum Nexus. This integration makes it far easier and more efficient to build hybrid workflows, decode and store data, and use powerful analytical methods to accelerate scientific and technical progress in critical fields in natural science.

Adding further capabilities, we recently announced our integration of InQuanto with Microsoft’s Azure Quantum Elements (AQE), allowing users to seamlessly combine AQE’s state-of-the-art HPC and AI methods with the enhanced quantum capabilities of InQuanto in a single workflow. The first end-to-end workflow using HPC, AI and quantum computing was demonstrated by Microsoft using AQE and Quantinuum Systems hardware, achieving chemical accuracy and demonstrating the advantage of logical qubits compared to physical qubits in modeling a catalytic reaction.

Where InQuanto takes us next

In the coming years, we expect to see scientific and economic progress using the powerful combination of quantum computing, HPC, and artificial intelligence. Each of these computing paradigms contributes to our ability to solve important problems. Together, their combined impact is far greater than the sum of their parts, and we recognize that these have the potential to drive valuable computational innovation in industrial use-cases that really matter, such as in energy generation, transmission and storage, and in chemical processes essential to agriculture, transport, and medicine.

Building on our recent hardware roadmap announcement, which supports scientific quantum advantage and a commercial tipping point in 2029, we are demonstrating the value of owning and building out the full quantum computing stack with a unified goal of accelerating quantum computing, integrating with HPC and AI resources where it shows promise, and using the power of the “quantum supercomputer” to make a positive difference in fundamental and industrial chemistry and related domains.

In close collaboration with our customers, we are driving towards systems capable of supporting quantum advantage and unlocking tangible and significant business value.

To access InQuanto today, including Quantinuum Systems and third-party hardware and emulators, visit: https://www.quantinuum.com/products-solutions/inquanto 

To get started with Quantinuum Nexus, which meets all your quantum computing needs across Quantinuum Systems and third-party backends, visit: https://www.quantinuum.com/products-solutions/nexus 

To find out more and access Quantinuum Systems, visit: https://www.quantinuum.com/products-solutions/quantinuum-systems 

Blog
November 19, 2024
Introducing InQuanto v4.0

Quantinuum is excited to announce the release of InQuanto™ v4.0, the latest version of our advanced quantum computational chemistry software. This update introduces new features and significant performance improvements, designed to help both industry and academic researchers accelerate their computational chemistry work.

If you're new to InQuanto or want to learn more about how to use it, we encourage you to explore our documentation.

InQuanto v4.0 is being released alongside Quantinuum Nexus, our cloud-based platform for quantum software. Users with Nexus access can leverage the `inquanto-nexus` extension to, for example, take advantage of multiple available backends and seamless cloud storage.

In addition, InQuanto v4.0 introduces enhancements that allow users to run larger chemical simulations on quantum computers. Systems can be easily imported from classical codes using the widely supported FCIDUMP file format. These fermionic representations are then efficiently mapped to qubit representations, benefiting from performance improvements in InQuanto operators. For systems too large for quantum hardware experiments, users can now utilize the new `inquanto-cutensornet` extension to run simulations via tensor networks.

These updates enable users to compile and execute larger quantum circuits with greater ease, while accessing powerful compute resources through Nexus.

Quantinuum Nexus 

InQuanto v4.0 is fully integrated with Quantinuum Nexus via the `inquanto-nexus` extension. This integration allows users to easily run experiments across a range of quantum backends, from simulators to hardware, and access results stored in Nexus cloud storage.

Results can be annotated for better searchability and seamlessly shared with others. Nexus also offers the Nexus Lab, which provides a preconfigured Jupyter environment for compiling circuits and executing jobs. The Lab is set up with InQuanto v4.0 and a full suite of related software, enabling users to get started quickly. 

Enhanced Operator Performance

The `inquanto.mappings` submodule has received a significant performance enhancement in InQuanto v4.0. By integrating a set of operator classes written in C++, the team has increased the performance of the module beyond that of equivalent methods in other open-source packages. 

Like any other Python package, InQuanto can benefit from delegating computationally intensive tasks to compiled languages such as C++. This approach has been applied to the qubit encoding functions of the `inquanto.mappings` submodule, in which fermionic operators are mapped to their qubit operator equivalents. One such qubit encoding scheme is the Jordan-Wigner (JW) transformation. Using JW encoding as a benchmark task, the integration of C++ operator classes in InQuanto v4.0 has yielded an execution-time speed-up of two and a half times over open-source competitors (Figure 1).


Figure 1. Performance comparison of Jordan-Wigner (JW) operator mappings for the LiH molecule in several basis sets of increasing size. 
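To make the encoding itself concrete, here is a toy pure-Python version of the Jordan-Wigner map for a single annihilation operator (this is an illustration of the math only, not InQuanto's C++-accelerated implementation):

```python
def jordan_wigner_annihilation(j, n_modes):
    """Map the fermionic annihilation operator a_j on n_modes modes to a
    sum of Pauli strings: a_j = Z_0 ... Z_{j-1} (X_j + i Y_j) / 2.
    Returns [(coefficient, pauli_string), ...]."""
    z_string = "Z" * j                 # parity string on preceding modes
    pad = "I" * (n_modes - j - 1)      # identities on remaining modes
    return [
        (0.5, z_string + "X" + pad),
        (0.5j, z_string + "Y" + pad),
    ]

# a_1 on three modes: Z on mode 0, (X + iY)/2 on mode 1, identity on mode 2
print(jordan_wigner_annihilation(1, 3))
# [(0.5, 'ZXI'), (0.5j, 'ZYI')]
```

The Z string is what makes the mapping expensive for large systems, and it is exactly this kind of symbolic Pauli-algebra manipulation that benefits from the C++ backend.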

This is a substantial increase in performance that all users will benefit from. InQuanto users still interact with the familiar Python classes such as `FermionOperator` and `QubitOperator` in v4.0. However, when the `mappings` module is called, the Python operator objects are converted to C++ equivalents before the qubit encoding procedure and converted back afterwards (Figure 2). With future total integration of C++ operator classes, we can remove the conversion step and push the performance of the `mappings` module further: tests, once again using the JW mapping scheme, show a 40-fold execution-time speed-up compared to open-source competitors (Figure 1).


Figure 2. Representation of the conversion step from Python objects to C++ objects in the qubit encoding processes handled by the `inquanto.mappings` submodule in InQuanto v4.0.

Efficient classical pre-processing implementations such as this are a crucial step on the path to quantum advantage. As the number of physical qubits available on quantum computers increases, so will the size and complexity of the physical systems that can be simulated. To support this hardware upscaling, computational bottlenecks including those associated with the classical manipulation of operator objects must be alleviated. Aside from keeping pace with hardware advancements, it is important to enlarge the tractable system size in situations that do not involve quantum circuit execution, such as tensor network circuit simulation and resource estimation.

Leveraging Tensor Networks

Users with access to GPU capabilities can now take advantage of tensor networks to accelerate simulations in InQuanto v4.0. This is made possible by the `inquanto-cutensornet` extension, which interfaces InQuanto with the NVIDIA® cuTensorNet library. The `inquanto-cutensornet` extension leverages the `pytket-cutensornet` library, which facilitates the conversion of `pytket` circuits into tensor networks to be evaluated using the NVIDIA® cuTensorNet library. This extension increases the size limit of circuits that can be simulated for chemistry applications. Future work will seek to integrate this functionality with our Nexus platform, allowing InQuanto users to employ the extension without requiring access to their own local GPU resources.

Here we demonstrate the use of the `CuTensorNetProtocol` passed to a VQE experiment. For the sake of brevity, we use the `get_system` method of `inquanto.express` to swiftly define the system, in this case H2 in the STO-3G basis set.

from inquanto.algorithms import AlgorithmVQE
from inquanto.ansatzes import FermionSpaceAnsatzUCCD
from inquanto.computables import ExpectationValue, ExpectationValueDerivative
from inquanto.express import get_system
from inquanto.mappings import QubitMappingJordanWigner
from inquanto.minimizers import MinimizerScipy
from inquanto.extensions.cutensornet import CuTensorNetProtocol


fermion_hamiltonian, space, state = get_system("h2_sto3g.h5")
qubit_hamiltonian = fermion_hamiltonian.qubit_encode()
ansatz = FermionSpaceAnsatzUCCD(space, state, QubitMappingJordanWigner())
expectation_value = ExpectationValue(ansatz, qubit_hamiltonian)
gradient_expression = ExpectationValueDerivative(
    ansatz, qubit_hamiltonian, ansatz.free_symbols_ordered()
)


protocol_tn = CuTensorNetProtocol()
vqe_tn = (
    AlgorithmVQE(
        objective_expression=expectation_value,
        gradient_expression=gradient_expression,
        minimizer=MinimizerScipy(),
        initial_parameters=ansatz.state_symbols.construct_zeros(),
    )
    .build(protocol_objective=protocol_tn, protocol_gradient=protocol_tn)
    .run()
)
print(vqe_tn.generate_report()["final_value"])

# -1.136846575472054

The inherently modular design of InQuanto allows for the seamless integration of new extensions and functionality. For instance, a user can take existing code that uses `SparseStatevectorProtocol` and swap in the protocol from `inquanto-cutensornet` to enable GPU acceleration. It is worth noting that the extension is also compatible with shot-based simulation via the `CuTensorNetShotsBackend` provided by `pytket-cutensornet`.

“Hybrid quantum-classical supercomputing is accelerating quantum computational chemistry research,” said Tim Costa, Senior Director at NVIDIA®. “With Quantinuum’s InQuanto v4.0 platform and NVIDIA’s cuQuantum SDK, InQuanto users now have access to unique tensor-network-based methods, enabling large-scale and high-precision quantum chemistry simulations.”

Classical Code Interface

As demonstrated by our `inquanto-pyscf` extension, we want InQuanto to interface easily with classical codes. In InQuanto v4.0, we have streamlined integration with other classical codes such as Gaussian and Psi4. All that is required is an FCIDUMP file, a common output format for classical codes that encodes all the one- and two-electron integrals required to set up a CI Hamiltonian. Users can bring their system from classical codes by passing an FCIDUMP file to the `FCIDumpRestricted` class and calling the `to_ChemistryRestrictedIntegralOperator` method or its unrestricted counterpart, depending on how they wish to treat spin. The resulting InQuanto operator object can then be used within their workflow as usual.
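For readers unfamiliar with the format, an FCIDUMP file is a Fortran namelist header (NORB, NELEC, ...) followed by records of the form `value i j k l`, where all-zero indices hold the core energy. A toy reader in plain Python, with made-up numbers, purely to illustrate the format (`FCIDumpRestricted` handles all of this for you):

```python
def parse_fcidump(text):
    """Toy FCIDUMP reader: returns (norb, core_energy, integrals), where
    integrals maps (i, j, k, l) index tuples to float values."""
    header, _, body = text.partition("&END")        # namelist terminator
    norb = int(header.upper().split("NORB=")[1].split(",")[0])
    core_energy, integrals = 0.0, {}
    for line in body.strip().splitlines():
        parts = line.split()
        value = float(parts[0])
        idx = tuple(int(p) for p in parts[1:5])
        if idx == (0, 0, 0, 0):
            core_energy = value                     # e.g. nuclear repulsion
        else:
            integrals[idx] = value
    return norb, core_energy, integrals

# Illustrative two-orbital file; the numbers are placeholders, not real data
sample = """&FCI NORB=2, NELEC=2, MS2=0,
&END
 0.6744 1 1 1 1
 0.1813 1 1 2 2
-1.2524 1 1 0 0
 0.7137 0 0 0 0"""
norb, e_core, ints = parse_fcidump(sample)
print(norb, e_core, ints[(1, 1, 1, 1)])  # 2 0.7137 0.6744
```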

Exposing TKET Compilation

Users can experiment with TKET’s latest circuit compilation tools in a straightforward manner with InQuanto v4.0. Circuit compilation now only occurs within the `inquanto.protocols` module. This allows users to define which optimization passes to run before and/or after the backend specific defaults, all in one line of code. Circuit compilation is a crucial step in all InQuanto workflows. As such, this structural change allows us to cleanly integrate new functionality through extensions such as `inquanto-nexus` and `inquanto-cutensornet`. Looking forward, beyond InQuanto v4.0, this change is a positive step towards bringing quantum error correction to InQuanto.

Conclusion

InQuanto v4.0 expands the size of the chemical systems that a user can simulate on quantum computers. Users can import larger, carefully constructed systems from classical codes and encode them to optimized quantum circuits. They can then evaluate these circuits on quantum backends with `inquanto-nexus` or execute them as tensor networks using `inquanto-cutensornet`. We look forward to seeing how our users leverage InQuanto v4.0 to demonstrate the increasing power of quantum computational chemistry. If you are curious about InQuanto and want to read further, our initial release blog post is very informative, or visit the InQuanto website.

How to Access InQuanto

If you are interested in trying InQuanto, please request access or a demo at inquanto@quantinuum.com
