Introducing InQuanto v4.0

The latest version of our advanced quantum computational chemistry platform

November 19, 2024

Quantinuum is excited to announce the release of InQuanto™ v4.0, the latest version of our advanced quantum computational chemistry software. This update introduces new features and significant performance improvements, designed to help both industry and academic researchers accelerate their computational chemistry work.

If you're new to InQuanto or want to learn more about how to use it, we encourage you to explore our documentation.

InQuanto v4.0 is being released alongside Quantinuum Nexus, our cloud-based platform for quantum software. Users with Nexus access can leverage the `inquanto-nexus` extension to, for example, take advantage of multiple available backends and seamless cloud storage.

In addition, InQuanto v4.0 introduces enhancements that allow users to run larger chemical simulations on quantum computers. Systems can be easily imported from classical codes using the widely supported FCIDUMP file format. These fermionic representations are then efficiently mapped to qubit representations, benefiting from performance improvements in InQuanto operators. For systems too large for quantum hardware experiments, users can now utilize the new `inquanto-cutensornet` extension to run simulations via tensor networks.

These updates enable users to compile and execute larger quantum circuits with greater ease, while accessing powerful compute resources through Nexus.

Quantinuum Nexus 

InQuanto v4.0 is fully integrated with Quantinuum Nexus via the `inquanto-nexus` extension. This integration allows users to easily run experiments across a range of quantum backends, from simulators to hardware, and access results stored in Nexus cloud storage.

Results can be annotated for better searchability and seamlessly shared with others. Nexus also offers the Nexus Lab, which provides a preconfigured Jupyter environment for compiling circuits and executing jobs. The Lab is set up with InQuanto v4.0 and a full suite of related software, enabling users to get started quickly. 

Enhanced Operator Performance

The `inquanto.mappings` submodule has received a significant performance enhancement in InQuanto v4.0. By integrating a set of operator classes written in C++, the team has pushed the performance of the module past that of the equivalent methods in other open-source packages.

Like any other Python package, InQuanto can benefit from delegating computationally heavy tasks to compiled languages such as C++. This approach has been applied to the qubit encoding functions of the `inquanto.mappings` submodule, in which fermionic operators are mapped to their qubit operator equivalents. One such qubit encoding scheme is the Jordan-Wigner (JW) transformation. Using JW encoding as a benchmarking task, the integration of C++ operator classes in InQuanto v4.0 yields a 2.5x execution-time speed-up over open-source competitors (Figure 1).
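For readers unfamiliar with the transformation itself: under JW, the annihilation operator a_j becomes ½ Z_0⋯Z_{j-1}(X_j + iY_j), with the Z string enforcing fermionic antisymmetry. The snippet below is a minimal standalone sketch of this rule in plain Python, included only to make the encoding concrete; InQuanto's compiled C++ classes implement the same mapping at scale.

```python
# Standalone illustration of the Jordan-Wigner rule (not InQuanto's compiled
# implementation): the annihilation operator a_j maps to
#     a_j -> 1/2 * Z_0 ... Z_{j-1} (X_j + i Y_j),
# where the Z string accounts for fermionic antisymmetry. Operators are
# represented here as lists of (pauli_string, coefficient) pairs.

def jw_annihilation(j, n_qubits):
    """Qubit representation of the annihilation operator a_j under JW."""
    parity = "Z" * j                   # Z string on qubits 0..j-1
    pad = "I" * (n_qubits - j - 1)     # identity on the remaining qubits
    return [
        (parity + "X" + pad, 0.5),
        (parity + "Y" + pad, 0.5j),
    ]

# a_2 on four qubits: 0.5 * ZZXI + 0.5j * ZZYI
for pauli, coeff in jw_annihilation(2, 4):
    print(pauli, coeff)
```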


Figure 1. Performance comparison of Jordan-Wigner (JW) operator mappings for the LiH molecule in several basis sets of increasing size.

This is a substantial performance increase that all users will benefit from. In v4.0, InQuanto users still interact with the familiar Python classes such as `FermionOperator` and `QubitOperator`. When the `mappings` module is called, however, the Python operator objects are converted to their C++ equivalents before the qubit encoding procedure and converted back afterwards (Figure 2). With full integration of the C++ operator classes in the future, we can remove this conversion step and push the performance of the `mappings` module further still: tests, again using the JW mapping scheme, show a 40x execution-time speed-up compared to open-source competitors (Figure 1).


Figure 2. Representation of the conversion step from Python objects to C++ objects in the qubit encoding processes handled by the `inquanto.mappings` submodule in InQuanto v4.0.

Efficient classical pre-processing implementations such as this are a crucial step on the path to quantum advantage. As the number of physical qubits available on quantum computers increases, so will the size and complexity of the physical systems that can be simulated. To support this hardware upscaling, computational bottlenecks, including those associated with the classical manipulation of operator objects, must be alleviated. Beyond keeping pace with hardware advancements, it is also important to enlarge the tractable system size in situations that do not involve quantum circuit execution, such as tensor-network circuit simulation and resource estimation.

Leveraging Tensor Networks

Users with access to GPU capabilities can now take advantage of tensor networks to accelerate simulations in InQuanto v4.0. This is made possible by the `inquanto-cutensornet` extension, which interfaces InQuanto with the NVIDIA® cuTensorNet library via `pytket-cutensornet`, a library that converts `pytket` circuits into tensor networks for evaluation by cuTensorNet. This extension increases the size limit of circuits that can be simulated for chemistry applications. Future work will integrate this functionality with our Nexus platform, allowing InQuanto users to employ the extension without requiring access to their own local GPU resources.
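Conceptually, each gate in a circuit is a tensor with one index per input and output wire, and the final amplitudes are obtained by contracting the resulting network rather than by storing a full statevector. The toy NumPy sketch below (our own illustration, not the cuTensorNet or `pytket-cutensornet` API) contracts the tensor network of a two-qubit Bell-state circuit:

```python
import numpy as np

# Gate tensors: one index per wire in and out.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard, H[out, in]
CX = np.zeros((2, 2, 2, 2))                            # CX[out0, out1, in0, in1]
for a in range(2):
    for b in range(2):
        CX[a, (a + b) % 2, a, b] = 1.0                 # target flips if control is 1

zero = np.array([1.0, 0.0])                            # |0> on each input wire

# Circuit H(q0); CX(q0, q1) applied to |00>, evaluated as one contraction:
# a, b label the inputs, i the wire-0 index after H, x, y the outputs.
state = np.einsum("a,b,ia,xyib->xy", zero, zero, H, CX)
print(state.reshape(4))   # Bell state amplitudes [1/sqrt(2), 0, 0, 1/sqrt(2)]
```

cuTensorNet performs this same kind of contraction, GPU-accelerated and with optimized contraction ordering, at scales far beyond what a dense statevector can reach.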

Here we demonstrate the use of the `CuTensorNetProtocol` passed to a VQE experiment. For brevity, we use the `get_system` method of `inquanto.express` to swiftly define the system, in this case H2 in the STO-3G basis set.

from inquanto.algorithms import AlgorithmVQE
from inquanto.ansatzes import FermionSpaceAnsatzUCCD
from inquanto.computables import ExpectationValue, ExpectationValueDerivative
from inquanto.express import get_system
from inquanto.mappings import QubitMappingJordanWigner
from inquanto.minimizers import MinimizerScipy
from inquanto.extensions.cutensornet import CuTensorNetProtocol


fermion_hamiltonian, space, state = get_system("h2_sto3g.h5")
qubit_hamiltonian = fermion_hamiltonian.qubit_encode()
ansatz = FermionSpaceAnsatzUCCD(space, state, QubitMappingJordanWigner())
expectation_value = ExpectationValue(ansatz, qubit_hamiltonian)
gradient_expression = ExpectationValueDerivative(
	ansatz, qubit_hamiltonian, ansatz.free_symbols_ordered()
)


protocol_tn = CuTensorNetProtocol()
vqe_tn = (
	AlgorithmVQE(
		objective_expression=expectation_value,
		gradient_expression=gradient_expression,
		minimizer=MinimizerScipy(),
		initial_parameters=ansatz.state_symbols.construct_zeros(),
	)		
	.build(protocol_objective=protocol_tn, protocol_gradient=protocol_tn)
	.run()
)
print(vqe_tn.generate_report()["final_value"])

# -1.136846575472054

The inherently modular design of InQuanto allows for the seamless integration of new extensions and functionality. For instance, a user with existing code based on `SparseStatevectorProtocol` can enable GPU acceleration simply by switching to the protocol provided by `inquanto-cutensornet`. The extension is also compatible with shot-based simulation via the `CuTensorNetShotsBackend` provided by `pytket-cutensornet`.

“Hybrid quantum-classical supercomputing is accelerating quantum computational chemistry research,” said Tim Costa, Senior Director at NVIDIA®. “With Quantinuum’s InQuanto v4.0 platform and NVIDIA’s cuQuantum SDK, InQuanto users now have access to unique tensor-network-based methods, enabling large-scale and high-precision quantum chemistry simulations.”

Classical Code Interface

As demonstrated by our `inquanto-pyscf` extension, we want InQuanto to interface easily with classical codes. In InQuanto v4.0, we have streamlined integration with other classical codes such as Gaussian and Psi4. All that is required is an FCIDUMP file, a common output format for classical codes that encodes all the one- and two-electron integrals required to set up a CI Hamiltonian. Users can bring their system over from a classical code by passing an FCIDUMP file to the `FCIDumpRestricted` class and calling the `to_ChemistryRestrictedIntegralOperator` method or its unrestricted counterpart, depending on how they wish to treat spin. The resulting InQuanto operator object can then be used in their workflow as usual.
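For context, FCIDUMP is a plain-text format: a Fortran namelist header followed by one record per integral, `value i j k l`, where all-zero indices carry the core energy, `k = l = 0` marks a one-electron integral, and four nonzero indices a two-electron integral. `FCIDumpRestricted` handles all of this internally; the minimal reader below is a hypothetical stand-in, included only to illustrate the file structure (the numerical values are representative of H2 in STO-3G):

```python
import io

def read_fcidump(stream):
    """Minimal FCIDUMP reader: splits records into the core energy,
    one-electron and two-electron integrals (indices are 1-based)."""
    # Skip the Fortran namelist header (terminated by &END or /).
    for line in stream:
        if line.strip().endswith(("&END", "/")):
            break
    core, one_e, two_e = 0.0, {}, {}
    for line in stream:
        parts = line.split()
        if not parts:
            continue
        value = float(parts[0])
        i, j, k, l = map(int, parts[1:5])
        if i == j == k == l == 0:
            core = value                  # nuclear repulsion / core energy
        elif k == l == 0:
            one_e[(i, j)] = value         # h_ij
        else:
            two_e[(i, j, k, l)] = value   # (ij|kl)
    return core, one_e, two_e

example = io.StringIO(
    "&FCI NORB=2,NELEC=2,MS2=0,\n"
    " &END\n"
    " 0.6744  1 1 1 1\n"
    "-1.2524  1 1 0 0\n"
    " 0.7137  0 0 0 0\n"
)
core, one_e, two_e = read_fcidump(example)
print(core, one_e[(1, 1)], two_e[(1, 1, 1, 1)])
```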

Exposing TKET Compilation

Users can experiment with TKET’s latest circuit compilation tools in a straightforward manner with InQuanto v4.0. Circuit compilation now occurs only within the `inquanto.protocols` module. This allows users to define which optimization passes to run before and/or after the backend-specific defaults, all in one line of code. Circuit compilation is a crucial step in all InQuanto workflows, so this structural change allows us to cleanly integrate new functionality through extensions such as `inquanto-nexus` and `inquanto-cutensornet`. Looking beyond InQuanto v4.0, this change is also a positive step towards bringing quantum error correction to InQuanto.

Conclusion

InQuanto v4.0 expands the size of the chemical systems that users can simulate on quantum computers. Users can import larger, carefully constructed systems from classical codes and encode them to optimized quantum circuits. They can then evaluate these circuits on quantum backends with `inquanto-nexus` or execute them as tensor networks using `inquanto-cutensornet`. We look forward to seeing how our users leverage InQuanto v4.0 to demonstrate the increasing power of quantum computational chemistry. If you are curious about InQuanto and want to read further, our initial release blog post is very informative, or you can visit the InQuanto website.

How to Access InQuanto

If you are interested in trying InQuanto, please request access or a demo at inquanto@quantinuum.com.

About Quantinuum

Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents. 

December 5, 2024
Quantum computing is accelerating

Particle accelerator projects like the Large Hadron Collider (LHC) don’t just smash particles - they also power the invention of some of the world’s most impactful technologies. A favorite example is the world wide web, which was developed for particle physics experiments at CERN.

Tech designed to unlock the mysteries of the universe has brutally exacting requirements – and it is this boundary pushing, plus billion-dollar budgets, that has led to so much innovation. 

For example, X-rays are used in accelerators to measure the chemical composition of the accelerator products and to monitor radiation. The understanding developed to create those technologies was then applied to help us build better CT scanners, reducing the x-ray dosage while improving the image quality. 

Stories like this are common in accelerator physics, or High Energy Physics (HEP). Scientists and engineers working in HEP have been early adopters and/or key drivers of innovations in advanced cancer treatments (using proton beams), machine learning techniques, robots, new materials, cryogenics, data handling and analysis, and more. 

A key strand of HEP research aims to make accelerators simpler and cheaper. A key piece of infrastructure that could be improved is their computing environments. 

CERN itself has said: “CERN is one of the most highly demanding computing environments in the research world... From software development, to data processing and storage, networks, support for the LHC and non-LHC experimental programme, automation and controls, as well as services for the accelerator complex and for the whole laboratory and its users, computing is at the heart of CERN’s infrastructure.” 

With annual data generated by accelerators in excess of exabytes (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the HEP community is interested in quantum computing, which offers real solutions to some of their hardest problems. 

As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable.”

The HEP community’s interest in quantum computing is growing. In recent years, their scientists have been looking carefully at how quantum computing could help them, publishing a number of papers discussing the challenges and requirements for quantum technology to make a dent (here’s one example, and here’s the arXiv version). 

In the past few months, what was previously theoretical is becoming a reality. Several groups published results using quantum machines to tackle something called “Lattice Gauge Theory”, which is a type of math used to describe a broad range of phenomena in HEP (and beyond). Two papers came from academic groups using quantum simulators, one using trapped ions and one using neutral atoms. Another group, including scientists from Google, tackled Lattice Gauge Theory using a superconducting quantum computer. Taken together, these papers indicate a growing interest in using quantum computing for High Energy Physics, beyond simple one-dimensional systems which are more easily accessible with classical methods such as tensor networks.

We have been working with DESY, one of the world’s leading accelerator centers, to help make quantum computing useful for their work. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center that operates, develops, and constructs particle accelerators, and is part of the worldwide computer network used to store and analyze the enormous flood of data that is produced by the LHC in Geneva.  

Our first publication from this partnership describes a quantum machine learning technique for untangling data from the LHC, finding that in some cases the quantum approach was indeed superior to the classical approach. More recently, we used Quantinuum System Model H1 to tackle Lattice Gauge Theory (LGT), as it’s a favorite contender for quantum advantage in HEP.

Lattice Gauge Theories are one approach to solving what are more broadly referred to as “quantum many-body problems”. Quantum many-body problems lie at the border of our knowledge in many different fields: from the electronic structure problem, which impacts chemistry and pharmaceuticals, and the quest to understand and engineer new material properties such as light-harvesting materials, to basic research such as high energy physics, which aims to understand the fundamental constituents of the universe, and condensed matter physics, where our understanding of phenomena like high-temperature superconductivity is still incomplete.

The difficulty in solving problems like this – analytically or computationally – is that the problem complexity grows exponentially with the size of the system. For example, there are 36 possible configurations of two six-faced dice (1 and 1, 1 and 2, 1 and 3, and so on), while for ten dice there are more than sixty million configurations.
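This counting is easy to verify directly:

```python
import itertools

# Two six-faced dice: enumerate every configuration explicitly.
two_dice = list(itertools.product(range(1, 7), repeat=2))
print(len(two_dice))   # 36

# The count grows as 6**n; ten dice already exceed sixty million.
print(6 ** 10)         # 60466176
```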

Quantum computing may be very well-suited to tackling problems like this, due to a quantum processor’s similar information density scaling – with the addition of a single qubit to a QPU, the information the system contains doubles. Our 56-qubit System Model H2, for example, can hold quantum states that require 128*(2^56) bits worth of information to describe (with double-precision numbers) on a classical supercomputer, which is more information than the biggest supercomputer in the world can hold in memory.
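The quoted figure follows directly from the dimension of the state space: 2^56 complex amplitudes, each stored as two double-precision floats (128 bits):

```python
n_qubits = 56
amplitudes = 2 ** n_qubits                    # dimension of the 56-qubit state space
bits = 128 * amplitudes                       # two 64-bit floats per amplitude
bytes_needed = bits // 8                      # = 2**60 bytes
print(f"{bytes_needed / 1e18:.2f} exabytes")  # 1.15 exabytes
```

For comparison, the memory of today's largest supercomputers is measured in petabytes, three orders of magnitude smaller.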

The joint team made significant progress in approaching the Lattice Gauge Theory corresponding to Quantum Electrodynamics, the theory of light and matter. For the first time, they were able to study the full wavefunction of a two-dimensional confining system with gauge fields and dynamical matter fields on a quantum processor. They were also able to visualize the confining string and the string-breaking phenomenon at the level of the wavefunction, across a range of interaction strengths.

The team approached the problem starting with the definition of the Hamiltonian using the InQuanto software package, and utilized the reusable protocols of InQuanto to compute both projective measurements and expectation values. InQuanto allowed the easy integration of measurement reduction techniques and scalable error mitigation techniques. Moreover, the emulator and hardware experiments were orchestrated by the Nexus online platform.

In one section of the study, a circuit with 24 qubits and more than 250 two-qubit gates was reduced to a width of just 15 qubits, thanks to our unique automatic qubit-reuse and mid-circuit measurement compilation implemented in TKET.

This work paves the way towards using quantum computers to study lattice gauge theories in higher dimensions, with the goal of one day simulating the full three-dimensional Quantum Chromodynamics theory underlying the nuclear sector of the Standard Model of particle physics. Being able to simulate full 3D quantum chromodynamics will undoubtedly unlock many of Nature’s mysteries, from the Big Bang to the interior of neutron stars, and is likely to lead to applications we haven’t yet dreamed of. 

November 21, 2024
InQuanto Integrates NVIDIA cuQuantum for Native GPU Support and Prepares for the Era of Quantum Supercomputing

Chemistry plays a central role in the modern global economy, as it has for centuries. From Antoine Lavoisier to Alessandro Volta, Marie Curie to Venkatraman Ramakrishnan, pioneering chemists drove progress in fields such as combustion, electrochemistry, and biochemistry. They contributed to our mastery of critical 21st century materials such as biodegradable plastics, semiconductors, and life-saving pharmaceuticals. 

Advances in high-performance computing (HPC) and AI have brought fundamental and industrial science ever more within the scope of methods like data science and predictive analysis. In modern chemistry, it has become routine for research to be aided by computational models run in silico. Yet, due to their intrinsically quantum mechanical nature, “strongly correlated” chemical systems – those involving strongly interacting electrons or highly interdependent molecular behaviors – prove extremely hard to accurately simulate using classical computers alone. Quantum computers running quantum algorithms are designed to meet this need. Strongly correlated systems turn up in potential applications such as smart materials, high-temperature superconductors, next-generation electronic devices, batteries and fuel cells, revealing the economic potential of extending our understanding of these systems, and the motivation to apply quantum computing to computational chemistry. 

For senior business and research leaders driving value creation and scientific discovery, a critical question is how the introduction of quantum computers will affect the trajectory of computational approaches to fundamental and industrial science.

Introducing InQuanto v4.0

This is the exciting context for our announcement of InQuanto v4.0, the latest iteration of our computational chemistry platform for quantum computers. Developed over many years in close partnership with computational chemists and materials scientists, InQuanto has become an essential tool for teams using the most advanced methods for simulating molecular and material systems. InQuanto v4.0 is packed with powerful updates, including the capability to incorporate NVIDIA’s tensor network methods for large-scale classical simulations supported by graphical processing units (GPUs). 

When researching chemistry on quantum computers, we use classical HPC to perform tasks such as benchmarking, and for classical pre- and post-processing with computational chemistry methods such as density functional theory. This powerful hybrid quantum-classical combination with InQuanto accelerated our work with partners such as BMW Group, Airbus, and Honeywell. Global businesses and national governments alike are gearing up for the use of such hybrid “quantum supercomputers” to become standard practice. 

In a recent technical blog post, we explored the rapid development and deployment of InQuanto for research and enterprise users, offering insights for combining quantum and high-performance classical methods with only a few lines of code. Here, we provide a higher-level overview of the value InQuanto brings to fundamental and industrial research teams. 

InQuanto v4.0 – under the hood

InQuanto v4.0 is the most powerful version to date of our advanced quantum computational chemistry platform. It supports our users in applying quantum and classical computing methods to problems in chemistry and, increasingly, adjacent fields such as condensed matter physics.

Like previous versions of InQuanto, this one offers state-of-the-art algorithms, methods, and error handling techniques out of the box. Quantum error correction and detection have enabled rapid progress in quantum computing, such as groundbreaking demonstrations in partnership with Microsoft, in April and September 2024, of highly reliable “logical qubits”. Qubits are the core information-carrying components of a quantum computer; by forming an ensemble of physical qubits into a logical qubit, they become more resistant to errors, allowing more complex problems to be tackled while still producing accurate results. InQuanto continues to offer leading-edge quantum error detection protocols as standard and supports users in exploring the potential of algorithms for fault-tolerant machines.

InQuanto v4.0 also marks the significant step of introducing native support for tensor networks using GPUs to accelerate simulations. In 2022, Quantinuum and NVIDIA teamed up on one of the quantum computing industry’s earliest quantum-classical collaborations. InQuanto v4.0 introduces classical tensor network methods via an interface with NVIDIA's cuQuantum SDK. Interfacing with cuQuantum enables the simulation of many quantum circuits via the use of GPUs for applications in chemistry that were previously inaccessible, particularly those with larger numbers of qubits.

“Hybrid quantum-classical supercomputing is accelerating quantum computational chemistry research. With Quantinuum’s InQuanto v4.0 platform and NVIDIA’s cuQuantum SDK, InQuanto users now have access to unique tensor-network-based methods, enabling large-scale and high-precision quantum chemistry simulations” - Tim Costa, Senior Director of HPC and Quantum Computing at NVIDIA

We are also responding to our users’ needs for more robust, enterprise-grade management of applications and data, by incorporating InQuanto into Quantinuum Nexus. This integration makes it far easier and more efficient to build hybrid workflows, decode and store data, and use powerful analytical methods to accelerate scientific and technical progress in critical fields in natural science.

Adding further capabilities, we recently announced our integration of InQuanto with Microsoft’s Azure Quantum Elements (AQE), allowing users to seamlessly combine AQE’s state-of-the-art HPC and AI methods with the enhanced quantum capabilities of InQuanto in a single workflow. The first end-to-end workflow using HPC, AI and quantum computing was demonstrated by Microsoft using AQE and Quantinuum Systems hardware, achieving chemical accuracy and demonstrating the advantage of logical qubits compared to physical qubits in modeling a catalytic reaction.

Where InQuanto takes us next

In the coming years, we expect to see scientific and economic progress using the powerful combination of quantum computing, HPC, and artificial intelligence. Each of these computing paradigms contributes to our ability to solve important problems. Together, their combined impact is far greater than the sum of their parts, and we recognize that these have the potential to drive valuable computational innovation in industrial use-cases that really matter, such as in energy generation, transmission and storage, and in chemical processes essential to agriculture, transport, and medicine.

Building on our recent hardware roadmap announcement, which supports scientific quantum advantage and a commercial tipping point in 2029, we are demonstrating the value of owning and building out the full quantum computing stack with a unified goal of accelerating quantum computing, integrating with HPC and AI resources where it shows promise, and using the power of the “quantum supercomputer” to make a positive difference in fundamental and industrial chemistry and related domains.

In close collaboration with our customers, we are driving towards systems capable of supporting quantum advantage and unlocking tangible and significant business value.

To access InQuanto today, including Quantinuum Systems and third-party hardware and emulators, visit: https://www.quantinuum.com/products-solutions/inquanto 

To get started with Quantinuum Nexus, which meets all your quantum computing needs across Quantinuum Systems and third-party backends, visit: https://www.quantinuum.com/products-solutions/nexus 

To find out more and access Quantinuum Systems, visit: https://www.quantinuum.com/products-solutions/quantinuum-systems 

November 19, 2024
Announcing the Launch of Quantinuum Nexus: Our All-in-One Quantum Computing Platform

In July, we proudly introduced the Beta version of Quantinuum Nexus, our comprehensive quantum computing platform. Designed to provide an exceptional experience for managing, storing, and executing quantum workflows, Nexus offers unparalleled integration with Quantinuum’s software and hardware.

What’s New?

Before July, Nexus was primarily available to our internal researchers and software developers, who leveraged it to drive groundbreaking work leading to notable publications such as:

Following our initial announcement, we invited external users to experience Nexus for the first time.

We selected quantum computing researchers and developers from both industry and academia to help accelerate their work and advance scientific discovery. Participants included teams from diverse sectors such as automotive and energy technology, as well as research groups from universities and national laboratories worldwide. We also welcomed scientists and software developers from other quantum computing companies to explore areas ranging from physical system simulation to the foundations of quantum mechanics.

The feedback and results from our trial users have been exceptional. But don’t just take our word for it—read on to hear directly from some of them:

Unitary Fund

At Unitary Fund, we leveraged Nexus to study a foundational question about quantum mechanics. The quantum platform allowed us to scale experimental violations of Local Friendliness to a more significant regime than had been previously tested. Using Nexus, we encoded Extended Wigner’s Friend Scenarios (EWFS) into quantum circuits, running them on state-of-the-art simulators and quantum processors. Nexus enabled us to scale the complexity of these circuits efficiently, helping us validate LF violations at larger and larger scales. The platform's reliability and advanced capabilities were crucial to extending our results, from simulating smaller systems to experimentally demonstrating LF violations on quantum hardware. Nexus has empowered us to deepen our research and contribute to foundational quantum science.

Read the publication here: Towards violations of Local Friendliness with quantum computers.

Phasecraft

At Phasecraft we are designing algorithms for near-term quantum devices, identifying the most impactful experiments to run on the best available hardware. We recently implemented a series of circuits to simulate the time dynamics of a materials model with a novel layout, exploiting the all-to-all connectivity of the H-Series. Nexus integrated easily with our software stack, allowing us to easily deploy our circuits and collect data, with impressive results. We first tested that our in-house software could interface with Nexus smoothly using the syntax checker as well as the suite of functionality available through the Nexus API. We then tested our circuits on the H1 emulator, and it was straightforward to switch from the emulator to the hardware when we were ready. Overall, we found Nexus a straightforward interface, especially when compared with alternative quantum hardware access models.

Quantum Software Lab, University of Edinburgh

In this project, we performed the largest verified measurement-based quantum computation to date, up to the size of 52 vertices, which was made possible by the Nexus system. The protocol requires complex operations intermingling classical and quantum information. In particular, Nexus allows us to demonstrate our protocol that requires complex decisions for every measurement shot on every node in the graph: circuit branching, mid-circuit measurement and reset, and incorporating fresh randomness. Such requirements are difficult to deliver on most quantum computer frameworks as they are far from conventional gate-based BQP computations; however, Nexus can!

Read the publication here: On-Chip Verified Quantum Computation with an Ion-Trap Quantum Processing Unit

Onward and Upward

We are thrilled to announce that after these successes, Nexus is coming out of beta access for full launch. We can’t wait to offer Nexus to our customers to enable ground-breaking scientific work, powered by Quantinuum.

Register your interest in gaining access to the best full-stack quantum computing platform, today!
