Quantinuum is excited to introduce the beta availability of Quantinuum Nexus, our comprehensive quantum computing platform. Nexus is built to simplify quantum computing workflows with its expert design and full-stack support. We are inviting quantum users to apply for beta access; accepted users will work closely with Quantinuum to adapt and customize Nexus for their needs.
Nexus was developed by our in-house quantum experts to streamline the deployment of quantum algorithms. From tackling common tasks like installing packages and libraries to addressing pain points like setting up storage, Nexus seamlessly integrates thoughtful details to enhance user experience.
Nexus allows users to run, track, and manage resources across multiple quantum backends, making it easier for researchers to directly compare results and processes when using our H-Series hardware or other providers. Additionally, Nexus features a cloud-hosted and preconfigured JupyterHub environment and dedicated simulators - most notably, the Quantinuum H-Series emulator. Nexus’ emulator integration means that new users and organizations that don’t have access to H-Series hardware can start experimenting with H-Series capabilities right away.
Quantinuum Nexus is at the core of our full stack, fully integrated with our H-Series Quantum Processor, our software offerings such as InQuanto™, and our H-Series emulators. Nexus is also backend-inclusive, interfacing with multiple other hardware and simulation backends. In the future, we will be introducing new cutting-edge tools such as a more powerful cloud-based version of our compiler, powered by version 2 of TKET.
Nexus also stores everything you need to recreate your experiment in one place – meaning a full snapshot of the backend, the settings and variables you used, and more. Combined with easy data sharing and storage, you can stop worrying about the logistics of data management. You’re in control of how you structure your data, how you track what’s most important to you, and who gets to see it.
Administrators benefit from resource controls within Nexus, allowing them to manage user access, create user groups, and update usage quotas to match their priorities. With multiple backend support, administrators can track jobs and usage for all their quantum resources in one platform. Advanced usage visualization allows administrators to quickly gain insight from historical trends in usage. Nexus also features collaboration tools that give users the ability to share data, as well as access controls that allow administrators to ensure this is done securely.
Users, developers, and administrators have several options when it comes to selecting a platform for managing quantum resources. So why Nexus? Quantinuum Nexus was built by quantum experts, for quantum experts. Our experiment management and cataloging system makes Nexus stand out as the best platform for collaboration between scientific teams. Providing the H-Series emulator in the cloud means you get more access to the emulator of one of the world's best devices with less time in the queue, so you can spend more time with your results. Our quantum chemistry package InQuanto™ is integrated into Nexus, meaning zero setup time and easy data storage in our managed environment.
Nexus provides a consistent API for working with a range of quantum devices and tools. This improves the experience of our end users, as scripts that work for one device can easily be ported to another with only a change to the configuration. The Nexus API also improves integration with third-party partners by giving them a programmatic way to access Quantinuum tools, alongside a pathway for integrating these resources into their own tools for redistribution.
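As an illustration of that portability, here is a minimal sketch using the open-source pytket backend interface (not the Nexus client itself), showing how the same script can target different devices by swapping a single line. The specific backend choices here, the "H1-1E" emulator target and a local Aer simulator, are illustrative assumptions, and credentials or extension packages would be needed to actually run it.

```python
# Minimal sketch: the same pytket script targets different backends
# by changing only the line that constructs the backend.
from pytket import Circuit
from pytket.extensions.quantinuum import QuantinuumBackend  # pip install pytket-quantinuum
# from pytket.extensions.qiskit import AerBackend           # pip install pytket-qiskit

# Build a small Bell-state circuit once.
circ = Circuit(2)
circ.H(0).CX(0, 1)
circ.measure_all()

# Choose a target; everything below stays the same.
backend = QuantinuumBackend("H1-1E")   # H-Series emulator (requires credentials)
# backend = AerBackend()               # or a local simulator instead

compiled = backend.get_compiled_circuit(circ)
handle = backend.process_circuit(compiled, n_shots=100)
result = backend.get_result(handle)
print(result.get_counts())
```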
With Nexus, Quantinuum is setting a new standard among quantum Platform-as-a-Service providers, empowering users with cutting-edge tools and seamless integration to advance quantum computing.
Quantinuum, the world’s largest integrated quantum company, pioneers powerful quantum computers and advanced software solutions. Quantinuum’s technology drives breakthroughs in materials discovery, cybersecurity, and next-gen quantum AI. With over 500 employees, including 370+ scientists and engineers, Quantinuum leads the quantum computing revolution across continents.
For a novel technology to be successful, it must prove both that it is useful and that it works as described.
Checking that our computers “work as described” is known as benchmarking and verification. We are proud to be leaders in this field, with the most benchmarked quantum processors in the world and our own team of experts advancing the state of the art. We also work with national laboratories in various countries to develop new benchmarking techniques and standards.
Currently, a lot of verification (i.e. checking that you got the right answer) is done by classical computers – most quantum processors can still be simulated by a classical computer. As we move towards quantum processors that are hard (or impossible) to simulate, this introduces a problem: how can we keep checking that our technology is working correctly without simulating it?
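To make the idea concrete, simulation-based verification amounts to running the same circuit on the device and on a classical simulator, then comparing the two output distributions. A toy sketch of that comparison (with made-up count data, not results from any real run) might look like this:

```python
# Hypothetical example: compare device output against a classical simulation
# of the same circuit using total variation distance between the distributions.

def total_variation_distance(counts_a, counts_b):
    """TVD between two raw-count dictionaries, e.g. {'00': 512, '11': 488}."""
    shots_a = sum(counts_a.values())
    shots_b = sum(counts_b.values())
    outcomes = set(counts_a) | set(counts_b)
    return 0.5 * sum(
        abs(counts_a.get(o, 0) / shots_a - counts_b.get(o, 0) / shots_b)
        for o in outcomes
    )

# Placeholder data: counts measured on hardware vs. counts from a simulator.
device_counts = {"00": 510, "11": 470, "01": 12, "10": 8}
simulated_counts = {"00": 500, "11": 500}

print(total_variation_distance(device_counts, simulated_counts))  # ~0.03
```

This approach only works as long as the classical simulation itself is feasible, which is exactly the assumption that breaks down as devices scale.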
We recently partnered with the UK’s Quantum Software Lab to develop a novel and scalable verification and benchmarking protocol that will help us as we make the transition to quantum processors that cannot be simulated.
This new protocol requires neither classical simulation nor the transfer of qubits between two parties. The team’s “on-chip” verification protocol eliminates the need for a physically separated verifier and makes no assumptions about the processor’s noise. To top it all off, the protocol is qubit-efficient.
The team’s protocol is application-agnostic, benefiting all users. Further, the protocol is optimized for our QCCD hardware, meaning we have a path towards verified quantum advantage: as we compute more things that cannot be classically simulated, we will be able to check that what we are doing is right.
Running the protocol on Quantinuum System Model H1, the team performed the largest verified Measurement Based Quantum Computing (MBQC) circuit to date. This was enabled by the System Model H1’s low cross-talk gate zones, mid-circuit measurement and reset, and long coherence times. By verifying computations significantly larger than any verified before, we reaffirm Quantinuum’s systems as best-in-class.
Particle accelerators like the LHC take serious computing power. Often on the bleeding-edge of computing technology, accelerator projects sometimes even drive innovations in computing. In fact, while there is some controversy over exactly where the world wide web was created, it is often attributed to Tim Berners-Lee at CERN, who developed it to meet the demand for automated information-sharing between scientists in universities and institutes around the world.
With annual data generated by accelerators in excess of exabytes (a billion gigabytes), tens of millions of lines of code written to support the experiments, and incredibly demanding hardware requirements, it’s no surprise that the High Energy Physics community is interested in quantum computing, which offers real solutions to some of their hardest problems. Furthermore, the HEP community is well-positioned to support the early stages of technological development: with budgets in the tens of billions per year and tens of thousands of scientists and engineers working on accelerator and computational physics, this is a ripe industry for quantum computing to tap.
As the authors of this paper stated: “[Quantum Computing] encompasses several defining characteristics that are of particular interest to experimental HEP: the potential for quantum speed-up in processing time, sensitivity to sources of correlations in data, and increased expressivity of quantum systems... Experiments running on high-luminosity accelerators need faster algorithms; identification and reconstruction algorithms need to capture correlations in signals; simulation and inference tools need to express and calculate functions that are classically intractable”
The authors go on to state: “Within the existing data reconstruction and analysis paradigm, access to algorithms that exhibit quantum speed-ups would revolutionize the simulation of large-scale quantum systems and the processing of data from complex experimental set-ups. This would enable a new generation of precision measurements to probe deeper into the nature of the universe. Existing measurements may contain the signatures of underlying quantum correlations or other sources of new physics that are inaccessible to classical analysis techniques. Quantum algorithms that leverage these properties could potentially extract more information from a given dataset than classical algorithms.”
Our scientists have been working with a team at DESY, one of the world’s leading accelerator centers, to bring the power of quantum computing to particle physics. DESY, short for Deutsches Elektronen-Synchrotron, is a national research center for fundamental science located in Hamburg and Zeuthen, where the Center for Quantum Technologies and Applications (CQTA) is based. DESY operates, develops, and constructs particle accelerators used to investigate the structure, dynamics and function of matter, and conducts a broad spectrum of interdisciplinary scientific research. DESY employs about 3,000 staff members from more than 60 nations, and is part of the worldwide computer network to store and analyze the enormous flood of data that is produced by the LHC in Geneva.
In a recent paper, our scientists collaborated with scientists from DESY, the Leiden Institute of Advanced Computer Science (LIACS), and Northeastern University to explore using a generative quantum machine learning model, called a “quantum Boltzmann machine,” to untangle data from CERN’s LHC.
The goal was to learn probability distributions relevant to high energy physics better than the corresponding classical models can. The data specifically contains “particle jet events,” which describe how colliders collect data about the subatomic particles generated during the experiments.
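For readers unfamiliar with these models: a classical Boltzmann machine defines a probability distribution as a Boltzmann (Gibbs) distribution over an energy function, while a quantum Boltzmann machine replaces that energy function with a parameterized Hamiltonian and draws samples by measuring the resulting thermal state. In generic notation (ours, not necessarily the paper’s):

```latex
% Classical Boltzmann machine: distribution over visible units v (hidden units h)
p_\theta(v) \;=\; \frac{1}{Z_\theta} \sum_{h} e^{-E_\theta(v,\,h)},
\qquad Z_\theta \;=\; \sum_{v,\,h} e^{-E_\theta(v,\,h)}

% Quantum Boltzmann machine: Gibbs state of a parameterized Hamiltonian H_\theta,
% with outcome probabilities obtained by measuring the visible qubits
% (\Lambda_v is the projector onto visible configuration v)
\rho_\theta \;=\; \frac{e^{-H_\theta}}{\operatorname{Tr}\, e^{-H_\theta}},
\qquad p_\theta(v) \;=\; \operatorname{Tr}\!\left[\Lambda_v\, \rho_\theta\right]
```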
In some cases the quantum Boltzmann machine was indeed better than the corresponding classical Boltzmann machine. The team analyzed when and why this happens, building a better understanding of how to apply these new quantum tools in this research setting. The team also studied the effect of encoding the data into a quantum state, noting that it can have a decisive effect on training performance. Especially enticing is that the quantum Boltzmann machine is efficiently trainable, which our scientists showed in a recent paper published in Nature Communications Physics.
Find the Quantinuum team at this year’s SC24 conference from November 17th – 22nd in Atlanta, Georgia. Meet our team at Booth #4351 to discover how Quantinuum is bridging the gap between quantum computing and high-performance compute with leading industry partners.
The Quantinuum team will be participating in the following panel and presentation sessions to showcase our quantum computing technologies.
Panel: KAUST booth 1031
Nash Palaniswamy, Quantinuum’s CCO, will join a panel discussion with quantum vendors and KAUST partners to discuss advancements in quantum technology.
Panel: Educating for a Hybrid Future: Bridging the Gap between High-Performance and Quantum Computing
Vincent Anandraj, Quantinuum’s Director of Global Ecosystem and Strategic Alliances, will moderate this panel which brings together experts from leading supercomputing centers and the quantum computing industry, including PSC, Leibniz Supercomputing Centre, IQM Quantum Computers, NVIDIA, and National Research Foundation.
Presentation: Realizing Quantum Kernel Models at Scale with Matrix Product State Simulation
Pablo Andres-Martinez, Research Scientist at Quantinuum, will present research done in collaboration with HSBC, where the team applied quantum methods to fraud detection.