The marriage of AI and quantum computing holds great promise: the computational power of quantum computers could drive major breakthroughs in next-generation AI. A team led by Stephen Clark, our Head of AI, has just taken a significant step towards unlocking that potential.
A key ingredient in contemporary classical AI is the “transformer”, which is so important that it is the “T” in ChatGPT. Transformers are machine learning models that do things like predict the next word in a sentence, or determine whether a movie review is positive or negative. They are incredibly well-suited to classical computers, taking advantage of the massive parallelism afforded by GPUs. Those advantages are not necessarily present on quantum computers in the same way, so successfully implementing a transformer on quantum hardware is no easy task.
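To make the classical side concrete, here is a toy sketch of the self-attention computation at the heart of a transformer. This is an illustrative NumPy example only, not code from the paper: the matrix names (`Wq`, `Wk`, `Wv`) and dimensions are our own choices. Note that every step is a dense matrix multiplication, which is exactly the kind of workload GPUs parallelize so well.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise token affinities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # mix token values by attention weight

rng = np.random.default_rng(0)
seq_len, d = 4, 8
X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # -> (4, 8): one mixed vector per input token
```

A full transformer stacks many of these attention layers (plus feed-forward layers); the point here is simply that the computation is built from large parallel matrix products, a structure that does not carry over directly to quantum circuits.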
Until recently, most attempts to implement transformers on quantum computers took a sort of “copy-paste” approach – taking the math from a classical implementation and directly implementing it on quantum circuits. This approach fails to account for the considerable differences between quantum and classical architectures, leading to inefficiencies. In fact, such implementations do not really take advantage of the quantum paradigm at all.
This has now changed. In a new paper on the arXiv, our team introduces an explicitly quantum transformer, which they call “Quixer” (short for quantum mixer). Using quantum algorithmic primitives, the team created a transformer implementation that is specially tailored to quantum circuits, making it qubit-efficient and giving it the potential to offer speedups over classical implementations.
Critically, the team then applied it to a practical language modelling task (by simulating the process on a classical computer), obtaining results competitive with an equivalent classical baseline. This is a milestone achievement in its own right.
This paper also marks the first quantum machine learning model applied to language on a realistic rather than toy dataset. This is a truly exciting advance for anyone interested in the union of quantum computing and artificial intelligence. About a week ago, when we announced that our System Model H2 had bested the quantum supremacy experiments first benchmarked by Google, we promised a summer of important advances in quantum computing. Stay tuned for more announcements!