Technological innovation that enables scaling of quantum computing underpins the Microsoft Azure Quantum program. In March of this year, we announced our demonstration of the underlying physics required to create a topological qubit—a type of qubit that is theorized to be inherently more stable than existing ones without sacrificing size or speed. However, our quest to deliver a general-purpose quantum computer capable of addressing industrial-scale problems will require innovation across every layer of the quantum stack, from materials at the nanoscale to algorithms and applications. At Azure Quantum, our full-stack approach and broad expertise across all areas of quantum computation allow us to drive innovation in this space through tight collaboration across theory, hardware, software, and systems teams.
One of the greatest challenges in building a quantum computer is that quantum states are intrinsically fragile and are quickly destroyed when a qubit couples to its environment, introducing noise. A crucial technology for overcoming this fragility, one also used in classical digital computing, is error correction. By encoding the state of a single logical qubit into many physical qubits, quantum error correction (QEC) can detect and correct most errors that occur on the physical qubits. Indeed, such error correction must be at the heart of any scalable quantum system: without it, no known qubit technology can protect quantum states long enough to perform a calculation that delivers real-world impact. However, quantum error correction also comes at a significant cost: depending on the quality of the physical qubits, it can increase the space requirements of a computation by a factor of several thousand and the time requirements more than tenfold. Therefore, any improvement to error correction has enormous positive ripple effects across the entire stack.
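To make the scale of that overhead concrete, here is a rough back-of-the-envelope sketch in Python. It uses the textbook surface-code heuristic p_L ≈ 0.1 · (p/p_th)^((d+1)/2) together with the roughly 2d² qubit footprint of a distance-d surface-code patch; the threshold, prefactor, and target error rate below are illustrative placeholder values, not Azure Quantum numbers.

```python
# Rough arithmetic behind the overhead claim above, using the textbook
# surface-code heuristic p_L ~ 0.1 * (p / p_th)**((d + 1) / 2) and a
# ~2d^2 qubit footprint per distance-d logical qubit. All constants are
# illustrative placeholders, not Azure Quantum numbers.

def required_distance(p_phys, p_target, p_th=0.01, prefactor=0.1):
    """Smallest odd code distance whose predicted logical error rate
    is at most p_target (assumes p_phys is below threshold)."""
    assert p_phys < p_th
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

p_phys, p_target = 1e-3, 1e-12   # per-step physical and per-op logical targets
d = required_distance(p_phys, p_target)
print(f"distance {d}: ~{2 * d * d} physical qubits per logical qubit")
# -> distance 21: ~882 physical qubits per logical qubit; routing space and
#    magic-state distillation multiply this further, into the thousands.
```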
In this post, we’ll share some exciting implications of our recent innovations toward scale—specifically, how to perform quantum error correction in our topological quantum computation stack—published in the series of papers listed below. Topological qubits promise lower error rates than conventional qubits and as such can support scalable quantum computation at lower overhead. On top of that, in these papers we introduce a new class of quantum error correction codes, called Floquet codes, which are particularly suited to topological qubits. Our new approaches culminate in a further tenfold or greater reduction in the overhead needed for error correction on topological qubits compared to the previous state of the art, opening a viable path toward scaling to a million qubits and beyond.
Explore More
- Publication: Dynamically Generated Logical Qubits
- Publication: Boundaries for the Honeycomb Code
Unlocking a new class of quantum codes
To optimize performance on any quantum computing platform, circuits must be adapted to the capabilities of the hardware. This is particularly true for error correction schemes, which must be tailor-made to exploit the strengths of a given platform. Unlike most other qubits, our topological qubits employ a measurement-based scheme in which direct measurements between adjacent qubits are the native operations. All quantum error correction schemes use frequent measurements to identify errors: the outcomes are used to infer which errors have occurred without destroying the encoded quantum state. The state-of-the-art schemes, however, require complex multi-qubit measurements that can’t be implemented directly in the hardware and must be compiled into native operations at the expense of additional auxiliary qubits and additional timesteps.
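As a simple illustration of that compilation cost, the sketch below counts the steps in the textbook gate-based gadget for measuring a weight-4 stabilizer with one auxiliary qubit and compares it with a single native two-qubit check. This is a generic illustrative gadget (the measurement-only compilation used on topological hardware differs in detail); the point is the extra qubit and timesteps.

```python
# Illustrative step count: the textbook gate-based gadget for measuring a
# weight-4 stabilizer (Z1 Z2 Z3 Z4) with one auxiliary qubit, versus a single
# native two-qubit check. Not the measurement-only compilation used on
# topological hardware; shown only to illustrate the overhead.
weight4_gadget = [
    ("prepare |0>",  ["anc"]),
    ("CNOT",         ["q1", "anc"]),
    ("CNOT",         ["q2", "anc"]),
    ("CNOT",         ["q3", "anc"]),
    ("CNOT",         ["q4", "anc"]),
    ("measure Z",    ["anc"]),
]
native_check = [("measure Z1 Z2", ["q1", "q2"])]

print(f"weight-4 gadget: {len(weight4_gadget)} timesteps, 1 auxiliary qubit")
print(f"native check:    {len(native_check)} timestep,  0 auxiliary qubits")
```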
Our recent breakthroughs overcome this issue through a conceptually new perspective on quantum codes (put forward in “Dynamically Generated Logical Qubits” and “Boundaries for the Honeycomb Code”), where the encoding of the quantum information is not static but rather is allowed to evolve periodically in time. Many physical systems are known in which such periodic evolution gives rise to new phenomena (see, for example, the well-known Kapitza pendulum). The study of such systems falls under the heading of Floquet systems, which gives this new class of codes its name.
These codes are built entirely from two-qubit measurements referred to as “check measurements.” Just like measurements in a conventional code, these are used to check for errors. The simplicity of these checks, however, means that each time we measure a check, we change the encoding of the quantum information, leading to the Floquet nature of the code. As a consequence, the outcomes of these measurements cannot be used directly to infer which errors have occurred, but rather the full history of measurement outcomes over time must be taken into account.
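The following toy bookkeeping sketch illustrates this history dependence. The qubit names and edge grouping are fabricated for one hypothetical plaquette; what is faithful to the honeycomb code of “Dynamically Generated Logical Qubits” is the structure: a plaquette of one color is bounded by three edges of each of the other two colors, so its stabilizer value is a product of check outcomes gathered across two different rounds, and comparing that value between successive periods yields the error syndrome.

```python
# Toy bookkeeping for a Floquet-code measurement schedule. The qubit names
# and edge grouping below are fabricated for a single hypothetical plaquette;
# real honeycomb-code layouts and Pauli types (XX/YY/ZZ) are omitted.
import random

COLORS = ("red", "green", "blue")

# In the honeycomb code, a blue plaquette is bounded by three red and three
# green edges, so its value is inferred from checks of the other two colors.
blue_plaquette_edges = {
    "red":   [("q0", "q1"), ("q2", "q3"), ("q4", "q5")],
    "green": [("q1", "q2"), ("q3", "q4"), ("q5", "q0")],
}

history = {}  # (round, edge) -> +/-1 check outcome

def measure_round(r):
    """Measure every check of the single color scheduled for round r."""
    color = COLORS[r % 3]
    for edge in blue_plaquette_edges.get(color, []):
        history[(r, edge)] = random.choice([+1, -1])  # stand-in for hardware

def infer_blue_plaquette(r):
    """Combine red checks (round r) and green checks (round r + 1): the
    plaquette value needs the measurement history, not any single round."""
    value = 1
    for edge in blue_plaquette_edges["red"]:
        value *= history[(r, edge)]
    for edge in blue_plaquette_edges["green"]:
        value *= history[(r + 1, edge)]
    return value

for r in range(6):  # two full periods of the three-round schedule
    measure_round(r)

# Comparing the same plaquette across consecutive periods yields a syndrome
# bit (with random stand-in outcomes, this detector fires at random).
detector = infer_blue_plaquette(0) * infer_blue_plaquette(3)
print("error signal" if detector == -1 else "no error signal")
```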
The physical qubits are arranged on the vertices of a lattice (such as those shown in Figure 1), where they are represented as black dots. Each check is associated with an edge of the graph, and checks of different colors are measured sequentially. The code state changes as the different checks are measured. There are several possible lattice arrangements of the qubits that allow for a natural implementation of a Floquet code. The lattice should have the following two properties: 1) each vertex should be attached to three edges, and 2) using only three colors, it should be possible to color the plaquettes so that no two adjacent plaquettes share a color (that is, the plaquettes should be “three-colorable”). While many such arrangements remain to be explored and the optimal choice will depend on details of the physical hardware, Figure 1 shows two possible Floquet-code arrangements.
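Both properties are easy to verify programmatically. The sketch below builds a honeycomb lattice in its “brick-wall” form on a small torus (an equivalent way to draw the honeycomb) and checks that every vertex touches exactly three edges and that coloring each plaquette by its column modulo 3 is a proper three-coloring. The lattice size and the coloring rule are choices made for this illustration.

```python
# Verify, on a brick-wall (honeycomb-equivalent) lattice with periodic
# boundaries, the two properties named above: degree-3 vertices and
# three-colorable plaquettes. Lattice size and coloring rule are
# illustrative choices.
from collections import Counter

R, C = 4, 6  # rows and columns; C is a multiple of 6 so the coloring wraps

def wrap(r, c):
    return (r % R, c % C)

edges = set()
for r in range(R):
    for c in range(C):
        edges.add(frozenset({wrap(r, c), wrap(r, c + 1)}))  # horizontal edge
        if (r + c) % 2 == 0:                                # vertical "rung"
            edges.add(frozenset({wrap(r, c), wrap(r + 1, c)}))

# Property 1: every vertex touches exactly three edges.
degree = Counter(v for e in edges for v in e)
assert all(d == 3 for d in degree.values())

# Hexagonal plaquettes sit at (r, c) with r + c even, spanning columns
# c..c+2 in rows r and r + 1.
def plaquette_vertices(r, c):
    return {wrap(r + dr, c + dc) for dr in (0, 1) for dc in (0, 1, 2)}

plaquettes = [(r, c) for r in range(R) for c in range(C) if (r + c) % 2 == 0]

def color(p):
    return p[1] % 3  # color a plaquette by its column modulo 3

def share_an_edge(p, q):
    common = plaquette_vertices(*p) & plaquette_vertices(*q)
    return any(e <= common for e in edges)

# Property 2: plaquettes sharing an edge never share a color.
for p in plaquettes:
    for q in plaquettes:
        if p != q and share_an_edge(p, q):
            assert color(p) != color(q)

print(f"degree-3 and three-colorability verified on a {R}x{C} torus")
```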
Error correction tailor-made for topological qubits
Within our measurement-based topological architecture, we have identified the two arrangements shown in Figure 1 as particularly appealing when combined with a particular design of topological qubit—the “tetron” qubit—which is itself a scalable design. The connectivity of these two layouts maps naturally onto the connectivity of an array of such tetrons, shown in Figure 2. Furthermore, the majority of the two-qubit check operators used to construct these codes are exactly those native operations between tetrons that can be implemented with minimal error, as shown in the lower panel of Figure 2. The details of these codes, their implementation with topological qubits, and numerical studies of their performance are discussed in “Performance of planar Floquet codes with Majorana-based qubits.”
Our numerical simulations show that our Floquet codes and architecture, implemented with topological “tetron” qubits, help secure the path to a scalable quantum system in two ways. First, the very favorable threshold of these codes, which we estimate to be close to 1 percent, allows us to achieve quantum error correction earlier and demonstrate tangible steps on our journey toward quantum advantage. Second, in the longer run, we find that these codes reduce the overhead required for quantum error correction on topological qubits roughly tenfold compared to the previous state-of-the-art approach, which means that our scalable system can be built from fewer physical qubits and can run at a faster clock speed (see Figure 3 below).
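To see why the threshold matters so much for overhead, the sketch below evaluates the same textbook scaling heuristic as in the first sketch at two hypothetical threshold values. The specific thresholds, target error rate, and ~2d² footprint are illustrative stand-ins, not the comparison performed in the papers, but they show how a higher threshold shrinks the required code distance, and hence the physical-qubit count, by roughly an order of magnitude.

```python
# Illustrative only: the same scaling heuristic as in the first sketch,
# evaluated at two hypothetical thresholds. These are toy values chosen to
# show the trend, not results from the papers.

def footprint(p_phys, p_th, p_target=1e-12, prefactor=0.1):
    """~2d^2 physical qubits per logical qubit at the smallest adequate
    odd distance d (assumes p_phys is below threshold)."""
    assert p_phys < p_th
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return 2 * d * d

p = 1e-3  # hypothetical physical error rate
for p_th in (0.002, 0.01):
    print(f"threshold {p_th:.1%}: ~{footprint(p, p_th)} physical qubits per logical qubit")
# -> roughly 10,658 vs 882: an order-of-magnitude difference in footprint.
```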
Approaching quantum computation from the unique topological perspective requires synchronized advancements across the entire Azure Quantum stack. Along with our recent demonstration of the building blocks for topological qubits, optimizing quantum error correction using Floquet codes represents a critical piece of the scientific foundation needed to achieve scaled quantum computation. These breakthroughs help establish a path and architecture for the industrial quantum machine.