Quantum Computer Error Correction Is Getting Practical
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
Quantum computers are gaining traction across fields from logistics to finance. But as most users know, they remain research experiments, limited by imperfections in hardware. Today's machines tend to suffer hardware failures (errors in the underlying quantum information carriers, called qubits) in times much shorter than a second. Compare that with the roughly one billion years of continuous operation before a transistor in a conventional computer fails, and it becomes obvious that we have a long way to go.
Companies building quantum computers, such as IBM and Google, have highlighted that their roadmaps include the use of "quantum error correction" to achieve what is known as fault-tolerant quantum computing as they scale to machines with 1,000 or more qubits.
Quantum error correction, or QEC for short, is an algorithm designed to identify and fix errors in quantum computers. It draws on validated mathematical approaches used to engineer special "radiation-hardened" classical microprocessors deployed in space and other extreme environments where errors are much more likely to occur.
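To make the classical analogy concrete, here is a minimal sketch, in plain Python with illustrative names, of the three-bit repetition code, the simplest of the classical redundancy schemes that QEC generalizes: encode one bit redundantly, let noise act, then correct by majority vote.

```python
import random

def encode(bit):
    # Redundantly copy one logical bit into three physical bits.
    return [bit, bit, bit]

def noisy_channel(bits, p):
    # Flip each physical bit independently with probability p.
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # Majority vote: corrects any single bit-flip, fails on two or more.
    return int(sum(bits) >= 2)

# One round: the logical bit survives unless 2 or more of 3 bits flip.
print(decode(noisy_channel(encode(1), p=0.05)))
```

Real QEC is far subtler, because quantum states cannot simply be copied or directly inspected, but the detect-and-correct logic is the same.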
QEC is real and has seen many partial demonstrations in laboratories around the world; these initial steps make it clear that it's a viable approach. 2021 may just be the year when it is convincingly demonstrated to give a net benefit in real quantum-computing hardware.
Unfortunately, with corporate roadmaps and complex scientific literature highlighting an increasing number of relevant experiments, an emerging narrative is falsely painting QEC as a future panacea for the life-threatening ills of quantum computing.
QEC, in combination with the theory of fault-tolerant quantum computing, suggests that engineers can in principle build an arbitrarily large quantum computer that, if operated correctly, would be capable of arbitrarily long computations. This would be a stunningly powerful achievement. The prospect that it can be realized underpins the entire field of quantum computer science: Replace all quantum computing hardware with "logical" qubits running QEC, and even the most complex algorithms come into reach. For instance, Shor's algorithm could be deployed to render Bitcoin insecure with just a few thousand error-corrected logical qubits. On its face, that doesn't seem far from the 1,000+ qubit machines promised by 2023. (Spoiler alert: this is the wrong way to interpret these numbers.)
The challenge comes when we look at the implementation of QEC in practice. The algorithm by which QEC is performed itself consumes resources: more qubits and many additional operations.
Returning to the promise of 1,000-qubit machines in industry, so many resources might be required that those 1,000 qubits yield only, say, 5 useful logical qubits.
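To illustrate where numbers like these can come from (an assumption made for the sake of arithmetic, not any vendor's specification), consider the widely studied surface code, in which one logical qubit of code distance d consumes roughly 2d² − 1 physical qubits:

```python
# Back-of-envelope overhead arithmetic. Assumption: a surface code, where
# one distance-d logical qubit uses about 2*d**2 - 1 physical qubits
# (d**2 data qubits plus d**2 - 1 syndrome-measurement qubits).
def physical_per_logical(d):
    return 2 * d**2 - 1

physical_qubits = 1000
d = 10                                    # code distance sets error suppression
print(physical_per_logical(d))            # -> 199 physical qubits per logical
print(physical_qubits // physical_per_logical(d))   # -> 5 logical qubits
```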
Even worse, the extra work required to apply QEC currently introduces more error than it corrects. QEC research has made great strides since the earliest efforts in the late 1990s, introducing mathematical tricks that reduce the associated overheads or enable computations on logical qubits to be conducted more easily, without interfering with the error correction itself. And the gains have been enormous, bringing the break-even point (where it's actually better to perform QEC than not) at least 1,000 times closer than original predictions. Still, the most advanced experimental demonstrations show that in most cases it's at least 10 times better to do nothing than to apply QEC.
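The break-even idea can be seen even in the toy repetition code sketched earlier. The calculation below is an idealized model that ignores the cost of the correction machinery itself, which is exactly what dominates in real hardware:

```python
# Logical error rate of the ideal three-bit repetition code under
# independent bit-flips with probability p: it fails when two or more
# bits flip, so p_L = 3*p**2*(1 - p) + p**3.
def logical_error(p):
    return 3 * p**2 * (1 - p) + p**3

for p in (0.3, 0.1, 0.01):
    p_l = logical_error(p)
    print(f"p = {p:>4}:  encoded error = {p_l:.5f}  ({p / p_l:.0f}x better)")

# In a real device, encoding and syndrome extraction are themselves noisy
# operations; when that machinery adds more error than the code removes,
# doing nothing wins -- the situation described above.
```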
This is why a major public-sector research program run by the U.S. intelligence community has spent the last four years seeking to finally cross the break-even point in experimental hardware, for just one logical qubit. We may well unambiguously achieve this goal in 2021, but that's the beginning of the journey, not the end.
Crossing the break-even point and achieving useful, functioning QEC doesn't mean we suddenly enter an era with no hardware errors; it just means we'll have fewer. QEC totally suppresses errors only if we dedicate infinite resources to the process, an obviously untenable proposition. Moreover, even setting aside those theoretical limits, QEC is imperfect and relies on many assumptions about the properties of the errors it's tasked with correcting. Small deviations from these mathematical models (which happen all the time in real labs) can reduce QEC's effectiveness further.
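The "infinite resources" point follows from how coded error rates scale. A standard rule of thumb (an assumed, idealized scaling for codes operating below their error threshold) is that the logical error rate falls exponentially with the code distance d, reaching zero only as d, and hence the qubit count, goes to infinity:

```python
# Assumed, idealized below-threshold scaling: p_logical ~ (p/p_th)**((d+1)/2).
# Each step up in distance d costs many more physical qubits, and the error
# rate reaches exactly zero only in the limit d -> infinity.
p, p_th = 1e-3, 1e-2            # illustrative physical error rate and threshold
for d in (3, 5, 7, 9, 11):
    print(f"d = {d:>2}:  p_logical ~ {(p / p_th) ** ((d + 1) // 2):.0e}")
```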
Instead of thinking of QEC as a single medicine capable of curing everything that goes wrong in a quantum computer, we should instead consider it an important part of a drug cocktail.
However special QEC may be in the abstract mathematics of quantum computing, in practice it's really just a form of what's known as feedback stabilization. Feedback is the same well-studied technique used to regulate your speed while driving with cruise control or to keep walking robots from tipping over. This realization opens new opportunities to attack the problem of error in quantum computing holistically, and it may ultimately help us move closer to what we actually want: real quantum computers with far fewer errors.
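To see the family resemblance, here is feedback stabilization in its most stripped-down form, a proportional controller of the cruise-control variety (purely illustrative numbers):

```python
# Minimal feedback loop: measure the deviation from a setpoint, then
# apply a correction proportional to it. In QEC the "measurement" is a
# syndrome extraction and the "correction" is a recovery operation.
setpoint, speed, gain = 100.0, 80.0, 0.5
for step in range(6):
    error = setpoint - speed    # measure how far off we are
    speed += gain * error       # push back against the deviation
    print(f"step {step}: speed = {speed:.1f}")
```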
Fortunately, there are signs that a view to practicality is emerging within the research community. For instance, there is greater emphasis on approximate approaches to QEC that help deal with the most nefarious errors in a particular system, at the expense of being a bit less effective against others.
The combination of hardware-level, open-loop quantum control with feedback-based QEC may also be particularly effective. Quantum control permits a form of "error virtualization" in which the overall error properties of the hardware are transformed before QEC encoding is applied. The benefits include reduced overall error rates, better error uniformity between devices, better hardware stability against slow variations, and greater compatibility of the error statistics with the assumptions of QEC. Each of these can reduce the resource overheads needed to implement QEC efficiently. Such a holistic view of the problem of error in quantum computing, from quantum control at the hardware level through to algorithmic QEC encoding, can improve net quantum computational performance with fixed hardware resources.
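As a cartoon of what open-loop control buys you before any QEC runs (a hand-rolled sketch under a quasi-static noise assumption, not any particular product's API): a slowly drifting frequency offset scrambles qubit phases, but refocusing the evolution midway, as in a spin echo, cancels the drift entirely.

```python
import numpy as np

# Quasi-static dephasing: each run of the experiment sees a random but
# constant frequency offset, producing a spread of accumulated phases.
rng = np.random.default_rng(0)
offsets = rng.normal(0.0, 1.0, size=10_000)
T = 1.0

free_phase = offsets * T                    # no control: full phase spread
echo_phase = offsets * T/2 - offsets * T/2  # echo flips the sign midway

print(f"no control: phase spread = {np.std(free_phase):.3f}")
print(f"with echo:  phase spread = {np.std(echo_phase):.3f}")  # -> 0.000
```

In this idealized limit the slow error is removed, or "virtualized," before the QEC layer ever sees it; real noise is only partly quasi-static, which is why control reshapes, rather than eliminates, the error statistics.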
None of this discussion means that QEC is somehow unimportant for quantum computing. And there will always remain a central role for exploratory research into the mathematics of QEC, because you never know what a clever colleague might discover. Still, a drive to practical outcomes might even lead us to totally abandon the abstract notion of fault-tolerant quantum computing and replace it with something more like fault-tolerant-enough quantum computing. That might be just what the doctor ordered.
Michael J. Biercuk, a professor of quantum physics and quantum technology at the University of Sydney, is the founder and CEO of Q-CTRL.