Review

Superconducting Circuits for Quantum Information: An Outlook


Science  08 Mar 2013:
Vol. 339, Issue 6124, pp. 1169-1174
DOI: 10.1126/science.1231930

Abstract

The performance of superconducting qubits has improved by several orders of magnitude in the past decade. These circuits benefit from the robustness of superconductivity and the Josephson effect, and at present they have not encountered any hard physical limits. However, building an error-corrected information processor with many such qubits will require solving specific architecture problems that constitute a new field of research. For the first time, physicists will have to master quantum error correction to design and operate complex active systems that are dissipative in nature, yet remain coherent indefinitely. We offer a view on some directions for the field and speculate on its future.


The concept of solving problems with the use of quantum algorithms, introduced in the early 1990s (1, 2), was welcomed as a revolutionary change in the theory of computational complexity, but the feat of actually building a quantum computer was then thought to be impossible. The invention of quantum error correction (QEC) (3–6) introduced hope that a quantum computer might one day be built, most likely by future generations of physicists and engineers. However, less than 20 years later, we have witnessed so many advances that successful quantum computations, and other applications of quantum information processing (QIP) such as quantum simulation (7, 8) and long-distance quantum communication (9), appear reachable within our lifetime, even if many discoveries and technological innovations are still to be made.

Below, we discuss the specific physical implementation of general-purpose QIP with superconducting qubits (10). A comprehensive review of the history and current status of the field is beyond the scope of this article. Several detailed reviews on the principles and operations of these circuits already exist (11–14). Here, we raise only a few important aspects needed for the discussion before proceeding to some speculations on future directions.

Toward a Quantum Computer

Developing a quantum computer involves several overlapping and interconnecting stages (Fig. 1). First, a quantum system has to be controlled sufficiently to hold one bit of quantum information long enough for it to be written, manipulated, and read. In the second stage, small quantum algorithms can be performed; these two stages require that the first five DiVincenzo criteria be satisfied (15). The following, more complex stages, however, introduce and require QEC (3–6). In the third stage, some errors can be corrected by quantum nondemolition readout of error syndromes such as parity. It also becomes possible to stabilize the qubit by feedback into any arbitrary state (16, 17), including dynamical ones (18–21). This stage was reached first by trapped ions (22), by Rydberg atoms (16), and most recently by superconducting qubits (23–25). In the next (fourth) stage, the goal is to realize a quantum memory, where QEC achieves a coherence time longer than that of any of the individual components. This goal is as yet unfulfilled in any system. The final two stages in reaching the ultimate goal of fault-tolerant quantum information processing (26) require the ability to do all single-qubit operations on one logical qubit (which is an effective qubit protected by active error correction mechanisms), and the ability to perform gate operations between several logical qubits; in both stages the enhanced coherence lifetime of the qubits should be preserved.

Fig. 1

Seven stages in the development of quantum information processing. Each advancement requires mastery of the preceding stages, but each also represents a continuing task that must be perfected in parallel with the others. Superconducting qubits are the only solid-state implementation at the third stage, and they now aim at reaching the fourth stage (green arrow). In the domain of atomic physics and quantum optics, the third stage had been previously attained by trapped ions and by Rydberg atoms. No implementation has yet reached the fourth stage, where a logical qubit can be stored, via error correction, for a time substantially longer than the decoherence time of its physical qubit components.

Superconducting Circuits: Hamiltonians by Design

Unlike microscopic entities—electrons, atoms, ions, and photons—on which other qubits are based, superconducting quantum circuits are based on the electrical (LC) oscillator (Fig. 2A) and are macroscopic systems with a large number of (usually aluminum) atoms assembled in the shape of metallic wires and plates. The operation of superconducting qubits is based on two robust phenomena: superconductivity, which is the frictionless flow of electrical fluid through the metal at low temperature (below the superconducting phase transition), and the Josephson effect, which endows the circuit with nonlinearity without introducing dissipation or dephasing.

Fig. 2

(A) Superconducting qubits consist of simple circuits that can be described as the parallel combination of a Josephson tunnel element (cross) with inductance LJ, a capacitance C, and an inductance L. The flux Φ threads the loop formed by both inductances. (B) Their quantum energy levels can be sharp and long-lived if the circuit is sufficiently decoupled from its environment. The shape of the potential seen by the flux Φ and the resulting level structure can be varied by changing the values of the electrical elements. This example shows the fluxonium parameters, with an imposed external flux of ¼ flux quantum. Only two of three corrugations are shown fully. (C) A Mendeleev-like but continuous "table" of artificial atom types: Cooper pair box (29), flux qubit (33), phase qubit (35), quantronium (37), transmon (39), fluxonium (40), and hybrid qubit (41). The horizontal and vertical coordinates correspond to fabrication parameters that determine the inverse of the number of corrugations in the potential and the number of levels per well, respectively.

The collective motion of the electron fluid around the circuit is described by the flux Φ threading the inductor, which plays the role of the center-of-mass position in a mass-spring mechanical oscillator (27). A Josephson tunnel junction transforms the circuit into a true artificial atom, for which the transition from the ground state to the excited state (|g〉-|e〉) can be selectively excited and used as a qubit, unlike in the pure LC harmonic oscillator (Fig. 2B). The Josephson junction can be placed in parallel with the inductor, or can even replace the inductor completely, as in the case of the so-called "charge" qubits. Potential energy functions of various shapes can be obtained by varying the relative strengths of three characteristic circuit energies associated with the inductance, capacitance, and tunnel element (Fig. 2, B and C). Originally, the three basic types were known as charge (28, 29), flux (30–33), and phase (34, 35). The performance of all types of qubits has markedly improved as the fabrication, measurement, and materials issues affecting coherence have been tested, understood, and improved. In addition, there has been a diversification into further design variations, such as the quantronium (36, 37), transmon (38, 39), fluxonium (40), and "hybrid" (41) qubits; all of these are constructed from the same elements but seek to improve performance by reducing their sensitivity to decoherence mechanisms encountered in earlier designs. The continuing evolution of designs is a sign of the robustness and future potential of the field.
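To make this concrete, here is a minimal numerical sketch (Python with NumPy; all parameter values are illustrative assumptions, not those of any published device) that diagonalizes a generic single-mode circuit Hamiltonian of the type discussed above, H = 4E_C n² + (E_L/2)φ² − E_J cos(φ − φ_ext), on a discretized phase grid. Changing the ratios of the Josephson, charging, and inductive energies reshapes the potential and the level structure, as illustrated in Fig. 2, B and C.

```python
import numpy as np

def transition_energies(EJ, EC, EL, phi_ext=0.0, n_grid=801, phi_max=6 * np.pi):
    """Lowest transition energies of H = 4*EC*n^2 + 0.5*EL*phi^2 - EJ*cos(phi - phi_ext).

    Energies are in the same (arbitrary) units as EJ, EC, EL, e.g. GHz with h = 1.
    Illustrative sketch only; the parameters used below are assumptions, not device values.
    """
    phi = np.linspace(-phi_max, phi_max, n_grid)
    d = phi[1] - phi[0]
    # Charging (kinetic) term 4*EC*n^2 with n = -i d/dphi, via a finite-difference Laplacian
    lap = (np.diag(np.ones(n_grid - 1), -1) - 2 * np.eye(n_grid)
           + np.diag(np.ones(n_grid - 1), 1)) / d**2
    H = -4 * EC * lap + np.diag(0.5 * EL * phi**2 - EJ * np.cos(phi - phi_ext))
    w = np.linalg.eigvalsh(H)
    return w[1:4] - w[0]          # first three transition energies from the ground state

# A deep, nearly harmonic single well: a weakly anharmonic ladder of levels
print("single deep well :", np.round(transition_energies(EJ=20.0, EC=0.3, EL=2.0), 3))
# A shallow, strongly anharmonic regime, biased at half a flux quantum (fluxonium-like)
print("fluxonium-like   :", np.round(transition_energies(EJ=8.0, EC=2.5, EL=0.5, phi_ext=np.pi), 3))
```

Sweeping phi_ext in the second call shows how the level structure of a fixed circuit can additionally be tuned by an external flux.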

When several of these qubits, which are nonlinear oscillators behaving as artificial atoms, are coupled to true oscillators (photons in a microwave cavity), one obtains, for low-lying excitations, an effective multiqubit, multicavity system Hamiltonian of the form

H/\hbar = \sum_j \left[ \omega_j\, b_j^{\dagger} b_j + \frac{\alpha_j}{2}\, b_j^{\dagger} b_j^{\dagger} b_j b_j \right] + \sum_m \omega_m\, a_m^{\dagger} a_m + \sum_{j,m} \chi_{jm}\, a_m^{\dagger} a_m\, b_j^{\dagger} b_j \qquad (1)

describing anharmonic qubit mode amplitudes indexed by j coupled to harmonic cavity modes indexed by m (42). The symbols b, a, and ω refer to the qubit mode amplitudes, the cavity mode amplitudes, and the mode frequencies, respectively. When driven with appropriate microwave signals, this system can perform arbitrary quantum operations at speeds determined by the nonlinear interaction strengths α and χ, typically (43, 44) resulting in single-qubit gate times of 5 to 50 ns (α/2π ≈ 200 MHz) and two-qubit entangling gate times of 50 to 500 ns (χ/2π ≈ 20 MHz). We have neglected here the weak induced anharmonicity of the cavity modes.
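As a rough numerical companion to Eq. 1, the sketch below builds the single-qubit, single-cavity version of this Hamiltonian in a truncated Fock basis and prints the order-of-magnitude gate durations implied by the nonlinear strengths quoted above. The frequencies and truncation sizes are assumptions chosen only for illustration.

```python
import numpy as np

def destroy(dim):
    """Annihilation operator in a truncated Fock basis of dimension `dim`."""
    return np.diag(np.sqrt(np.arange(1, dim)), 1)

nq, nc = 3, 5                              # assumed truncation: 3 qubit levels, 5 cavity photons
b = np.kron(destroy(nq), np.eye(nc))       # anharmonic qubit mode
a = np.kron(np.eye(nq), destroy(nc))       # harmonic cavity mode
bd, ad = b.conj().T, a.conj().T

# Illustrative parameters in GHz (omega / 2*pi); alpha and chi follow the typical values in the text
wq, wc = 5.0, 7.0                          # assumed qubit and cavity frequencies
alpha = -0.200                             # qubit anharmonicity, |alpha|/2*pi ~ 200 MHz
chi = -0.020                               # dispersive (cross-Kerr) shift, |chi|/2*pi ~ 20 MHz

H = (wq * bd @ b + 0.5 * alpha * bd @ bd @ b @ b   # qubit term of Eq. 1 (single j)
     + wc * ad @ a                                 # cavity term (single m)
     + chi * ad @ a @ bd @ b)                      # dispersive coupling term

# Gate durations are set, to order of magnitude, by the inverse nonlinear strengths
print(f"Hamiltonian dimension: {H.shape[0]} x {H.shape[0]}, Hermitian: {np.allclose(H, H.conj().T)}")
print(f"single-qubit gate time scale ~ 1/|alpha| = {1 / abs(alpha):.0f} ns")
print(f"entangling gate time scale   ~ 1/|chi|   = {1 / abs(chi):.0f} ns")
```

These inverse strengths reproduce the lower ends of the 5-to-50 ns and 50-to-500 ns ranges quoted in the text.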

Proper design of the qubit circuit to minimize dissipation coming from the dielectrics surrounding the metal of the qubit, and to minimize radiation of energy into other electromagnetic modes or the circuit environment, led to qubit transition quality factors Q exceeding 1 million or coherence times on the order of 100 μs, which in turn make possible hundreds or even thousands of operations in one coherence lifetime (see Table 1). One example of this progression, for the case of the Cooper-pair box (28) and its descendants, is shown in Fig. 3A. Spectacular improvements have also been accomplished for transmission line resonators (45) and the other types of qubits, such as the phase qubit (35) or the flux qubit (46). Rather stringent limits can now be placed on the intrinsic capacitive (47) or inductive (43) losses of the junction, and we construe this to mean that junction quality is not yet the limiting factor in the further development of superconducting qubits.
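The numbers in this paragraph are connected by simple arithmetic: the quality factor is Q = ωT, and the number of operations per coherence time is the coherence time divided by a gate duration. The short sketch below works through this with representative values (the gate durations are assumptions chosen from within the ranges quoted above).

```python
import numpy as np

f_q = 6e9           # assumed qubit transition frequency, Hz
T = 100e-6          # coherence time of order 100 microseconds, as quoted in the text
t_gate_1q = 20e-9   # assumed single-qubit gate duration (within the 5-50 ns range)
t_gate_2q = 200e-9  # assumed two-qubit gate duration (within the 50-500 ns range)

Q = 2 * np.pi * f_q * T                       # transition quality factor
print(f"quality factor Q       ~ {Q:.1e}")                 # a few million, i.e. exceeding 10^6
print(f"single-qubit ops per T ~ {T / t_gate_1q:.0f}")      # thousands of operations
print(f"two-qubit ops per T    ~ {T / t_gate_2q:.0f}")      # hundreds of operations
```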

Table 1

Superconducting qubits: Desired parameter margins for scalability and the corresponding demonstrated values. Desired capability margins are numbers of successful operations or realizations of a component before failure. For the stability of the Hamiltonian, capability is the number of Ramsey shots that meaningfully would provide one bit of information on a parameter (e.g., the qubit frequency) during the time when this parameter has not drifted. Estimated current capability is expressed as number of superconducting qubits, given best decoherence times and success probabilities. Demonstrated successful performance is given in terms of the main performance characteristic of successful operation or Hamiltonian control (various units). A reset qubit operation forces a qubit to take a particular state. A Rabi flop denotes a single-qubit π rotation. A swap to bus is an operation to make a two-qubit entanglement between distant qubits. In a readout qubit operation, the readout must be QND or must operate on an ancilla without demolishing any memory qubit of the computer. Stability refers to the time scale during which a Hamiltonian parameter drifts by an amount corresponding to one bit of information, or the time scale it would take to find all such parameters in a complex system to this precision. Accuracy can refer to the degree to which a certain Hamiltonian symmetry or property can be designed and known in advance, the ratio by which a certain coupling can be turned on and off during operation, or the ratio of desired to undesired couplings. Yield is the number of quantum objects with one degree of freedom that can be made without failing or being out of specification to the degree that the function of the whole is compromised. Complexity is the overall number of interacting, but separately controllable, entangled degrees of freedom in a device. Question marks indicate that more experiments are needed for a conclusive result. Values given in rightmost column are compiled from recently published data and improve on a yearly basis.

Fig. 3

Examples of the "Moore's law" type of exponential scaling in performance of superconducting qubits during recent years. All types have progressed, but we focus here only on those in the leftmost part of Fig. 2C. (A) Improvement of coherence times for the "typical best" results associated with the first versions of major design changes. The blue, red, and green symbols refer to qubit relaxation, qubit decoherence, and cavity lifetimes, respectively. Innovations were introduced to avoid the dominant decoherence channel found in earlier generations. So far an ultimate limit on coherence seems not to have been encountered. Devices other than those in Fig. 2C: charge echo (63), circuit QED (44), 3D transmon (43), and improved 3D transmon (64, 65). For comparison, superconducting cavity lifetimes are given for a 3D transmon and separate 3D cavities (66). Even longer times in excess of 0.1 s have been achieved in similar 3D cavities for Rydberg atom experiments [e.g., (67)]. (B) Evolution of superconducting qubit QND readout. We plot versus time the main figure of merit, the number of bits that can be extracted from the qubit during its T1 lifetime (this number combines signal-to-noise ratio and speed). This quantity can also be understood as the number of measurements, each with one bit of precision, that would be possible before an error occurs. Data points correspond to the following innovations in design: a Cooper-pair box read by off-resonance coupling to a cavity whose frequency is monitored by a microwave pulse analyzed using a semiconductor high–electron mobility transistor amplifier (CPB+HEMT) [also called dispersive circuit QED (68)], an improved amplification chain reading a transmon using a superconductor preamplifier derived from the Josephson bifurcation amplifier (transmon+JBA) (49), and further improvement with another superconductor preamplifier derived from the Josephson parametric converter (51) combined with filter in 3D transmon cavity eliminating Purcell effect (3D-transmon+JPC+P-filter). Better amplifier efficiency, optimal signal processing, and longer qubit lifetimes are expected to maintain the rapid upward trend.

Nonetheless, it is not possible to reduce dissipation in a qubit independently of its readout and control systems (39). Here, we focus on the most useful and powerful type of readout, which is called a "quantum nondemolition" (QND) measurement. This type of measurement allows a continuous monitoring of the qubit state (48, 49). After a strong QND measurement, the qubit is left in one of two computational states, |g〉 or |e〉, depending on the result of the measurement, which has a classical binary value indicating g or e. There are three figures of merit that characterize this type of readout. The first is QND-ness, the probability that the qubit remains in the same state after the measurement, given that the qubit is initially in a definite state |g〉 or |e〉. The second is the intrinsic fidelity, the difference between the probabilities—given that the qubit is initially in a definite state |g〉 or |e〉—that the readout gives the correct and wrong answers (with this definition, the fidelity is zero when the readout value is uncorrelated with the qubit state). The last and most subtle readout figure of merit is efficiency, which characterizes the ratio of the number of controlled and uncontrolled information channels in the readout. Maximizing this ratio is of utmost importance for performing remote entanglement by measurement (50).
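The first two figures of merit can be read off directly from repeated prepare-and-measure statistics. The sketch below implements the definitions above on a set of hypothetical probabilities (the numbers are invented for illustration, and averaging over the two preparations is our own convention rather than one stated in the text).

```python
# Hypothetical prepare/readout statistics, for illustration only.
# p_read[s][r]: probability that the readout returns r given preparation in state s.
p_read = {"g": {"g": 0.99, "e": 0.01},
          "e": {"g": 0.03, "e": 0.97}}
# p_stay[s]: probability that the qubit is still in state s just after the measurement.
p_stay = {"g": 0.995, "e": 0.96}

# Intrinsic fidelity: probability of the correct answer minus that of the wrong answer,
# averaged over the two preparations; 0 for an uncorrelated readout, 1 for a perfect one.
fidelity = 0.5 * ((p_read["g"]["g"] - p_read["g"]["e"]) +
                  (p_read["e"]["e"] - p_read["e"]["g"]))
# QND-ness: probability that the qubit remains in its prepared state after the measurement.
qnd_ness = 0.5 * (p_stay["g"] + p_stay["e"])

print(f"intrinsic fidelity ~ {fidelity:.3f}")
print(f"QND-ness           ~ {qnd_ness:.3f}")
```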

Like qubit coherence, and benefiting from it, QND readout performance has progressed spectacularly (Fig. 3B). It is now possible to acquire more than N = 2000 bits of information from a qubit before it decays through dissipation (Fig. 3A), or, to phrase it more crudely, to read a qubit once in a time that is a small fraction (1/N) of its lifetime. This is a crucial capability for undertaking QEC in the fourth stage of Fig. 1, because in order to fight errors, one has to monitor qubits at a pace faster than the rate at which those errors occur. Efficiencies in QND superconducting qubit readout are also progressing rapidly and will soon routinely exceed 0.5, as indicated by recent experiments (25, 51).

Is It Just About Scaling Up?

Up to now, most of the experiments have been relatively small scale (only a handful of interacting qubits or degrees of freedom; see Table 1). Furthermore, almost all the experiments so far are "passive"—they seek to maintain coherence only long enough to entangle quantum bits or demonstrate some rudimentary capability before, inevitably, decoherence sets in. The next stages of QIP require one to realize an actual increase in the coherence time via error correction, first only during an idle "memory" state, but later also in the midst of a functioning algorithm. This requires building new systems that are "active," using continuous measurements and real-time feedback to preserve the quantum information through the startling process of correcting qubit errors without actually learning what the computer is calculating. Given the fragility of quantum information, it is commonly believed that the continual task of error correction will occupy the vast majority of the effort and the resources in any large quantum computer.

Using the current approaches to error correction, the next stages of development unfortunately demand a substantial increase in complexity, requiring dozens or even thousands of physical qubits per bit of usable quantum information, and challenging our currently limited abilities to design, fabricate, and control a complex Hamiltonian (second part of Table 1). Furthermore, all of the DiVincenzo engineering margins on each piece of additional hardware still need to be maintained or improved while scaling up. So is advancing to the next stage just a straightforward engineering exercise of mass-producing large numbers of exactly the same kinds of circuits and qubits that have already been demonstrated? And will this mean the end of the scientific innovations that have so far driven progress forward?

We argue that the answers to both questions will probably be "No." The work by the community during the past decade and a half, leading up to the capabilities summarized in the first part of Table 1, may indeed constitute an existence proof that building a large-scale quantum computer is not physically impossible. However, identifying the best, most efficient, and most robust path forward in a technology's development is a task very different from merely satisfying oneself that it should be possible. So far, we have yet to see a dramatic "Moore's law" growth in the complexity of quantum hardware. What, then, are the main challenges to be overcome?

Simply fabricating a wafer with a large number of elements used today is probably not the hard part. After all, some of the biggest advantages of superconducting qubits are that they are merely circuit elements, which are fabricated in clean rooms, interact with each other via connections that are wired up by their designer, and are controlled and measured from the outside with electronic signals. The current fabrication requirements for superconducting qubits are not particularly daunting, especially in comparison to modern semiconductor integrated circuits (ICs). A typical qubit or resonant cavity is a few millimeters in overall size, with features that are mostly a few micrometers (even the smallest Josephson junction sizes are typically 0.2 μm on a side in a qubit). There is successful experience with fabricating and operating superconducting ICs with hundreds to thousands of elements on a chip, such as the transition-edge sensors with SQUID (superconducting quantum interference device) readout amplifiers, each containing several Josephson junctions (52), or microwave kinetic inductance detectors composed of arrays of high-Q (>10⁶) linear resonators without Josephson junctions, which are being developed (53) and used to great benefit in the astrophysics community.

Nonetheless, designing, building, and operating a superconducting quantum computer presents substantial and distinct challenges relative to semiconductor ICs or the other existing versions of superconducting electronics. Conventional microprocessors use overdamped logic, which provides a sort of built-in error correction. They do not require high-Q resonances, and clocks or narrow-band filters are in fact off-chip and provided by special elements such as quartz crystals. Therefore, small interactions between circuit elements may cause heating or offsets but do not lead to actual bit errors or circuit failures. In contrast, an integrated quantum computer will be essentially a very large collection of very high-Q, phase-stable oscillators, which need to interact only in the ways we program. It is no surprise that the leading quantum information technology has been, and today remains, trapped ions, which are the best clocks ever built. In contrast with the ions, however, the artificially made qubits of a superconducting quantum computer will never be perfectly identical (see Table 1). Because operations on the qubits need to be controlled accurately to several significant digits, the properties of each part of the computer would first need to be characterized with some precision, have control signals tailored to match, and remain stable while the rest of the system is tuned up and then operated. The need for high absolute accuracy might therefore be circumvented if we can obtain a very high stability of qubit parameters (Table 1); recent results (43) are encouraging and exceed expectations, but more information is needed. The ability of electronic control circuitry to tailor waveforms, for instance with the composite pulse sequence techniques well known from nuclear magnetic resonance (54), can remove first-order sensitivity to variations in qubit parameters or in control signals, at the expense of some increase in gate time and a requirement for a concomitant increase in coherence time.
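As a minimal illustration of how composite pulses trade gate time for insensitivity to control errors, the sketch below compares a bare π pulse to a Wimperis BB1 composite sequence under a systematic amplitude (pulse-length) error. The sequence and error model are standard NMR constructions, used here only as an illustrative assumption rather than a description of any particular superconducting-qubit experiment.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot(angle, phase):
    """Rotation by `angle` about an axis in the xy-plane at azimuthal angle `phase`."""
    axis = np.cos(phase) * SX + np.sin(phase) * SY
    return np.cos(angle / 2) * I2 - 1j * np.sin(angle / 2) * axis

def gate_fidelity(U, V):
    """Overlap |Tr(U^dag V)| / 2 between two single-qubit unitaries."""
    return abs(np.trace(U.conj().T @ V)) / 2

theta = np.pi                            # target: a pi rotation about x
phi = np.arccos(-theta / (4 * np.pi))    # BB1 correction phase for this target angle
U_ideal = rot(theta, 0.0)

for eps in (0.01, 0.05, 0.10):           # fractional amplitude (pulse-length) error
    s = 1 + eps
    naive = rot(theta * s, 0.0)
    # BB1: the bare pulse followed in time by pi(phi), 2pi(3*phi), pi(phi) correction pulses,
    # all suffering the same fractional error; the matrix product reads right to left in time.
    bb1 = (rot(np.pi * s, phi) @ rot(2 * np.pi * s, 3 * phi)
           @ rot(np.pi * s, phi) @ rot(theta * s, 0.0))
    print(f"eps = {eps:.2f}: naive fidelity = {gate_fidelity(U_ideal, naive):.6f}, "
          f"BB1 fidelity = {gate_fidelity(U_ideal, bb1):.6f}")
```

The composite version uses four pulses instead of one, which is exactly the trade mentioned above: a longer gate, and hence a demand for longer coherence, in exchange for suppressed sensitivity to calibration errors.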

Even if the problem of stability is solved, unwanted interactions or cross-talk between the parts of these complex circuits will still cause problems. In the future, we must know and control the Hamiltonian to several digits, and for many qubits. This is beyond the current capability (~1 to 10%; see Table 1). Moreover, the number of measurements and the amount of data required to characterize a system of entangled qubits appears to grow exponentially with their number, so the new techniques for "debugging" quantum circuits (55) will have to be further developed. In the stages ahead, one must design, build, and operate systems with more than a few dozen degrees of freedom, which, as a corollary to the power of quantum computation, are not even possible to simulate classically. This suggests that large quantum processors should perhaps consist of smaller modules whose operation and functionality can be separately tested and characterized.
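The exponential growth mentioned here is easy to quantify: an n-qubit density matrix has 4^n − 1 independent real parameters, and brute-force state tomography with local Pauli measurements uses of order 3^n measurement settings, before even counting the repeated shots needed per setting. The snippet below simply evaluates these counts.

```python
for n in (2, 5, 10, 20):
    params = 4**n - 1   # independent real parameters of an n-qubit density matrix
    settings = 3**n     # local Pauli measurement bases for brute-force state tomography
    print(f"n = {n:2d} qubits: {params:>16,d} parameters, {settings:>14,d} settings")
```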

A second challenge in Hamiltonian control (or circuit cross-talk) is posed by the need to combine long-lived qubits with the fast readout, qubit reset or state initialization, and high-speed controls necessary to perform error correction. This means that modes with much lower Q (~1000 for a 50-ns measurement channel) will need to be intimately mixed with the long-lived qubits with very high Q (~10⁶ to 10⁹), which requires exquisite isolation and shielding between the parts of our high-frequency integrated circuit. If interactions between a qubit and its surroundings cause even 0.1% of the energy of a qubit to leak into a low-Q mode, we completely spoil its lifetime. Although the required levels of isolation are probably feasible, these challenges have not yet been faced or solved by conventional superconducting or semiconducting circuit designers. In our view, the next stages of development will require appreciable advances, both practical and conceptual, in all aspects of Hamiltonian design and control.
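A crude estimate shows why such a small leak matters (a sketch with assumed numbers and a simple participation-ratio model, not a calculation from the article): if a fraction p of the qubit's energy resides in a mode of quality factor Q_low, the qubit inherits a decay rate of roughly p·ω/Q_low.

```python
import numpy as np

f_q = 6e9        # assumed qubit frequency, Hz
Q_low = 1_000    # low-Q measurement/control mode, as in the ~50-ns channel above
p = 1e-3         # 0.1% of the qubit energy participating in that mode

kappa_low = 2 * np.pi * f_q / Q_low      # energy decay rate of the low-Q mode (1/s)
T1_cap = 1 / (p * kappa_low)             # qubit lifetime limit inherited from the leak

print(f"low-Q mode lifetime    ~ {1e9 / kappa_low:.0f} ns")
print(f"induced qubit T1 limit ~ {1e6 * T1_cap:.0f} us")   # well below the ~100 us coherence quoted earlier
```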

What Will We Learn About Active Architectures During the Next Stage?

How long might it take to realize robust and practical error correction with superconducting circuits? This will depend on how rapidly the experimental techniques and capabilities (Fig. 3, A and B) continue to advance, but also on the architectural approach to QEC, which might considerably modify both the necessary circuit complexity and the performance limits (elements of Table 1) that are required. Several different approaches exist that are theoretically well developed (1, 2) but remain relatively untested in the real world.

The canonical models for QEC are the stabilizer codes (3–6). Here, information is redundantly encoded in a register of entangled physical qubits (typically, at least seven) to create a single logical qubit. Assuming that errors occur singly, one detects them by measuring a set of certain collective properties (known as stabilizer operators) of the qubits, and then applies appropriate additional gates to undo the errors before the desired information is irreversibly corrupted. Thus, an experiment to perform gates between a pair of logically encoded qubits might take a few dozen qubits, with hundreds to thousands of individual operations. To reach a kind of "break-even" point and perform correctly, there must be less than one error on average during a single pass of the QEC cycle. For a large calculation, the codes must then be concatenated, with each qubit again being replaced by a redundant register, in a treelike hierarchy. The so-called error-correction threshold, where the resources required for this process of expansion begin to converge, is usually estimated (26) to lie in the range of error rates of 10⁻³ to 10⁻⁴, requiring values of 10³ to 10⁴ for the elements of Table 1. Although these performance levels and complexity requirements might no longer be inconceivable, they are nonetheless beyond the current state of the art, and rather daunting.
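To make the break-even requirement concrete, here is a toy Monte Carlo of the simplest redundant encoding, the three-qubit bit-flip repetition code with majority-vote correction (a deliberately simplified stand-in for the seven-qubit stabilizer codes discussed above): one round of correction reduces the error rate only when the per-qubit error probability per QEC pass is below a code-dependent threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(p, n_trials=200_000):
    """One round of the 3-qubit bit-flip repetition code with independent flip probability p."""
    flips = rng.random((n_trials, 3)) < p
    # Majority-vote correction fails when two or more of the three qubits flipped in the same round.
    return np.mean(flips.sum(axis=1) >= 2)

for p in (0.6, 0.3, 0.05, 0.005):
    pl = logical_error_rate(p)
    print(f"physical error {p:>5}: logical error ~ {pl:.2e} ({'helps' if pl < p else 'hurts'})")
```

Concatenation repeats this map at each level of the hierarchy, so the logical error rate converges toward zero only when the physical rate starts below threshold; the far more demanding fault-tolerant constructions are what push the quoted thresholds down to the 10⁻³ to 10⁻⁴ range.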

A newer approach (56–58) is the "surface code" model of quantum computing, where a large number of identical physical qubits are connected in a type of rectangular grid (or "fabric"). By having specific linkages between groups of four adjacent qubits, and fast QND measurements of their parity, the entire fabric is protected against errors. One appeal of this strategy is that it requires a minimum number of different types of elements, and once the development of the elementary cell is successful, the subsequent stages of development (the fourth, fifth, and sixth stages in Fig. 1) might simply be achieved by brute-force scaling. The second advantage is that the allowable error rates are appreciably higher, even on the order of current performance levels (a couple of percent). However, there are two drawbacks: (i) the resource requirements (between 100 and 10,000 physical qubits per logical qubit) are perhaps even higher than in the QEC codes (58), and (ii) the desired emergent properties of this fabric are obtained only after hundreds, if not thousands, of qubits have been assembled and tested.
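The scaling behind these resource counts can be illustrated with a commonly used rule of thumb (the prefactor, threshold value, and exponent below are rough assumptions that vary between analyses, shown only to convey the trend): the logical error rate of a distance-d surface code falls roughly as (p/p_th)^((d+1)/2), while the number of physical qubits per logical qubit grows roughly as 2d².

```python
def distance_needed(p_phys, p_target, p_th=0.01):
    """Smallest odd code distance d with an estimated logical error rate below p_target.

    Uses the rough scaling p_L ~ 0.1 * (p_phys / p_th)**((d + 1) / 2); assumptions only.
    """
    assert p_phys < p_th, "rule of thumb only applies below threshold"
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

for p_phys in (5e-3, 1e-3):
    for p_target in (1e-6, 1e-12):
        d = distance_needed(p_phys, p_target)
        print(f"p_phys = {p_phys:.0e}, target = {p_target:.0e}: "
              f"distance {d}, ~{2 * d * d} physical qubits per logical qubit")
```

Read against the 100-to-10,000 figure quoted above, this shows how strongly the overhead depends on how far below threshold the physical error rate sits.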

A third strategy is based on modules of nested complexity. The basic element is a small register (50) consisting of a logical memory qubit, which stores quantum information while performing the usual kind of local error correction, and some additional communication qubits that can interact with the memory and with other modules. By entangling the communication qubits, one can distribute the entanglement and eventually perform a general computation between modules. Here, the operations between the communication qubits can have relatively high error rates, or even be probabilistic and sometimes fail entirely, provided that the communication schemes have some modest redundancy and robustness. The adoption of techniques from cavity quantum electrodynamics (QED) (59), and the advantages of transmission lines for routing microwave photons [now known as circuit QED (12, 44, 60)], might make on-chip versions of these schemes with superconducting circuits an attractive alternative. Although this strategy can be viewed as less direct and requires a variety of differing parts, its advantage is that stringent quality tests are easier to perform at the level of each module, and hidden design flaws might be recognized at earlier stages. Finally, once modules with sufficient performance are in hand, they can then be programmed to realize any of the other schemes in an additional "software layer" of error correction.

Finally, the best strategy might include ideas that are radically different from those considered standard fare in quantum information science. Much may be gained by looking for shortcuts that are hardware-specific and optimized for the strengths and weaknesses of a particular technology. For instance, all of the schemes described above are based on a "qubit register model," where one builds the larger Hilbert space and the required redundancy from a collection of many individual two-level systems. But for superconducting circuits, the "natural units" are oscillators with varying degrees of nonlinearity, rather than true two-level systems. The use of noncomputational states beyond the first two levels is of course well known in atomic physics, and such states have already been used as a shortcut to two- and three-qubit gates in superconducting circuits (23, 61). Under the right conditions, the use of nonlinear oscillators with many accessible energy levels could replace the function of several qubits without introducing new error mechanisms. As a concrete example of the power of this approach, a recent proposal (62) for using a cavity as a protected memory requires only one ancilla and one readout channel—a real decrease in complexity.
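A minimal numerical illustration of this last idea (in the spirit of the cavity-memory proposal (62), but with arbitrary, assumed parameters): two-component cat states, superpositions of the coherent states |α〉 and |−α〉, have definite photon-number parity, and losing a single photon flips that parity. Parity can therefore serve as an error syndrome that is monitored without learning the encoded information.

```python
import numpy as np
from math import factorial

N = 30          # Fock-space truncation (assumed ample for |alpha| = 2)
alpha = 2.0     # assumed cat amplitude

def coherent(amp):
    """Coherent state |amp> in the truncated Fock basis."""
    n = np.arange(N)
    return np.exp(-abs(amp) ** 2 / 2) * amp ** n / np.sqrt([float(factorial(k)) for k in n])

def normalize(v):
    return v / np.linalg.norm(v)

a_op = np.diag(np.sqrt(np.arange(1, N)), 1)   # photon annihilation operator
parity = np.diag((-1.0) ** np.arange(N))      # photon-number parity operator

cat_even = normalize(coherent(alpha) + coherent(-alpha))
cat_odd = normalize(coherent(alpha) - coherent(-alpha))

print("parity of even cat:", round(cat_even @ parity @ cat_even, 3))   # +1
print("parity of odd cat :", round(cat_odd @ parity @ cat_odd, 3))     # -1

# A single photon loss (action of the annihilation operator) maps one cat onto the other:
# the parity syndrome flips, flagging the error, while the stored amplitude survives.
after_loss = normalize(a_op @ cat_even)
print("parity after one photon loss:", round(after_loss @ parity @ after_loss, 3))
print("overlap with the odd cat    :", round(abs(after_loss @ cat_odd), 3))
```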

How architectural choices like these affect our ability to perform error-corrected information processing will be a key scientific question occupying this field in the near future, and will probably take several years to resolve. The knowledge garnered in this process has the potential to substantially change the resources required for building quantum computers, quantum simulators, or quantum communication systems that are actually useful.

The Path Forward

The field of QIP with superconducting circuits has made dramatic progress, and has already demonstrated most of the basic functionality with reasonable (or even surprising) levels of performance. Remarkably, we have not yet encountered any fundamental physical principles that would prohibit the building of quite large quantum processors. The demonstrated capabilities of superconducting circuits, as in trapped ions and cold atoms, mean that QIP is beginning what may be one of its most interesting phases of development. Here, one enters a true terra incognita for complex quantum systems, as QEC becomes more than a theoretical discipline. As in the past, this era will include new scientific innovations and basic questions to be answered. Even if this stage is successful, there will remain many further stages of development and technical challenges to be mastered before useful quantum information processing could become a reality. However, we think it is unlikely to become a purely technological enterprise, like sending a man to the Moon, in the foreseeable future. After all, even the Moore's law progression of CMOS integrated circuits over the past four decades has not brought the end of such fields as semiconductor physics or nanoscience, but rather enabled, accelerated, and steered them in unanticipated directions. We feel that future progress in quantum computation will always require the robust, continual development of both scientific understanding and engineering skill within this new and fascinating arena.

References and Notes

Note: Direct quantum spin simulations, such as those aimed at by the machine constructed by D-Wave Systems Inc., are outside the scope of this article.

Acknowledgments: We thank L. Frunzio, S. Girvin, L. Glazman, and L. Jiang for their contributions. Supported by the U.S. Army Research Office, U.S. National Security Agency Laboratory for Physical Science, U.S. Intelligence Advanced Research Projects Activity, NSF, and Yale University.
