Report

A blueprint for demonstrating quantum supremacy with superconducting qubits

Science  13 Apr 2018:
Vol. 360, Issue 6385, pp. 195-199
DOI: 10.1126/science.aao4309

Scaling up to supremacy

Quantum information scientists are getting closer to building a quantum computer that can perform calculations that a classical computer cannot. It has been estimated that such a computer would need around 50 qubits, but scaling up existing architectures to this number is tricky. Neill et al. explore how increasing the number of qubits from five to nine affects the quality of the output of their superconducting qubit device. If, as the number of qubits grows further, the error continues to increase at the same rate, a quantum computer with about 60 qubits and reasonable fidelity might be achievable with current technologies.

Science, this issue p. 195

Abstract

A key step toward demonstrating a quantum system that can address difficult problems in physics and chemistry will be performing a computation beyond the capabilities of any classical computer, thus achieving so-called quantum supremacy. In this study, we used nine superconducting qubits to demonstrate a promising path toward quantum supremacy. By individually tuning the qubit parameters, we were able to generate thousands of distinct Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits increases, the system continues to explore the exponentially growing number of states. Extending these results to a system of 50 qubits has the potential to address scientific questions that are beyond the capabilities of any classical computer.

A programmable quantum system consisting of merely 50 to 100 qubits could have a marked impact on scientific research. Although such a platform is naturally suited to address problems in quantum chemistry and materials science (1–4), applications extend to fields as diverse as classical dynamics (5) and computer science (6–9). An important milestone on the path toward realizing these applications will be the demonstration of an algorithm that exceeds the capabilities of any classical computer, thus achieving quantum supremacy (10). Sampling problems are an iconic example of algorithms designed specifically for this purpose (11–14). A successful demonstration of quantum supremacy would prove that engineered quantum systems, although still in their infancy, can outperform the most advanced classical computers.

Consider a system of coupled qubits whose dynamics uniformly explore all accessible states over time. The complexity of simulating this evolution on a classical computer is easy to understand and quantify. Because every state is equally important, it is not possible to simplify the problem by using a smaller truncated state-space. The complexity is then simply given by how much classical memory it takes to store the state vector. Storing the state of a 46-qubit system requires nearly a petabyte of memory and is at the limit of the most powerful computers (14, 15). Sampling from the output probabilities of such a system would therefore constitute a clear demonstration of quantum supremacy. Note that this is only an upper bound on the number of qubits required; other constraints, such as computation time, may place practical limitations on even smaller system sizes.
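
The petabyte figure follows from simple arithmetic; a minimal sketch, assuming one double-precision complex amplitude (16 bytes) per basis state:

```python
# Memory required to store the full state vector of an n-qubit system,
# assuming one double-precision complex number (16 bytes) per amplitude.
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

print(state_vector_bytes(46) / 1e15)  # ~1.13 petabytes
```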

In this study, we experimentally illustrate a blueprint for demonstrating quantum supremacy. We present data characterizing two basic ingredients required for any supremacy experiment: complexity and fidelity. First, we show that the qubits can quasi-uniformly explore the Hilbert space, providing an experimental indication of algorithm complexity [see (16) for a formal discussion of computational complexity]. Next, we compare the measurement results with the expected behavior and show that the algorithm can be implemented with high fidelity. Experiments probing complexity and fidelity provide a foundation for demonstrating quantum supremacy.

The more control a quantum platform offers, the easier it is to embed diverse applications. For this reason, we have developed superconducting gmon qubits, which are based on transmon qubits but have tunable frequencies and tunable interactions (Fig. 1A). The nine-qubit device consists of three distinct sections: control (bottom), qubits (center), and readout (top). A detailed circuit diagram is provided in (16).

Fig. 1 Device and experimental protocol.

(A) Optical micrograph of the nine-qubit array. Gray regions are aluminum; dark regions are where the aluminum has been etched away to define features. Colors have been added to distinguish readout circuitry, qubits, couplers, and control wiring. (B) Five-qubit example of the pulse sequences used in these experiments. First, the qubits are initialized using microwave pulses (red). Three of the qubits start in the ground state $|0\rangle$ and the rest start in the excited state $|1\rangle$. Next, the qubit frequencies are set using rectangular pulses (orange). During this time, all couplings are simultaneously pulsed (green); each pulse has a randomly selected duration. Last, we measure the state of every qubit. The measurement is repeated many times to estimate the probability of each output state. (C) We repeat this pulse sequence for randomly selected control parameters. Each instance corresponds to a different set of qubit frequencies, coupling pulse heights, and pulse lengths. Here we plot the measured probabilities for two instances after 10 coupler pulses (cycles). Error bars (±3 SD) represent the statistical uncertainty from 50,000 samples. Predictions from a control model are overlaid as red circles.

Each of our gmon qubits can be thought of as a nonlinear oscillator. The Hamiltonian for the device is given by

$$H/\hbar = \sum_i \delta_i \hat{n}_i + \sum_i \frac{\eta_i}{2}\,\hat{n}_i(\hat{n}_i - 1) + \sum_i g_i\left(\hat{a}_i^{\dagger}\hat{a}_{i+1} + \hat{a}_{i+1}^{\dagger}\hat{a}_i\right) \qquad (1)$$

where $\hat{n}_i$ is the number operator and $\hat{a}_i^{\dagger}$ ($\hat{a}_i$) is the raising (lowering) operator. The qubit frequency sets the coefficient $\delta_i$, the nonlinearity sets $\eta_i$, and the nearest-neighbor coupling sets $g_i$. The two lowest energy levels ($|0\rangle$ and $|1\rangle$) form the qubit subspace. The higher energy levels of the qubits, although only virtually occupied, substantially modify the dynamics. In the absence of higher levels, this model maps to free particles and can be simulated efficiently (16). The inclusion of higher levels effectively introduces an interaction and allows for the occurrence of complex dynamics.
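
As a concrete illustration, the sketch below assembles Eq. 1 as a dense matrix for a short chain, keeping three levels per qubit as the text argues is essential. This is a minimal numerical sketch of the reconstructed Eq. 1 above; the function names and parameter values are illustrative placeholders, not the device's calibrated control model.

```python
import numpy as np

d = 3  # keep three levels per qubit; the third level supplies the interaction

# Single-site operators in the truncated oscillator basis
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # lowering operator
n = np.diag(np.arange(d)).astype(float)      # number operator

def embed(op, site, n_sites):
    """Tensor a single-site operator into the full chain Hilbert space."""
    ops = [np.eye(d)] * n_sites
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def hamiltonian(delta, eta, g):
    """Eq. 1: detunings delta_i, nonlinearities eta_i, couplings g_i."""
    N = len(delta)
    H = np.zeros((d**N, d**N))
    for i in range(N):
        ni = embed(n, i, N)
        H += delta[i] * ni + 0.5 * eta[i] * ni @ (ni - np.eye(d**N))
    for i in range(N - 1):  # nearest-neighbor exchange coupling
        ai, aj = embed(a, i, N), embed(a, i + 1, N)
        H += g[i] * (ai.T @ aj + aj.T @ ai)
    return H

# Illustrative (not calibrated) parameters for a 3-qubit chain
H = hamiltonian(delta=[0.0, 5.0, -5.0], eta=[-200.0] * 3, g=[30.0, 30.0])
print(H.shape)  # (27, 27)
```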

In Fig. 1, B and C, we outline the experimental procedure and provide two instances of the raw output data. Figure 1B shows a five-qubit example of the pulses used to control the qubits. First, the system is initialized (red) by placing two of the qubits in the excited state; e.g., $|01010\rangle$. The dynamics result from fixing the qubit frequencies (orange) and simultaneously ramping all of the nearest-neighbor interactions on and then off (green). The shape of the coupling pulse is chosen to minimize leakage out of the qubit subspace (17). After the evolution, we simultaneously measure the state of every qubit. Each measurement results in a single output state, such as $|01100\rangle$; the experiment is repeated many times to estimate the probability of every possible output state. We then carry out this procedure for randomly chosen values of the qubit frequencies, the coupler pulse lengths, and the coupler pulse heights. The probabilities of the various output states are shown in Fig. 1C for two instances of the evolution after 10 coupler pulses (cycles). The height of each bar represents the probability with which that output state appeared in the experiments.

The Hamiltonian in Eq. 1 conserves the total number of excitations. This means that if we start in a state with half of the qubits excited, we should also end in a state with half of the qubits excited. However, most experimental errors do not obey this symmetry, allowing us to identify and remove erroneous outcomes. Although this symmetry helps to reduce the impact of errors, it slightly reduces the size of the Hilbert space. For N qubits, the number of states is given by the permutations of N/2 excitations in N qubits, $\binom{N}{N/2} \approx 2^N\sqrt{2/\pi N}$. As an example, a 64-qubit system would access ~$2^{61}$ states under our protocol.
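
A minimal sketch of how this symmetry can be exploited in post-processing; the sample bitstrings are hypothetical, and the state count reproduces the $\binom{N}{N/2}$ estimate above:

```python
from math import comb, sqrt, pi

def postselect(bitstrings, n_excitations):
    """Keep only outcomes that conserve the total excitation number."""
    return [s for s in bitstrings if s.count('1') == n_excitations]

samples = ['01010', '01100', '11100', '00010']  # hypothetical 5-qubit outcomes
print(postselect(samples, 2))  # ['01010', '01100'] survive; the rest are errors

# Size of the symmetry sector: C(N, N/2) ~ 2^N * sqrt(2 / (pi N))
N = 64
print(comb(N, N // 2))            # ~1.8e18, i.e. ~2^61 states
print(2**N * sqrt(2 / (pi * N)))  # Stirling approximation, same order
```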

Although the measured probabilities appear largely random, they provide important insight into the quantum dynamics of the system. A key feature of these data sets is the rare, taller-than-average peaks, which are analogous to the high-intensity regions of a laser’s speckle pattern. These highly likely states serve as a fingerprint of the underlying evolution and provide a means for verifying that the desired evolution was properly generated. The distribution of these probabilities provides evidence that the dynamics coherently and uniformly explore the Hilbert space.

In Fig. 2, we use the measured probabilities to show that the dynamics uniformly explore the Hilbert space for experiments carried out with five to nine qubits. We begin by measuring the output probabilities after five cycles for between 500 and 5000 distinct instances. To compare experiments with different numbers of qubits, the probabilities are weighted by the number of states in the Hilbert space. Figure 2A shows a histogram of the weighted probabilities where we find nearly universal behavior. Small probabilities ($<1/N_{\text{states}}$) appear most often, and probabilities as large as $4/N_{\text{states}}$ show up with a frequency of ~1%. In stark contrast to this, we observe a tall, narrow peak centered at 1.0 for longer evolutions whose duration is comparable to the coherence time of the qubits.

Fig. 2 Complexity: uniform sampling of an exponentially growing state-space.

(A) Histogram of the raw probabilities (see Fig. 1C) for five- to nine-qubit experiments after five cycles of evolution. Before making the histogram, probabilities were weighted by the number of states in the Hilbert space, with all curves placed on a universal axis. The width of the bars represents the size of the bins used to construct the histogram. The data are taken from more than 29.7 million experiments. For dynamics that uniformly explore all states, this histogram decays exponentially; an exponential decay is shown as a solid line for comparison. A histogram of the probabilities for seven qubits after 100 cycles is shown for contrast. In this plot, decoherence dominates and we observe a tall, narrow peak around 1. (B) To measure convergence of the measured histograms to an exponential distribution, we compute their distance as a function of the number of cycles. Distance is measured using the Kullback-Leibler divergence (Eq. 2). We find that a maximum overlap occurs after just two cycles, and decoherence subsequently increases the distance between the distributions.

A quantum system that uniformly explores all states is expected to have an exponential distribution of weighted probabilities. The solid line in Fig. 2A corresponds to such a distribution and is simply given by $e^{-x}$, where $x = N_{\text{states}}P$ is the weighted probability; this is also referred to as a Porter-Thomas distribution (14, 18). Although, in principle, approaching the universal form of the distribution takes exponential time, the nonuniversal deviations become small on a much shorter time scale that is linear in the number of qubits (14, 16). Note that the deviations from a purely exponential distribution are consistent with decoherence. The deviations scale with the number of qubits, and the histogram appears to be converging to the incoherent distribution shown in green.
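
For intuition, the Porter-Thomas form can be reproduced numerically by pooling the weighted probabilities of many random state vectors; a minimal sketch, in which the choice of 126 states (a nine-qubit sector with four excitations) and 500 instances is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 126    # e.g., the nine-qubit sector with four excitations
n_instances = 500

# Pool weighted probabilities from many random (Haar-like) state vectors
xs = []
for _ in range(n_instances):
    psi = rng.normal(size=n_states) + 1j * rng.normal(size=n_states)
    p = np.abs(psi) ** 2
    p /= p.sum()
    xs.append(n_states * p)   # weighted probabilities x = N_states * P
x = np.concatenate(xs)

# The histogram of weighted probabilities approaches exp(-x) (Porter-Thomas)
hist, edges = np.histogram(x, bins=20, range=(0, 5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers, hist):
    print(f"x = {c:4.2f}   empirical = {h:5.3f}   exp(-x) = {np.exp(-c):5.3f}")
```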

A measure of algorithm complexity is a key ingredient for demonstrating quantum supremacy. We argue that evolution under the Hamiltonian in Eq. 1 cannot be efficiently simulated on a classical computer under plausible assumptions (16). The experimental results in Fig. 2A suggest that we can coherently evolve the system for long enough to realize this computationally notable regime.

In Fig. 2B, we illustrate the number of cycles necessary for the system to uniformly explore all states by comparing the measured probabilities to an exponential distribution. After each cycle, we compare the measured histogram to an exponential decay. The distance between these two distributions is measured using the Kullback-Leibler divergence

$$D_{\text{KL}} = S(\rho_{\text{measured}}, \rho_{\text{exponential}}) - S(\rho_{\text{measured}}) \qquad (2)$$

where the first term is the cross-entropy between the measured distribution $\rho_{\text{measured}}$ and an exponential distribution $\rho_{\text{exponential}}$, and the second term is the self-entropy of the measured distribution. The entropy of a set of probabilities is given by $S(p) = -\sum_i p_i \ln p_i$ and the cross-entropy of two sets of probabilities is given by $S(p, q) = -\sum_i p_i \ln q_i$. Their difference, the Kullback-Leibler divergence, is zero if and only if the two distributions are equivalent.
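
A minimal sketch of Eq. 2 applied to binned histograms; the bin layout and the sanity check are illustrative:

```python
import numpy as np

def kl_divergence(hist_measured, hist_exponential):
    """D_KL = cross-entropy(measured, exponential) - entropy(measured).

    Both inputs are normalized histograms over the same bins.
    """
    m = np.asarray(hist_measured, dtype=float)
    e = np.asarray(hist_exponential, dtype=float)
    mask = m > 0  # bins with m = 0 contribute nothing to either sum
    cross_entropy = -np.sum(m[mask] * np.log(e[mask]))
    self_entropy = -np.sum(m[mask] * np.log(m[mask]))
    return cross_entropy - self_entropy

# Sanity check: identical distributions give D_KL = 0
bins = np.linspace(0, 5, 21)
x = 0.5 * (bins[:-1] + bins[1:])
exp_hist = np.exp(-x)
exp_hist /= exp_hist.sum()
print(kl_divergence(exp_hist, exp_hist))  # 0.0
```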

We find that the experimental probabilities closely resemble an exponential distribution after just two cycles. For longer evolutions, decoherence reduces this overlap. These results suggest that we can generate very complex dynamics with only two pulses, a surprisingly small number. However, rather than breaking up the evolution into two-qubit gates, we allow the entire system to interact at once. Therefore, one of our pulses corresponds to roughly eight simultaneous two-qubit gates. Additionally, each of our pulses lasts long enough to effectively implement five square-root-of-swap gates. So, although the evolution is only two cycles, this translates to roughly 2 × 8 × 5 ≈ 80 two-qubit gates.

In addition to demonstrating an exponential scaling of complexity, it is necessary to characterize the algorithm fidelity. Determining the fidelity requires a means for comparing the measured probabilities ($P_{\text{measured}}$) with the probabilities expected from the desired evolution ($P_{\text{expected}}$). On the basis of the proposal outlined in (14), we use the cross-entropy to quantify the fidelity

$$\text{Fidelity} = \frac{S(P_{\text{incoherent}}, P_{\text{expected}}) - S(P_{\text{measured}}, P_{\text{expected}})}{S(P_{\text{incoherent}}, P_{\text{expected}}) - S(P_{\text{expected}})} \qquad (3)$$

where $S(p, q)$ is the cross-entropy, $S(p)$ is the self-entropy, and $P_{\text{incoherent}}$ represents an incoherent mixture with each output state given equal likelihood; this is the behavior that we observe after many cycles. When the distances between the measured and expected probabilities are small, the fidelity approaches 1. When the measured probabilities approach an incoherent mixture, the fidelity approaches 0.
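
A minimal sketch of Eq. 3, assuming the cross-entropy and self-entropy definitions given above; the toy four-state distribution is purely illustrative:

```python
import numpy as np

def cross_entropy(p, q):
    """S(p, q) = -sum_i p_i ln q_i; reduces to the entropy when q = p."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return -np.sum(p[mask] * np.log(q[mask]))

def fidelity(p_measured, p_expected):
    """Cross-entropy fidelity of Eq. 3: 1 for ideal output, 0 for incoherent."""
    n = len(p_expected)
    p_incoherent = np.full(n, 1.0 / n)
    s_inc = cross_entropy(p_incoherent, p_expected)
    s_meas = cross_entropy(p_measured, p_expected)
    s_exp = cross_entropy(p_expected, p_expected)  # self-entropy of expected
    return (s_inc - s_meas) / (s_inc - s_exp)

# Sanity checks on a toy 4-state distribution
p_exp = np.array([0.5, 0.25, 0.15, 0.10])
print(fidelity(p_exp, p_exp))             # -> 1.0 (perfect agreement)
print(fidelity(np.full(4, 0.25), p_exp))  # -> 0.0 (incoherent mixture)
```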

In Fig. 3A, we show that the desired evolution can be implemented with high fidelity. We find that at short times the fidelity decays linearly with an increasing number of cycles (fits to the data are shown as dashed lines). The slope of these lines measures the error per cycle; this slope is shown in the inset for each number of qubits. We find that the error scales with the number of qubits at a rate of ~0.4% error per qubit per cycle. If such an error rate extends to larger systems, we will be able to perform 60-qubit experiments of depth = 2 with a fidelity >50%. These results provide promising evidence that quantum supremacy may be achievable with the use of existing technology.
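
The 60-qubit projection is simple arithmetic under a linear error model, assuming the ~0.4% per-qubit, per-cycle rate holds unchanged at that scale:

$$F \approx 1 - 0.004 \times 60~\text{qubits} \times 2~\text{cycles} = 1 - 0.48 = 0.52 > 50\%$$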

Fig. 3 Fidelity: learning a better control model.

(A) Average fidelity decay versus number of cycles for five- to nine-qubit experiments (circles). The fidelity is computed from Eq. 3. The error per cycle, presented in the inset, is the slope of the dashed line that best fits the data. (B) Using the fidelity as a cost function, we learn optimal parameters for our control model. We take half of the experimental data to train our model. The other half of the data is used to verify this new model; the optimizer does not have access to these data. The corresponding improvement in fidelity of the verification set provides evidence that we are indeed learning a better control model.

Predicting the expected probabilities is a major challenge. First, substantial effort went into accurately mapping the control currents to Hamiltonian parameters; the detailed procedure for constructing this map is outlined in (16). Second, we model the Hamiltonian using only single-qubit calibrations, which we find to be accurate even when all of the couplers are used simultaneously. This is a scalable approach to calibration. Third, when truncating the Hamiltonian to two levels, we find poor agreement with both an exact theoretical model and experimental results. We find that a three-level description must be used to account for virtual transitions to the second excited state during the evolution. When including these states, truncating to a fixed number of excitations lowers the size of the computational Hilbert space from $3^N$ to approximately $0.15 \times 2.42^N$ (table S1); thus, a nine-qubit experiment requires accurately modeling a 414-dimensional unitary operation. Determining how many of these states are needed for sufficient accuracy depends on the magnitude of the coupling and is an open question, but the number should scale somewhere between $2.0^N$ and $2.5^N$. The predictions are overlaid onto the data in Fig. 1C and show excellent agreement.
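
The sector dimensions quoted above can be checked by counting three-level strings with a fixed excitation number; a minimal sketch (for odd N we take floor(N/2) excitations, which reproduces the 414 states quoted for nine qubits):

```python
def sector_dimension(n_qubits: int, levels: int = 3) -> int:
    """Dimension of the fixed-excitation sector of an n-site, d-level chain:
    the coefficient of x^(n//2) in (1 + x + ... + x^(levels-1))^n."""
    target = n_qubits // 2
    counts = [1]  # polynomial coefficients, starting from 1 (i.e., x^0)
    for _ in range(n_qubits):
        new = [0] * (len(counts) + levels - 1)
        for i, c in enumerate(counts):
            for k in range(levels):
                new[i + k] += c
        counts = new
    return counts[target]

print(sector_dimension(9))  # 414: the unitary modeled for nine qubits
for n in range(5, 10):      # sector sizes for the 5- to 9-qubit experiments
    print(n, sector_dimension(n))
```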

In Fig. 3B, we show how techniques from machine learning were used to achieve low error rates. To set the matrix elements of the Hamiltonian, we built a physical model for our gmon qubits. This model is parameterized in terms of capacitances, inductances, and control currents. The parameters in this model were calibrated using simple single-qubit experiments (16). We used a search algorithm to find offsets in the control model that minimize the error (1 − Fidelity). Figure 3B shows the error, averaged over cycles, versus the number of optimization steps. Before training the model, the data were split into two halves: a training set (red) and a verification set (black). The optimization algorithm had access only to the training set, whereas the verification set was used only to verify the optimal parameters.

We find that the error in both the training set and the verification set falls considerably by the end of the optimization procedure. The high degree of correlation between the training and verification data suggests that we are genuinely learning a better physical model. Optimizing over more parameters does not further reduce the error. This suggests that the remaining error is not the product of an inaccurate control model but rather results from decoherence. Using the cross-entropy as a cost function for optimizing the parameters of a physical model was the key to achieving high-fidelity control in this experiment.

It is important to note how these experiments might change at the level of a few tens of qubits. At this level, it becomes exponentially unlikely that any state will appear twice, making it impractical to measure probabilities in an experiment. However, even for these large systems, sampling from the output states is sufficient to determine the fidelity (14). Therefore, the distribution of probabilities can be inferred from the classical computations, and a high-fidelity experimental result indicates that we are likely solving a difficult computational problem.

Ideally, in addition to exponential complexity and high fidelity, a quantum platform should offer valuable applications. In Fig. 4, we illustrate applications of our algorithms to many-body physics, where the exponential growth in complexity is a substantial barrier to ongoing research (19–24). By varying the amount of disorder in the system, we are able to study disorder-induced localization. This is done using two-body correlations

$$C_{ij} = \langle \hat{n}_i \hat{n}_j \rangle - \langle \hat{n}_i \rangle \langle \hat{n}_j \rangle \qquad (4)$$

which we average over qubit pairs, cycles (number of coupler pulses), and instances (choice of randomly selected pulse parameters). In Fig. 4A, we plot the average two-body correlations against the separation between qubits. This experiment is performed for both low and high disorder in the qubit frequencies (shown in gold and blue, respectively). Figure 4B depicts the results of our experiment as we continuously vary the amount of disorder.
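
A minimal sketch of estimating Eq. 4 from measured bitstrings; the sample array is a toy illustration, not device data:

```python
import numpy as np

def two_body_correlation(samples, i, j):
    """Estimate C_ij = <n_i n_j> - <n_i><n_j> from measured bitstrings.

    `samples` is an array of shape (n_shots, n_qubits) with 0/1 entries.
    """
    s = np.asarray(samples, dtype=float)
    return np.mean(s[:, i] * s[:, j]) - np.mean(s[:, i]) * np.mean(s[:, j])

# Toy example: perfectly anti-correlated qubits 0 and 1
samples = np.array([[0, 1], [1, 0], [0, 1], [1, 0]])
print(two_body_correlation(samples, 0, 1))  # -0.25
```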

Fig. 4 Applications: localization and delocalization.

(A) Average two-body correlations (Eq. 4) as a function of the separation between qubits. Data are shown for two values of disorder strength. At low disorder, the qubit frequencies are set over a range of ±5 MHz, and the two-body correlations are independent of separation (i.e., qubits at the ends of the chain are just as correlated as nearest neighbors). At high disorder, the qubit frequencies are set over a range of ±30 MHz, and we find an exponential decay in correlations as a function of separation. (B) Map of correlations as a continuous function of frequency disorder. Arrows indicate the location of line cuts used in (A). We observe a clear transition from long-range to short-range correlations.

At low disorder, we find that the correlations are independent of separation: qubits at opposite ends of the chain are as correlated as nearest neighbors. At high disorder, the correlations fall off exponentially with separation. The rate at which this exponential decays allows us to determine the correlation length. A fit to the data is shown in Fig. 4A as a solid blue line where we find a correlation length of roughly four qubits. The study of localization and delocalization in interacting systems provides a promising application of our algorithms.
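
A minimal sketch of extracting a correlation length from the exponential decay; the data here are synthetic, constructed to decay with ξ = 4 qubits:

```python
import numpy as np

def correlation_length(separations, correlations):
    """Fit |C| ~ exp(-d / xi) on a log scale and return xi (in qubits)."""
    slope, _ = np.polyfit(separations, np.log(np.abs(correlations)), 1)
    return -1.0 / slope

# Synthetic high-disorder data decaying with xi = 4 qubits
d = np.arange(1, 9)
c = 0.2 * np.exp(-d / 4.0)
print(correlation_length(d, c))  # ~4.0
```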

Supplementary Materials

www.sciencemag.org/content/360/6385/195/suppl/DC1

Supplementary Text

Figs. S1 to S27

Tables S1 and S2

References (25–39)

Data S1

References and Notes

  16. See supplementary materials.
Acknowledgments: We thank E. Kapit and J. Fitzsimons for discussions. Funding: This work was supported by Google. C.Q. and Z.C. acknowledge support from the NSF Graduate Research Fellowship under grant DGE-1144085. Devices were made at the UCSB Nanofabrication Facility, a part of the NSF-funded National Nanotechnology Infrastructure Network. K.K. acknowledges support from NASA Academic Mission Services, under contract number NNA16BD14C. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. government. The U.S. government is authorized to reproduce and distribute reprints for governmental purposes, notwithstanding any copyright annotation thereon. Author contributions: C.N. designed and fabricated the device. C.N. and P.R. designed the experiment. C.N. performed the experiment and analyzed the data. C.N., K.K., and V.S. developed the physical control model. S.B. and S.V.I. numerically validated the protocol for large qubit arrays. All members of the UCSB and Google teams contributed to the experimental setup and manuscript preparation. Competing interests: None declared. Data and materials availability: The data that support the plots presented in this paper and other findings of this study are available in the supplementary materials.