## Abstract

Quantum computers are unnecessary for exponentially efficient computation or simulation if the Extended Church-Turing thesis is correct. The thesis would be strongly contradicted by physical devices that efficiently perform tasks believed to be intractable for classical computers. Such a task is boson sampling: sampling the output distributions of *n* bosons scattered by some passive, linear unitary process. We tested the central premise of boson sampling, experimentally verifying that three-photon scattering amplitudes are given by the permanents of submatrices generated from a unitary describing a six-mode integrated optical circuit. We find the protocol to be robust, working even with the unavoidable effects of photon loss, non-ideal sources, and imperfect detection. Scaling this to large numbers of photons should be a much simpler task than building a universal quantum computer.

A major motivation for scalable quantum computing is Shor's algorithm (*1*), which enables the efficient factoring of large composite numbers into their constituent primes. The presumed difficulty of this task is the basis of the majority of today's public-key encryption schemes. It may be that scalable quantum computers are not realistic if, for example, quantum mechanics breaks down for large numbers of qubits (*2*). If, however, quantum computers are realistic physical devices, then the Extended Church-Turing (ECT) thesis—that any function efficiently computed on a realistic physical device can be efficiently computed on a probabilistic Turing machine—means that a classical efficient factoring algorithm exists. Such an algorithm, long sought after, would enable us to break public-key cryptosystems such as RSA. A third possibility is that the ECT thesis itself is wrong.

How do we answer this trilemma? As yet there is no evidence that large-scale quantum computers are inherently impossible (that will need to be tested directly via experiment), and there is no efficient classical factoring algorithm or mathematical proof of its impossibility. This leaves examining the validity of the ECT thesis, which would be contradicted, for example, by building a physical device that efficiently performs a task thought to be intractable for classical computers.

One such task is boson sampling: sampling from the probability distribution of *n* identical bosons scattered by some linear unitary process, *U*. The probabilities are defined in terms of permanents of *n* × *n* submatrices of *U* [in general, calculating these is exponentially difficult, because computing the permanent is a so-called "#P-complete" problem (*3*), a complexity class above even "NP-complete"], and the sampling task is therefore strongly believed to be classically intractable. This does not mean that boson sampling is itself #P-complete: The ability to sample from a distribution need not imply the ability to calculate the permanents that gave rise to it. However, by using the fact that the permanent is #P-complete, the authors of (*4*) recently showed that the existence of a fast classical algorithm for this "easier" sampling task would have drastic consequences in classical computational complexity theory, notably a collapse of the polynomial hierarchy.
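To make the classical cost concrete, here is a minimal sketch (ours, not from the paper) of the permanent via Ryser's formula. Even this, among the best known exact approaches, runs in O(2^*n*·*n*²) time, which is the heart of the presumed hardness:

```python
from itertools import combinations

def permanent(a):
    """Permanent of a square matrix (list of rows) via Ryser's formula.

    Unlike the determinant, no polynomial-time algorithm is known:
    this inclusion-exclusion sum ranges over all 2^n column subsets.
    """
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For the all-ones 3 × 3 matrix this returns 3! = 6, matching the defining sum over permutations.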

We tested the central premise of boson sampling, experimentally verifying that the amplitudes of *n* = 2 and *n* = 3 photon scattering events are given by the permanents of *n* × *n* submatrices of the operator *U* describing the physical device. We find the protocol to be robust, working even with imperfect sources, optics, and detectors.

Consider a race between two participants: Alice, who only possesses classical resources, and Bob, who in addition possesses quantum resources. They are given some physical operation, described by an evolution operator, *U*, and agree on a specific *n*-boson input configuration. Alice calculates an output sample distribution with a classical computer; Bob either builds or programs an existing linear photonic network, sending *n* single photons through it and obtaining his sample by measuring the output distribution (Fig. 1A). The race ends when both return samples from the distribution: The winner is whoever returns a sample fastest. As *n* becomes large, it is conjectured that Bob will always win, because Alice's computation runtime increases exponentially, whereas Bob's experimental runtime does not. It becomes intractable to verify Bob's output against Alice's, and, unlike for Shor's algorithm, there is no known efficient algorithm to verify the result (*4*). However, one can take a large instance—large enough for verification via a classical computer—and show that Bob's quantum computer solves the problem much faster, thereby strongly suggesting that the same behavior will continue for larger systems, casting serious doubt on the ECT thesis. In a fair race, Bob must verify that his device actually implements the target unitary; an alternative fair version is to give both Alice and Bob the same physical device instead of a mathematical description and have Alice characterize it before she predicts output samples via classical computation. Alice can use a characterization method that neither requires nonclassical resources nor adds to the complexity of the task (*5*).

We tested boson sampling using an optical network with *m* = 6 input and output modes and *n* = 2 and *n* = 3 photon inputs. We implemented a randomly chosen operator so that the permanents could not be efficiently calculated (*6*); that is, the elements are complex-valued and the operator *U* is fully connected, with every input distributed to every output. The 6-input × 6-output modes of *U* are represented by two orthogonal polarizations in 3 × 3 spatial modes of a fused-fiber beamsplitter (FBS), an intrinsically stable and low-loss device. The mode mapping is {1,...,6} = {|*H*〉_{1},|*V*〉_{1},|*H*〉_{2},|*V*〉_{2},|*H*〉_{3},|*V*〉_{3}}, where |*H*〉_{1} is the horizontally polarized mode for spatial mode 1. We can use polarization controllers at the inputs and outputs of the central 3 × 3 FBS to modify the evolution (see the equivalent circuit diagram in Fig. 1B).
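In the experiment, *U* is fixed by the physical circuit; as a sketch of how one might draw a random, fully connected, complex-valued unitary of the kind the protocol calls for, one can sample from the Haar (uniform) measure via a QR decomposition (function name and approach are ours, not the paper's):

```python
import numpy as np

def haar_random_unitary(m, seed=None):
    """Draw an m x m unitary from the Haar (uniform) measure.

    QR-decompose a complex Gaussian matrix, then absorb the phases of
    R's diagonal so the result is uniform rather than biased by the
    QR sign convention.
    """
    rng = np.random.default_rng(seed)
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))
```

A matrix drawn this way is unitary and, with probability one, has no zero entries, i.e., every input mode is distributed to every output mode.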

Alice calculates the probability of bosonic scattering events in the following way (*4*, *7*). Having characterized the evolution *U* using the method detailed in supplementary materials section S1 (*8*), and given the input and output configurations *S* = (*s*_{1},...,*s*_{m}) and *T* = (*t*_{1},...,*t*_{m}) with boson occupation numbers *s*_{i} and *t*_{j}, respectively, she produces an *n* × *m* submatrix *U*_{T} by taking *t*_{j} copies of the *j*^{th} column of *U*. Then, she forms the *n* × *n* submatrix *U*_{ST} by taking *s*_{i} copies of the *i*^{th} row of *U*_{T}. The probability for the scattering event *T*, for indistinguishable input photons *S*, is given by

*P*(*T*|*S*) = |Per(*U*_{ST})|^{2}/(*s*_{1}!···*s*_{m}! *t*_{1}!···*t*_{m}!),

where Per(·) denotes the matrix permanent.
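Alice's recipe can be sketched as follows (an illustration in our own naming, not the paper's code; a brute-force permanent is adequate at *n* = 2 or 3). The two-mode 50:50 beamsplitter case reproduces the familiar Hong-Ou-Mandel suppression:

```python
import numpy as np
from itertools import permutations
from math import factorial

def permanent(a):
    """Brute-force permanent (sum over permutations) -- fine for small n."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def scattering_probability(u, s, t):
    """P(T|S) = |Per(U_ST)|^2 / (s_1!...s_m! t_1!...t_m!).

    s and t are input/output occupation-number lists, each summing to n.
    """
    cols = [j for j, tj in enumerate(t) for _ in range(tj)]  # t_j copies of column j
    rows = [i for i, si in enumerate(s) for _ in range(si)]  # s_i copies of row i
    u_st = u[np.ix_(rows, cols)]
    norm = np.prod([factorial(k) for k in list(s) + list(t)])
    return abs(permanent(u_st)) ** 2 / norm
```

For the balanced beamsplitter `u = [[1, 1], [1, -1]] / sqrt(2)` with input `s = [1, 1]`, the coincidence output `t = [1, 1]` has probability 0 (the permanent vanishes), while each bunched output `t = [2, 0]` or `[0, 2]` has probability 1/2.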

Bob, on the other hand, experimentally prepares the *n*-photon Fock state |*s*_{1},...,*s*_{m}〉. After injecting the desired input to the circuit, he determines the probability of the scattering event *T* by projecting onto its corresponding state using single-photon detectors connected to coincidence-counting logic. We prepared near–single-photon Fock states via spontaneous parametric downconversion in a nonlinear crystal [Fig. 1C; for further details, see section S2 (*8*)]. Once the photons pass through the network, they are detected by single-photon avalanche diodes. The boson sampling protocol measures the frequency of output events, i.e., raw coincident photon counts. These, however, are strongly affected by differences in efficiency between photon counters, an effect that can be removed by measuring the nonclassical interference visibility instead,

*V*_{T} = (*P*^{Q}_{T} − *P*^{C}_{T})/*P*^{C}_{T},

where *P*^{Q}_{T} and *P*^{C}_{T} are the probabilities of the output *T* measured for completely indistinguishable and distinguishable photons, respectively. Distinguishable statistics are obtained by introducing a temporal delay, ∆τ, between the input photons. When all photons are delayed by significantly more than their respective coherence lengths, *L*, true two-photon quantum interference cannot occur. Figure 2A outlines the technique Alice uses to predict the visibility from the unitary evolution *U*. For *n* = 2, high count rates mean that 27 samples of the output *T* were taken as the temporal delay was changed between the two input photons (*9*). For *n* = 3, where we used three of the photons from a four-photon state, low count rates mean that only three measurements were taken to avoid optical misalignment and the signal drift that occurs over the necessarily long experimental runtimes. Therefore, for *n* = 2, the visibilities are calculated from the fitted Gaussian curves (Fig. 2B); for *n* = 3, the probabilities *P*^{C}_{T} are obtained by averaging measurements at the two delay settings (i) {∆τ_{∞},0,∆τ_{∞}} and (ii) {∆τ_{∞},0,−∆τ_{∞}}, where {τ_{1},τ_{2},τ_{3}} are the temporal delays of photons 1, 2, and 3 with respect to photon 2, and ∆τ_{∞} >> *L*/*c*.
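Why moving from raw counts to visibilities removes detector-efficiency differences can be seen in a two-line sketch (our naming throughout): any common efficiency factor multiplies both count rates and cancels in the ratio.

```python
def visibility(counts_q, counts_c):
    """Interference visibility estimated from raw coincidence counts for
    indistinguishable (Q) and distinguishable (C) photons, taken over
    equal integration times at the same output configuration."""
    return (counts_q - counts_c) / counts_c

# A shared detector-efficiency factor eta scales both counts identically
# and therefore drops out of the visibility:
eta = 0.25
assert visibility(eta * 80, eta * 100) == visibility(80, 100)
```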

Figure 2C shows Alice's predictions and Bob's measurements for *n* = 2. We compare their results using the average *L*_{1}-norm distance per output configuration,

⟨*L*_{1}⟩ = (1/*C*(*m*,*n*)) Σ_{T} |*P*^{A}_{T} − *P*^{B}_{T}|,

where *P*^{A}_{T} and *P*^{B}_{T} are Alice's and Bob's probabilities for the output *T*, and *C*(*m*,*n*) is the binomial coefficient counting the noncolliding output configurations [section S3 (*8*)]. We find excellent agreement between Alice and Bob across these three input configurations. Were Alice instead to model distinguishable photons [supplementary materials (*8*)] to perform an analogous experiment to Bob's, she would not obtain the same results: Her classical predictions, given by the yellow circles in Fig. 2C, are markedly different from Bob's quantum measurements.
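The comparison metric can be sketched as follows (our naming; *C*(*m*,*n*) counts the noncolliding outputs, e.g., C(6, 2) = 15 for the two-photon case):

```python
from math import comb

def mean_l1_distance(p_a, p_b, m, n):
    """Average L1-norm distance per output configuration between two
    probability lists over the comb(m, n) noncolliding outputs."""
    assert len(p_a) == len(p_b) == comb(m, n)
    return sum(abs(x - y) for x, y in zip(p_a, p_b)) / comb(m, n)
```

Identical distributions give 0; the metric grows as the two probability lists diverge.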

Figure 3 shows the results for *n* = 3: There is a larger average distance between Alice and Bob's distributions than in the *n* = 2 case [supplementary materials (*8*)]. Having tested all possible noncolliding output configurations (that is, at most one photon per output mode), we also tested colliding configurations, with two photons in one output mode. This requires photon-number resolution (*10*, *11*), using the method shown in Fig. 4A. The results in Fig. 4B show agreement between Alice's predictions and Bob's measurements similar to the noncolliding case, ⟨*L*_{1}⟩ = 0.153 ± 0.012, and a much larger distance between Alice's classical predictions and Bob's measurements, ⟨*L*_{1}⟩ = 0.995 ± 0.045. The latter is expected because two-photon outputs are correspondingly rarer in the classical distribution.

These results confirm that the *n* = 2 and *n* = 3 photon scattering amplitudes are indeed given by the permanents of submatrices generated from *U*. The small differences, larger for *n* = 3 than *n* = 2, between Alice's Fock-state predictions and Bob's measurement results are expected, because Alice's calculations are for indistinguishable Fock-state inputs, and Bob does not actually have these. The conditioned outputs from downconversion are known to have higher-order terms; that is, a small probability of producing more than one photon per mode [section S5 and fig. S1 (*8*)], and are also spectrally entangled, leading to further distinguishability. Spectrally mismatched detector responses can alter the observed signals because of contributions from the immanent (*12*), of which the determinant and permanent are special cases. Because of flat spectral responses, we can rule this out in our experiment.

Strong evidence against the ECT thesis will come from demonstrating boson sampling with a larger-sized system, in which Bob's experimental sampling is far faster than Alice's calculation and classical verification is still barely possible; according to (*4*), this regime is on the order of *n* = 20 to *n* = 30 photons in a network with *m* >> *n* modes. This is beyond current technologies, but rapid improvements in efficient photon detection (*13*, *14*), low-loss (*15*, *16*) and reconfigurable (*17*, *18*) integrated circuits, and improved photon sources (*19*) are highly promising. Boson sampling has also been proposed using the phononic modes of an ion trap (*20*).

An important open question remains as to the practical robustness of large implementations. Unlike the case of universal quantum computation, there are no known error correction protocols for boson sampling, or indeed for any of the models of intermediate quantum computation, such as deterministic quantum computing with one qubit (DQC1) (*21*, *22*), temporally unstructured quantum computation (IQP) (*23*), or permutational quantum computing (PQC) (*24*). These intermediate models have garnered much attention in recent years, both because of the inherent questions they raise about quantum advantage in computing and because some of them can efficiently solve problems believed to be classically intractable; for example, DQC1 has been applied in fields that range from knot theory (*25*) to quantum metrology (*26*). A recent theoretical study posits that photonic boson sampling retains its computational advantage even in the presence of loss (*27*). Our experimental results are highly promising in regard to the robustness of boson sampling, finding good agreement even with clearly imperfect experimental resources.

## Supplementary Materials

www.sciencemag.org/cgi/content/full/science.1231440/DC1

Supplementary Text

Fig. S1

Eqs. S1 to S12

## References and Notes

- Materials and methods are available as supplementary materials on *Science* Online.
**Acknowledgements:** We thank R. Fickler for help with characterization; M. de Almeida, D. Biggerstaff, and G. Gillett for experimental assistance; and A. Arkhipov, M. Bremner, and T. Rudolph for discussions. This work was supported in part by the Australian Research Council's Federation Fellow program (grant FF0668810), the Centre for Engineered Quantum Systems (grant CE110001013), and the Centre for Quantum Computation and Communication Technology (grant CE110001027); the University of Queensland Vice-Chancellor's Senior Research Fellowship program; NSF grant no. 0844626 and a Science and Technology Centre grant; a Defense Advanced Research Projects Agency Young Faculty Award grant; a TIBCO Chair; and a Sloan Fellowship.