## Abstract

Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (ϕ^{4} theory) in spacetime of four and fewer dimensions. The algorithm's run time is polynomial in the number of particles, their energy, and the desired precision, and the algorithm applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, it achieves exponential speedup over the fastest known classical algorithm.

The question whether quantum field theories can be efficiently simulated by quantum computers was first posed by Feynman three decades ago when he introduced the notion of quantum computers (*1*). Since then, efficient quantum algorithms for simulating the dynamics of quantum many-body systems have been developed theoretically (*2*–*4*) and demonstrated experimentally (*5*–*7*). Quantum field theory, which applies quantum mechanics to fields (functions of space and time), presents additional technical challenges, because the number of degrees of freedom per unit volume is formally infinite.

We show that quantum computers can efficiently calculate scattering probabilities in continuum ϕ^{4} theory to an arbitrary degree of precision. We have chosen ϕ^{4} theory, a scalar theory with quartic self-interactions, because it is among the simplest interacting quantum field theories and thus illustrates essential issues without unnecessary complications. Our work introduces several new techniques, including creation of the initial state by a generalization of adiabatic state preparation and the use of effective field theory to analyze spatial discretization errors.

In complexity theory, the efficiency of an algorithm is judged by how its computational demands scale with the problem size or some other quantity associated with the problem’s intrinsic difficulty. An algorithm with polynomial-time asymptotic scaling is considered to be feasible, whereas one with superpolynomial (typically, exponential) scaling is considered infeasible. This classification has proved to be a useful guide in practice.

Traditional calculations of quantum field theory scattering amplitudes rely on perturbation theory—namely, a series expansion in powers of the coupling (the coefficient of the interaction term), which is taken to be small. A powerful and intuitive way of organizing this perturbative expansion is through Feynman diagrams, in which the number of loops is associated with the power of the coupling. A reasonable measure of the computational complexity of perturbative calculations is therefore the number of Feynman diagrams, which is determined by combinatorics and grows factorially with the number of loops and the number of external particles.

If the coupling constant is insufficiently small, the perturbation series does not yield correct results. In ϕ^{4} theory, for *D* = 2, 3 spacetime dimensions, by increasing the coupling λ_{0}, one eventually reaches a quantum phase transition at some critical coupling λ_{c} (*8*–*10*). In the parameter space near this phase transition, perturbative methods become unreliable; this region is referred to as the strong-coupling regime. There are then no known feasible classical methods for calculating scattering amplitudes, although lattice field theory can be used to obtain static quantities such as mass ratios. Even at weak coupling, the perturbation series is not convergent, although it is asymptotic (*11*–*13*). Including higher-order contributions beyond a certain point makes the approximation worse. There is thus a maximum possible precision achievable perturbatively.

We simulate a process in which initially well-separated massive particles with well-defined momenta scatter off each other. The input to our algorithm is a list of the momenta of the incoming particles, and the output is a list of the momenta of the outgoing particles produced by the physical scattering process. At relativistic energies, the number of outgoing particles may differ from the number of incoming particles. In accordance with quantum mechanics, the incoming momenta do not uniquely determine the outgoing momenta, but rather a probability distribution over possible outcomes. Upon repeated runs, our quantum algorithm samples from this distribution. The asymptotic scaling of the algorithm is given in Eq. 9 and Table 1. The simulated scattering processes closely match experiments in particle accelerators, which are the standard tools to probe quantum field-theoretical effects.

The issue of gauge symmetries in quantum simulation of lattice field theories has been addressed in (*14*). There is an extensive literature on analog simulation of interacting quantum field theories using ultracold atoms (*15*–*26*), trapped ions (*27*, *28*), and Josephson-junction arrays (*29*). Much work has also been done on analog simulation of special-relativistic quantum mechanical effects such as zitterbewegung and the Klein paradox, as well as general-relativistic quantum effects such as Hawking radiation [for recent reviews, see (*30*, *31*)]. Our work, in contrast to these studies, addresses digital quantum simulation, with explicit consideration of convergence to the continuum limit and efficient preparation of wave packet states for the computation of dynamical quantities such as scattering probabilities. Our analysis includes error estimates of all parts of our algorithm.

**Representing fields with qubits.** Although quantum field theory is typically expressed in terms of Lagrangians and within the interaction picture, our algorithm is more naturally described in the formalism of Hamiltonians and within the Schrödinger picture. We start by defining a lattice ϕ^{4} theory and subsequently address convergence to the continuum theory. (In *D* = 4, the continuum limit is believed to be the free theory. Nonetheless, because the coupling shrinks only logarithmically, scattering processes for particles with small momenta in lattice units are interesting to compute.) Let Ω be a lattice with *d* = *D* − 1 spatial dimensions, periodic boundary conditions, and lattice spacing *a*; the number of lattice sites is 𝒱. For each **x** ∈ Ω, let ϕ(**x**) be a continuous, real degree of freedom (interpreted as the field at **x**), and let π(**x**) be the corresponding canonically conjugate variable. In canonical quantization, these degrees of freedom are promoted to Hermitian operators with the commutation relation [ϕ(**x**), π(**y**)] = *i* *a*^{−*d*}δ_{**x**,**y**}, in units where *ħ* = *c* = 1. ϕ^{4} theory on the lattice Ω is defined by the Hamiltonian

*H* = Σ_{**x**∈Ω} *a*^{*d*} [½π(**x**)^{2} + ½(∇_{*a*}ϕ)^{2}(**x**) + ½*m*_{0}^{2}ϕ(**x**)^{2} + (λ_{0}/4!)ϕ(**x**)^{4}]

where ∇_{*a*}ϕ denotes a discretized derivative (that is, a finite-difference operator) and *m*_{0} is the particle mass of the corresponding noninteracting (λ_{0} = 0) theory.
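As an illustration of the lattice kinematics, a brief numerical sketch (our own conventions, not an equation reproduced from the paper): for a nearest-neighbor finite-difference Laplacian, the standard free-field dispersion relation is ω(p) = √(*m*_{0}^{2} + (4/*a*^{2})sin^{2}(*ap*/2)) in one spatial dimension, and it approaches the continuum relation √(*m*_{0}^{2} + *p*^{2}) as *a* → 0:

```python
import math

def lattice_omega(p, m0, a):
    """Free-particle energy on a 1D lattice with nearest-neighbor
    finite differences: sqrt(m0^2 + (4/a^2) sin^2(a p / 2))."""
    return math.sqrt(m0**2 + (4.0 / a**2) * math.sin(a * p / 2.0)**2)

def continuum_omega(p, m0):
    """Continuum relativistic dispersion sqrt(m0^2 + p^2)."""
    return math.sqrt(m0**2 + p**2)

# The lattice dispersion converges to the continuum one as a -> 0.
m0, p = 1.0, 0.7
for a in (0.5, 0.25, 0.125):
    print(a, abs(lattice_omega(p, m0, a) - continuum_omega(p, m0)))
```

The leading deviation scales as *a*^{2}, consistent with the discretization-error analysis later in the text.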

We represent the state of the lattice field theory by devoting one register of qubits to store the value of the field at each lattice point. In principle, each ϕ(**x**) is an unbounded continuous variable. To represent the field at a given site with finitely many qubits, we cut off the field at a maximum magnitude ϕ_{max} and discretize it in increments of δ_{ϕ}. This requires *n*_{b} = *O*(log_{2}(ϕ_{max}/δ_{ϕ})) qubits per site. Note that this field discretization is a separate issue from the spatial discretization via the lattice Ω.

Let |ψ〉 be any state such that 〈ψ|*H*|ψ〉 ≤ *E*. The probability distribution over ϕ(**x**) defined by |ψ〉 (for any **x** ∈ Ω) yields a very low probability (*32*, *33*) for |ϕ(**x**)| to be much larger than *O*(*E*^{1/2}). Thus, a cutoff ϕ_{max} growing only polynomially with 𝒱, *E*, and 1/ε suffices; this can be seen by bounding 〈ψ|ϕ(**x**)|ψ〉 and 〈ψ|ϕ^{2}(**x**)|ψ〉 as functions of *E* and applying Chebyshev’s inequality (*33*). To choose δ_{ϕ}, note that the eigenbasis of *a*^{*d*}π(**x**) is the Fourier transform of the eigenbasis of ϕ(**x**). Hence, discretizing ϕ(**x**) in units of δ_{ϕ} is equivalent to introducing the cutoff −π_{max} ≤ π(**x**) ≤ π_{max}, where π_{max} = 1/(*a*^{*d*}δ_{ϕ}). By bounding the expectations of π(**x**) and π^{2}(**x**), one finds that it suffices to choose π_{max} = *O*((𝒱*E*/ε*a*^{*d*})^{1/2}), and thus *n*_{b} = *O*(log_{2}(𝒱*E*/*m*_{0}ε)).
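The qubit bookkeeping above can be sketched numerically. This is an illustrative calculation with parameters of our own choosing (the cutoffs enter only through a logarithm, which is why the per-site register stays small):

```python
import math

def qubits_per_site(phi_max, delta_phi):
    """n_b = O(log2(phi_max / delta_phi)) qubits to store one field value,
    discretized in steps of delta_phi up to magnitude phi_max."""
    return math.ceil(math.log2(2.0 * phi_max / delta_phi))  # factor 2 for the sign

# Illustrative parameters (not taken from the paper): lattice spacing a,
# d spatial dimensions, and a conjugate-momentum cutoff pi_max.
a, d = 0.1, 2
pi_max = 100.0
delta_phi = 1.0 / (a**d * pi_max)   # field resolution dual to the pi cutoff
phi_max = 50.0
print(qubits_per_site(phi_max, delta_phi))
```

Doubling the field range or halving the resolution adds only one qubit per site, reflecting the logarithmic scaling of *n*_{b}.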

**The algorithm.** We now turn to the three main tasks of quantum simulation: preparing an initial state, simulating the time evolution, and measuring final observables. We first discuss the simulation of time evolution, because it is used in all three tasks. The unitary operator representing time evolution, exp(−*iHt*), can be approximated by a quantum circuit of *O*((*t*𝒱)^{1+(1/2*k*)}) gates implementing a *k*th-order Suzuki-Trotter formula (*34*, *35*). For *n* time steps, the error is ε ~ *n*^{−2*k*}. The near-linear scaling with *t* has long been known. The scaling with 𝒱 is a consequence of the locality of *H* (*33*) and appears not to have been noted previously in the quantum algorithms literature.
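The *n*^{−2*k*} error scaling can be checked numerically for the second-order (*k* = 1, Strang) formula. This sketch uses small random Hermitian matrices in place of the field-theory Hamiltonian, purely to exhibit the order of convergence:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(dim):
    """Random Hermitian matrix, standing in for one term of a Hamiltonian."""
    m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (m + m.conj().T) / 2

A, B = rand_herm(4), rand_herm(4)
t = 1.0
exact = expm(-1j * (A + B) * t)

def strang_error(n):
    """Operator-norm error of the second-order Trotter formula with n steps."""
    dt = t / n
    step = expm(-1j * A * dt / 2) @ expm(-1j * B * dt) @ expm(-1j * A * dt / 2)
    return np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2)

# Doubling n should reduce the error by roughly 2^2 = 4 (second order).
print(strang_error(8), strang_error(16))
```

Higher-order Suzuki-Trotter formulas trade more exponentials per step for faster convergence, which is the origin of the 1 + 1/2*k* exponent in the gate count.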

To simulate scattering, one needs to prepare an initial state of particles in well-separated wave packets. We do so by preparing the vacuum of the λ_{0} = 0 theory, exciting wave packets, and then adiabatically turning on the coupling λ_{0}. Let *H*^{(0)} be the Hamiltonian obtained by setting λ_{0} = 0 in *H*. *H*^{(0)} defines an exactly solvable model in which the particles are noninteracting. The vacuum (ground) state |vac(0)〉 of *H*^{(0)} is a multivariate Gaussian wave function in the variables {ϕ(**x**)|**x** ∈ Ω} and can therefore be prepared via the method of Kitaev and Webb (*36*). The asymptotic scaling of the Kitaev-Webb method is dictated by the computation of the **LDL**^{*T*} decomposition of the inverse covariance matrix, which can be done classically in *O*(𝒱^{2.376}) time (*37*, *38*).
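The classical preprocessing step can be sketched as follows. Given a symmetric positive-definite matrix (standing in for the inverse covariance of the free-vacuum Gaussian; the matrix below is an illustrative 1D lattice operator of our own choosing, not the paper's exact covariance), an **LDL**^{*T*} decomposition is obtained from the Cholesky factor:

```python
import numpy as np

def ldl_decompose(K):
    """LDL^T decomposition of a symmetric positive-definite matrix K,
    derived from its Cholesky factor: K = L D L^T, L unit lower-triangular."""
    C = np.linalg.cholesky(K)            # K = C C^T, C lower-triangular
    d = np.diag(C).copy()
    L = C / d                            # scale columns -> unit diagonal
    D = np.diag(d**2)
    return L, D

# Illustrative stand-in for an inverse covariance: mass term plus a
# discrete Laplacian on a small periodic 1D lattice (m0 = 1, a = 1).
n, m0 = 6, 1.0
K = (m0**2 + 2.0) * np.eye(n)
for i in range(n):
    K[i, (i + 1) % n] -= 1.0
    K[i, (i - 1) % n] -= 1.0

L, D = ldl_decompose(K)
print(np.allclose(L @ D @ L.T, K))
```

The *O*(𝒱^{2.376}) scaling quoted above refers to asymptotically fast matrix algorithms; the dense method here is *O*(𝒱^{3}) but illustrates the same decomposition.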

In analogy with the harmonic oscillator, one can define creation and annihilation operators *a*_{**p**}^{†} and *a*_{**p**}, in terms of which the free Hamiltonian takes the diagonal form *H*^{(0)} = Σ_{**p**} ω(**p**)*a*_{**p**}^{†}*a*_{**p**} + *E*^{(0)} (up to mode-normalization conventions), where *E*^{(0)} is an irrelevant zero-point energy. The operator *a*_{**p**}^{†} creates a particle with momentum **p** and energy ω(**p**).

The (unnormalized) state ϕ(**x**)|vac(0)〉 is interpreted as a single particle localized at **x** [see, e.g., (*39*)]. Because *a*_{**p**}|vac(0)〉 = 0, acting on the free vacuum with *a*_{ψ} = *a*^{*d*}Σ_{**x**∈Ω}ψ(**x**)ϕ(**x**) creates a single-particle wave packet with position-space wave function ψ(**x**). To implement this nonunitary operation, we introduce one ancillary qubit and define the Hermitian operator *H*_{ψ} = *a*_{ψ} ⊗ |1〉〈0| + *a*_{ψ}^{†} ⊗ |0〉〈1|. Using a Suzuki-Trotter formula (*34*, *35*), we can construct an efficient quantum circuit approximating the unitary transformation exp(−*iH*_{ψ}π/2). Applied to |vac(0)〉, this circuit yields the desired state up to an irrelevant global phase and an unentangled ancillary qubit, which can be discarded. We repeat this process for each incoming particle desired.

Because we wish to create localized wave packets, we choose ψ(**x**) to have bounded support. Expanding *a*_{ψ} in terms of creation and annihilation operators yields coefficient functions *f*(**x**) and *g*(**x**) that decay exponentially with characteristic length scale 1/*m*_{0} outside the support of ψ. Thus, *a*_{ψ} and *H*_{ψ} are effectively local, and the size of the circuit implementing exp(−*iH*_{ψ}π/2) does not scale with the volume 𝒱. Furthermore, provided the initial wave packets are separated by a distance δ that is large relative to 1/*m*_{0}, the preparation of each additional wave packet leaves the existing wave packets almost perfectly undisturbed [with errors on the order of exp(−*m*_{0}δ)].
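A minimal single-mode toy of the ancilla construction (our own simplification, not the paper's full operator: the wave-packet operator is replaced by a single bosonic creation operator, so the π/2 rotation closes exactly on a two-dimensional subspace):

```python
import numpy as np
from scipy.linalg import expm

# Truncated single bosonic mode (occupation numbers 0, 1, 2).
dim = 3
adag = np.diag([np.sqrt(n) for n in range(1, dim)], -1)  # creation operator
a = adag.T                                               # annihilation operator

# H = adag (x) |1><0| + a (x) |0><1|  on  mode (x) ancilla-qubit.
flip_up = np.array([[0.0, 0.0], [1.0, 0.0]])             # |1><0| on the ancilla
H = np.kron(adag, flip_up) + np.kron(a, flip_up.T)

# exp(-i H pi/2) maps |vac>|0> to |1 particle>|1> up to a global phase.
psi0 = np.zeros(2 * dim); psi0[0] = 1.0                  # |n=0>|q=0>
out = expm(-1j * (np.pi / 2) * H) @ psi0

target = np.zeros(2 * dim); target[3] = 1.0              # |n=1>|q=1>
print(abs(np.vdot(target, out)))                         # ~1 up to global phase
```

The ancilla ends in the product state |1〉, so discarding it leaves the one-particle state intact, mirroring the construction in the text.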

At this point, we have finished constructing wave packets of the noninteracting theory. We next use a Suzuki-Trotter formula to construct a quantum circuit simulating the unitary transformation induced by a time-dependent Hamiltonian in which the coupling constant is gradually increased from zero to its final value, λ_{0}, over time τ. A perfectly adiabatic turn-on would ensure that no stray particles are created during this process, provided particle creation costs energy, that is, the particles have nonzero mass. Imperfect adiabaticity causes errors, namely, particle creation from the vacuum and the splitting of one particle into three. The probabilities of such errors depend significantly on the particle mass.

In the *D* = 2, 3 interacting theory, with fixed *m*_{0} and sufficiently large λ_{0}, the mass vanishes. This marks the location of the ϕ → −ϕ symmetry-breaking transition. Here, we restrict our attention to simulations within the symmetric phase, but we do consider systems arbitrarily close to the phase transition, because these should be particularly hard to simulate classically. In this regime, the mass exhibits known power-law behavior.

As Eq. 4 shows, wave packets are not eigenstates of *H*^{(0)}. During the adiabatic turn-on, the different eigenstates acquire different dynamical phases. Thus, as the wave packet time evolves, it propagates and broadens. This behavior is undesirable in our simulation, because we do not wish the particles to collide and scatter before the coupling reaches its final value. We therefore introduce backward time evolutions governed by time-independent Hamiltonians into the adiabatic state-preparation process to undo the dynamical phases. Specifically, let *H*(*s*), 0 ≤ *s* ≤ 1, parameterize the adiabatic time evolution, with *H*(0) = *H*^{(0)} and *H*(1) = *H*. We divide the adiabatic preparation into *J* steps, with *U*_{*j*} denoting the unitary time evolution induced by the time-dependent Hamiltonian linearly interpolating between *H*((*j* − 1)/*J*) and *H*(*j*/*J*) over a period of τ/*J*. Let *M*_{*j*} consist of backward, forward, and backward evolutions, namely,

*M*_{*j*} = exp(*iH*(*j*/*J*)τ/(2*J*)) *U*_{*j*} exp(*iH*((*j* − 1)/*J*)τ/(2*J*))

In the limit *J* → ∞, the dynamical phases cancel, while the adiabatic change of eigenbasis is undisturbed (*33*).
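The phase cancellation can be seen exactly in a toy limit of our own construction in which all the *H*(*s*) commute (here, diagonal matrices): the forward evolution under the linear ramp then reduces to the average Hamiltonian, and the backward-forward-backward sandwich is the identity for any *J*:

```python
import numpy as np
from scipy.linalg import expm

# Diagonal (hence mutually commuting) interpolation H(s) = (1-s) H0 + s H1.
H0 = np.diag([1.0, 2.0, 3.0])
H1 = np.diag([1.5, 2.5, 4.0])
H = lambda s: (1 - s) * H0 + s * H1

tau, J = 10.0, 4
M_total = np.eye(3, dtype=complex)
for j in range(1, J + 1):
    s0, s1 = (j - 1) / J, j / J
    # Forward evolution under the linear ramp; for commuting H(s) the
    # time-ordered evolution is exactly the average-Hamiltonian evolution.
    U_j = expm(-1j * (H(s0) + H(s1)) / 2 * (tau / J))
    # Backward, forward, backward: the accumulated dynamical phases cancel.
    M_j = expm(1j * H(s1) * tau / (2 * J)) @ U_j @ expm(1j * H(s0) * tau / (2 * J))
    M_total = M_j @ M_total

print(np.allclose(M_total, np.eye(3)))  # True in this commuting toy case
```

In the field theory the *H*(*s*) do not commute, which is why the cancellation holds only in the limit *J* → ∞ while the adiabatic basis change survives.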

What is the complexity of adiabatic state preparation? At weak coupling for *d* = 1, 2, a wave packet travels a distance *O*(τ/*J*^{2}) as the coupling is turned on; therefore, choosing *J* = *O*(τ^{1/2}) ensures that this distance is a suitably small constant. Diabatic transitions occur with probability *O*((*J*/τγ^{2})^{2}), where γ is the energy gap of the transition; hence, the state-preparation error is ε = *O*(1/τ). The number of quantum gates needed to simulate this process is *O*((τ𝒱)^{1+*o*(1)}) = *O*((τ*a*^{−*d*})^{1+*o*(1)}) in a fixed physical volume; setting τ = *O*(ε^{−1}) and *a* = *O*(ε^{1/2}) (see below), we conclude that the complexity scales as *O*((1/ε)^{1+(*d*/2)}). For *d* = 3, the scaling with ε is further enhanced, because the perturbative corrections to the physical mass are highly sensitive to the value of *a*. Complexity scalings at strong coupling can be obtained by analogous reasoning (*33*).

After the system has evolved for a period in which scattering occurs, measurement can be performed in one of two ways. The interaction can be adiabatically turned off through the time-reversed version of the turn-on described above; in the free theory, we can then measure the number operators of the momentum modes, using the method of phase estimation (*40*), that is, by simulating the free time evolution for a range of durations *t* and Fourier-transforming the results. Alternatively, one can divide space into small regions and measure the total energy and momentum operators in each region using phase estimation. This is a reasonable idealization of real detectors and can yield information about bound-state energies.

**Discretization errors.** Having described how to simulate scattering processes in a discretized quantum field theory, we now consider discretization errors. To analyze these errors, we use methods of effective field theory (EFT), a well-developed formalism underlying our modern understanding of quantum field theory.

In its regime of validity, typically below a particular energy scale, an EFT reproduces the behavior of the full (that is, fundamental) theory under consideration: It can be regarded as the low-energy limit of that theory. An EFT for a full theory is thus somewhat analogous to a Taylor series for a function. It involves an expansion in some suitable small parameter, so that, although it consists of infinitely many terms, higher-order terms are increasingly suppressed. Thus, the series can be truncated, with corresponding finite and controllable errors.

We apply this framework to analyze the effect of discretizing the spatial dimensions of the continuum ϕ^{4} quantum field theory. The discretized Lagrangian can be thought of as the leading contribution (denoted by *ℒ*^{(0)}) to an EFT. From the leading operators left out, we can thus infer the scaling of the error associated with a nonzero lattice spacing *a*.

The full (untruncated) effective Lagrangian will have every coupling respecting the ϕ → −ϕ symmetry and so will take the form of *ℒ*^{(0)} supplemented by operators of the form ϕ^{2*n*} together with operators containing derivatives. Integration by parts allows any operator with two derivatives to be recast in the form ϕ^{2*n*+1}∂^{2}ϕ. For example, ϕ^{2}∂_{μ}ϕ∂^{μ}ϕ = (1/3)∂_{μ}(ϕ^{3})∂^{μ}ϕ → −(1/3)ϕ^{3}∂^{2}ϕ. Such an operator can then be simplified via the equation of motion (*41*, *42*). If this were the equation of motion of the continuum theory, any derivative operator would then be completely eliminated. In the discretized theory, however, the equation of motion is modified, and there are residual, Lorentz-violating operators. In fact, because the difference operators in the discretized theory are only approximately equal to the derivatives in the continuum theory, the simplest Lorentz-violating operators are induced purely by discretization.

In units where *ħ* = *c* = 1, all quantities have units of some power of mass. The mass dimensions (denoted by square brackets) of the field and coupling in *D* = *d* + 1 spacetime dimensions are [ϕ] = (*D* − 2)/2 and [λ] = 4 − *D*, implying that the coefficients *c* and *c*′′ of the operators ϕ^{6} and ϕ^{8} have dimensions [*c*] = 6 − 2*D* and [*c*′′] = 8 − 3*D*. In *D* = 4 dimensions, [*c*] = −2 and [*c*′′] = −4, and the only pertinent dimensionful parameter is the lattice spacing; therefore, *c* ~ *a*^{2} and *c*′′ ~ *a*^{4}. Thus, of the operators not included in the Lagrangian *ℒ*^{(0)}, ϕ^{6} is more significant than ϕ^{2*n*}, for *n* > 3.
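The dimension counting can be tabulated mechanically. A small sketch, with the conventions of the text ([∂] = 1, action dimensionless, so [coefficient] + [operator] = *D*):

```python
def phi_dim(D):
    """Mass dimension of the field phi in D spacetime dimensions."""
    return (D - 2) / 2

def coupling_dim(D, n_phi, n_deriv=0):
    """Mass dimension of the coefficient of an operator built from n_phi
    fields and n_deriv derivatives, fixed by [coefficient] + [operator] = D."""
    return D - (n_phi * phi_dim(D) + n_deriv)

for D in (2, 3, 4):
    print(D,
          coupling_dim(D, 4),   # [lambda] = 4 - D
          coupling_dim(D, 6),   # [c]      = 6 - 2D
          coupling_dim(D, 8))   # [c'']    = 8 - 3D
```

Negative dimensions signal irrelevant operators, whose coefficients must be compensated by positive powers of the only available length scale, the lattice spacing *a*.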

In *D* = 2, 3, the scaling of the coefficients with *a* is somewhat less obvious, because now the coupling λ provides another dimensionful parameter. To obtain the scaling of *c*, one should consider the Feynman diagram that generates the corresponding operator. This involves three ϕ^{4} vertices, so *c* ~ λ^{3}*a*^{6−*D*}. (Other diagrams involve higher powers of λ; hence their contributions to the operator are suppressed by higher powers of *a*.) Likewise, the coefficient of ϕ^{8} will scale as λ^{4}*a*^{8−*D*}, which means that it is suppressed by *a*^{2} relative to the coefficient of ϕ^{6}.

The EFT thus consists of three different classes of operators: operators of the form ϕ^{2*n*}, Lorentz-violating operators arising solely from discretization effects, and Lorentz-violating operators resulting from discretization and quantum effects. These are shown with the scaling of their coefficients in Table 2, which reveals that the dominant discretization errors scale as *a*^{2} in *D* = 2, 3, 4. (In *D* = 2, 3, errors of type II dominate. In *D* = 4, errors of types I and II each scale as *a*^{2}.)

At strong coupling, the operators and their scaling remain the same at the scale of the matching of the full theory onto the EFT, although the explicit coefficients are no longer calculable. However, the running of the coefficients down to lower energies is determined by their so-called anomalous dimensions, which depend on the coupling strength. These anomalous dimensions modify the scaling; at weak coupling the modification is small, but at strong coupling it could be larger. (Still, the scaling will remain polynomial.)

**Concluding remarks.** The most computationally costly part of the algorithm is either adiabatic state preparation or preparation of the free vacuum, depending on the asymptotic scaling considered and the number of spatial dimensions (*33*). Because *a* ~ ε^{1/2}, preparation of the free vacuum has complexity ~𝒱^{2.376} ~ *a*^{−2.376*d*} ~ ε^{−2.376*d*/2}. The number of gates, *G*_{weak}, needed to simulate weakly coupled, (*d* + 1)-dimensional ϕ^{4} theory with accuracy ±ε scales as follows:

*G*_{weak} = (1/ε)^{1.5+*o*(1)} for *d* = 1, (1/ε)^{2.376+*o*(1)} for *d* = 2, (1/ε)^{3.564+*o*(1)} for *d* = 3 (9)

where *f*(*n*) = *o*(*g*(*n*)) means lim_{*n*→∞} *f*(*n*)/*g*(*n*) = 0. The asymptotic scaling of the number of gates used to simulate the strongly coupled theory is summarized in Table 1. The minimum number of (perfect) qubits required for a nontrivial simulation in *D* = 2 is estimated (necessarily crudely) to be on the order of 1000 to 10,000, corresponding to ε ranging from 10% to 1% (*33*).
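For *d* = 1, 2, the weak-coupling exponent follows from taking the larger of the two contributions discussed above: adiabatic state preparation, (1/ε)^{1+*d*/2}, versus free-vacuum preparation, (1/ε)^{2.376*d*/2}. A one-line sketch of that bookkeeping (*d* = 3 is omitted because the adiabatic scaling there carries an extra enhancement):

```python
def weak_coupling_exponent(d):
    """Exponent of 1/eps in the gate count G_weak for d = 1, 2 spatial
    dimensions: the costlier of adiabatic state preparation and
    free-vacuum (Kitaev-Webb) preparation dominates."""
    if d not in (1, 2):
        raise ValueError("this simple bookkeeping applies only to d = 1, 2")
    adiabatic = 1 + d / 2          # O((1/eps)^(1 + d/2))
    vacuum = 2.376 * d / 2         # O((1/eps)^(2.376 d / 2))
    return max(adiabatic, vacuum)

print(weak_coupling_exponent(1), weak_coupling_exponent(2))  # 1.5 2.376
```

Adiabatic preparation dominates for *d* = 1 and vacuum preparation for *d* = 2, matching the statement that the costliest step depends on the number of spatial dimensions.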

We have shown that quantum computers can efficiently calculate scattering probabilities in ϕ^{4} theory to arbitrary precision at both weak and strong coupling. Known classical algorithms take exponential time to do this in the strong-coupling and high-precision regimes. In addition to establishing a new exponential quantum speedup, our results lead the way toward a quantum algorithm for simulating the Standard Model of particle physics, which has new features (not exhibited by ϕ^{4} theory) such as chiral fermions and gauge interactions. Such an algorithm would establish that, except for quantum-gravity effects, the standard quantum circuit model suffices to capture completely the computational power of our universe.

## Supplementary Materials

www.sciencemag.org/cgi/content/full/336/6085/1130/DC1

Texts S1 to S7

Figs. S1 and S2

## References and Notes

- (*32*) For λ_{0} > 0 one has a tighter bound; in this case, it is unlikely for |ϕ(**x**)| to be much larger than *O*(*E*^{1/4}).
- (*33*) See supplementary materials on *Science* Online.
**Acknowledgments:** We thank A. Gorshkov for helpful discussions. Supported by NSF grant PHY-0803371, U.S. Department of Energy grant DE-FG03-92-ER40701, and NSA/Army Research Office grant W911NF-09-1-0442. Much of this work was done while S.P.J. was at the Institute for Quantum Information (IQI), Caltech, supported by the Sherman Fairchild Foundation. K.S.M.L. was supported in part by NSF grant PHY-0854782. He is grateful for the hospitality of the IQI, Caltech, during parts of this work.