Research Article

Quantum Algorithms for Quantum Field Theories


Science  01 Jun 2012:
Vol. 336, Issue 6085, pp. 1130-1133
DOI: 10.1126/science.1217069

Abstract

Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (ϕ^4 theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and the algorithm applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm.

The question whether quantum field theories can be efficiently simulated by quantum computers was first posed by Feynman three decades ago when he introduced the notion of quantum computers (1). Since then, efficient quantum algorithms for simulating the dynamics of quantum many-body systems have been developed theoretically (2–4) and demonstrated experimentally (5–7). Quantum field theory, which applies quantum mechanics to functions of space and time, presents additional technical challenges, because the number of degrees of freedom per unit volume is formally infinite.

We show that quantum computers can efficiently calculate scattering probabilities in continuum ϕ^4 theory to an arbitrary degree of precision. We have chosen ϕ^4 theory, a scalar theory with quartic self-interactions, because it is among the simplest interacting quantum field theories and thus illustrates essential issues without unnecessary complications. Our work introduces several new techniques, including creation of the initial state by a generalization of adiabatic state preparation and the use of effective field theory to analyze spatial discretization errors.

In complexity theory, the efficiency of an algorithm is judged by how its computational demands scale with the problem size or some other quantity associated with the problem’s intrinsic difficulty. An algorithm with polynomial-time asymptotic scaling is considered to be feasible, whereas one with superpolynomial (typically, exponential) scaling is considered infeasible. This classification has proved to be a useful guide in practice.

Traditional calculations of quantum field theory scattering amplitudes rely on perturbation theory—namely, a series expansion in powers of the coupling (the coefficient of the interaction term), which is taken to be small. A powerful and intuitive way of organizing this perturbative expansion is through Feynman diagrams, in which the number of loops is associated with the power of the coupling. A reasonable measure of the computational complexity of perturbative calculations is therefore the number of Feynman diagrams, which is determined by combinatorics and grows factorially with the number of loops and the number of external particles.

If the coupling constant is insufficiently small, the perturbation series does not yield correct results. In ϕ^4 theory, for D = 2, 3 spacetime dimensions, by increasing the coupling λ0, one eventually reaches a quantum phase transition at some critical coupling λc (8–10). In the parameter space near this phase transition, perturbative methods become unreliable; this region is referred to as the strong-coupling regime. There are then no known feasible classical methods for calculating scattering amplitudes, although lattice field theory can be used to obtain static quantities such as mass ratios. Even at weak coupling, the perturbation series is not convergent, although it is asymptotic (11–13). Including higher-order contributions beyond a certain point makes the approximation worse. There is thus a maximum possible precision achievable perturbatively.

We simulate a process in which initially well-separated massive particles with well-defined momenta scatter off each other. The input to our algorithm is a list of the momenta of the incoming particles, and the output is a list of the momenta of the outgoing particles produced by the physical scattering process. At relativistic energies, the number of outgoing particles may differ from the number of incoming particles. In accordance with quantum mechanics, the incoming momenta do not uniquely determine the outgoing momenta, but rather a probability distribution over possible outcomes. Upon repeated runs, our quantum algorithm samples from this distribution. The asymptotic scaling of the algorithm is given in Eq. 9 and Table 1. The simulated scattering processes closely match experiments in particle accelerators, which are the standard tools to probe quantum field-theoretical effects.

Table 1

The asymptotic scaling of the number of quantum gates needed to simulate scattering in the strong-coupling regime in d = 1, 2 spatial dimensions is polynomial in p (the momentum of the incoming pair of particles), λc − λ0 (the distance from the phase transition), and nout (the maximum kinematically allowed number of outgoing particles). The notation f(n) = Õ(g(n)) means f(n) = O(g(n) log^c(n)) for some constant c.


The issue of gauge symmetries in quantum simulation of lattice field theories has been addressed in (14). There is an extensive literature on analog simulation of interacting quantum field theories using ultracold atoms (15–26), trapped ions (27, 28), and Josephson-junction arrays (29). Much work has also been done on analog simulation of special-relativistic quantum mechanical effects such as zitterbewegung and the Klein paradox, as well as general-relativistic quantum effects such as Hawking radiation [for recent reviews, see (30, 31)]. Our work, in contrast to these studies, addresses digital quantum simulation, with explicit consideration of convergence to the continuum limit and efficient preparation of wave packet states for the computation of dynamical quantities such as scattering probabilities. Our analysis includes error estimates of all parts of our algorithm.

Representing fields with qubits. Although quantum field theory is typically expressed in terms of Lagrangians and within the interaction picture, our algorithm is more naturally described in the formalism of Hamiltonians and within the Schrödinger picture. We start by defining a lattice ϕ^4 theory and subsequently address convergence to the continuum theory. (In D = 4, the continuum limit is believed to be the free theory. Nonetheless, because the coupling shrinks only logarithmically, scattering processes for particles with small momenta in lattice units are interesting to compute.) Let Ω = aℤ_{L̂}^d, that is, an L̂ × ⋯ × L̂ lattice in d = D − 1 spatial dimensions with periodic boundary conditions and lattice spacing a. The number of lattice sites is 𝒱 = L̂^d. For each x ∈ Ω, let ϕ(x) be a continuous, real degree of freedom (interpreted as the field at x), and let π(x) be the corresponding canonically conjugate variable. In canonical quantization, these degrees of freedom are promoted to Hermitian operators with the commutation relation

$[\phi(\mathbf{x}),\, \pi(\mathbf{y})] = i\, a^{-d}\, \delta_{\mathbf{x},\mathbf{y}}\, \mathbb{1}$   (1)

We use units with ħ = c = 1. ϕ^4 theory on the lattice Ω is defined by the Hamiltonian

$H = \sum_{\mathbf{x} \in \Omega} a^d \left[ \tfrac{1}{2}\pi(\mathbf{x})^2 + \tfrac{1}{2}\big(\nabla_a\phi\big)^2(\mathbf{x}) + \tfrac{1}{2} m_0^2\, \phi(\mathbf{x})^2 + \tfrac{\lambda_0}{4!}\, \phi(\mathbf{x})^4 \right]$   (2)

where ∇_aϕ denotes a discretized derivative (that is, a finite-difference operator) and m0 is the particle mass of the corresponding noninteracting (λ0 = 0) theory.

We represent the state of the lattice field theory by devoting one register of qubits to store the value of the field at each lattice point. In principle, each ϕ(x) is an unbounded continuous variable. To represent the field at a given site with finitely many qubits, we cut off the field at a maximum magnitude ϕmax and discretize it in increments of δϕ. This requires nb = O(log_2(ϕmax/δϕ)) qubits per site. Note that this field discretization is a separate issue from the spatial discretization via the lattice Ω.

Let |ψ〉 be any state such that 〈ψ|H|ψ〉 ≤ E. The probability distribution over ϕ(x) defined by |ψ〉 (for any x ∈ Ω) yields a very low probability (32, 33) for |ϕ(x)| to be much larger than O(E^{1/2}). Thus, a cutoff ϕmax = O((𝒱E/(a^d m0^2 ε))^{1/2}) suffices to ensure fidelity 1 − ε to the original state |ψ〉. One can prove this by bounding 〈ψ|ϕ(x)|ψ〉 and 〈ψ|ϕ^2(x)|ψ〉 as functions of E and applying Chebyshev’s inequality (33). To choose δϕ, note that the eigenbasis of a^dπ(x) is the Fourier transform of the eigenbasis of ϕ(x). Hence, discretizing ϕ(x) in units of δϕ is equivalent to introducing the cutoff −πmax ≤ π(x) ≤ πmax, where πmax = 1/(a^dδϕ). By bounding the expectations of π(x) and π^2(x), one finds that it suffices to choose πmax = O((𝒱E/(a^d ε))^{1/2}), and thus nb = O(log_2(𝒱E/(m0ε))).
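
As a rough illustration (a minimal Python sketch with invented parameter values, and with every constant hidden by the O(·) notation simply set to 1), these bounds translate into a concrete per-site qubit count as follows:

import math

def cutoff_estimates(V, E, a, d, m0, eps):
    """Order-of-magnitude cutoffs from the O(.) bounds quoted above;
    constant factors are dropped, so the numbers are only indicative."""
    phi_max = math.sqrt(V * E / (a**d * m0**2 * eps))   # field cutoff
    pi_max = math.sqrt(V * E / (a**d * eps))            # conjugate-variable cutoff
    delta_phi = 1.0 / (a**d * pi_max)                   # field resolution
    nb = math.ceil(math.log2(phi_max / delta_phi))      # qubits per site ~ log2(V*E/(m0*eps))
    return phi_max, delta_phi, nb

# Illustrative parameters only: a 64 x 64 lattice in d = 2.
print(cutoff_estimates(V=64**2, E=100.0, a=0.1, d=2, m0=1.0, eps=0.01))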

The algorithm. We now turn to the main three tasks of quantum simulation: preparing an initial state, simulating the time evolution, and measuring final observables. We first discuss the simulation of time evolution because it is used in all three tasks. The unitary operator representing time evolution, exp(−iHt), can be approximated by a quantum circuit of O((t𝒱)^{1+1/(2k)}) gates implementing a kth-order Suzuki-Trotter formula (34, 35). For n time steps, the error is ε ~ n^{−2k}. The near-linear scaling with t has long been known. The scaling with 𝒱 is a consequence of the locality of H (33) and appears not to have been noted previously in the quantum algorithms literature.
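
The following minimal Python sketch (NumPy/SciPy, with illustrative parameters that are not taken from the paper) builds a tiny two-site, d = 1 version of the Hamiltonian in Eq. 2 and checks the second-order Suzuki-Trotter error classically; the quantum algorithm implements the analogous operator splitting as a circuit, alternating between the field eigenbasis and its Fourier transform.

import numpy as np
from scipy.linalg import expm

# Toy version of Eq. 2: d = 1, two sites, q discretized field levels per site.
q, a, m0, lam0 = 8, 1.0, 1.0, 0.5
phi_vals = np.linspace(-2.5, 2.5, q)              # truncated, discretized field values
dphi = phi_vals[1] - phi_vals[0]

# a^d * pi is conjugate to phi, so its eigenbasis is the discrete Fourier
# transform of the field eigenbasis (see the preceding section).
k = 2 * np.pi * np.fft.fftfreq(q, d=dphi)         # eigenvalues of a^d * pi
F = np.fft.fft(np.eye(q), norm="ortho")           # DFT matrix, field -> conjugate basis
pi_sq = F.conj().T @ np.diag((k / a) ** 2) @ F    # pi(x)^2 on a single site

phi, I = np.diag(phi_vals), np.eye(q)
phi1, phi2 = np.kron(phi, I), np.kron(I, phi)
pi2_1, pi2_2 = np.kron(pi_sq, I), np.kron(I, pi_sq)

# H = H_pi + H_phi, with H_phi diagonal in the field basis (periodic b.c.).
H_pi = (a / 2) * (pi2_1 + pi2_2)
grad = (phi2 - phi1) / a                          # finite-difference derivative
H_phi = a * (grad @ grad                          # two links, each (1/2)*(grad phi)^2
             + 0.5 * m0**2 * (phi1 @ phi1 + phi2 @ phi2)
             + (lam0 / 24.0) * (np.linalg.matrix_power(phi1, 4)
                                + np.linalg.matrix_power(phi2, 4)))

def trotter2(H1, H2, t, n):
    """Second-order (Strang) Suzuki-Trotter approximation of exp(-i*(H1+H2)*t)."""
    half, full = expm(-1j * H1 * t / (2 * n)), expm(-1j * H2 * t / n)
    return np.linalg.matrix_power(half @ full @ half, n)

t = 0.05
U_exact = expm(-1j * (H_pi + H_phi) * t)
for n in (4, 16, 64):
    print(n, np.linalg.norm(trotter2(H_pi, H_phi, t, n) - U_exact, 2))
    # the error decreases with n, approaching the expected ~1/n^2 scaling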

To simulate scattering, one needs to prepare an initial state of particles in well-separated wave packets. We do so by preparing the vacuum of the λ0 = 0 theory, exciting wave packets, and then adiabatically turning on the coupling λ0. Let H^{(0)} be the Hamiltonian obtained by setting λ0 = 0 in H. H^{(0)} defines an exactly solvable model in which the particles are noninteracting. The vacuum (ground) state |vac(0)〉 of H^{(0)} is a multivariate Gaussian wave function in the variables {ϕ(x)|x ∈ Ω} and can therefore be prepared via the method of Kitaev and Webb (36). The asymptotic scaling of the Kitaev-Webb method is dictated by the computation of the LDL^T decomposition of the inverse covariance matrix, which can be done classically in O(𝒱^{2.376}) time (37, 38).
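
The classical preprocessing step can be sketched in a few lines of Python (d = 1, illustrative parameters; normalization conventions are only schematic). The quadratic form of the free Hamiltonian determines a matrix proportional to the inverse covariance matrix of the Gaussian ground state, which is then factored as LDL^T; SciPy's routine stands in here for the asymptotically faster O(𝒱^{2.376}) matrix algorithms quoted above.

import numpy as np
from scipy.linalg import ldl, sqrtm

L_sites, a, m0 = 16, 1.0, 1.0          # illustrative d = 1 lattice

# Periodic lattice Laplacian and the quadratic form (m0^2 - Laplacian).
lap = (-2 * np.eye(L_sites)
       + np.eye(L_sites, k=1) + np.eye(L_sites, k=-1)
       + np.eye(L_sites, k=L_sites - 1) + np.eye(L_sites, k=-(L_sites - 1))) / a**2
K = m0**2 * np.eye(L_sites) - lap

# Free ground state ~ exp(-(1/2) phi^T A phi); A is proportional to the
# inverse covariance matrix handled by the Kitaev-Webb construction.
A = a * np.real(sqrtm(K))              # ~ a^d * sqrt(m0^2 - Laplacian) for d = 1

Lmat, D, perm = ldl(A)                 # LDL^T factorization (classical step)
print(np.allclose(Lmat @ D @ Lmat.T, A))   # True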

In analogy with the harmonic oscillator, one can define creation and annihilation operators $a_{\mathbf{p}}^{\dagger}$ and $a_{\mathbf{p}}$ such that $H^{(0)} = \sum_{\mathbf{p} \in \Gamma} L^{-d}\, \omega(\mathbf{p})\, a_{\mathbf{p}}^{\dagger} a_{\mathbf{p}} + E^{(0)}\, \mathbb{1}$, where $\Gamma = \frac{2\pi}{\hat{L} a} \mathbb{Z}_{\hat{L}}^{d}$ is the momentum-space lattice corresponding to Ω (with L = L̂a), $\omega(\mathbf{p}) = \left[ m_0^2 + \frac{4}{a^2} \sum_{j=1}^{d} \sin^2\!\left(\frac{a p_j}{2}\right) \right]^{1/2}$, and E^{(0)} is an irrelevant zero-point energy. The operator $a_{\mathbf{p}}^{\dagger}$ can be interpreted as creating a (completely delocalized) particle of the noninteracting theory with momentum p and energy ω(p).

The (unnormalized) state ϕ(x)|vac(0)〉 is interpreted as a single particle localized at x [see, e.g., (39)]. Because a_p|vac(0)〉 = 0, we have ϕ(x)|vac(0)〉 = a_x†|vac(0)〉, where

$a_{\mathbf{x}}^{\dagger} = \sum_{\mathbf{p} \in \Gamma} L^{-d} \exp(i \mathbf{p} \cdot \mathbf{x}) \left[ \frac{1}{2\omega(\mathbf{p})} \right]^{1/2} a_{\mathbf{p}}^{\dagger}$   (3)

The operator

$a_{\psi}^{\dagger} = \eta(\psi) \sum_{\mathbf{x} \in \Omega} a^d\, \psi(\mathbf{x})\, a_{\mathbf{x}}^{\dagger}$   (4)

creates a wave packet with position-space wave function ψ; η(ψ) is a normalization constant, chosen so that $[a_{\psi}, a_{\psi}^{\dagger}] = 1$. The operator $a_{\psi}^{\dagger}$ is not unitary, so it cannot be directly implemented by a quantum circuit. Instead, we introduce an ancillary qubit and let

$H_{\psi} = a_{\psi}^{\dagger} \otimes |1\rangle\langle 0| + a_{\psi} \otimes |0\rangle\langle 1|$   (5)

One can verify that $\exp(-i H_{\psi} \pi/2)\, |\mathrm{vac}(0)\rangle \otimes |0\rangle = -i\, a_{\psi}^{\dagger} |\mathrm{vac}(0)\rangle \otimes |1\rangle$. Using a high-order Suzuki-Trotter formula (34, 35), we can construct an efficient quantum circuit approximating the unitary transformation exp(−iH_ψπ/2). Applied to |vac(0)〉, this circuit yields the desired state up to an irrelevant global phase and an unentangled ancillary qubit, which can be discarded. We repeat this process for each incoming particle desired.
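
The identity below Eq. 5 can be checked numerically on a toy model in which a single truncated bosonic mode stands in for the wave-packet mode a_ψ (a minimal Python sketch; this is not the lattice theory itself):

import numpy as np
from scipy.linalg import expm

n_tr = 6                                          # bosonic truncation level
adag = np.diag(np.sqrt(np.arange(1, n_tr)), -1)   # creation operator a^dagger
a_op = adag.T                                     # annihilation operator
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
P10 = np.outer(ket1, ket0)                        # |1><0| on the ancillary qubit

H_psi = np.kron(adag, P10) + np.kron(a_op, P10.T)     # Eq. 5
vac = np.zeros(n_tr); vac[0] = 1.0                    # vacuum of the toy mode

state = expm(-1j * H_psi * np.pi / 2) @ np.kron(vac, ket0)
target = -1j * np.kron(adag @ vac, ket1)              # -i a^dagger|vac>|1>
print(np.allclose(state, target))                     # True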

Because we wish to create localized wave packets, we choose ψ(x) to have bounded support. Expanding a_ψ† in terms of the operators ϕ and π yields an expression of the form $a_{\psi}^{\dagger} = \sum_{\mathbf{x} \in \Omega} \left[ f(\mathbf{x})\,\phi(\mathbf{x}) + g(\mathbf{x})\,\pi(\mathbf{x}) \right]$, where f(x) and g(x) are exponentially decaying with characteristic length scale 1/m0 outside the support of ψ. Thus, a_ψ and a_ψ† can be exponentially well approximated by linear combinations of the operators ϕ and π on a local region of space, and the complexity of simulating exp(−iH_ψπ/2) does not scale with the volume 𝒱. Furthermore, provided the initial wave packets are separated by a distance δ that is large relative to 1/m0, the preparation of each additional wave packet leaves the existing wave packets almost perfectly undisturbed [with errors on the order of exp(−m0δ)].
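
The exponential localization of f(x) and g(x) traces back to the fact that the position-space profile of the [1/(2ω(p))]^{1/2} factor appearing in Eq. 3 decays on the scale 1/m0. A short Python sketch (d = 1, illustrative parameters, overall normalizations omitted) makes this visible:

import numpy as np

N, a, m0 = 256, 1.0, 0.2
p = 2 * np.pi * np.fft.fftfreq(N, d=a)                  # momentum-space lattice
omega = np.sqrt(m0**2 + (4 / a**2) * np.sin(a * p / 2) ** 2)

kernel = np.fft.ifft(1.0 / np.sqrt(2 * omega)).real     # position-space profile
for r in (5, 10, 20, 40):
    print(r, abs(kernel[r]))    # magnitudes fall off roughly like exp(-m0 * a * r)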

At this point, we have finished constructing wave packets of the noninteracting theory. We next use a Suzuki-Trotter formula to construct a quantum circuit simulating the unitary transformation induced by a time-dependent Hamiltonian in which the coupling constant is gradually increased from zero to its final value, λ0, over time τ. A perfectly adiabatic turn-on would ensure that no stray particles are created during this process, provided particle creation costs energy, that is, the particles have nonzero mass. Imperfect adiabaticity causes errors, namely, particle creation from the vacuum and the splitting of one particle into three. The probabilities of such errors depend significantly on the particle mass.

In the D = 2, 3 interacting theory, with fixed m0 and sufficiently large λ0, the mass vanishes. This marks the location of the ϕ → −ϕ symmetry-breaking transition. Here, we restrict our attention to simulations within the symmetric phase, but we do consider systems arbitrarily close to the phase transition, because these should be particularly hard to simulate classically. In this regime, the mass exhibits known power-law behavior.

As Eq. 4 shows, wave packets are not eigenstates of H^{(0)}. During the adiabatic turn-on, the different eigenstates acquire different dynamical phases. Thus, as the wave packet time evolves, it propagates and broadens. This behavior is undesirable in our simulation, because we do not wish the particles to collide and scatter before the coupling reaches its final value. We therefore introduce backward time evolutions governed by time-independent Hamiltonians into the adiabatic state-preparation process to undo the dynamical phases. Specifically, let H(s) parameterize the adiabatic time evolution, with H(0) = H^{(0)} and H(1) = H. We divide the adiabatic preparation into J steps, with Uj denoting the unitary time evolution induced by the time-dependent Hamiltonian linearly interpolating between H((j − 1)/J) and H(j/J) over a period of τ/J. Let Mj consist of backward, forward, and backward evolutions, namely,

$M_j = \exp\!\left[ iH\!\left(\tfrac{j}{J}\right)\tfrac{\tau}{2J} \right]\, U_j\, \exp\!\left[ iH\!\left(\tfrac{j-1}{J}\right)\tfrac{\tau}{2J} \right]$   (6)

Our full state-preparation process is $\prod_{j=1}^{J} M_j$. The dynamical phases are undone, converging to zero as J → ∞, while the adiabatic change of eigenbasis is undisturbed (33).
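
A toy numerical illustration of Eq. 6, with a 2 × 2 Hamiltonian standing in for H(s) (illustrative parameters; this is not the field-theory Hamiltonian), shows the dynamical phase being removed while the adiabatic following of the eigenstate is preserved:

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda s: -sz + 0.4 * s * sx          # H(0) = -sigma_z, H(1) = -sigma_z + 0.4*sigma_x
tau, J, K = 200.0, 20, 200                # turn-on time, coarse steps, sub-steps per U_j

def U_step(j):
    """Evolution under H(s) linearly interpolated over [(j-1)/J, j/J]."""
    U = np.eye(2, dtype=complex)
    for kk in range(K):
        s = (j - 1 + (kk + 0.5) / K) / J
        U = expm(-1j * H(s) * tau / (J * K)) @ U
    return U

def ground(M):
    return np.linalg.eigh(M)[1][:, 0]

psi0 = ground(H(0)).astype(complex)
psi0 = psi0 if psi0[0].real > 0 else -psi0
psi_plain, psi_M = psi0.copy(), psi0.copy()
for j in range(1, J + 1):
    Uj = U_step(j)
    Mj = expm(1j * H(j / J) * tau / (2 * J)) @ Uj @ expm(1j * H((j - 1) / J) * tau / (2 * J))
    psi_plain = Uj @ psi_plain                # plain adiabatic turn-on
    psi_M = Mj @ psi_M                        # with the backward evolutions of Eq. 6

g1 = ground(H(1))
g1 = g1 if g1[0].real > 0 else -g1            # fix the arbitrary sign returned by eigh
print("plain adiabatic overlap :", np.vdot(g1, psi_plain))  # carries a large dynamical phase
print("with M_j phases undone  :", np.vdot(g1, psi_M))      # nearly real and positive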

What is the complexity of adiabatic state preparation? At weak coupling for d = 1, 2, a wave packet travels a distance O(τ/J^2) as the coupling is turned on; therefore, choosing J = O(τ^{1/2}) ensures that this distance is a suitably small constant. Diabatic transitions occur with probability O((J/(τγ^2))^2), where γ is the energy gap of the transition; hence, the state preparation error is ε = O(1/τ). The number of quantum gates needed to simulate this process is O((τ𝒱)^{1+o(1)}) = O((τa^{−d})^{1+o(1)}) in a fixed physical volume; setting τ = O(ε^{−1}) and a = O(ε^{1/2}) (see below), we conclude that the complexity scales as O((1/ε)^{1+(d/2)}). For d = 3, the scaling with ε is further enhanced because the perturbative corrections to the physical mass are highly sensitive to the value of a. Complexity scalings at strong coupling can be obtained by analogous reasoning (33).
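
Assembling these choices (a sketch of the arithmetic for the weak-coupling case just described, d = 1, 2):

$G = O\big((\tau\, a^{-d})^{1+o(1)}\big), \qquad \tau = O(\epsilon^{-1}), \quad a = O(\epsilon^{1/2}) \;\Longrightarrow\; G = O\big((\epsilon^{-1}\,\epsilon^{-d/2})^{1+o(1)}\big) = O\big((1/\epsilon)^{1+\frac{d}{2}+o(1)}\big).$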

After the system has evolved for a period in which scattering occurs, measurement can be performed in one of two ways. The interaction can be adiabatically turned off through the time-reversed version of the turn-on described above; in the free theory, we can then measure the number operators of the momentum modes, using the method of phase estimation (40), that is, by simulating $\exp\!\left(i L^{-d} a_{\mathbf{p}}^{\dagger} a_{\mathbf{p}}\, t\right)$ for various values of t and Fourier-transforming the results. Alternatively, one can divide space into small regions and measure the total energy and momentum operators in each region using phase estimation. This is a reasonable idealization of real detectors and can yield information about bound-state energies.
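
A classical cartoon of the first readout strategy (illustrative only; the quantum algorithm performs this coherently via phase estimation): if the simulated evolution imprints a phase proportional to a mode's occupation number n, Fourier-transforming the recorded phases over t reveals n.

import numpy as np

T, n_occupation = 64, 3                              # hypothetical occupation number
t = np.arange(T)
signal = np.exp(2j * np.pi * n_occupation * t / T)   # phases e^{i*2*pi*n*t/T}
print(int(np.argmax(np.abs(np.fft.fft(signal)))))    # prints 3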

Discretization errors. Having described how to simulate scattering processes in a discretized quantum field theory, we now consider discretization errors. To analyze these errors, we use methods of effective field theory (EFT), a well-developed formalism underlying our modern understanding of quantum field theory.

In its regime of validity, typically below a particular energy scale, an EFT reproduces the behavior of the full (that is, fundamental) theory under consideration: It can be regarded as the low-energy limit of that theory. An EFT for a full theory is thus somewhat analogous to a Taylor series for a function. It involves an expansion in some suitable small parameter, so that, although it consists of infinitely many terms, higher-order terms are increasingly suppressed. Thus, the series can be truncated, with corresponding finite and controllable errors.

We apply this framework to analyze the effect of discretizing the spatial dimensions of the continuum ϕ^4 quantum field theory. The discretized Lagrangian can be thought of as the leading contribution (denoted by ℒ^(0)) to an EFT. From the leading operators left out, we can thus infer the scaling of the error associated with a nonzero lattice spacing a.

The full (untruncated) effective Lagrangian will have every coupling respecting the ϕ → −ϕ symmetry and so will take the form

$\mathcal{L}_{\mathrm{eff}} = \mathcal{L}^{(0)} + \frac{c}{6!}\,\phi^6 + c'\,\phi^3 \partial^2 \phi + \frac{c''}{8!}\,\phi^8 + \cdots$   (7)

This can be simplified. First, the chain rule and integration by parts can be used to write any operator with two derivatives acting on different fields in the form $\phi^n \partial^2 \phi$. For example, $\phi^2\, \partial_\mu \phi\, \partial^\mu \phi = \tfrac{1}{3} \partial_\mu(\phi^3)\, \partial^\mu \phi \to -\tfrac{1}{3} \phi^3 \partial^2 \phi$. Such an operator can then be simplified via the equation of motion (41, 42). If this were the equation of motion of the continuum theory, any derivative operator would then be completely eliminated. In the discretized theory, however, the equation of motion is modified and there are residual, Lorentz-violating operators. In fact, because the difference operators in the discretized theory are only approximately equal to the derivatives in the continuum theory, the simplest Lorentz-violating operators are induced purely by discretization.

In units where ħ = c = 1, all quantities have units of some power of mass. The mass dimensions (denoted by square brackets) of the field and coupling in D = d + 1 spacetime dimensions are [ϕ] = (D − 2)/2 and [λ] = 4 − D, implying that

$[c] = 6 - 2D\,, \qquad [c''] = 8 - 3D$   (8)

In D = 4 dimensions, [c] = −2 and [c″] = −4, and the only pertinent dimensionful parameter is the lattice spacing; therefore, c ~ a^2 and c″ ~ a^4. Thus, of the operators not included in the Lagrangian ℒ^(0), ϕ^6 is more significant than ϕ^{2n}, for n > 3.

In D = 2, 3, the scaling of the coefficients with a is somewhat less obvious, because now the coupling λ provides another dimensionful parameter. To obtain the scaling of c, one should consider the Feynman diagram that generates the corresponding operator. This involves three ϕ^4 vertices, so c ~ λ^3 a^{6−D}. (Other diagrams involve higher powers of λ; hence their contributions to the operator are suppressed by higher powers of a.) Likewise, the coefficient of ϕ^8 will scale as λ^4 a^{8−D}, which means that it is suppressed by a^2 relative to the coefficient of ϕ^6.
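
The dimensional bookkeeping behind Eq. 8 and these scalings can be spelled out as follows (each term of the Lagrangian density must have mass dimension D, and the remaining powers of a are fixed by matching dimensions):

$[c] = D - 6[\phi] = 6 - 2D, \qquad [c''] = D - 8[\phi] = 8 - 3D,$

$c \sim \lambda^3 a^{\,3(4-D)-(6-2D)} = \lambda^3 a^{\,6-D}, \qquad c'' \sim \lambda^4 a^{\,4(4-D)-(8-3D)} = \lambda^4 a^{\,8-D}.$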

The EFT thus consists of three different classes of operators: operators of the form ϕ^{2n}, Lorentz-violating operators arising solely from discretization effects, and Lorentz-violating operators resulting from discretization and quantum effects. These are shown with the scaling of their coefficients in Table 2, which reveals that the dominant discretization errors scale as a^2 in D = 2, 3, 4. (In D = 2, 3, errors of type II dominate. In D = 4, errors of types I and II each scale as a^2.)

Table 2

Effective field theory operators fall into three classes. The general operator in each class is shown, with the canonical scaling of its coefficient in D spacetime dimensions.


At strong coupling, the operators and their scaling remain the same at the scale of the matching of the full theory onto the EFT, although the explicit coefficients are no longer calculable. However, the running of the coefficients down to lower energies is determined by their so-called anomalous dimensions, which depend on the coupling strength. These anomalous dimensions modify the scaling; at weak coupling the modification is small, but at strong coupling it could be larger. (Still, the scaling will remain polynomial.)

Concluding remarks. The most computationally costly part of the algorithm is either adiabatic state preparation or preparation of the free vacuum, depending on the asymptotic scaling considered and the number of spatial dimensions (33). Because a ~ ε^{1/2}, preparation of the free vacuum has complexity ~ 𝒱^{2.376} ~ a^{−2.376d} ~ ε^{−2.376d/2}. The number of gates, Gweak, needed to simulate weakly coupled, (d + 1)-dimensional ϕ^4 theory with accuracy ±ε scales as follows:

$G_{\mathrm{weak}} \sim \begin{cases} (1/\epsilon)^{1.5+o(1)}, & d = 1 \\ (1/\epsilon)^{2.376+o(1)}, & d = 2 \\ (1/\epsilon)^{5.5+o(1)}, & d = 3 \end{cases}$   (9)

The notation f(n) = o(g(n)) means lim_{n→∞} f(n)/g(n) = 0. The asymptotic scaling of the number of gates used to simulate the strongly coupled theory is summarized in Table 1. The minimum number of (perfect) qubits required for a nontrivial simulation in D = 2 is estimated, necessarily crudely, to be on the order of 1000 to 10,000, corresponding to ε ranging from 10% to 1% (33).
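
For d = 1, 2 the exponents in Eq. 9 follow from comparing the two candidate bottlenecks; the d = 3 exponent additionally reflects the mass-renormalization sensitivity noted earlier, whose detailed accounting is given in the supplementary materials (33):

$\mathcal{V}^{2.376} \sim a^{-2.376\,d} \sim \epsilon^{-1.188\,d}, \qquad (\tau\, a^{-d})^{1+o(1)} \sim \epsilon^{-(1+d/2)+o(1)},$

so for d = 1 the adiabatic turn-on dominates, giving (1/ε)^{1.5+o(1)}, whereas for d = 2 the free-vacuum preparation dominates, giving (1/ε)^{2.376+o(1)}.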

We have shown that quantum computers can efficiently calculate scattering probabilities in ϕ^4 theory to arbitrary precision at both weak and strong coupling. Known classical algorithms take exponential time to do this in the strong-coupling and high-precision regimes. In addition to establishing a new exponential quantum speedup, our results lead the way toward a quantum algorithm for simulating the Standard Model of particle physics, which has new features (not exhibited by ϕ^4 theory) such as chiral fermions and gauge interactions. Such an algorithm would establish that, except for quantum-gravity effects, the standard quantum circuit model suffices to capture completely the computational power of our universe.

Supplementary Materials

www.sciencemag.org/cgi/content/full/336/6085/1130/DC1

Texts S1 to S7

Figs. S1 and S2

References (43–47)

References and Notes

  32. For λ0 > 0 one has a tighter bound. In this case it is unlikely for |ϕ(x)| to be much larger than O(E^{1/4}).
  33. See supplementary materials on Science Online.
  Acknowledgments: We thank A. Gorshkov for helpful discussions. Supported by NSF grant PHY-0803371, U.S. Department of Energy grant DE-FG03-92-ER40701, and NSA/Army Research Office grant W911NF-09-1-0442. Much of this work was done while S.P.J. was at the Institute for Quantum Information (IQI), Caltech, supported by the Sherman Fairchild Foundation. K.S.M.L. was supported in part by NSF grant PHY-0854782. He is grateful for the hospitality of the IQI, Caltech, during parts of this work.