Special Reviews

Quantum-Enhanced Measurements: Beating the Standard Quantum Limit

Science  19 Nov 2004:
Vol. 306, Issue 5700, pp. 1330-1336
DOI: 10.1126/science.1104149

Abstract

Quantum mechanics, through the Heisenberg uncertainty principle, imposes limits on the precision of measurement. Conventional measurement techniques typically fail to reach these limits. Conventional bounds to the precision of measurements such as the shot noise limit or the standard quantum limit are not as fundamental as the Heisenberg limits and can be beaten using quantum strategies that employ “quantum tricks” such as squeezing and entanglement.

Measurement is a physical process, and the accuracy to which measurements can be performed is governed by the laws of physics. In particular, the behavior of systems at small scales is governed by the laws of quantum mechanics, which place limits on the accuracy to which measurements can be performed. These limits to accuracy take two forms. First, the Heisenberg uncertainty relation (1) imposes an intrinsic uncertainty on the values of measurement results of complementary observables such as position and momentum, or the different components of the angular momentum of a rotating object (Fig. 1). Second, every measurement apparatus is itself a quantum system: As a result, the uncertainty relations together with other quantum constraints on the speed of evolution [such as the Margolus-Levitin theorem (2)] impose limits on how accurately we can measure quantities, given the amount of physical resources, such as energy, at hand to perform the measurement.

Fig. 1.

The Heisenberg uncertainty relation. In quantum mechanics, the outcomes x1, x2, etc. of the measurements of a physical quantity x are statistical variables; that is, they are randomly distributed according to a probability determined by the state of the system. A measure of the “sharpness” of a measurement is given by the spread Δx of the outcomes: An example is given in (A), where the outcomes (tiny triangles at the bottom of the graph) are distributed according to a Gaussian probability with standard deviation Δx. The Heisenberg uncertainty relation states that when simultaneously measuring incompatible observables such as position x and momentum p, the product of the spreads is lower-bounded: ΔxΔp ≥ ℏ/2, where ℏ is the reduced Planck constant. The same is true when measuring one of the observables (say x) on a set of particles prepared with a spread Δp on the other observable. [In the general case, when we are measuring two observables A and B, the lower bound is given by the expectation value of the commutator between the quantum operators associated with A and B.] In (B) we see a coherent state (depicted through its Wigner function). It has the same spreads in position and momentum, Δx = Δp. In (C and D), squeezed states are shown; they have reduced fluctuations in one of the two incompatible observables [x for (C) and p for (D)] at the expense of increased fluctuations in the other. The Heisenberg relation states that the red areas in the plots (given by the product ΔxΔp) must have an area larger than ℏ/2. In quantum optics, the observables x and p are replaced by the in-phase and out-of-phase amplitudes of the electromagnetic field; that is, by its “quadratures.” Note that the Heisenberg principle is so called only for historical reasons; it is not a principle in modern quantum mechanics, because it is a consequence of the measurement postulate (1). Moreover, Heisenberg's formulation in terms of a dynamical disturbance necessarily induced on a system by a measurement was experimentally proven wrong (81). It is possible to devise experiments where the disturbance is totally negligible, but where the Heisenberg relations are still valid. They are enforced by the complementarity of quantum mechanics.

One important consequence of the physical nature of measurement is the so-called quantum back action: The extraction of information from a system can give rise to a feedback effect in which the system configuration after the measurement is determined by the measurement outcome. For example, the most extreme case (the so-called von Neumann or projective measurement) produces a complete determination of the post-measurement state. When performing successive measurements, quantum back action can be detrimental, because earlier measurements can negatively influence successive ones. A common strategy to get around the negative effect of back action and of Heisenberg uncertainty is to design an experimental apparatus that monitors only one out of a set of incompatible observables: “less is more” (3). This strategy, called quantum nondemolition measurement (3–6), is not as simple as it sounds. One has to account for the system's interaction with the external environment, which tends to extract and disperse information, and for the system dynamics, which can combine the measured observable with incompatible ones. Another strategy to get around the Heisenberg uncertainty is to employ a quantum state in which the uncertainty in the observable to be monitored is very small (at the cost of a very large uncertainty in the complementary observable). The research on quantum-enhanced measurements was spawned by the invention of such techniques (3, 7, 8) and by the birth of more rigorous treatments of quantum measurements (9).

Most standard measurement techniques do not account for these quantum subtleties, so that their precision is limited by otherwise avoidable sources of errors. Typical examples are the environment-induced noise from vacuum fluctuations (the so-called shot noise) that affects the measurement of the electromagnetic field amplitude, and the dynamically induced noise in the position measurement of a free mass [the so-called standard quantum limit (10)]. These sources of imprecision are not as fundamental as the unavoidable Heisenberg uncertainty relations, because they originate only from a non-optimal choice of measurement strategy. However, the shot noise and standard quantum limits set important benchmarks for the quality of a measurement, and they provide an interesting challenge to devise quantum strategies that can defeat them. It is intriguing that almost 30 years after its introduction (10), the standard quantum limit has not yet been beaten experimentally in a repeated measurement of a test mass. In the meantime, a paradigm shift has occurred: Quantum mechanics, which used to be just the object of investigation, is now viewed as a tool, a source of exotic and funky effects that can be used to our benefit. In measurement and elsewhere, we are witnessing the birth of quantum technology.

Here we describe some of the techniques that have recently been developed to overcome the limitations of classical measurement strategies. We start with a brief overview of some methods to beat the shot noise limit in interferometry. In the process, we provide a simple example that explains the idea behind many quantum-enhanced measurement strategies. We then give an overview of some of the most promising quantum technology proposals and analyze the standard quantum limit on repeated position measurements. Finally, we show the ultimate resolution achievable in measuring time and space according to the known physical laws. A caveat is in order: This review cannot in any way be viewed as complete, because the improvement of interferometry and measurements through nonclassical light is at the heart of modern quantum optics. Many more ideas and experiments have been devised than can be possibly reported here.

Interferometry: Beating the Shot Noise Limit

In this section, we focus on the issues arising in ultraprecise interferometric measurements. A prototypical apparatus is the Mach-Zehnder interferometer (Fig. 2). It acts in the following way. A light beam impinges on a semitransparent mirror (a beam splitter), which divides it into a reflected and a transmitted part. These two components travel along different paths and then are recombined by a second beam splitter. Information on the phase difference φ between the two optical paths of the interferometer can be extracted by monitoring the two output beams, typically by measuring their intensity (the photon number). To see how this works, suppose that a classical coherent beam with N average photons enters the interferometer through the input A. If there is no phase difference φ, all the photons will exit the apparatus at output D. On the other hand, if φ = π radians, all the photons will exit at output C. In the intermediate situations, a fraction cos²(φ/2) of the photons will exit at the output D and a fraction sin²(φ/2) at the output C. By measuring the intensity at the two output ports, one can estimate the value of φ with a statistical error proportional to 1/√N. This is a consequence of the quantized nature of the electromagnetic field and of the Poissonian statistics of classical light, which in some sense prevents any cooperative behavior among the photons. In fact, the quantity cos²(φ/2) can be experimentally obtained as the statistical average Σⱼ xⱼ/N, where xⱼ takes the value 0 or 1 depending on whether the jth photon in the beam was detected at output C or D, respectively. Because the xⱼ are independent stochastic variables (photons in the classical beam are uncorrelated), the variance associated with their average is the average of the variances (central limit theorem): The error associated with the measurement of cos²(φ/2) is given by √(Σⱼ Δ²xⱼ)/N = Δx/√N, where Δxⱼ is the spread of the jth measurement (the spreads Δxⱼ are all equal to Δx, because they refer to the same experiment). Notice that the same 1/√N dependence can be obtained if, instead of using a classical beam with N average photons, we use N separate single-photon beams. In this case, cos²(φ/2) is the probability of the photon exiting at output D, and sin²(φ/2) is the probability of the photon exiting at output C. The 1/√N bound on the precision (N being the number of photons used) is referred to as the shot noise limit. It is not fundamental and is only a consequence of the employed classical detection strategy, where neither the state preparation nor the readout takes advantage of quantum correlations.
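
To make the 1/√N scaling concrete, the following minimal numerical sketch (illustrative only; the phase value and photon numbers are arbitrary choices) simulates the classical strategy of recording which output port each photon exits and estimating φ from the observed fraction:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
phi_true = 0.7  # true phase difference in radians (arbitrary illustrative value)

def phase_estimate_spread(n_photons, n_trials=2000):
    """Classical strategy: each photon independently exits at D with probability
    cos^2(phi/2); estimate phi from the observed fraction and return its spread."""
    p = np.cos(phi_true / 2) ** 2
    counts_d = rng.binomial(n_photons, p, size=n_trials)
    p_hat = np.clip(counts_d / n_photons, 0.0, 1.0)
    phi_hat = 2 * np.arccos(np.sqrt(p_hat))
    return phi_hat.std()

for n in (100, 1000, 10000):
    print(f"N = {n:6d}   spread of phi estimate ~ {phase_estimate_spread(n):.4f}"
          f"   1/sqrt(N) = {n ** -0.5:.4f}")
```

The printed spread tracks 1/√N, which is the shot noise limit discussed above.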

Fig. 2.

A Mach-Zehnder interferometer. The light field enters the apparatus through the input ports A and B of the first beam splitter and leaves it through the output ports C and D of the second beam splitter. By measuring the intensities (photon number per second) of the output beams, one can recover the phase difference φ between the two internal optical paths A′ and B′. Formally, the input-output relation of the apparatus is completely characterized by assigning the transformations of the annihilation operators a, b, c, and d associated with the fields at A, B, C, and D, respectively. These are c = (a′ + ib′e^{iφ})/√2 and d = (b′e^{iφ} + ia′)/√2, with a′ ≡ (a + ib)/√2 and b′ ≡ (b + ia)/√2 the annihilation operators associated with the internal paths A′ and B′, respectively, and with the relative phase φ placed on arm B′.
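
As a small illustration, the sketch below propagates a classical field amplitude entering at port A through the two beam splitters, using the phase convention written in the caption (one consistent choice), and recovers the cos²(φ/2) and sin²(φ/2) intensity fractions quoted in the text:

```python
import numpy as np

alpha = 1.0 + 0.0j   # classical input amplitude at port A (port B left empty)

def mach_zehnder_outputs(phi):
    """Propagate a classical amplitude through the interferometer,
    with the relative phase phi placed on internal arm B'."""
    a, b = alpha, 0.0j
    a_p = (a + 1j * b) / np.sqrt(2)      # internal mode A'
    b_p = (b + 1j * a) / np.sqrt(2)      # internal mode B'
    b_p *= np.exp(1j * phi)              # relative phase between the arms
    c = (a_p + 1j * b_p) / np.sqrt(2)    # output port C
    d = (b_p + 1j * a_p) / np.sqrt(2)    # output port D
    return abs(c) ** 2, abs(d) ** 2

for phi in (0.0, np.pi / 3, np.pi):
    i_c, i_d = mach_zehnder_outputs(phi)
    print(f"phi = {phi:5.3f}   I_C = {i_c:.3f} (sin² = {np.sin(phi/2)**2:.3f})"
          f"   I_D = {i_d:.3f} (cos² = {np.cos(phi/2)**2:.3f})")
```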

Carefully designed quantum procedures can beat the 1/√N limit. For example, injecting squeezed vacuum in the normally unused port B of the interferometer allows one to achieve a sensitivity of 1/N^{3/4} (7, 11). Other strategies can do even better, reaching a 1/N sensitivity with a √N improvement over the classical strategies detailed above. The simplest example employs as the input to the interferometer the entangled state (8, 12) |ψ〉 = (|N₊〉_A|N₋〉_B + |N₋〉_A|N₊〉_B)/√2, where N± ≡ (N ± 1)/2 and where the subscripts A and B label the input ports. This is a highly nonclassical signal, where the correlations between the inputs at A and B cannot be described by a local statistical model. As before, the phase φ can be evaluated by measuring the photon number difference between the two interferometer outputs; that is, by evaluating the expectation value of the operator M ≡ d†d – c†c = (a†a – b†b)cosφ + (a†b + b†a)sinφ, where a, b, c, and d are the annihilation operators of the optical modes at the interferometer ports A, B, C, and D, respectively (Fig. 2). This scheme allows a sensitivity on the order of 1/N for the measurements of small phase differences: φ ≅ 0. In fact, the expectation value of the output photon number difference is equal to 〈M〉 = –N₊sinφ, and its variance is Δ²M = cos(2φ) + N₊² sin²φ. The error Δφ on the estimated phase can be obtained from error propagation, Δφ = ΔM/|∂〈M〉/∂φ|, and for φ ≅ 0, it is easy to see that it scales as 1/N (12). Even though this procedure achieves good precision only for small values of φ, other schemes exist that show the same high sensitivity for all values of this parameter (13). Many quantum procedures that achieve the same 1/N sensitivity have been proposed that do not make explicit use of entangled inputs. For example, one can inject squeezed states into both interferometer inputs A and B and then measure the intensity difference at C and D (14, 15), or inject Fock states at A and B and then evaluate the photon-counting probability at the output (16), or, finally, measure the de Broglie wavelength of the radiation (17). One may wonder whether this 1/N precision can be further increased, but in line with the time/energy Heisenberg relation (18) and the Margolus-Levitin theorem (2), it appears that this is a true quantum limit, and there is no way that it can be beaten (19, 20). It is customarily referred to as the Heisenberg limit to interferometry.
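
A quick numerical check of the error-propagation step makes the scaling explicit: evaluating Δφ = ΔM/|∂〈M〉/∂φ| with the expressions for 〈M〉 and Δ²M quoted above shows the 1/N behavior at small φ, next to the 1/√N shot noise reference (the values of N and φ below are arbitrary illustrations):

```python
import numpy as np

def dphi_entangled(N, phi=1e-3):
    """Error propagation dphi = dM / |d<M>/dphi| for the entangled input,
    using <M> = -N_+ sin(phi) and Var(M) = cos(2 phi) + N_+^2 sin^2(phi)."""
    n_plus = (N + 1) / 2
    var_m = np.cos(2 * phi) + n_plus ** 2 * np.sin(phi) ** 2
    slope = n_plus * np.cos(phi)          # |d<M>/dphi|
    return np.sqrt(var_m) / slope

def dphi_shot_noise(N):
    """Shot-noise-limited reference for a classical input: dphi ~ 1/sqrt(N)."""
    return 1 / np.sqrt(N)

for N in (11, 101, 1001):
    print(f"N = {N:5d}   entangled ~ {dphi_entangled(N):.2e} (≈ 2/N = {2/N:.2e})"
          f"   classical ~ {dphi_shot_noise(N):.2e}")
```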

Quantum-Enhanced Parameter Estimation

Some of the above interferometric techniques have also found applications outside the context of optics, such as in spectroscopy (19) or in atomic interferometry (21). In this section, we point out a general aspect of the quantum estimation theory on which most of the quantum strategies presented in this review are based: the fact that typically a highly correlated input is used and a collective measurement is performed (Fig. 3). A simple example (22) may help. Consider a qubit, a two-level quantum system that is described by the two states |0〉 and |1〉 and their superpositions. Suppose that the dynamics leaves the state |0〉 unchanged and adds a phase φ to |1〉, so that |1〉 → e^{iφ}|1〉. If we want to estimate this phase, we can use a strategy analogous to Ramsey interferometry by preparing the system in the quantum superposition |ψin〉 = (|0〉 + |1〉)/√2, which is transformed by the system dynamics into |ψout〉 = (|0〉 + e^{iφ}|1〉)/√2. The probability p(φ) that the output state |ψout〉 is equal to the input |ψin〉 allows us to evaluate φ as p(φ) = |〈ψin|ψout〉|² = cos²(φ/2). This quantity can be estimated with a statistical error Δ²p(φ) = p(φ) – p²(φ). If we evaluate a parameter φ from a quantity p(φ), error propagation theory tells us that the error associated with the former is given by Δφ = Δp/|∂p/∂φ|, which in this case gives Δφ = 1. We can improve such an error by repeating the experiment N times. This introduces a factor 1/√N in the standard deviation (again as an effect of the central limit theorem), and we find an overall error Δφ = 1/√N. (It is the same sensitivity achieved by the experiment of a single photon in the interferometer described above; these two procedures are essentially equivalent.) As in the case of the interferometer, a more sensitive quantum strategy exists. In fact, instead of using N times the state |ψin〉, we can use the following entangled state that still uses N qubits: |ϕin〉 = (|00⋯0〉 + |11⋯1〉)/√2. Now the tensor product structure of quantum mechanics helps us, as the e^{iφ} phase factors gained by the |1〉s combine so that the corresponding output state is |ϕout〉 = (|00⋯0〉 + e^{iNφ}|11⋯1〉)/√2. The probability q(φ) that |ϕout〉 equals |ϕin〉 is q(φ) = cos²(Nφ/2), which, as before, can be estimated with an error Δ²q(φ) = q(φ) – q²(φ). This means that φ will have an error Δφ = 1/N. This is a √N enhancement over the precision of N measurements on unentangled qubits, which has been achieved by using an entangled input and performing a collective nonlocal measurement on the output: the measurement of the probability q(φ).
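
The two scalings can be compared by evaluating the error-propagation formulas above directly; the following sketch (with an arbitrary small test phase) prints the 1/√N behavior of the separable strategy next to the 1/N behavior of the entangled one:

```python
import numpy as np

def dphi_separable(N, phi):
    """N independent qubits: p(phi) = cos^2(phi/2), binomial spread of the
    estimated probability, then error propagation dphi = dp/|dp/dphi|."""
    p = np.cos(phi / 2) ** 2
    dp = np.sqrt(p * (1 - p) / N)
    slope = abs(np.sin(phi)) / 2
    return dp / slope

def dphi_entangled(N, phi):
    """One N-qubit entangled state: q(phi) = cos^2(N phi / 2), single-shot
    spread sqrt(q(1-q)), then error propagation dphi = dq/|dq/dphi|."""
    q = np.cos(N * phi / 2) ** 2
    dq = np.sqrt(q * (1 - q))
    slope = N * abs(np.sin(N * phi)) / 2
    return dq / slope

phi = 0.01   # illustrative small phase
for N in (4, 16, 64):
    print(f"N = {N:3d}   separable ~ {dphi_separable(N, phi):.4f} (1/√N = {N**-0.5:.4f})"
          f"   entangled ~ {dphi_entangled(N, phi):.4f} (1/N = {1/N:.4f})")
```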

Fig. 3.

Comparison between classical and quantum strategies. In conventional measurement schemes (upper panel), N independent physical systems are separately prepared and separately detected. The final result comes from a statistical average of the N outcomes. In quantum-enhanced measurement schemes (lower panel), the N physical systems are typically prepared in a highly correlated configuration (an entangled or a squeezed state) and are measured collectively with a single “nonlocal” measurement that encompasses all the systems.

A generalization of the parameter estimation presented here is the estimation of the input-output relations of an unknown quantum device. A simple strategy would be to feed the device with a “complete” collection of independent states and measure the resulting outputs. More efficiently, one can use entangled inputs: One-half of the entangled state is fed into the device, and a collective measurement is performed on the other half and on the device's output (23, 24). As in the case discussed above, the quantum correlations between the components of the entangled state increase the precision and hence reduce the number of measurements required. A similar strategy permits us to improve the precision in the estimation of a parameter of an apparatus or to increase the stability of measurements (25). Part of an entangled state is fed into the apparatus to be probed, and an appropriate collective measurement is performed on the output together with the other part of the entangled state. This permits one, for example, to discriminate among the four Pauli unitary transformations while applying the transformation to only a single qubit probe, a task that would be impossible without entanglement.

Quantum Technology

The quantum-enhanced parameter estimation presented above has found applications in the most diverse fields. In this section, we give an overview of some of them, leaving aside all the applications that quantum mechanics has found in communication and computation (26), which are not directly connected with the subject of this review.

Quantum frequency standards (19, 27). A typical issue in metrology and spectroscopy is to measure time or frequency with very high accuracy. This requires a very precise clock: an oscillator. Atomic transitions are so useful to this aim that the very definition of a second is based on them. To measure time or frequency accurately, we can start with N cold ions in the ground state |0〉 and apply an electromagnetic pulse that creates independently in each ion an equally weighted superposition (|0〉 + |1〉)/√2 of the ground state and of an excited state |1〉. A subsequent free evolution of the ions for a time t introduces a phase factor between the two states that can be measured at the end of the interval by applying a second, identical electromagnetic pulse and measuring the probability that the final state is |0〉 (Ramsey interferometry). This procedure is just a physical implementation of the qubit example described above, but here the phase factor is time-dependent and is equal to φ = ωt, where ω is the frequency of the transition |0〉 ↔ |1〉. Hence, the same analysis applies: From the N independent ions we can recover the pursued frequency ω (from the phase factor φ) with an error Δφ = 1/√N; that is, Δω = 1/(√N t).

Instead of acting independently on each ion, one can start from the entangled state |ϕin〉 introduced above. In this case, the error in the determination of the frequency is Δω = 1/(N t). There is an enhancement of the square root of the number N of entangled ions over the previous strategy.
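
In compact form, and following the same error-propagation steps as in the qubit example above (our summary; the subscripts below are labels, not notation from the literature), the two frequency errors follow from Δω = Δφ/t:

```latex
% Ramsey estimation of the transition frequency \omega after a free evolution time t.
% Separable strategy: each of the N ions acquires the phase \phi = \omega t,
% estimated with \Delta\phi = 1/\sqrt{N}:
\Delta\omega_{\mathrm{sep}} = \frac{\Delta\phi}{t} = \frac{1}{\sqrt{N}\,t}
% Entangled strategy: the N-ion state acquires the amplified phase N\omega t,
% estimated with \Delta(N\phi) = 1, so that
\Delta\omega_{\mathrm{ent}} = \frac{1}{N\,t}
```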

Quantum lithography and two-photon microscopy (28–32). When we try to resolve objects smaller than the wavelength of the employed light, the wave nature of radiation becomes important, because the light tends to scatter around the object, limiting the achievable resolution. This defines the Rayleigh diffraction bound, which restricts many optical techniques, as it is not always practical to reduce the wavelength. Quantum effects can help by decreasing the effective (de Broglie) wavelength of the light while keeping the wavelength of the radiation field constant. How can this apparently paradoxical effect come about? The basic idea is to use physical devices that are sensitive to the de Broglie wavelength. In quantum mechanics, to every object we can associate a wavelength λ = 2πℏ/p, where p is the object's momentum (for radiation, p is the energy E divided by the speed of light c). Obviously, the wavelength of a single photon λ = 2πℏc/E = 2πc/ω is the wavelength of its radiation field. But what happens if we are able to use a “biphoton” (a single entity constituted by two photons)? In that case, we find that its wavelength is 2πℏc/(2E) = λ/2: half the wavelength of a single photon or equivalently half the wavelength of its radiation field. Of course, using triphotons, quadriphotons, etc., would result in further decreases of wavelengths. Experimentalists are able to measure the de Broglie wavelengths of biphotons (17, 31, 33), so that theoreticians have concocted useful ways to employ them. The most important applications are quantum lithography (28, 30), in which smaller wavelengths help to etch smaller integrated circuit elements on a two-photon sensitive substrate; and two-photon microscopy (32), in which they produce less damage to the specimens. Also in this context, entanglement is a useful resource because it is instrumental in creating the required biphotons and in enhancing the cross section of two-photon absorption (29).

Quantum positioning and clock synchronization (34–36). To find out the position of an object, one can measure the time it takes for some light signals to travel from that object to some known reference points. The best classical strategy is to measure the travel times of the single photons in the beam and to calculate their average. This allows one to determine the travel time with an error proportional to 1/(Δω√N), where Δω is the signal bandwidth, which induces a minimum time duration of 1/Δω for each photon (that is, the time of arrival of each of the photons will have a spread 1/Δω). The accuracy of the travel time measurement thus depends on the spectral distribution of the employed signal. The reader will bet that a quantum strategy allows one to do better with the same resources. In fact, by entangling N photons in frequency, we can create a “superphoton” whose bandwidth is still Δω (it employs the same energetic resources as the N photon signal employed above), but whose mean effective frequency is N times higher, as the entanglement causes the N photons to have the same frequency. This means that the superphoton allows us to achieve N times the accuracy of a single photon with the same bandwidth. To be fair, we need to compare the performance of the superphoton with that of a classical signal of N photons, so that the overall gain of the quantum strategy is √N (34).
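
Schematically (our summary of the comparison above; the subscripts are labels of our choosing), the timing accuracies compare as follows:

```latex
% Time-of-arrival accuracy with signals of total bandwidth \Delta\omega.
% Classical: N independent photons, each localized in time to ~1/\Delta\omega, then averaged:
\Delta t_{\mathrm{cl}} \sim \frac{1}{\Delta\omega\,\sqrt{N}}
% Quantum: N frequency-entangled photons behave as a single "superphoton" whose
% effective frequency is N times higher at the same bandwidth \Delta\omega:
\Delta t_{\mathrm{q}} \sim \frac{1}{N\,\Delta\omega}
% Gain over the classical N-photon strategy: \Delta t_{\mathrm{cl}}/\Delta t_{\mathrm{q}} \sim \sqrt{N}.
```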

The problem of localization is intimately connected with the problem of synchronizing distant clocks. In fact, by measuring the time it takes for a signal to travel to known locations, it is possible to synchronize clocks at these locations. This immediately tells us that the above quantum protocol can give a quantum improvement in the precision of distant clocks' synchronization. Moreover, quantum effects can also be useful in avoiding the detrimental effects of dispersion (37). The speed of light in dispersive media has a frequency dependence, so that narrow signals (which are constituted by many frequencies) tend to spread out during their travel. This effect ruins the sharp timing signals transmitted. Using the nonlocal correlations of entangled signals, we can engineer frequency-entangled pulses that are not affected by dispersion and that allow clock synchronization (35).

Quantum imaging (38, 39). A large number of applications based on the use of quantum effects in spatially multimode light can be grouped under the common label of quantum imaging.

The most famous quantum imaging experiment is the reconstruction of the so-called ghost images (40), where nonlocal correlations between spatially entangled two-photon states are used to create the image of an object without directly looking at it. The basic idea is to illuminate an object with one of the twin photons, which is then collected by a “bucket” detector that has no spatial resolution and can tell only whether the photon crossed the object or was absorbed by it. The other entangled photon is shone onto an imaging array, and the procedure is repeated many times. Correlating the image on the array with the coincidences between the arrival of one photon at the bucket detector and the other at the imaging array, the shape of the object can be determined. This is equivalent to the following scenario: Use a device that shoots two pebbles in random but exactly opposite directions. When one of the pebbles misses the object, it hits a bell. The other pebble instead hits a soft wall, where it remains glued. By shooting many pebbles and marking on the wall the pebble's position every time we hear the bell ring, we will project the outline of the object on the wall. The fact that such an intuitive description exists should make us queasy about the real quantum nature of the ghost image experiment, and in fact it was shown that, even though it makes use of highly nonclassical states of light, it is an essentially classical procedure (41). The quantum nature of such an experiment lies in the fact that, using the same apparatus, both the near-field and the far-field plane can be perfectly imaged. Classical correlations do not allow this, even though classical thermal light can approximate it (42). A related subject is the creation of noiseless images or noiseless image amplification (38, 39), which is the formation of optical images whose amplitude fluctuations are reduced below the shot noise and can be, in principle, suppressed completely.

Many applications require us to measure very accurately the direction in which a focused beam of light is shining. A typical example is atomic force microscopy, in which the deflection of a light beam reflected from a cantilever that feels the atomic force can achieve nanometric resolution. Because a light beam is, ultimately, composed of photons, the best way to measure its direction is apparently to shine the beam on an infinitely resolving detector, to measure where each of the photons is inside the beam, and to take the average of the positions. This strategy will estimate the position of the beam with an accuracy that scales as Δd/√N, where Δd is the beam width and N is the number of detected photons. As with the shot noise, this limit derives from the quantized nature of light and from the statistical distribution of the photons inside the beam. As in interferometry, here also quantum effects can boost the sensitivity up to 1/N (11, 43, 44). In fact, consider the following simple example, in which the beam shines along the z direction and is deflected only along the x direction. We can measure such a deflection by shining the beam exactly between two perfectly adjacent detectors and measuring the photon number difference between them. If we expand the spatial modes of the light beam into the sum of an “even” mode, which is symmetrical in the x direction, and an “odd” mode, which changes sign at x = 0 (between the two detectors), we see that the beam is perfectly centered when only the even mode is populated and the odd mode is in the vacuum. Borrowing from sub–shot noise interferometry, we see that we can achieve a 1/N^{3/4} sensitivity by populating the odd mode with squeezed vacuum instead. Moreover, we can achieve the Heisenberg limit of 1/N by populating both modes with a Fock state |N/2〉 (11).
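
A minimal simulation of the classical centroid strategy (beam width and deflection values below are arbitrary illustrations) confirms the Δd/√N scaling that the quantum schemes improve upon:

```python
import numpy as np

rng = np.random.default_rng(7)
beam_width = 1.0      # Δd: transverse spread of the beam (arbitrary units)
deflection = 0.3      # true transverse displacement to be estimated

def centroid_spread(n_photons, n_trials=3000):
    """Classical strategy: record each photon's transverse position on an
    idealized continuum detector and average; return the spread of the estimate."""
    x = rng.normal(deflection, beam_width, size=(n_trials, n_photons))
    return x.mean(axis=1).std()

for n in (100, 1000, 10000):
    print(f"N = {n:6d}   spread ~ {centroid_spread(n):.4f}"
          f"   Δd/√N = {beam_width / np.sqrt(n):.4f}")
```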

Any object that creates an image (such as a microscope or a telescope) is necessarily limited by diffraction, because of its finite transverse dimensions. Even though classical “super-resolution” techniques are known that can be used to beat the Rayleigh diffraction limit, these are ultimately limited by the quantum fluctuations that introduce undesired quantum noise in the reconstructed image. By illuminating the object with bright multimode squeezed light and by replacing with squeezed vacuum the part that the finite dimensions of the device cuts away, we can increase the resolution of the reconstructed image (45), at least in the case of weakly absorbing objects (opaque objects would degrade the squeezed light shining on them).

Coordinate transfer (46–49). A peculiar example of a quantum-enhanced strategy arises in the context of communicating a direction in space (49) or a reference frame (46–48) (composed of three orthogonal directions). If there is no prior shared reference, this task requires some sort of physical parallel transport, such as exchanging gyroscopes (which in quantum mechanical jargon are called spins). Quantum mechanics imposes a bound on the precision with which the axis of a gyroscope can be measured, because the different components of the angular momentum are incompatible observables: Unless one knows the rotation axis a priori, it is impossible to exactly measure the total angular momentum. Gisin and Popescu found the baffling result that sending two gyroscopes pointing in the same direction is less efficient (it allows a less accurate determination of this direction) than sending two gyroscopes pointing in opposite directions (49). The reason is that the most efficient measurement for recovering an unknown direction from a couple of spins is an entangled measurement; that is, one described by operators with entangled eigenvectors. Such a detection strategy cannot be separated into different stages, so it is not possible to rotate the apparatus before the measurement on the second spin, which would imply the equivalence of the two scenarios. The two scenarios could also be shown to be equivalent if it were possible to flip the direction of the second spin without knowing its rotation axis, but this is impossible (it is an anti-unitary transformation, whereas quantum mechanics is notably unitary). Elaborating on this idea, many quantum-enhanced coordinate transfer strategies (46–48) have been found.

Repeated Position Measurements: Beating the Standard Quantum Limit

The continuous measurement of the position of a free mass is a paradigmatic example of how classical strategies are limited in precision. This experiment is typical of gravitational wave detection, where the position of a test mass must be accurately monitored. The standard quantum limit (3, 6, 10, 50) arises in this context by directly applying the Heisenberg relation to two consecutive measurements of the position of the free mass, without taking into account the possibility that the first measurement can be tuned to appropriately change the position configuration of the mass. The original argument was the following: Suppose that we perform the first position measurement at time t = 0 with an uncertainty Δx(0). This corresponds [via the Heisenberg uncertainty relation (Fig. 1)] to an uncertainty in the initial momentum p at least equal to Δp(0) = ℏ/[2Δx(0)]. The dynamics of an unperturbed free mass m is governed by the Hamiltonian H = p²/2m, which evolves the position at time t as x(t) = x(0) + p(0)t/m. This implies that the uncertainty in the initial momentum p(0) transfers into an uncertainty in the position x(t). The net effect appears to be that a small initial uncertainty Δ²x(0) produces a big final uncertainty Δ²x(t) ≅ Δ²x(0) + Δ²p(0)t²/m² ≥ 2Δx(0)Δp(0)t/m ≥ ℏt/m. In this derivation, there is an implicit assumption that the final uncertainty Δx(t) cannot be decreased by the correlations between the position and the momentum that build up during the unitary evolution after the first measurement. This is unwarranted: Yuen showed that an exotic detection strategy exists which, after the first measurement, leaves the mass in a “contractive state” (51); that is, one whose position uncertainty decreases for a certain period of time. [The time t for which a mass in such a state has a spread in position Δ²x(t) below a level 2δ²ℏ/m satisfies t ≤ 4δ².] The standard quantum limit is beaten, Δ²x(t) ≤ ℏt/m, if the second measurement is performed soon enough. The debate then evolved to ascertaining whether two successive measurements at times 0 and t can be performed, both of which beat the standard quantum limit (52). In fact, a simple application of the Heisenberg relation gives Δx(0)Δx(t) ≥ ℏt/(2m), from which it seems impossible that both measurements have a spread below √(ℏt/(2m)). However, Δx(0) is the variance of the state immediately after the first measurement, which does not necessarily coincide with the variance of the results of the first measurement. In fact, it is possible (53) to measure the position accurately and still leave the mass in a contractive state with a large initial variance Δ²x(0), so that the standard quantum limit can be beaten repeatedly.
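
For orientation, the standard quantum limit Δx_SQL = √(ℏt/m) can be evaluated numerically; the masses and delays below are illustrative choices, not parameters taken from any particular experiment:

```python
import numpy as np

hbar = 1.054_571_817e-34   # reduced Planck constant (J s)

def sql_spread(mass_kg, time_s):
    """Standard quantum limit on the position spread of a free mass
    measured twice with a delay t: Δx_SQL = sqrt(ħ t / m)."""
    return np.sqrt(hbar * time_s / mass_kg)

# Illustrative cases: a 10-kg interferometer mirror probed after 10 ms,
# and a 1-pg nanomechanical oscillator probed after 1 µs.
for m, t, label in [(10.0, 1e-2, "10 kg mirror, t = 10 ms"),
                    (1e-15, 1e-6, "1 pg nano-oscillator, t = 1 µs")]:
    print(f"{label:32s} Δx_SQL ≈ {sql_spread(m, t):.2e} m")
```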

Notice that the back action introduced in the derivation of the standard quantum limit would not occur if one were to measure the momentum instead of the position, because the above Hamiltonian conserves the momentum, p(t) = p(0), which is independent of the position. The momentum measurement is an example of a quantum nondemolition detection scheme (3–6), in which one removes any feedback in the detection by focusing on those observables that are not coupled by the dynamics to their incompatible counterparts.

The standard quantum limit arises also in the context of interferometric measurements of position (6, 50, 54), where the mass is typically one of the mirrors of the interferometer. The movement of the mirror introduces a phase difference between the arms of the interferometer (Fig. 2). To achieve high measurement precision, one is hence tempted to feed the interferometer electromagnetic signals that possess a well-defined phase. However, the phase and the intensity of the electromagnetic field are in some sense complementary, and a well-defined phase corresponds to a highly undetermined intensity. At first sight this seems without consequences, but any mirror feels a force dependent on the intensity of the light shining on it, through the mechanism of radiation pressure. Hence, the fluctuations in intensity of a signal with a well-defined phase induce a fluctuating random force on the mirror, which ultimately spoils the precision of the measurement setup. Using sufficiently intense coherent light and optimizing the phase and intensity fluctuations, one finds that the attainable precision is again the standard quantum limit (50, 54). Apparently, this derivation of the standard quantum limit is completely independent from the one given above, starting from the Heisenberg relation. However, here too there is an unwarranted assumption: the treatment of phase and intensity fluctuations as independent quantities. Caves showed that by dropping this premise, one can do better (7). In fact, a squeezed input signal (Fig. 1) where the amplitude quadrature has less quantum fluctuation than the phase quadrature produces a reduced radiation pressure noise at the expense of an increased photon-counting noise, and vice versa. This balance allows one to fine-tune the parameters so that the standard quantum limit can be reached with much lower light intensity. Refinements of this technique allow one to beat the standard quantum limit by tailoring appropriate squeezed states (6, 14, 55–57) or by using quantum nondemolition measurements (3).
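
Schematically, the optimization described above can be summarized as follows (our paraphrase, with lumped coefficients A and B that stand in for the interferometer parameters and are not standard notation):

```latex
% Total position noise = photon-counting (phase) noise + radiation-pressure (back-action) noise,
% as a function of the circulating power P. For coherent light the two terms are tied to the
% same P, so the optimum is fixed:
\Delta^2 x_{\mathrm{tot}}(P) \simeq \frac{A}{P} + B\,P,
\qquad
\min_{P}\,\Delta^2 x_{\mathrm{tot}} = 2\sqrt{AB}
\quad\text{at } P^{*}=\sqrt{A/B}.
% For coherent (Poissonian) light this minimum reproduces the standard quantum limit.
% Squeezing redistributes fluctuations between the quadratures, effectively rescaling A and B
% independently, so the optimized noise can be pushed below this value.
```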

The standard quantum limit is not a fundamental precision threshold. However, at present, beating it is still an open experimental challenge. In fact, on one hand, most of the above theoretical proposals are quite impractical and should be seen only as proofs of principle; and, on the other hand, many competing sources of noise become important when performing very precise measurements. The most important is, of course, the thermal fluctuations in the mass to be monitored, but the shot noise at the detection stage or the dissipative part of the mirror response are also big limitations (3, 58–60). Various techniques to beat this threshold have been proposed. Among others (by necessity the following list is incomplete), we can cite the use of feedback techniques to enforce a positive back action (61–64), the many techniques used to perform quantum nondemolition measurements (3, 4, 65–67), the preparation of contractive states, or the construction of speed meters (68). At the present stage, the most promising approaches seem to be the use of nanotechnologies, in which tiny mechanical oscillators are coupled to high-sensitivity electronics (59, 60), and the new generations of gravitational wave detectors (69).

Quantum Limits to the Measurement of Spacetime Geometry

Quantum effects can be used to increase the accuracy of many different kinds of measurements, but what are the ultimate limits to the resolution that physical laws allow? Attempts to derive quantum limits to the accuracy of measuring the geometry of spacetime date back at least to Wigner (70, 71). As the preceding discussions show, however, care must be taken in applying nonfundamental bounds such as the standard quantum limit. Fortunately, the Margolus-Levitin theorem (2) and techniques from the physics of computation (72, 73) can be used to derive limits to the accuracy with which quantum systems can be used to measure spacetime geometry.

The first question is that of minimum distance and time. One can increase the precision of clocks used to measure time by increasing their energy: The Margolus-Levitin theorem implies that the minimum “tick length” of a clock with energy E is Δt = πℏ/2E. Similarly, the wavelength of the particles used to map out space can be decreased by increasing their energy. There appears to be no fundamental physical limit to increasing the energy of the clocks used to measure time and the particles used to measure space, until one reaches the Planck scale, t_P = (ℏG/c⁵)^{1/2}, l_P = ct_P. At this scale, the Compton wavelength 2πℏ/mc of the clocks and particles is on the same order of magnitude as their Schwarzschild radius 2mG/c², and quantum gravitational effects come into play (74). The second question is that of the accuracy to which one can map out the large-scale structure of spacetime. One way to measure the geometry of spacetime is to fill space with a swarm of clocks, all exchanging signals with the other clocks and measuring the signals' times of arrival. In this picture, the clocks could be as large as Global Positioning System satellites or as small as elementary particles.
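
Numerically, the Planck scale and the Margolus-Levitin minimum tick length are easy to evaluate; the clock energy chosen below is an arbitrary illustration:

```python
import numpy as np

hbar = 1.054_571_817e-34   # J s
G = 6.674_30e-11           # m^3 kg^-1 s^-2
c = 2.997_924_58e8         # m / s

t_P = np.sqrt(hbar * G / c ** 5)   # Planck time
l_P = c * t_P                      # Planck length
print(f"t_P ≈ {t_P:.2e} s,  l_P ≈ {l_P:.2e} m")

def min_tick(energy_joules):
    """Margolus-Levitin bound on the minimum tick length of a clock of energy E."""
    return np.pi * hbar / (2 * energy_joules)

# Illustrative clock: one with the rest energy of a single cesium atom (~2.2e-25 kg).
E_cs = 2.207e-25 * c ** 2
print(f"minimum tick for a Cs-mass clock ≈ {min_tick(E_cs):.2e} s")
```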

Let's look at how accurately this swarm of clocks can map out a volume of spacetime with radius R over time T. Every tick of a clock or click of a detector is an elementary event in which a system goes from a state to an orthogonal state. Accordingly, the total number of ticks and clicks that can take place within the volume is a scalar quantity limited by the Margolus-Levitin theorem. It is less than 2ET/πℏ, where E is the energy of the clocks within the volume.

If we pack the clocks too densely, they will form a black hole and be useless for the measurement of spacetime outside their horizon. To prevent black hole formation, the energy of clocks within a spacelike region of radius R must be less than Rc⁴/2G. As a result, the total number of elementary events that can occur in the volume of spacetime is no greater than 2ET/πℏ ≤ TRc⁴/(πℏG) = TR/(π t_P l_P) (1)
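
As an illustration, Eq. 1 can be evaluated directly; the region size and observation time below are arbitrary choices:

```python
import numpy as np

hbar, G, c = 1.054_571_817e-34, 6.674_30e-11, 2.997_924_58e8

def max_events(radius_m, time_s):
    """Quantum geometric limit (Eq. 1): maximum number of elementary events
    (ticks and clicks) in a region of radius R observed for a time T."""
    return time_s * radius_m * c ** 4 / (np.pi * hbar * G)   # = T R / (π t_P l_P)

# Illustrative region: radius 1 m, observed for 1 s.
print(f"bound for R = 1 m, T = 1 s: ≈ {max_events(1.0, 1.0):.2e} elementary events")
```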

This quantum geometric limit can also be formulated in a covariant fashion. The maximum number of ticks and clicks in a volume is a scalar quantity proportional to the integral of the trace of the energy-momentum tensor over the four-volume; and 2TR can be identified with the area of an extremal world sheet contained within the four-volume. The quantum geometric limit of Eq. 1 was derived without any recourse to quantum gravity: The Planck scale makes its appearance simply from combining quantum limits to measurement with the requirement that a region not itself be a black hole. (If the region is at or above its critical density, then Eq. 1 still holds if R is the radius of the horizon of the region as measured by an external observer.)

The quantum geometric limit is consistent with and complementary to the Bekenstein bound, the holographic bound, and the covariant entropy bound (75–78), all of which limit the number of bits that can be contained within a region. [It also confirms Ng's prediction (79) for the scale of spacetime foam.] For example, the argument that leads to Eq. 1 also implies that the maximum number of quanta of wavelength λ ≤ 2R that can be packed into a volume of radius R without turning that volume into a black hole is bounded by R²/l_P² (80), in accordance with the Bekenstein bound and holography. Because it bounds the number of elementary events or “ops,” rather than the number of bits, the quantum geometric limit of Eq. 1 implies a trade-off between the accuracy with which one can measure time and the accuracy with which one can measure space: The maximum spatial resolution can only be obtained by relaxing the temporal resolution and having each clock tick only once in time T. This lack of temporal resolution is characteristic of systems, such as black holes, that attain the holographic bound (72). By contrast, if the events are spread out uniformly in space and time, the number of cells within the spatial volume goes as (R/l_P)^{3/2} (less than the holographic bound), and the number of ticks of each clock over time T goes as (T/t_P)^{1/2}. This is the accuracy to which ordinary matter such as radiation and massive particles map out spacetime. Because it is at or close to its critical density, our own universe maps out the geometry of spacetime to an accuracy approaching the absolute limit given by Eq. 1: There have been no more than (T/t_P)² ≈ 10¹²³ ticks and clicks since the Big Bang (73).

Conclusion

Quantum mechanics governs every aspect of the physical world, including the measuring devices we use to obtain information about that world. Quantum mechanics limits the accuracy of such devices via the Heisenberg uncertainty principle and the Margolus-Levitin theorem, but it also supplies quantum strategies for surpassing semiclassical limits such as the standard quantum limit and the shot noise limit. Starting from strategies to enhance the sensitivity of interferometers and position measurements, scientists and engineers have developed quantum technologies that use effects such as squeezing and entanglement to improve the accuracy of a wide variety of measurements. Some of these quantum techniques are still futuristic; at present, methods for creating and manipulating entangled states are still in their infancy. As we saw, quantum effects usually allow a precision enhancement equal to the square root of the number N of employed particles, but it is usually very complicated to entangle as few as N = 5 or 6 particles. In contrast, it is typically rather simple to employ millions of particles to use the classical strategy of plain averaging. As quantum technologies improve, however, the use of entanglement and squeezing to enhance precision measurements is likely to become more widespread. Meanwhile, as the example of quantum limits to measuring spacetime geometry shows, examining the quantum limits to measurement can give insight into the workings of the universe at its most fundamental levels.

References and Notes
