Quantum mechanics has enjoyed many successes since its formulation in the early 20th century. It has explained the structure and interactions of atoms, nuclei, and subnuclear particles, and has given rise to revolutionary technologies, such as integrated circuit chips and magnetic resonance imaging. At the same time, it has generated puzzles that persist to this day.

These puzzles are largely connected with the role of measurements in quantum mechanics (*1*). According to the standard quantum postulates, given the total energy (the Hamiltonian) of a quantum system, the state of the system (the wave function) evolves with time in a predictable, deterministic way as described by Schrödinger's equation. However, when a physical quantity—the quantum mechanical spin, for example—is “measured,” the outcome is not predictable. If the wave function contains a superposition of components, such as spin-up and spin-down (each with a definite spin value, weighted by coefficients *c*_{up} and *c*_{down}), then each run gives a definite outcome, either spin-up or spin-down. But repeated experimental runs yield a probabilistic distribution of outcomes. The outcome probabilities are given by the absolute value squared of the corresponding coefficient in the initial wave function. This recipe is the Born rule.
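
The Born rule's statistical character can be illustrated with a short simulation. This is a sketch, not part of the article: the coefficients `c_up` and `c_down` are arbitrary normalized values of our choosing, and the random sampling simply stands in for repeated experimental runs.

```python
import numpy as np

# Hypothetical coefficients for a spin superposition; any pair with
# |c_up|^2 + |c_down|^2 = 1 works. Names follow the text, not any library.
c_up, c_down = 0.6, 0.8
p_up = abs(c_up) ** 2          # Born rule: probability = |coefficient|^2

rng = np.random.default_rng(seed=0)
runs = 100_000
outcomes = rng.random(runs) < p_up   # True -> spin-up on that run

# Each run gives a definite outcome; the frequencies approach |c|^2.
print(f"fraction spin-up:   {outcomes.mean():.3f}  (Born rule: {p_up:.3f})")
print(f"fraction spin-down: {1 - outcomes.mean():.3f}  (Born rule: {abs(c_down)**2:.3f})")
```

A single run yields either spin-up or spin-down; only the accumulated frequencies reveal the underlying probabilities.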

How can we reconcile this probabilistic distribution of outcomes with the deterministic form of Schrödinger's equation? What precisely constitutes a “measurement?” At what point do superpositions break down, and definite outcomes appear? Is there a quantitative criterion, such as size of the measuring apparatus, governing the transition from coherent superpositions to definite outcomes? These puzzles have inspired a large literature in physics and philosophy.

There are two distinct approaches. One is to assume that quantum theory is exact, but that the interpretive postulates must be modified to eliminate apparent contradictions. The second is to assume that quantum mechanics is not exact, but instead a very accurate approximation to a deeper-level theory that reconciles its deterministic and probabilistic aspects. This may seem radical, even heretical, but there are precedents in the history of physics: Newtonian mechanics was considered exact for several centuries before it was supplanted by relativity and quantum theory. Apart from this history, there is another important motivation for considering modifications of quantum theory: having an alternative theory, against which current and proposed experiments can be compared, allows a quantitative measure of the accuracy to which quantum theory can be tested.

We focus here on phenomenological approaches that modify the Schrödinger equation. A successful phenomenology must accomplish many things: It must explain why repetitions of the same measurement lead to definite, but differing, outcomes, and why the probability distribution of outcomes is given by the Born rule; it must permit quantum coherence to be maintained for atomic and mesoscopic systems, while predicting definite outcomes for measurements with realistic apparatus sizes in realistic measurement times; it should conserve overall probability, so that particles do not spontaneously disappear; and it should not allow superluminal transmission of signals.

Over the past two decades, a phenomenology has emerged that satisfies these requirements. One ingredient is the observation that rare modifications, or “hits,” acting on a system by localizing its wave function, do not alter coherent superpositions for microscopic systems, but when accumulated over a macroscopic apparatus can lead to definite outcomes that differ from run to run (*2*). A second ingredient is the observation that the classic “gambler's ruin” problem in probability theory gives a mechanism that can explain the Born rule governing outcome probabilities (*3*). Suppose that Alice and Bob each have a stack of pennies, and flip a fair coin. If the coin shows heads, Alice gives Bob a penny, while if the coin shows tails, Bob gives Alice a penny. The game ends when one player has all the pennies and the other has none. Mathematical analysis shows that the probability of each player winning is proportional to the size of their initial stack of pennies. By mapping the initial stack sizes into the modulus squared of the initial spin component coefficients (*c*_{up} and *c*_{down}), and the random flips of the fair coin into the random “hits” acting on the wave function, one then has a mechanism for obtaining the Born rule.
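
The gambler's ruin result quoted above can be checked numerically. The sketch below is illustrative only; the stack sizes are arbitrary choices, and the simulation plays many fair-coin games to compare the empirical win frequency with the textbook result P(Alice wins) = (Alice's initial stack) / (total pennies).

```python
import random

def gamblers_ruin(alice_start: int, total: int, rng: random.Random) -> bool:
    """Play one fair-coin gambler's ruin game.
    Returns True if Alice ends up with all `total` pennies."""
    alice = alice_start
    while 0 < alice < total:
        # Heads: Alice gains a penny from Bob; tails: she gives one up.
        alice += 1 if rng.random() < 0.5 else -1
    return alice == total

rng = random.Random(42)
total, alice_start = 10, 3       # illustrative stack sizes (our choice)
games = 20_000
wins = sum(gamblers_ruin(alice_start, total, rng) for _ in range(games))

# Theory: P(Alice wins) = alice_start / total = 0.3
print(f"Alice wins {wins / games:.3f} of games (theory: {alice_start / total})")
```

Mapping the stacks to the squared moduli of the spin coefficients, as the text describes, turns this proportionality directly into the Born rule.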

The combination of these two ideas leads to a definite model, called the continuous spontaneous localization (CSL) model (*4*), in which a Brownian motion noise term coupled nonlinearly to the local mass density is added to the Schrödinger equation. This noise is responsible for the spontaneous collapse of the wave function. At the same time, the standard form of this model has a linear evolution equation for the noise-averaged density matrix, forbidding superluminal communication. Other versions of the model exist (*5*, *6*), and an underlying dynamics has been proposed for which this model would be a natural phenomenology (*7*).
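
The article does not display the CSL evolution equation; schematically, in the form commonly quoted in the literature (with a coupling γ related to λ and *r*_{C}), it reads

```latex
d\psi_t = \Big[ -\frac{i}{\hbar} H \, dt
  + \sqrt{\gamma} \int d^3x \, \big( M(\mathbf{x}) - \langle M(\mathbf{x}) \rangle_t \big) \, dW_t(\mathbf{x})
  - \frac{\gamma}{2} \int d^3x \, \big( M(\mathbf{x}) - \langle M(\mathbf{x}) \rangle_t \big)^2 \, dt \Big] \psi_t
```

where *H* is the Hamiltonian, *M*(**x**) is the mass-density operator smeared over the length *r*_{C}, ⟨·⟩_t is the quantum expectation in ψ_t (which makes the noise coupling nonlinear in the state), and *W*_t(**x**) is a family of independent Wiener processes representing the Brownian noise. Averaging over the noise yields a linear equation for the density matrix, which is why superluminal signaling is excluded.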

The CSL model has two intrinsic parameters. One is a rate parameter, λ, with dimensions of inverse time, governing the noise strength. The other is a length, *r*_{C}, which can be interpreted as the spatial correlation length of the noise field. Conventionally, *r*_{C} is taken as 10^{−5} cm, but any length a few orders of magnitude larger than atomic dimensions ensures that the “hits” do not disrupt the internal structure of matter. The reduction rate in the CSL model is the product of the rate parameter, the square of the number of nucleons (protons and neutrons) within a correlation length that are displaced by more than this length, and the number of such displaced groups. Applying this formula, and demanding that a minimal apparatus composed of ∼10^{15} nucleons settle to a definite outcome in ∼10^{−7} s or less, with the conventional *r*_{C}, requires λ to be greater than ∼10^{−17} s^{−1} (*4*, *5*). If one holds that latent image formation in photography, rather than subsequent development, constitutes a measurement, then the fact that only about 5000 nucleons move appreciable distances in a few hundredths of a second during latent image formation raises the lower bound on λ by a factor of ∼10^{8} (*8*).
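
The order-of-magnitude arithmetic behind the λ ≳ 10^{−17} s^{−1} bound can be spelled out. The estimate of ∼10^{9} nucleons per correlation volume is our assumption (a cube of side *r*_{C} = 10^{−5} cm at ordinary solid density, ∼1 g/cm³, holds about 10^{−15} g of matter); the rest follows the reduction-rate formula quoted in the text.

```python
# Order-of-magnitude check of the CSL reduction-rate formula from the text:
#   rate = lambda * (nucleons per correlation volume)**2 * (number of groups)

lam = 1e-17          # rate parameter, s^-1 (the lower bound being tested)
n_total = 1e15       # nucleons in a minimal apparatus (from the text)
n_per_volume = 1e9   # assumed: nucleons in a cube of side r_C at solid density
n_groups = n_total / n_per_volume

rate = lam * n_per_volume**2 * n_groups   # s^-1
print(f"reduction rate ~ {rate:.0e} s^-1")
print(f"collapse time  ~ {1 / rate:.0e} s")   # compare to ~1e-7 s in the text
```

With these inputs the collapse time comes out at ∼10^{−7} s, matching the measurement-time requirement that fixes the lower bound on λ.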

An upper bound on λ is placed by the requirement that apparent violations of energy conservation, taking the form of spontaneous heating produced by the noise, should not exceed empirical bounds, the strongest of which comes from heating of the intergalactic medium (*8*). Spontaneous radiation from atoms places another stringent bound (*9*), which can, however, be evaded if the noise is nonwhite, with a frequency cutoff (*10*–*12*). Laboratory and cosmological bounds on λ (for *r*_{C} = 10^{−5} cm) are summarized in the figure, which gives for each bound the order of magnitude improvement needed to confront the conventional CSL model value of λ.

Accurate tests of quantum mechanics that have been performed or proposed include diffraction of large molecules in fine mesh gratings (*13*) and a cantilever mirror incorporated into an interferometer (*14*). The figure shows the limit on λ obtained to date in fullerene diffraction and the limit that would be obtained if the proposed cantilever experiment attains full sensitivity (*15*). To confront the conventional (enhanced) value of λ, one would have to diffract molecules a factor of 10^{6} (10^{2}) larger than fullerenes.

Experiments do not yet tell us whether quantum theory is exact or approximate. Future lines of research include refining the sensitivity of current experiments to reach the capability of making this decision and achieving a deeper understanding of the origin of the CSL noise field.