Special Viewpoint

Simple Lessons from Complexity


Science  02 Apr 1999:
Vol. 284, Issue 5411, pp. 87-89
DOI: 10.1126/science.284.5411.87

Abstract

The complexity of the world is contrasted with the simplicity of the basic laws of physics. In recent years, considerable study has been devoted to systems that exhibit complex outcomes. This experience has not given us any new laws of physics, but has instead given us a set of lessons about appropriate ways of approaching complex systems.

One of the most striking aspects of physics is the simplicity of its laws. Maxwell's equations, Schrödinger's equation, and Hamiltonian mechanics can each be expressed in a few lines. The ideas that form the foundation of our worldview are also very simple indeed: The world is lawful, and the same basic laws hold everywhere. Everything is simple, neat, and expressible in terms of everyday mathematics, either partial differential or ordinary differential equations.

Everything is simple and neat—except, of course, the world.

Every place we look—outside the physics classroom—we see a world of amazing complexity. The world contains many examples of complex “ecologies” at all levels: huge mountain ranges, the delicate ridge on the surface of a sand dune, the salt spray coming off a wave, the interdependencies of financial markets, and the true ecologies formed by living things. Each situation is highly organized and distinctive, with biological systems forming a limiting case of exceptional complexity. So why, if the laws are so simple, is the world so complicated? Here, we try to give a partial answer to this question and summarize general lessons that can be drawn from recent work on complexity in physical systems.

To us, complexity means that we have structure with variations. Thus, a living organism is complex because it has many different working parts, each formed by variations in the working out of the same genetic coding. One look at ocean or sky gives the conviction that there is some natural tendency toward the formation of structure in the physical world. Chaos is also found very frequently. Chaos is the sensitive dependence of a final result upon the initial conditions that bring it about. In a chaotic world, it is hard to predict which variation will arise in a given place and time. Indeed, errors and uncertainties often grow exponentially with time.

A complex world is interesting because it is highly structured. A chaotic world is interesting because we do not know what is coming next. But the world contains regularities as well. For example, climate is very complex, but winter follows summer in a predictable pattern. Our world is both complex and chaotic. From this, an elementary lesson follows:

Nature can produce complex structures even in simple situations, and can obey simple laws even in complex situations.

Creating complexity. Fluids frequently produce complex behavior, which can be either highly organized (think of a tornado) or chaotic (like a highly turbulent flow). What is seen often depends on the size of the observer. A fly caught in a tornado would be surprised to learn that it is participating in a highly structured flow.

The equations that describe how the fluid velocity at one point in space affects the velocity at other points in space are derived from three basic ideas:

Locality. A fluid contains many particles in motion. A particle is influenced only by other particles in its immediate neighborhood.

Conservation. Some things are never lost, only moved around, such as particles and momentum.

Symmetry. A fluid is isotropic and rotationally invariant.

To make a computer fluid, construct (1) a kind of square dance in which particles move around, obeying the three basic ideas. In the simplest case, the dance is done on a regular hexagonal lattice (Fig. 1, upper panel). Each particle is characterized by a lattice position and by one of six directions of motion. These arrows are momentum vectors. The square dance starts when the caller says “Promenade”; this call instructs each dancer to proceed one step in the direction of its arrow (Fig. 1, middle panel). And then the caller says “Swing your partner.” This is an instruction to rotate all the arrows on a given site through 60°, if they happen to add up to zero total momentum (Fig. 1, lower panel). Notice that both particle number and momentum are conserved in each step. Take thousands of particles and thousands of steps, average a bit to smooth out the data, and thereby find a pattern of motion identical to fluid motion. The square dance behaves like a fluid simply because its steps obey the three fundamental laws of fluid motion (2).

Figure 1

Three stages in the update algorithm of a lattice gas. Between the upper and middle panels, each particle moves in the direction of its arrow to arrive at a nearest neighboring site. Next, particles “collide” whenever the total momentum on a site is zero; these collisions occur between the middle and lower panels.
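To make the dance explicit, here is a minimal sketch in Python (our own illustration, not code from the original work) of one "Promenade" and one "Swing your partner" step. The lattice size, the periodic boundaries, and the random initial filling are illustrative assumptions, and the averaging over many particles and many steps that recovers smooth fluid fields is omitted.

    import random

    L = 32                        # L x L hexagonal lattice with periodic boundaries (assumed)
    DIRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]   # six directions, 60 degrees apart

    # state[x][y][d] is True if a particle at site (x, y) carries momentum in direction d
    state = [[[random.random() < 0.2 for _ in range(6)]
              for _ in range(L)] for _ in range(L)]

    def promenade(state):
        """Streaming step: every particle hops one site along its arrow."""
        new = [[[False] * 6 for _ in range(L)] for _ in range(L)]
        for x in range(L):
            for y in range(L):
                for d, (dx, dy) in enumerate(DIRS):
                    if state[x][y][d]:
                        new[(x + dx) % L][(y + dy) % L][d] = True
        return new

    def swing_your_partner(state):
        """Collision step: rotate all arrows on a site by 60 degrees
        whenever the momenta on that site add up to zero."""
        for x in range(L):
            for y in range(L):
                occ = state[x][y]
                px = sum(DIRS[d][0] for d in range(6) if occ[d])
                py = sum(DIRS[d][1] for d in range(6) if occ[d])
                if any(occ) and px == 0 and py == 0:
                    state[x][y] = occ[-1:] + occ[:-1]   # cyclic shift = 60 degree rotation
        return state

    for _ in range(100):   # particle number and momentum are conserved at every step
        state = swing_your_partner(promenade(state))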

Gradually, through examples like this, it has dawned on us that very simple ingredients can produce very beautiful, rich, and patterned outputs. Thus, our square dancers, through their simple hops and swings, produce the entire beautiful world of fluids in motion. For simple elementary actors to produce patterned and complex output, we require many events. Our example included many events because it had many actors and much time.

For physicists it is delightful, but not surprising, that the computer generates realistic fluid behavior, regardless of the precise details of how we do the coding. If this were not the case, then we would have extreme sensitivity to the microscopic modeling—what one might loosely call “model chaos”—and physics as a science could not exist: In order to model a bulldozer, we would need to be careful to model its constituent quarks! Nature has been kind enough to have provided us with a convenient separation of length, energy, and time scales, allowing us to excavate physical laws from well-defined strata, even though the consequences of these laws are very complex. But we might not be so lucky with complexity in biological or economic situations.

Understanding complexity. To extract physical knowledge from a complex system, one must focus on the right level of description. There are three modes of investigation of systems like this: experimental, computational, and theoretical. Experiment is best for exploration, because experimental techniques (combined with the human eye) can scan large ranges of data very efficiently.

Computer simulations are often used to check our understanding of a particular physical process or situation. In our fluid dynamics example, the large-scale structure is independent of the detailed description of the motion on the small scales. We can exploit this kind of “universality” by designing the most convenient “minimal model.” For example, most fluid flow problems should not be modeled by molecular dynamics simulations. These simulations are so slow that they may not be able to reach a regime that will enable us to safely extrapolate to large systems. So we are likely to get the wrong answer. Instead, we should model at the macro level, using large time steps and large systems. For example, some computational biologists try to simulate protein dynamics by following each and every small part of the molecule. The result? Most of the computer cycles are spent watching little CH groups wiggling back and forth. Nothing biologically significant occurs in the time they can afford.

Use the right level of description to catch the phenomena of interest. Don't model bulldozers with quarks.

This lesson applies with equal strength to theoretical work aimed at understanding complex systems. Modeling complex systems by tractable closure schemes or complicated free-field theories in disguise does not work. These may yield a successful description of the small-scale structure, but this description is likely to be irrelevant for the large-scale features. To get these gross features, one should most often use a more phenomenological and aggregated description, aimed specifically at the higher level. Thus, financial markets should not be modeled by the simple geometric Brownian motion–based models that form the basis for modern treatments of derivative markets. These models were created to be analytically tractable and derive from very crude phenomenological modeling. They cannot reproduce the observed strongly non-Gaussian probability distributions in many markets, which exhibit a feature so generic that it even has a whimsical name: fat tails. Instead, the modeling should be driven by asking “What are the simplest nonlinearities or nonlocalities that should be present?”—that is, by trying to separate universal scaling features from market-specific features. The inclusion of too many processes and parameters will obscure the desired qualitative understanding.
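To see concretely why such models cannot capture fat tails, here is a minimal sketch, with arbitrary illustrative values of drift and volatility (assumptions of ours, not parameters from the paper), of a geometric Brownian motion price path; its log-returns are Gaussian by construction, so 6σ moves are essentially never produced.

    import math
    import random

    mu, sigma, dt = 0.05, 0.2, 1.0 / 252   # assumed annual drift, volatility, and daily time step
    price = 100.0                           # assumed starting price
    log_returns = []
    for _ in range(10_000):
        z = random.gauss(0.0, 1.0)          # Gaussian innovation: the only source of randomness
        r = (mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z
        price *= math.exp(r)                # geometric Brownian motion price update
        log_returns.append(r)

    # Count daily moves more than 6 standard deviations from the mean return.
    mean = (mu - 0.5 * sigma**2) * dt
    std = sigma * math.sqrt(dt)
    extreme = sum(1 for r in log_returns if abs(r - mean) > 6 * std)
    print("6-sigma daily moves in 10,000 Gaussian days:", extreme)   # almost always 0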

Every good model starts from a question. The modeler should always choose the correct level of detail to answer the question.

Complexity and statistics. As a fluid moves around, it may carry with it some “passive” elements that do not themselves influence the flow. Both energy and the density of impurities undergo this kind of motion, in which they convect (go with the flow) and diffuse (move randomly). The convective motion tends to move initially distant regions of the fluid close to one another, thereby producing enhanced gradients. The diffusion tends to smooth out the gradients.

In many situations, these “passive scalars” are carried along by a rapid and turbulent flow, so that the convective mixing tends to dominate the diffusion. Computer simulations and experiments show that the density of the scalar soon develops a profile in which there are many flat regions surrounded by abrupt jumps. The flat regions are produced by the combined effects of convection and diffusion in well-mixed regions of the sample. However, because the density must generally follow the initial gradient, mixed regions must be separated by jumps.
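A minimal one-dimensional sketch of this competition between convection and diffusion is given below; the prescribed sinusoidal velocity, grid resolution, time step, and diffusivity are illustrative assumptions of ours, and a genuinely turbulent flow (not this gentle one) is needed to produce the pronounced plateau-and-jump profiles just described.

    import math

    N, dx, dt, kappa = 200, 1.0 / 200, 2.0e-5, 1.0e-3
    u = [math.sin(2.0 * math.pi * i * dx) for i in range(N)]        # prescribed convecting velocity
    theta = [math.sin(4.0 * math.pi * i * dx) for i in range(N)]    # initial passive scalar profile

    for _ in range(5000):
        new = theta[:]
        for i in range(N):
            im, ip = (i - 1) % N, (i + 1) % N
            # convection ("go with the flow"): upwind difference for stability
            if u[i] >= 0.0:
                convect = -u[i] * (theta[i] - theta[im]) / dx
            else:
                convect = -u[i] * (theta[ip] - theta[i]) / dx
            # diffusion ("move randomly"): central second difference
            diffuse = kappa * (theta[ip] - 2.0 * theta[i] + theta[im]) / dx**2
            new[i] = theta[i] + dt * (convect + diffuse)
        theta = new
    # Where the flow sweeps scalar together, gradients steepen; diffusion then flattens the
    # well-mixed stretches, hinting at the plateaus separated by jumps described in the text.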

This behavior, in which the system is dominated by really big events, is called intermittency. Intermittency seems to be a ubiquitous feature of dynamical systems. The weather turns stormy suddenly. There are ice ages. The stock market crashes. A plague takes hold. An airplane runs into turbulence. In every case, there is a big jump in the behavior of a dynamical system, and that big jump can have big human consequences.

These ubiquitous jumps come in all sizes, with the big jumps being less likely. Empirically, the size of the jumps is often given by a probability distribution, which for large jumps takes the form (3)

P(x) ~ exp(−x/σ)    (1)

where σ is the standard deviation. Contrast this with the usual Gaussian form

P(x) ~ exp(−x²/2σ²)    (2)

which has been the usual guess in statistical problems since the time of Galton. Chaotic and turbulent systems often show exponential behaviors, like Eq. 1. Improbable (very bad) events are much more likely with the exponential form than with the Gaussian form (Eq. 2). For example, a 6σ event has a chance of about 10⁻⁹ of occurring in the Gaussian case, whereas with the exponential form the chance is about 0.0025. Estimates, particularly Gaussian estimates, formed from short time series will give an entirely incorrect picture of large-scale fluctuations. These considerations have important consequences in, for example, financial markets, as emphasized recently by Mandelbrot (4). Thus, we come to another lesson:

Complex systems form structures, and these structures vary widely in size and duration. Their probability distributions are rarely normal, so that exceptional events are not that rare.
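As a quick numerical check of the 6σ comparison above, the short calculation below evaluates the tails of Eqs. 1 and 2; it is an illustrative calculation in Python, not code from the paper.

    import math

    s = 6.0                                          # event size in units of sigma
    p_exponential = math.exp(-s)                     # exponential tail, Eq. 1
    p_gaussian = 0.5 * math.erfc(s / math.sqrt(2.0)) # one-sided Gaussian tail beyond 6 sigma, Eq. 2

    print(f"exponential tail: {p_exponential:.4f}")  # about 0.0025
    print(f"Gaussian tail:    {p_gaussian:.1e}")     # about 1e-9
    print(f"ratio:            {p_exponential / p_gaussian:.1e}")  # millions of times more likely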

The development of complexity in physics. Long ago, Katchalsky (5) and Prigogine (6) described the formation of complex structures in nonequilibrium systems. Their “dissipative structures” can attain a degree of complication that grows rapidly in time. It is believed that comparably complex structures do not exist in equilibrium. Turing (7) described a mechanism, involving reaction-diffusion equations, for the development of organization in living things. As we have seen from the examples quoted here and many others, in nonequilibrium situations many-particle systems can get very complicated indeed (8).

It is likely that this tendency is the basis of life. A restricted version of this idea is given in Bak, Tang, and Wiesenfeld's “self-organized criticality” (9). In an essay entitled “More Is Different,” Anderson (10) described how features of organization may arise as an “emergent” property of systems. An example of this point of view is given by work on complexity “phase transitions” and accompanying speculations that various aspects of biological systems sit on a critical point between order and chaos (11).

The next few years are likely to lead to an increasing study of complexity in the context of statistical dynamics, with a view to better understanding physical, economic, social, and especially biological systems. It will be an exciting time. As science turns to complexity, one must realize that complexity demands attitudes quite different from those heretofore common in physics. Up to now, physicists looked for fundamental laws true for all times and all places. But each complex system is different; apparently there are no general laws for complexity. Instead, one must reach for “lessons” that might, with insight and understanding, be learned in one system and applied to another. Maybe physics studies will become more like human experience.

REFERENCES AND NOTES
