Neural networks are complicated dynamical entities, whose properties are understood only in the simplest cases. When the complex biophysical properties of neurons and their connections (synapses) are combined with realistic connectivity rules and scales, network dynamics are usually difficult to predict. Yet, experimental neuroscience is often based on the implicit premise that the neural mechanisms underlying sensation, perception, and cognition are well approximated by steady-state measurements (of neuron activity) or by models in which the behavior of the network is simple (steady state or periodic). Transient states—ones in which no stable equilibrium is reached—may sometimes better describe neural network behavior. An intuition for such properties arises from mathematical and computational modeling of some appropriately simple experimental systems.

Computing with “attractors” is a concept familiar to the neural networks community. Upon some input signal, a model neural network will gradually change its pattern of activated nodes (neurons) until it settles into one pattern—an attractor state. Thus, the input—a voice, an odor, or something more abstract—is associated with properties of the entire network in a particular attractor state. Such patterns of neural activity might be established, learned, and recalled during perception, memorization, and retrieval, respectively.

Two ideas define the range of possible dynamics expressed by neural networks. The simplest emphasizes stable attractors (1), with memories as possible cognitive equivalents. The other, less intuitive, idea emphasizes non-classical, transient dynamics as in “liquid-state machines” (2). Liquid-state machines are networks in which computation is carried out over time without any need for a classical attractor state. Because neural phenomena often occur on very short time scales, classical attractor states—fixed points or limit cycles—cannot be realistically reached. Indeed, behavioral and neurophysiological experiments reveal the existence and functional relevance of dynamics that, while deterministic, do not require waiting to reach classical attractor states (3–6). Also, the conditions required to achieve such attractors in artificial neural networks are often implausible for known biological circuits. Finally, fixed-point attractor dynamics, despite their name, express no useful dynamics; only the state the network settles into, given by its initial conditions (and characterized mathematically by, for example, a minimum in an energy function), matters, not the path taken to reach that state.

An alternative theoretical framework may explain some forms of neural network dynamics that are consistent both with experiments and with transient dynamics. In this framework, transient dynamics have two main features. First, although they cannot be described by classical attractor dynamics, they are resistant to noise, and reliable even in the face of small variations in initial conditions; the succession of states visited by the system (its trajectory, or transient) is thus stable. Second, the transients are input-specific, and thus contain information about what caused them in the first place. Notably, systems with few degrees of freedom do not, as a rule, express transient dynamics with such properties. Therefore, they are not good models for developing the kind of intuition required here. Nevertheless, stable transient dynamics can possibly be understood from within the existing framework of nonlinear dynamical systems.

Experimental observations in the olfactory systems of locust (7) and zebrafish (8) support such an alternative framework. Odors generate distributed (in time and space), odor- and concentration-specific patterns of activity in principal neurons. Hence, odor representations can be described as successions of states, or trajectories, that each correspond to one stimulus and one concentration (9). Only when a stimulus is sustained does its corresponding trajectory reach a stable fixed-point attractor (10). However, stable transients are observed whether a stimulus is sustained or not—that is, even when a stimulus is sufficiently short-lived that no fixed-point attractor state is reached. When the responses to several stimuli are compared, the distances between the trajectories corresponding to each stimulus are greatest during the transients, not between the fixed points (10). Because transients and fixed points represent states of neuronal populations, and because these states are themselves read out or “decoded” by yet other neuronal populations, stimulus identification by such decoders should be more reliable with transient than with fixed-point states. This conclusion is supported by the observation that a population of neurons that receives signals from the principal neurons responds mostly during transients, when separation between inputs is optimized. Given these observations, a theoretical framework needs to explain the system's sensitivity to incoming signals, its stability against noise (external noise and intrinsic fluctuations of the system), and its minimal dependence on the initial conditions (reproducibility).
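The kind of analysis described here—measuring the separation between stimulus-specific trajectories over time—can be sketched in a few lines. The trajectories below are synthetic stand-ins (not data from refs. 7–10): two population responses that start from a common resting state, diverge during the transient, and relax to nearby fixed points.

```python
import numpy as np

# Two hypothetical population trajectories (one per stimulus), projected
# onto two dimensions of activity space. Both start at the origin and
# converge to nearby fixed points; the functional forms are illustrative.
t = np.linspace(0.0, 10.0, 500)
traj_a = np.stack([1.0 * (1.0 - np.exp(-t)),  2.0 * t * np.exp(-t)], axis=1)
traj_b = np.stack([1.2 * (1.0 - np.exp(-t)), -2.0 * t * np.exp(-t)], axis=1)

# Euclidean distance between the two trajectories at each time point.
dist = np.linalg.norm(traj_a - traj_b, axis=1)

peak = int(np.argmax(dist))
print(f"peak separation at t = {t[peak]:.2f}: "
      f"{dist[peak]:.2f}, vs. {dist[-1]:.2f} at the fixed points")
```

By construction, the separation peaks early in the response and shrinks as the trajectories settle, mirroring the experimental finding that inter-stimulus distances are greatest during the transients rather than between the fixed points.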

To understand such transient dynamics, a mathematical image is needed that is consistent with existing results, and its underlying model(s) must be used to generate testable predictions. One possible image is a stable heteroclinic channel (11, 12) (see the figure). A stable heteroclinic channel is defined by a sequence of successive metastable (“saddle”) states. Under the proper conditions, all the trajectories in the neighborhood of these saddle points remain in the channel, ensuring robustness and reproducibility in a wide range of control parameters. Such dynamical objects are rare in low-dimensional systems, but common in complex ones. A possible underlying model is a generalized Lotka-Volterra equation (see supporting online material), which expresses and predicts the fate of an ongoing competition between *n* interactive elements. When *n* is small (for example, two species competing for the same food source, or predator-prey interactions), limit cycles are often seen, consistent with observations (13). When *n* is large, the phase portrait of the system often contains a heteroclinic sequence linking saddle points. These saddles can be pictured as successive and temporary winners in a never-ending competitive game. In neural systems, because a representative model must produce sequences of connected neuron population states (the saddle points), neural connectivity must be asymmetric, as determined by theoretical examination of a basic “coarse grain” model (12). Although many connection statistics probably work for stable heteroclinic-type dynamics, it is likely that connectivity within biological networks is, to some extent at least, the result of optimization by evolution and synaptic plasticity.
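A common way to write the generalized Lotka-Volterra model invoked here (the notation is one standard choice, following the winner-less competition literature, not a quotation of the supporting online material) is

```latex
\frac{da_i}{dt} = a_i \left( \sigma_i(S) - \sum_{j=1}^{n} \rho_{ij}(S)\, a_j \right),
\qquad i = 1, \dots, n,
```

where $a_i \ge 0$ is the activity of the $i$-th competing element (a neuron or neuron population), $\sigma_i(S)$ its excitation by the stimulus $S$, and $\rho_{ij}(S)$ the strength with which element $j$ inhibits element $i$. The asymmetry of the inhibition matrix ($\rho_{ij} \neq \rho_{ji}$) is what allows sequences of saddle states—temporary winners—rather than convergence to a symmetric equilibrium.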

What are the conditions necessary for transient stability? Consider a three-dimensional autonomous inhibitory circuit with asymmetric connections. Such a system displays stable, sequential, and cyclic activation of its components, the simplest variant of a “winner-less” competition (11). High-dimensional systems with asymmetric connections can generate structurally stable sequences—transients, each shaped by one input (14). A stable heteroclinic channel is the dynamical image of this behavior (see the figure).
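The three-dimensional case can be simulated directly. The sketch below uses generalized Lotka-Volterra rate equations with an asymmetric inhibition matrix; the parameter values and the small activity floor (a deterministic stand-in for noise, which keeps the cycle from stalling at a saddle) are illustrative assumptions, not taken from the cited models.

```python
import numpy as np

# Winner-less competition among three mutually inhibiting populations.
# With 0 < beta < 1 < alpha and alpha + beta > 2, the three single-winner
# saddle states are linked by a stable heteroclinic cycle.
alpha, beta = 2.0, 0.5
rho = np.array([[1.0, alpha, beta],
                [beta, 1.0, alpha],
                [alpha, beta, 1.0]])   # asymmetric: unit i+1 can invade unit i

dt, steps = 0.01, 20000
x = np.array([0.6, 0.3, 0.1])          # initial activities (arbitrary)
floor = 1e-4                           # tiny activity floor in place of noise
winners = []
for _ in range(steps):
    # Euler step of dx_i/dt = x_i * (1 - sum_j rho_ij * x_j)
    x = x + dt * x * (1.0 - rho @ x)
    x = np.maximum(x, floor)
    winners.append(int(np.argmax(x)))

# Each population is transiently the "winner" before being displaced.
print("populations that win at some point:", sorted(set(winners)))
```

Running this shows the signature of winner-less competition: activity hops from one near-saddle state to the next in a fixed order, with each population dominating for a while and then yielding, rather than the system settling on a single winner.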

Asymmetric inhibitory connectivity also helps to solve the apparent paradox that sensitivity and reliability in a network can coexist (12, 14, 15). To be reliable, a system must be both sensitive to the input and insensitive to perturbations and initial conditions. To solve this paradox, one must realize that the neurons participating in a stable heteroclinic channel are assigned by the stimulus, by virtue of their direct and/or indirect input from the neurons activated by that stimulus. The joint action of the external input and a stimulus-dependent connectivity matrix defines the stimulus-specific heteroclinic channel. In addition, asymmetric inhibition coordinates the sequential activity of the neurons and keeps a heteroclinic channel stable.

The idea behind a liquid-state machine is based on the proposals that the cerebral cortex is a nonequilibrium system and that brain computations can be thought of as unique patterns of transient activity, controlled by incoming input (2). The results of these computations must be reproducible, robust against noise, and easily decoded. Because a stable heteroclinic channel is possibly the only dynamical object that satisfies all required conditions, it is plausible that “liquid-state machines” are dynamical systems with stable heteroclinic channels, based on the principle of winner-less competition. Thus, using asymmetric inhibition appropriately, the space of possible states of large neural systems can be restricted to connected saddle points, forming stable heteroclinic channels. These channels can be thought of as underlying reliable transient brain dynamics. It will be interesting to see if extensions of these ideas can apply to large neural circuits, and to the perceptual and cognitive functions that they subserve.