Special Reviews

Neuronal Computations with Stochastic Network States

Science  06 Oct 2006:
Vol. 314, Issue 5796, pp. 85-90
DOI: 10.1126/science.1127241

Abstract

Neuronal networks in vivo are characterized by considerable spontaneous activity, which is highly complex and intrinsically generated by a combination of single-cell electrophysiological properties and recurrent circuits. Neuronal responsiveness differs, for example, between waking and sleep or anesthesia, concomitant with the pattern of spontaneous brain activity. This pattern, which defines the state of the network, has a dramatic influence on how local networks are engaged by inputs and, therefore, on how information is represented. We review here experimental and theoretical evidence of the decisive role played by stochastic network states in sensory responsiveness, with emphasis on activated states such as waking. From single cells to networks, experiments and computational models have addressed the relation between neuronal responsiveness and the complex spatiotemporal patterns of network activity. Understanding the relation between network state dynamics and information representation is a major challenge that will require developing specific experimental paradigms and theoretical frameworks in conjunction.

Brain operations are embedded in a continuous stream of complex spontaneous activity that interacts nonlinearly with incoming sensory inputs, outgoing motor commands, and internal association processes. Spontaneous brain activity refers to ongoing network activity not dominated by any particular sensory input; it is generated by the combination of intrinsic electrophysiological properties of single neurons (1) and synaptic interactions in networks (2), depends on the level of activation of neuromodulatory systems (3, 4), and is correlated with the functional state of the brain (2). Most of the existing knowledge about the relation between neuronal responsiveness and spontaneous brain activity comes from the comparison between waking and sleep states (5). However, even within the stable state of waking, subtle variations in the spatiotemporal pattern of network activation strongly influence information processing and, vice versa, sensory inputs modify ongoing activity. Such interplay between intrinsically generated activity and its modulation by external input is at the very core of the mechanisms by which the brain represents the external world and elaborates successful response strategies. The complexity of network dynamics is beyond the reach of current recording methods and requires appropriate computational methods carefully constrained by biological data. Predictions from current modeling efforts are a critical guide for designing new experimental approaches.

Experimental Characterization of Intrinsic Dynamics in Neocortex

Identifying the neuronal mechanisms of spontaneous brain activity is critical for understanding its role in information processing. For example, the cellular mechanisms of synchronized oscillations during sleep and anesthesia explain why neural responsiveness is reduced during those states (2). However, much less is known about the complex intrinsic dynamics that characterize the spontaneous activity of the waking state. It is during the waking state that response variability and the spatiotemporal patterns of network activation are key elements of the brain operations that generate adaptive behavior.

The spontaneous activity recorded in the electroencephalogram (EEG) from cortex and thalamus varies greatly between waking and sleep states. During sleep, the EEG is dominated by large-amplitude waves with high temporal and spatial coherence (Fig. 1A), and most of its spectral power is below 15 Hz (4). Rhythmic components are prevalent, although they are highly aperiodic and interspersed with nonrhythmic large-amplitude waves. Intracellular recordings in vivo demonstrate large variations in membrane potential (Vm) occurring synchronously across large populations (5, 6).

Fig. 1.

Complex spatiotemporal patterns of ongoing network activity during wake and sleep states in neocortex. (A) Spatiotemporal map of activity computed from multiple extracellular local field potential (LFP) recordings in a naturally sleeping cat during slow-wave sleep (SWS). The activity consists of highly synchronized slow waves (in the δ frequency range, 1–4 Hz), which are irregular temporally but coherent spatially. (B) Same recording arrangement when the animal was awake. In this case, the β frequency–dominated LFPs (15–30 Hz) are weakly synchronized and very irregular both spatially and temporally. [(A) and (B) modified from (73)] (C) Intracellular recordings during these two states show slow oscillations during slow-wave sleep (SWS, left), and a sustained depolarized state with intense fluctuations during wakefulness (Awake, right). [Courtesy of Igor Timofeev, Laval University] (D) Network state–dependent responsiveness in visual cortex. Cortical receptive fields obtained by reverse correlation in simple cells for ON responses. The procedure was repeated for different cortical states, by varying the depth of the anesthesia (EEG indicated above each color map). (Left) Desynchronized EEG states (light anesthesia); (right) synchronized EEG states with prominent slow oscillatory components (deeper anesthesia). Receptive fields were always smaller during desynchronized states. Color code for spike rate (see scale). [Modified from (12)]

In contrast, upon awakening or during rapid eye movement (REM) sleep (also termed brain-activated states), EEG spontaneous activity is characterized by low-amplitude waves, with low spatial and temporal coherence and high spatiotemporal complexity (Fig. 1B), not dominated by any identifiable pattern (4). The spectral power of the activated EEG is concentrated at frequencies above 15 Hz. Intracellular recordings in vivo during activated states demonstrate the absence of the slow oscillations and large Vm fluctuations characteristic of sleep (7). Instead, cortical and thalamic neurons show a stable resting Vm at a depolarized level close to firing threshold and a noisy, highly irregular pattern of background synaptic activity (7) (Fig. 1C). Interspersed within the synaptic background activity are short bouts of fast (20 to 80 Hz) oscillations, which last a few tens of milliseconds and are occasionally crowned by spikes (8). Therefore, fast oscillations in cortical and thalamic networks are different from the intrinsically generated, well-organized, and stable subthreshold oscillations in highly rhythmic structures such as the inferior olive (9). Fast oscillations also appear in relation to sensory stimuli and have been proposed to subserve a coordinating function among neuronal groups representing similar stimulus features [reviewed in (10)]. How the same oscillations that appear spontaneously as part of the background activity (8) are also steered to fulfill such a function is unknown.

Relation Between Spontaneous Activity and Neuronal Responsiveness

The relation between spontaneous activity and neuronal and network responsiveness has been mainly studied by observing the dramatic changes that take place during the transition between the behavioral states of waking and sleep or during variations in the level of anesthesia (Fig. 1D). A plethora of functional studies in the visual (11–16), somatosensory (17), auditory (18–20), and olfactory (21) systems have shown that slow, high-amplitude activity in the EEG is associated with reduced neuronal responsiveness and neuronal selectivity. [For an extensive review of the literature and a historical perspective, see (5).]

Cellular studies in vivo and in vitro have shown how such changes in responsiveness come about. During the transition to sleep or anesthesia, cortical and thalamic cells progressively hyperpolarize, thus distancing the membrane from spike threshold and decreasing excitability. In addition, hyperpolarization brings the membrane potential to the activation range of intrinsic currents underlying burst firing, particularly in thalamic cells. Because of its all-or-none behavior and its long refractory period, thalamic bursting is incompatible with the relay function that characterizes activated states and thus acts as the first gate of forebrain deafferentation, i.e., blockade of ascending sensory inputs (22, 23). Synchronized inhibitory inputs during sleep oscillations further hyperpolarize cortical and thalamic neurons and generate large membrane shunting, resulting in a dramatic decrease in responsiveness and a large increase in response variability. Finally, highly synchronized patterns of rhythmic activity (24) dominate neuronal membrane behavior and render the network unreliable and less responsive to inputs. Taken together, the above mechanisms result in the functional brain deafferentation that characterizes sleep and anesthesia (2, 22).

In contrast, during waking and REM sleep, a depolarized stable resting Vm close to spike threshold allows neurons to respond to inputs more reliably and with less response variability. However, the detailed understanding of the cellular mechanisms underlying the changes in processing state between waking and sleep or anesthesia is not enough to explain an important paradox posed by the two activated brain states. Despite their striking electrophysiological similarity at the intracellular and EEG levels (7) and the often enhanced evoked potentials during REM (25, 26), waking and REM are diametrically opposite behavioral states (27), because REM sleep is the deepest stage of sleep, i.e., the stage with the highest threshold for waking up. In an attempt to explain this paradox, it was shown, using magnetoencephalography in humans, that the main difference in responsiveness between the two states lies in their effect on the ongoing gamma (∼40 Hz) oscillations (28). During waking, responses to auditory clicks caused a reset of the ongoing gamma rhythm, whereas during REM, the evoked response did not change the phase of the ongoing oscillation; these findings suggest that, during dream sleep, sensory input is not incorporated into the context represented by the ongoing activity (29). The obvious conclusion is that much smaller changes in network dynamics than those that differentiate sleep and waking are critical in determining the processing state of the brain. The failure to detect the differences in network dynamics that must exist between waking and REM sleep is a clear indication that new approaches are necessary.
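
The phase-reset comparison at the heart of this argument can be made concrete. The following is a minimal sketch on synthetic data, not the analysis pipeline of (28): inter-trial phase coherence (ITPC) near 1 indicates that a stimulus resets the phase of the ongoing gamma rhythm, whereas ITPC near 0 indicates that the rhythm continues undisturbed. All signal parameters (40-Hz rhythm, noise level, trial count) are assumptions chosen for illustration.

    # Sketch: quantify phase reset of ongoing gamma with inter-trial phase
    # coherence (ITPC); all signals below are synthetic.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0                       # sampling rate (Hz)
    t = np.arange(-0.5, 0.5, 1/fs)    # time around stimulus onset at t = 0
    n_trials = 200
    rng = np.random.default_rng(0)

    def gamma_trials(reset):
        """Ongoing ~40-Hz oscillation; phase is realigned at t = 0 if reset."""
        trials = np.empty((n_trials, t.size))
        for k in range(n_trials):
            phase = 2*np.pi*40*t + rng.uniform(0, 2*np.pi)  # random ongoing phase
            if reset:
                phase[t >= 0] = 2*np.pi*40*t[t >= 0]        # stimulus resets phase
            trials[k] = np.cos(phase) + 0.5*rng.standard_normal(t.size)
        return trials

    b, a = butter(4, [30/(fs/2), 50/(fs/2)], btype="band")  # gamma band-pass

    def itpc(trials):
        phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
        return np.abs(np.mean(np.exp(1j*phases), axis=0))   # 0 = random, 1 = locked

    for label, reset in [("waking-like (reset)", True), ("REM-like (no reset)", False)]:
        coh = itpc(gamma_trials(reset))
        print(label, "post-stimulus ITPC =", float(coh[t > 0.05].mean().round(2)))

In the reset condition the post-stimulus ITPC approaches 1, as reported for waking; without reset it stays near zero, the REM-like outcome.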

Another outstanding example of the role of intrinsic network dynamics in determining neuronal responsiveness is the effect of attention. Even though the parameters of network activity measured with current techniques seem to remain stable, shifts in attentional focus both in space (30) and in time (31) increase the ability of the network to process stimuli by raising neuronal sensitivity. The neuronal mechanisms underlying attentional shifts are unknown; however, the fact that directed attention enhances neuronal responsiveness and selectivity, as well as behavioral performance (32), is a clear indication of the critical role played by subtle changes of network dynamics in determining the outcome of network operations.

Two types of computer models discussed in the second part of this review attempt to capture the relation between network dynamics and neuronal responsiveness. Both classes of models explore the complex interaction between sensory inputs and noisy network states. In the first category of models, noise is generated externally and does not explicitly represent the properties of the network itself. In the second class of models, noise is generated intrinsically and is, therefore, constrained by the properties of the network, a state that is much closer to the in vivo situation.

The reverse problem is also of critical importance: how much are network dynamics modified by ongoing sensory inputs? Although cortical and thalamic networks may be strongly activated by specific patterns of stimuli (20, 33), such effects are likely due to the engagement of brainstem neuromodulatory systems, which receive dense collaterals from ascending sensory inputs (5). Recordings from the visual cortex of awake, freely viewing ferrets (34) revealed that the spatial and temporal correlations between cells during viewing of natural scenes differ little from the values obtained with the eyes closed. This small difference indicates that most of the spatial and temporal coordination of neuronal firing is driven by network activity and not by the complex visual stimulus. This paradigm is captured by network models in which the input is interrelated with the network state (see below).

In conclusion, the parameters that determine network dynamics have a critical effect on responsiveness and information representation. Network dynamics are likely to be defined at the single-cell level and, therefore, to elude current recording methods, which either grossly undersample the population, such as multiunit recordings, or average out neuronal specificity, such as field potentials or optical recordings. Critical transitions of network state underlying changes in responsiveness would therefore go undetected by the global measures of activity currently in use. This underlines the important role of neuronal modeling in exploring the properties of network dynamics under the irregular and noisy conditions of the waking state.

Computational Models of Network State–Dependent Computations

In the simplest type of computational model, the role of intrinsic network dynamics was investigated by representing irregular network activity as “noise” added to either single neurons or networks. Here, a variable presynaptic discharge, summed over many synapses, is approximated by “noise” imposed on the cell. A second, more elaborate type of model considers the state of the network explicitly and asks how network states can be used for various forms of computation. In the most sophisticated type of model, the input and network state are interdependent. We consider these three types of approaches successively.

Single-neuron and network responsiveness in the presence of noise. The simplest type of activity-dependent model is designed to consider the responsiveness of single neurons or networks in the presence of variable amounts of noise. Contrary to intuition, noise can have beneficial effects, especially in nonlinear systems driven by weak inputs. Such a positive effect of noise was first investigated by physicists and globally termed “stochastic resonance” (35), in which the signal-to-noise ratio is maximal for a nonzero level of noise. This type of paradigm is relevant for the central nervous system and, in particular, for the cerebral cortex for several reasons. One is the nonlinearity of neurons; another is that, during the waking state, cortical neurons operate in vivo in highly irregular, noisy regimes. Electrophysiological studies, through intracellular recordings, have measured the impact of cortical network activity on single neurons during activated states. In parallel, models have estimated the conditions of “synaptic bombardment” that correspond to these measurements (36). These studies concluded that cortical neurons are in a “high-conductance state,” in which synaptic activity causes large Vm fluctuations (also called synaptic noise) and an intense overall membrane conductance compared with the resting (leak) conductance of the neuron. Therefore, synaptic noise can have substantial effects on the behavior of the neuron. Despite this noisy aspect, high-conductance states provide computational advantages to neurons (36); the responsiveness to small inputs is enhanced by synaptic noise (Fig. 2, A and B), and the effect of synaptic inputs can become roughly independent of their location in dendrites (37). These effects are due to both the high conductance and the level of noise. Moreover, the small membrane time constant due to high conductances gives the neuron a better temporal resolution. Enhanced responsiveness can also be viewed as gain modulation and was also identified in real neurons by injecting artificial synaptic noise like that experienced in vivo by using dynamic clamp techniques (38–42). The response curve in the presence of noise is smooth (Fig. 2, B and C), so that subthreshold inputs are boosted, while suprathreshold inputs are attenuated (43) (arrows in Fig. 2B). Similar response curves were also obtained during the depolarizing phase of the slow oscillation in vitro (38). Synaptic noise can also combine with intrinsic properties, such as the low-threshold calcium currents in thalamic neurons, which leads to a responsiveness that is much less dependent on the Vm level (44) (Fig. 2D). This shows that intrinsic neuronal properties are expressed differently when considered together with network activity; both combine to yield a global responsiveness that depends on the properties of intrinsic currents and the amount of synaptic noise. Thus, network activity has a decisive impact on the input-output transformations of single neurons and confers on networks fundamentally different information-processing capabilities as a function of their state.
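
A minimal sketch can illustrate this response-curve picture. The model below is a current-based integrate-and-fire cell with assumed parameters, not the conductance-based models of (36–44): without noise, a brief pulse either always or never evokes a spike, whereas Vm noise of a few millivolts, comparable to in vivo fluctuations, smooths the curve, boosting subthreshold inputs and attenuating suprathreshold ones (cf. Fig. 2B).

    # Sketch: spike probability vs. stimulus strength for a leaky
    # integrate-and-fire neuron, with and without background Vm noise.
    import numpy as np

    rng = np.random.default_rng(1)
    dt, tau = 1e-4, 20e-3              # time step (s), membrane time constant (s)
    v_rest, v_thresh = -70.0, -55.0    # resting and threshold potentials (mV)

    def spike_prob(stim, sigma, n_trials=400, n_steps=800):
        """Fraction of trials in which a 5-ms pulse of effective drive
        `stim` (mV) evokes a spike, given Vm noise of s.d. `sigma` (mV)."""
        v = np.full(n_trials, v_rest)
        fired = np.zeros(n_trials, dtype=bool)
        for i in range(n_steps):
            drive = stim if 400 <= i < 450 else 0.0        # pulse at t = 40 ms
            v += dt/tau*(v_rest + drive - v) \
                 + sigma*np.sqrt(2*dt/tau)*rng.standard_normal(n_trials)
            just_fired = v >= v_thresh
            fired |= just_fired
            v[just_fired] = v_rest                         # reset after a spike
        return fired.mean()

    for stim in [50, 62, 68, 74, 85]:                      # pulse amplitudes (mV)
        print(f"drive {stim:2d} mV:  quiescent p = {spike_prob(stim, 0.0):.2f}"
              f"   noisy p = {spike_prob(stim, 3.0):.2f}")

In the quiescent condition the probability jumps from 0 to 1 near 68 mV of drive; with noise it rises smoothly, so stimuli below the deterministic threshold sometimes succeed and stimuli above it sometimes fail.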

Fig. 2.

Impact of network activity (“synaptic noise”) on single neurons. (A) Single-trial responses of a model cortical neuron receiving synaptic noise. A stimulus (glutamatergic conductance) was delivered (arrow), either subthreshold (left) or suprathreshold (right). A fraction of the subthreshold stimuli gave rise to action potentials (left); however, not all suprathreshold stimuli gave a response. (B) Response curve computed from simulations similar to (A). The response curve gives the probability that the stimulus evokes an action potential, as a function of stimulus strength. In quiescent conditions, the response curve is all-or-none (action potential threshold around 0.2 mS/cm2). With synaptic noise, subthreshold stimuli were boosted (downward arrow), while suprathreshold stimuli were attenuated (upward arrow). [(A) and (B) modified from (42)] (C) Effect of the amount of synaptic noise (measured by its variance; increasing noise levels from 0 to 2) on the response curve in real cortical neurons in which synaptic noise was injected under dynamic clamp. [Modified from (38)] (D) Effect of synaptic noise on thalamic neurons. (Top) Spike probability as a function of interstimulus interval (ISI) in a quiescent thalamic neuron stimulated by random glutamatergic conductances. The responsiveness was very different at hyperpolarized potentials (Hyp) because of the boosting effect of calcium currents and bursts. (Bottom) Same paradigm in the presence of synaptic noise. Here, the spike probability was nearly independent of ISI and of membrane potential. [Modified from (44)]

Further evidence that network state affects information processing comes from studies of the effect of noise in neural network models. Noise is beneficial to associative memories by avoiding convergence to spurious states (45); it enables networks to follow high-frequency stimuli (46), boosts the propagation of waves of activity (47), enhances input detection abilities (48, 49), and enables populations of neurons to respond more rapidly (50–52). Noisy networks can also sustain a faithful propagation of firing rates [(53, 54), but see (55)] or pulse packets (56) across successive layers (Fig. 3). The latter results are particularly interesting, because noise allows populations of neurons to relay a signal across successive layers without attenuation [in the case of firing rate propagation (Fig. 3C)] or prevents a catastrophic invasion of synchronous activity (Fig. 3D). The fact that a complex waveform propagates in a noisy network (Fig. 3C), but not with low noise levels (Fig. 3B), can be understood qualitatively from the response curve of neurons in the presence of noise (Fig. 2B), which codes stimulus amplitude reliably. Indeed, a similar effect is visible in the population response of networks of noisy neurons (Fig. 3E). With low noise levels, the nearly all-or-none response acts as a filter, which allows only strong stimuli to propagate and leads to propagation of synfire waves (Fig. 3D). With stronger noise levels, comparable to intracellular measurements in vivo, the response curve is progressive, which allows a large range of input amplitudes to be processed (Fig. 3C).
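
The qualitative argument in the last sentences can be reproduced in a few lines of code. This sketch is not the spiking simulations of (53–56); it simply iterates a population response curve across layers, with a threshold and noise levels chosen as assumptions, to show that a step-like (low-noise) curve reduces a waveform to all-or-none events, whereas a noise-smoothed curve with gain near 1 at the operating point propagates it faithfully.

    # Sketch: propagation of a waveform through layers, each applying the
    # population response curve of noisy threshold units.
    import numpy as np
    from scipy.special import erf

    def response_curve(x, theta=0.5, sigma=0.02):
        """Fraction of units firing for input x: a hard threshold at theta,
        smoothed by Gaussian input noise of s.d. sigma."""
        return 0.5*(1.0 + erf((x - theta)/(np.sqrt(2.0)*sigma)))

    def propagate(signal, sigma, n_layers=8):
        x = signal.copy()
        for _ in range(n_layers):      # each layer re-encodes the previous rate
            x = response_curve(x, sigma=sigma)
        return x

    t = np.linspace(0.0, 1.0, 500)
    stim = 0.4 + 0.35*np.sin(2*np.pi*3*t)*np.exp(-((t - 0.5)/0.2)**2)

    synfire_mode = propagate(stim, sigma=0.02)   # step-like curve: all-or-none
    rate_mode = propagate(stim, sigma=0.40)      # smooth curve, gain ~1 at baseline

    print("correlation with input, low noise :",
          round(float(np.corrcoef(stim, synfire_mode)[0, 1]), 2))
    print("correlation with input, high noise:",
          round(float(np.corrcoef(stim, rate_mode)[0, 1]), 2))

With low noise, only the strongest excursions cross threshold and the output degenerates into binary events; with the higher noise level, the baseline is nearly a fixed point of the curve and the waveform survives eight layers with little distortion.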

Fig. 3.

Beneficial effects of noise at the network level. (A) Scheme of a multilayered network of integrate-and-fire (IF) neurons in which layer 1 received a temporally varying input. (B) With low levels of noise (“synfire mode”), firing was evoked only by the strongest stimuli, and synchronous spike volleys propagated across the network. (C) With higher levels of noise (“rate mode”), the network was able to reliably encode the stimulus and to propagate it across successive layers. [(A) to (C) modified from (53)] (D) Another example of a network able to sustain the propagation of synchronous volleys of spikes (“synfire chains”) only in the presence of noise. [Modified from (56)] (E) Example of population response in a network of noisy neurons (noise), compared with the same network in the absence of noise (quiescent). The network response was close to all-or-none in quiescent conditions, but with noise, the population encoded stimulus amplitude more reliably. [Modified from (42)]

Thus, as for the single-cell paradigms discussed above, noise can have beneficial effects at the network level. Here also, noise can be thought of as representing the background network activity presynaptic to single cells, so these studies can be viewed as investigating network computations in states of irregular network activity. However, instead of explicitly modeling these states as generated by the network itself (see below), the study is performed in a quiescent network subject to external sources of noise. In this case, the main finding is that the nature of propagation of activity is fundamentally different—and in many cases, better—in the presence of noise.

Computing with intrinsic network states. A more elaborate type of model comes from explicitly considering the state of the network and its effect on computations or responsiveness to external inputs. Here again, one may find inspiration from physics, in particular from studies of how different dynamic states of matter provide different properties with respect to interactions with the environment. For example, in fluid dynamics, a fluid can adopt laminar or turbulent states when subject to different constraints. Turbulent states have considerably larger effective transport coefficients that enable the fluid to satisfy those constraints (57). A similar paradigm was applied to describe propagating activity in networks of excitatory and inhibitory neurons that display either silent, oscillatory (periodic), or irregular (chaotic or intermittent) states of activity (58) (Fig. 4A). Irregular states are optimal with respect to information transport [as defined by the diffusion coefficient for Shannon mutual information (Fig. 4A, right)]. Thus, similar to turbulence in fluids, irregular cortical states may represent a dynamic state that provides an optimal capacity for information transport in neural circuits. However, such an analogy must be refined by using more realistic models and connectivity (59).
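
The flavor of such state transitions can be illustrated with a classic random rate network (a Sompolinsky-Crisanti-Sommers-type model, standing in for, but not identical to, the spiking network of (58); all parameters are assumptions): a single gain parameter g switches the same circuit between a silent state and self-sustained irregular activity.

    # Sketch: gain-controlled transition between silent and irregular
    # ("chaotic") self-sustained states in a random rate network.
    import numpy as np

    N = 200
    rng = np.random.default_rng(2)
    J = rng.standard_normal((N, N))/np.sqrt(N)   # random coupling matrix
    dt, t_sim = 0.1, 200.0                       # time step, duration (a.u.)

    def run(g):
        """Integrate dx/dt = -x + g*J*tanh(x) from a small random state."""
        x = 0.1*rng.standard_normal(N)
        traj = []
        for _ in range(int(t_sim/dt)):
            x += dt*(-x + g*(J @ np.tanh(x)))
            traj.append(x[0])
        return np.array(traj)

    for g in [0.8, 1.5, 3.0]:
        tail = run(g)[-500:]                     # discard the transient
        print(f"g = {g}: activity s.d. of one unit = {tail.std():.3f}")

For g < 1 activity decays to silence, whereas for g > 1 the network settles into self-sustained irregular fluctuations whose amplitude grows with g; probing information transport across such states, as in (58), then becomes a well-posed question.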

Fig. 4.

Role of internally generated noise on information propagation in networks. (A) (Left) Stimulation paradigm consisting of injecting a complex waveform [f(t), left] and monitoring the spread of activity as a function of distance (ρ) and state of the network. (Middle) Example of two self-sustained dynamic states of the network, periodic oscillations (top) and irregular activity (“chaotic,” bottom). (Right) Diffusion coefficient calculated for Shannon information as a function of the state of the network. Periodic states (green) had a relatively low diffusion coefficient, whereas, for irregular or chaotic states (blue), information transport was enhanced. [Modified from (58)] (B) Propagation of activity in a network of neurons displaying self-sustained irregular states. (Left) Definition of successive layers and pathways; (middle) absence of propagation with uniform conditions (left) contrasted with propagation when pathway synapses were reinforced (right); (right) propagation of a time-varying stimulus with pathway synapses reinforced. [Modified from (61)] (C) Propagation of activity in a network with self-sustained irregular dynamics. Successive snapshots illustrate that a stimulus (leftmost, red) led to an “explosion” of activity, followed by silence and echoes. [Modified from (60)]

More recent studies have explicitly considered networks endowed with intrinsically generated irregular activity states (51, 52, 60, 61). Can the effect of noise on propagation discussed above be obtained when this noise is internally generated by the network? Such “internally generated noise,” stemming from self-sustained irregular states of activity, was tested with respect to enhancing propagation properties in networks of excitatory and inhibitory neurons (60, 61). However, propagation was difficult to observe; firing rates did not propagate unless synapses were reinforced (more than 10-fold) along specific feedforward pathways (61) (Fig. 4B). Similarly, pulse packets led to explosions of activity (“synfire explosions”) in the network (Fig. 4C), and to enable propagation, synfire chains also had to be pre-embedded into the connectivity (60). Such embedding of feedforward pathways is of course not satisfactory, and the problem of how to obtain reliable propagation in recurrent networks is still open. The network states studied may not have the right level of excitability. In agreement with this, for the firing rate models of (61), a simple calculation shows that the total synaptic conductance in single cells is about 15 to 30 times the leak conductance, a ratio about 5 times as large as in vivo measurements (62); such a conductance probably exerts an excessive shunting effect that counteracts propagation. Future studies should verify whether better propagation capabilities are present in networks with a diversity of conductance states and cellular properties compatible with in vivo measurements, although such states may not be easy to obtain (63).
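
The conductance estimate invoked here can be spelled out with a back-of-the-envelope calculation; the synapse numbers, rates, and unitary conductances below are illustrative assumptions, not values taken from (61, 62).

    # Sketch: mean total synaptic conductance relative to leak conductance.
    n_syn_exc, rate_exc = 8000, 2.0    # excitatory synapses, mean rate (Hz)
    n_syn_inh, rate_inh = 2000, 5.0    # inhibitory synapses, mean rate (Hz)
    g_exc, tau_exc = 1.0e-9, 5e-3      # unitary peak conductance (S), decay (s)
    g_inh, tau_inh = 1.5e-9, 10e-3
    g_leak = 20e-9                     # leak conductance (S)

    # mean conductance of an exponential synapse driven at rate r: g*tau*r
    g_syn = (n_syn_exc*rate_exc*g_exc*tau_exc +
             n_syn_inh*rate_inh*g_inh*tau_inh)
    print(f"total synaptic / leak conductance = {g_syn/g_leak:.1f}")

With these numbers the ratio comes out near 11; in vivo estimates place it roughly between 3 and 5 (62), whereas the firing-rate models discussed above imply 15 to 30, hence the excessive shunt.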

Interrelated input and network state. A further step in complexity corresponds to models in which the input and the network state are interdependent. The simplest such case is when external inputs influence network activity. Network activity will necessarily be influenced by external inputs, so the complex spatiotemporal activity in a given network is likely to reflect properties of the inputs and cannot be considered independent. The first approach that takes this dependence into account is the “liquid-state machine” paradigm (64). In this case, a network of spiking neurons is maintained in a self-sustained irregular state, and the network receives ongoing inputs. A few cells from the network are taken as output, and their noisy pattern of activity is decoded by “readout units,” which can learn to extract information about the inputs. The activity of neurons at a given time contains information about the input at prior times; thus, it is possible to generate a desired output at any time in a system receiving ongoing inputs. Information is stored in the activity of the network, which necessarily reflects input history. Thus, in the liquid-state machine paradigm, network activity acts as a sophisticated nonlinear filter, and although different network states were not explored, this approach has the merit of proposing a computing paradigm that explicitly uses complex dynamics in network activity as a means to integrate information [see also (65, 66)]. This type of paradigm is also compatible with results showing that cortical sensory responses primarily reflect modulations of network activity rather than being input-driven (34).
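
A minimal rate-based sketch conveys the idea; this is an echo-state-style stand-in for, not the spiking implementation of, the liquid-state machine (64), and every parameter below is an assumption. A fixed random recurrent network holds a fading memory of the input stream, and only a linear readout is trained, here to report the input from three steps in the past.

    # Sketch: fixed random "liquid" plus a trained linear readout.
    import numpy as np

    rng = np.random.default_rng(3)
    N, T, delay = 300, 3000, 3
    W = rng.standard_normal((N, N))/np.sqrt(N)
    W *= 0.9/np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
    w_in = rng.standard_normal(N)

    u = rng.uniform(-1, 1, T)                       # random input stream
    x = np.zeros(N)
    states = np.empty((T, N))
    for step in range(T):
        x = np.tanh(W @ x + w_in*u[step])           # recurrent state update
        states[step] = x

    # linear readout trained by ridge regression on the first 2000 steps
    target = np.roll(u, delay)                      # u delayed by `delay` steps
    X, y = states[delay:2000], target[delay:2000]
    w_out = np.linalg.solve(X.T @ X + 1e-4*np.eye(N), X.T @ y)

    pred = states[2000:] @ w_out
    print("test correlation:", round(float(np.corrcoef(pred, target[2000:])[0, 1]), 3))

Because the recurrent state reflects input history, a purely linear readout recovers the delayed input with high fidelity on held-out data, which is the core of the liquid-state idea.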

Finally, the most sophisticated paradigm is that in which the input itself depends on network state. This type of modulation has been found in several sensory systems where internally generated signals are matched with sensory inputs. This is the case for the corollary discharge (also termed efference copy), a copy of the internally generated motor command that is matched to sensory inputs to perform cancellation or prediction (67–70). A similar interaction may arise more generally through thalamocortical loops; the cortex projects massively to the same thalamic cells from which cortical input originates, and cortical synapses on thalamic neurons outnumber peripheral synapses by about one order of magnitude (71). Such massive corticothalamic feedback can potentially modulate, complement, or even predict sensory inputs. In these cases, cortical network state will necessarily influence and modulate its own inputs (72). Such bidirectional interactions between input and network state are, of course, considerably more difficult to model and constitute clear challenges for future studies.

Conclusion

Much remains to be done to properly characterize internal brain dynamics and how they modulate computations. We need adequate experimental methods to measure the different dynamic states exhibited by neural circuits and how network activity is modulated by parameters such as attention or sensory inputs. To characterize their computational properties, modeling studies have so far implicitly assumed that a given network produces only one prototypical state of irregular activity, but evidence indicates that this may not be true in general [in Fig. 4A (right), for example, the information flow can be up to two times larger between different chaotic states (58)]. Furthermore, networks may switch rapidly among states according to rules not yet known (33) and with important consequences for information processing. One approach would be to characterize intracellularly the diverse dynamics of fluctuations (or oscillations) in single cells and to model their effect on the neuron's input/output function. This single-cell characterization can then be used to infer propagating properties at the network level, constrained by global recording methods such as imaging. It is only through a tight combination of experiments and models that we will better understand the computational properties of internally generated brain states.
