Special Reviews

Modeling Single-Neuron Dynamics and Computations: A Balance of Detail and Abstraction

Science  06 Oct 2006:
Vol. 314, Issue 5796, pp. 80-85
DOI: 10.1126/science.1127240

Abstract

The fundamental building block of every nervous system is the single neuron. Understanding how these exquisitely structured elements operate is an integral part of the quest to solve the mysteries of the brain. Quantitative mathematical models have proved to be an indispensable tool in pursuing this goal. We review recent advances and examine how single-cell models on five levels of complexity, from black-box approaches to detailed compartmental simulations, address key questions about neural dynamics and signal processing.

A hundred years ago, Lapicque (1) proposed that action potentials are generated when the integrated sensory or synaptic inputs to a neuron reach a threshold value. This “integrate-and-fire” model remains one of the most influential concepts in neurobiology because it provides a simple mechanistic explanation for basic neural operations, such as the encoding of stimulus amplitude in spike frequency. However, advances in experimental technique have shown that the integrate-and-fire model is far from accurate in describing real neurons. Their morphology, composition of ionic conductances, and distribution of synaptic inputs generate a plethora of dynamical phenomena and support various fundamental computations (Table 1 and Table 2).
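
To make this concrete, a minimal integrate-and-fire sketch (in Python, with illustrative parameter values not taken from any particular study) integrates its input current, fires whenever a threshold is crossed, and thereby maps stimulus amplitude onto spike frequency:

    def lif_spike_count(i_in, t_max=1.0, dt=1e-4, tau=0.02, v_rest=-70.0,
                        v_thresh=-54.0, v_reset=-70.0, r_m=10.0):
        """Spikes per t_max seconds of a leaky integrate-and-fire neuron."""
        v, spikes = v_rest, 0
        for _ in range(int(t_max / dt)):
            v += dt / tau * (-(v - v_rest) + r_m * i_in)  # leaky integration (mV)
            if v >= v_thresh:                             # threshold crossed: spike and reset
                v = v_reset
                spikes += 1
        return spikes

    # Stronger input -> higher spike frequency: rate coding of stimulus amplitude.
    for i_in in (2.0, 3.0, 4.0):
        print(f"I = {i_in:.1f} nA -> {lif_spike_count(i_in)} spikes/s")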

Table 1.

Information processing in single neurons: Basic computations that follow from generic neuronal properties.

Computation | Biophysical mechanism | Model level
Addition or subtraction | Dendritic summation of excitatory and/or inhibitory inputs | I, II
Subtraction | Shunting inhibition plus integrate-and-fire mechanism | I, II
Multiplication or division | Synaptic interaction | I, II
Multiplication or division | Gain modulation via synaptic background noise | I, II
High-pass filter | Firing rate adaptation | III
Low-pass filter | Passive membrane properties | I, II, III
Toggle switch | Bistable spike generation | III

Table 2.

Information processing in single neurons: Task-specific computations of direct behavioral relevance.

Biological goal | Computation | Biophysical mechanism | Model level | Experimental systems
Collision avoidance | Multiplication: object size x times angular velocity y | xy = exp(log x + log y) via input nonlinearity (log), dendritic summation (+), and output nonlinearity (exp) | III | Lobula giant movement detector in locusts
Sound localization | Logical AND: comparison of interaural time difference | Coincidence detection of two spikes, lagged by different axonal delays | II | Binaural neurons in the auditory brainstem
Motion detection | Logical AND or AND-NOT: comparison of spatially adjacent but temporally shifted local light intensities | Coincidence detection of one lagged (axonal delay) and one nonlagged spike | IV | Peripheral neurons in the fly visual system
Motion detection | Logical AND or AND-NOT: comparison of spatially adjacent but temporally shifted local light intensities | Nonlinear dendritic processing | I, II | Retinal amacrine and ganglion cells
Motion anticipation | Linear filtering with negative feedback | Adaptation of neuronal gain | IV | Salamander and rabbit retinal ganglion cells
Intensity-invariant recognition of analog patterns | Separation of pattern identity and pattern intensity; subsequent comparison with stored template | Transformation: local stimulus intensity mapped to spike time using subthreshold membrane potential oscillations; readout: coincidence detection | III, IV | Insect and vertebrate olfactory neurons, in particular in antennal lobe and olfactory bulb, respectively
Short-term memory | Temporal integration or storage | Dendritic Ca waves | II | Layer V neurons in entorhinal cortex
Short-term memory | Temporal integration or storage | Transitions between two Ca-conductance states | II | Layer V neurons in entorhinal cortex
Time interval prediction | Temporal integration or storage | Calcium dynamics with positive feedback | III | Climbing activity in prefrontal neurons
Redundancy reduction | Subtraction: local signal minus background signal | Dendritic summation | IV | Center-surround receptive fields in the visual system
Efficient coding in variable environment | Modification of tuning curve to track time-varying stimulus ensemble | Adaptation of single-cell input-output function | II, IV, V | Motion-sensitive H1 neuron in the fly visual system
Efficient coding in variable environment | Modification of tuning curve to track time-varying stimulus ensemble | Consequence of Reichardt motion detector circuit | IV, V | Motion-sensitive H1 neuron in the fly visual system

Understanding the dynamics and computations of single neurons and their role within larger neural networks is therefore at the core of neuroscience: How do single-cell properties contribute to information processing and, ultimately, behavior? Quantitative models address these questions, summarize and organize the rapidly growing amount and sophistication of experimental data, and make testable predictions. As single-cell models and experiments become more closely interwoven, the development of data analysis tools for efficient parameter estimation and assessment of model performance constitutes a central element of computational studies.

All these tasks require a delicate balance between incorporating sufficient details to account for complex single-cell dynamics and reducing this complexity to the essential characteristics to make a model tractable. The appropriate level of description depends on the particular goal of the model. Indeed, finding the best abstraction level is often the key to success. We highlight these aspects for five main levels (Fig. 1) of single-cell modeling.

Fig. 1.

Examples of five levels of single-cell modeling. Level I: Detailed compartmental model of a Purkinje cell. The dendritic tree is segmented into electrically coupled Hodgkin-Huxley–type compartments (level III). Level II: Two-compartment model as in (23). The dendrite receives synaptic inputs and is coupled to the soma, where the neuron's response is generated. Level III: Hodgkin-Huxley model, the prototype of single-compartment models. The cell's inside and outside are separated by a capacitance C_m and ionic conductances in series with batteries describing ionic reversal potentials. Sodium and potassium conductances (g_Na, g_K) depend on voltage; the leak conductance g_leak is fixed. Level IV: Linear-nonlinear cascade. Stimuli S(t) are convolved with a filter and then fed through a nonlinearity to generate responses R(t), typically time-dependent firing rates. Level V: Black-box model. Neglecting biophysical mechanisms, conditional probabilities p(R|S) describe responses R for given stimuli S.

Level I: Detailed Compartmental Models

Morphologically realistic models are based on anatomical reconstructions and focus on how the spatial structure of a neuron contributes to its dynamics and function. These models extend the cable theory of Rall, who showed mathematically that dendritic voltage attenuation spreads asymmetrically (2). This phenomenon allows dendrites to compute the direction of synaptic activation patterns, and thus provides a mechanism for motion detection (3). When voltage-dependent conductances are taken into account, numerical integration over the spatially discretized dendrite—the “compartmental model” (3)—is needed to solve the resulting high-dimensional system of equations.
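
In its simplest, passive form, a compartmental model is a chain of electrically coupled RC circuits. The following sketch, with illustrative parameters, discretizes a dendrite into compartments and reproduces the attenuation of an injected current with distance, as in Rall's cable theory:

    import numpy as np

    # Passive cable split into electrically coupled RC compartments; all values
    # are illustrative. Current injected at one end spreads along the chain and
    # attenuates with distance.
    n, dt, t_max = 20, 1e-5, 0.2                # compartments, time step (s), duration (s)
    c_m, g_leak, g_ax = 1e-10, 1e-9, 5e-9       # capacitance (F), leak / axial conductance (S)
    e_leak = -70e-3                             # resting (leak reversal) potential (V)

    v = np.full(n, e_leak)
    i_inj = np.zeros(n)
    i_inj[0] = 2e-11                            # steady current into compartment 0 (A)

    for _ in range(int(t_max / dt)):
        i_ax = np.zeros(n)                      # axial currents between neighbors
        i_ax[1:] += g_ax * (v[:-1] - v[1:])
        i_ax[:-1] += g_ax * (v[1:] - v[:-1])
        v += dt / c_m * (-g_leak * (v - e_leak) + i_ax + i_inj)

    print(np.round((v - e_leak) * 1e3, 2))      # depolarization (mV) per compartment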

For complex dendritic trees, more than 1000 compartments are required to capture the cell's specific electrotonic structure (e.g., to simulate spike backpropagation in pyramidal neurons) (4). Such detailed models also generate testable mechanistic hypotheses. For instance, simulations of Purkinje cells predicted that a net inhibitory synaptic current underlies specific spike patterns in vivo (5), in accordance with later experimental findings (6). In turn, even established models such as the thalamocortical neuron (7) are constantly improved by adding new biophysical details such as dendritic calcium currents responsible for fast oscillations (8).

A large body of morphologically realistic models demonstrates how spatial aspects of synaptic integration in dendrites support specific computations (Table 1 and Table 2), as discussed in various reviews (9, 10). In pyramidal cells, for example, distal inputs are amplified via dendritic spikes or plateau potentials, supporting local coincidence detection and gain modulation. Dendritic inward currents play a major role in the control of spiking (6) or the modulation of responses to synchronous inputs (11). Such interactions among synaptic inputs, voltage-gated conductances, and spiking output can be specifically affected by dendritic branching structures (12); axonal geometries, on the other hand, influence activity-dependent branch point failures and may thus implement filter and routing operations on the neuron's output side (13).

Finally, detailed spatial representations help predict the effects of extracellular electrical stimulations. This is of great interest for deep-brain stimulation used in the treatment of Parkinson's disease (14) and underscores the need for morphologically realistic models.

Level II: Reduced Compartmental Models

Although detailed compartmental models can approximate the dynamics of single neurons quite well, they suffer from several drawbacks. Their high dimensionality and intricate structure rule out any mathematical understanding of their emergent properties. Detailed models are also computationally expensive and are thus not well suited for large-scale network simulations. Reduced models with only one or few dendritic compartments overcome these problems and are often sufficient to understand somatodendritic interactions that govern spiking (15) or bursting (16).
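
The flavor of such models can be conveyed by a minimal two-compartment sketch: a passive dendrite receives an input current and is resistively coupled to an integrate-and-fire soma, so that dendritic drive determines somatic spiking. All parameters are illustrative placeholders, not fitted values:

    # Passive dendrite + integrate-and-fire soma, resistively coupled.
    dt, t_max = 1e-4, 0.5                       # s
    tau, v_rest, v_th, v_reset = 0.02, -70.0, -54.0, -70.0
    g_c, r_in = 0.5, 30.0                       # coupling (rel. to leak), input resistance (MOhm)

    v_s, v_d, spikes = v_rest, v_rest, []
    for step in range(int(t_max / dt)):
        t = step * dt
        i_syn = 3.0 if 0.1 < t < 0.4 else 0.0   # dendritic input current (nA)
        dv_d = (-(v_d - v_rest) + r_in * i_syn + g_c * (v_s - v_d)) / tau
        dv_s = (-(v_s - v_rest) + g_c * (v_d - v_s)) / tau
        v_d += dt * dv_d
        v_s += dt * dv_s
        if v_s >= v_th:                         # somatic spike-and-reset
            v_s = v_reset
            spikes.append(t)

    print(f"{len(spikes)} spikes, first at {spikes[0]:.3f} s" if spikes else "no spikes")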

A well-matched task for such models is to relate behaviorally relevant computations on various time scales to salient features of neural structure and dynamics. For example, the detection of binaural time differences within Jeffress' time-delay framework (17) has been explained in a three-compartment model of bipolar cells by local nonlinear input interactions and the fact that each of the two dendrites provides a sink for inputs received by the other dendrite (18). Computations involving short-term memory may rely in part on the existence of multiple stable firing rates in single neurons. Reduced compartmental models suggest that calcium currents are essential for this phenomenon (19), through dendritic calcium wavefronts (20) or transitions between different conductance states (21). On longer time scales, neurons self-adjust their activity patterns, both during development and after external perturbations (22). Simulations with a two-compartment model show that such homeostatic plasticity can follow from cellular “learning” rules that recalibrate dendritic channel densities to yield optimal spike encoding of synaptic inputs (23).

For large-scale network studies, reduced compartmental models offer a good compromise between realism and computational efficiency. For example, a simulation involving several classes of multicompartmental cortical and thalamic neurons and a total of more than 3000 cells demonstrates that gap junctions are instrumental for cortical gamma oscillations (24). A slightly less complex network with two-compartment neurons reproduces slow-wave sleep oscillations (25). Clearly, the challenge for all such studies is to find the least complex neuron models with which the observed phenomena can still be recreated (26).

Level III: Single-Compartment Models

Single-compartment models such as the classic Hodgkin-Huxley model (27) neglect the neuron's spatial structure and focus entirely on how its various ionic currents contribute to subthreshold behavior and spike generation. These models have led to a quantitative understanding of many dynamical phenomena including phasic spiking, bursting, and spike-frequency adaptation (28).

Systematic mathematical reductions of Hodgkin-Huxley–type models and subsequent bifurcation and phase-plane analysis (29, 30) explain why, for example, some neurons resemble integrate-and-fire elements or why the membrane potential of others oscillates in response to current injections, enabling a “resonate-and-fire” behavior. They also show which combination of dynamical variables governs the threshold operation (31) and how adaptation (32) and spike-generation mechanisms (33) influence spike trains (Fig. 2).

Fig. 2.

Diversity of neural response patterns. As illustrated in the top row, neurons can respond with rather different spike-train patterns to identical step currents. For time-varying inputs (middle row), the computational power of even simple single-neuron models becomes apparent: A first current pulse might trigger a subthreshold oscillation. Only if a second pulse arrives at the right phase of this oscillation is a spike triggered through resonance. An integrator, on the other hand, is driven most effectively by quickly succeeding pulses. Finally, a bistable cell can realize a toggle switch. These phenomena [and many more; see (30)] are exhibited by the same point-neuron model: Its time evolution (bottom row, left) is derived from Hodgkin-Huxley–type dynamics; involves the membrane potential v and a slower auxiliary variable u; and generates the different responses for specific values (right) of the parameters c (reset of voltage v with peak p) and a, b, and d (decay rate, sensitivity, and reset of the auxiliary variable u). Figure courtesy of E. Izhikevich.
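
Written out, the model of Fig. 2 is just two coupled equations plus a reset rule. A minimal sketch with the canonical regular-spiking parameter set of the published model; other choices of a, b, c, and d reproduce the bursting, resonator, and bistable patterns of the figure:

    # Canonical "regular spiking" parameters; vary (a, b, c, d) for other regimes.
    a, b, c, d, p = 0.02, 0.2, -65.0, 8.0, 30.0
    dt, i_dc = 0.25, 10.0                       # time step (ms), step current (a.u.)

    v, u, spike_times = c, b * c, []
    for step in range(int(1000 / dt)):          # 1 s of simulated time
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + i_dc)
        u += dt * a * (b * v - u)
        if v >= p:                              # peak p reached: reset v and update u
            v, u = c, u + d
            spike_times.append(step * dt)

    print(f"{len(spike_times)} spikes in 1 s; the pattern depends only on (a, b, c, d)")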

Spike generation is not a deterministic process. The stochastic dynamics of ion channels generate voltage noise that limits the reliability and precision of spikes (34). Background synaptic noise (35), on the other hand, can modulate the neural gain without changing spike variability or mean firing rates (36). But even without intrinsic noise, the all-or-none characteristics of spike generation amplify the input variability (37)—perhaps this is the price of long-distance communication.

More than 50 years after Hodgkin and Huxley analyzed the squid axon, simple neuron models still offer surprises, as these findings show. A recent study even indicates that the standard Hodgkin-Huxley formalism does not explain the sharp kink at the onset of cortical spikes (38). Its mechanistic origin and functional consequences require further investigation.

Level IV: Cascade Models

Whereas models incorporating specific ionic currents or morphological details are needed to investigate the biophysics of single neurons, modeling on a more conceptual level allows one to directly address their computations. To this end, cascade models based on a concatenation of mathematical primitives, such as linear filters, nonlinear transformations, and random processes, present an excellent framework for distilling key processing steps from measured data.

Consider, for example, a model that first convolves its time-varying input with a linear filter and then applies a rectifying nonlinearity. In studies of sensory systems, this simple structure is often considered as the canonical model for the receptive field of a neuron and the transformation of its internal activation state into a firing rate. The appeal of this linear-nonlinear (LN) cascade stems from its conceptual simplicity and the fact that, for white-noise stimulation, it can be easily fitted to experimental data by correlating response and stimulus (39). Recent studies even demonstrate that LN cascades can be obtained under far more naturalistic stimulation (40, 41).
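
The fitting procedure is straightforward to demonstrate on synthetic data. In the sketch below, an LN neuron with an assumed biphasic filter is simulated under white-noise stimulation, and the filter is then recovered as the spike-triggered average, the standard reverse-correlation estimate:

    import numpy as np

    rng = np.random.default_rng(0)
    T, L = 200_000, 40                            # stimulus bins, filter length (bins)
    stim = rng.standard_normal(T)                 # white-noise stimulus

    t = np.arange(L)
    k_true = np.exp(-t / 10.0) * np.sin(t / 4.0)  # assumed biphasic filter
    k_true /= np.linalg.norm(k_true)

    drive = np.convolve(stim, k_true)[:T]         # linear stage (causal filtering)
    rate = np.maximum(drive, 0.0)                 # rectifying nonlinearity
    spikes = rng.random(T) < 0.1 * rate           # Bernoulli spiking per bin

    # STA: average stimulus segment preceding each spike, time-reversed to align
    # with the filter (most recent stimulus bin corresponds to filter lag 0).
    idx = np.nonzero(spikes)[0]
    idx = idx[idx >= L]
    sta = np.mean([stim[i - L + 1:i + 1] for i in idx], axis=0)[::-1]
    sta /= np.linalg.norm(sta)

    print("filter recovery (correlation):", round(float(np.corrcoef(k_true, sta)[0, 1]), 3))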

Cascade models have a long tradition in the investigation of the visual system. More recently, they have been used to assess neuronal sensitivity for different stimulus features and have helped to elucidate the simultaneous adaptation to mean light intensity and light contrast (42) and the generic nature of adaptation in the retina (43). New analysis tools have opened up the possibility of using multiple parallel linear filters in an LN cascade to investigate, for example, complex cells in visual cortex (44) and thus improve on classical energy-integration models (45).

Extending LN cascades allows one to capture additional neural characteristics while retaining the ability to fit these more complex models to experimental data. To reveal filter mechanisms that are otherwise hidden by spike-time jitter, one may append a noise process to the cascade (46, 47) or measure spike probabilities instead of spike times (48). For the latter method, temporal resolution is limited only by the precision of stimulus presentation so that parameters of more elaborate models (e.g., LNLN cascades) can be obtained.

The analog output of traditional cascade models describes a firing rate. An important conceptual extension is therefore achieved by adding an explicit spike generation stage. Using a fixed firing threshold and feedback mimicking neural refractoriness (49), this has led to a successful model of spike timing in early vision (50). Even when augmented with an integrate-and-fire mechanism and intrinsic bursting, this model structure still allows generic fits to measured spike trains (51).
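
A minimal sketch of this model class illustrates the idea; all shapes and constants are invented, and where the cited model uses a deterministic threshold with feedback, this version draws spikes stochastically from the rate for brevity:

    import numpy as np

    # LN cascade with an explicit spike-generation stage and post-spike feedback.
    rng = np.random.default_rng(1)
    T, dt = 5000, 0.001                                              # bins, bin width (s)
    drive = 1.0 + 0.5 * np.sin(np.arange(T) * dt * 2 * np.pi * 5.0)  # filtered stimulus (a.u.)
    h = -5.0 * np.exp(-np.arange(50) / 10.0)                         # post-spike suppression

    feedback = np.zeros(T + len(h))
    n_spikes = 0
    for t in range(T):
        rate = np.exp(drive[t] + feedback[t])     # exponential output nonlinearity
        if rng.random() < rate * dt * 20.0:       # stochastic spike draw
            n_spikes += 1
            feedback[t + 1:t + 1 + len(h)] += h   # refractoriness: suppress the drive
    print(f"{n_spikes} spikes in {T * dt:.0f} s")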

Cascade models can also directly translate into specific computations: Experiments indicate that in locusts, an identified neuron multiplies the visual size x and angular velocity y of an object while tracking its approach (52). The nearly exponential shape of this neuron's output curve suggests that logarithmic transforms of x and y are summed on the dendrite and then passed through the output nonlinearity, implementing the multiplicative operation as an NLN cascade via the identity exp(log x + log y) = xy.
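
The identity is easy to verify directly; the function below is a plain transcription of the proposed cascade with arbitrary example numbers:

    import math

    # N (log input nonlinearities), L (dendritic summation), N (exp output).
    def multiply_nln(x, y):
        dendritic_sum = math.log(x) + math.log(y)  # log-transformed inputs are summed
        return math.exp(dendritic_sum)             # output nonlinearity recovers x * y

    print(multiply_nln(3.0, 7.0))  # 21.0: multiplication without a multiplier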

Despite their success, simple model structures have their limitations—especially when applied to neurons far downstream from the sensory periphery and when aimed at generalizing over different stimulus types—because additional nonlinear dynamics, negligible within a specific stimulation context, affect the transition between different experimental conditions (53, 54). In specific cases, however, LN models yield accurate information-theoretical descriptions of neuronal responses (55).

Level V: Black-Box Models

Last but not least, one may want to understand and quantify the signal-processing capabilities of a single neuron without considering its biophysical machinery. This approach may reveal general principles that explain, for example, where neurons place their operating points and how they alter their responses when the input statistics are modified.

For such questions about neural efficiency and adaptability, a neuron is best regarded as a black box that receives a set of time-dependent inputs—sensory stimuli or spike trains from other neurons—and responds with an output spike train. To account for cell-intrinsic noise, it is necessary to characterize the input-output relation by a probability distribution, p(R|S), which measures the probability that response R occurs when stimulus S is presented.

Although models on levels I to IV make specific assumptions about neural processes and hence about the functional form of p(R|S), such assumptions can be overly restrictive at level V. Here, it is often advantageous to work with nonparametric estimates of p(R|S) that are directly taken from the measured data. Such models have, for example, been used to estimate the information that the spike train of a neuron conveys about its inputs and have revealed that sensory neurons operate highly efficiently, often close to their physical limits (56). Indeed, Barlow's “efficient coding hypothesis” suggests that neurons optimize the information about frequently occurring stimuli (57).
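
For discrete stimulus and response categories, such nonparametric estimates reduce to simple operations on a table of counts. A toy sketch with made-up numbers shows the underlying mutual-information computation:

    import numpy as np

    # Joint counts from a hypothetical experiment (rows = stimuli, cols = responses).
    counts = np.array([[40.0, 10.0],     # stimulus s1: responses r1, r2
                       [12.0, 38.0]])    # stimulus s2
    p_sr = counts / counts.sum()         # joint probability p(s, r)
    p_s = p_sr.sum(axis=1, keepdims=True)
    p_r = p_sr.sum(axis=0, keepdims=True)

    # I(S; R) = sum over s, r of p(s, r) log2 [ p(s, r) / (p(s) p(r)) ]
    mask = p_sr > 0
    info = np.sum(p_sr[mask] * np.log2(p_sr[mask] / (p_s @ p_r)[mask]))
    print(f"I(S; R) = {info:.3f} bits")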

Theoretical studies have shown how individual neurons may shift their input-output curves to reach that goal (23). Moreover, recordings of a motion-sensitive neuron in the fly visual system reveal that adaptation can modify a neuron's input-output function to maximize information about time-varying sensory stimuli (58). In this case, however, it is possible that the adaptive mechanism is not implemented on the single-cell level but instead results from the underlying multicellular Reichardt motion detection circuitry (59, 60). Similar ambiguities between single-cell and network adaptation exist in the auditory midbrain (61).

Evolutionary adaptation may not be geared toward optimizing the information about all natural stimuli. In acoustic communication systems, for example, neural responses are well matched to particular behaviorally relevant subensembles. Most likely, stimuli from those ensembles were selected as communication signals because they lead to efficient neural representations (62, 63).

Challenges

“A good theoretical model of a complex system should be like a good caricature: it should emphasize those features which are most important and should downplay the inessential details. Now the only snag with this advice is that one does not really know which are the inessential details until one has understood the phenomena under study” (64).

This general dilemma, formulated by the physicist Frenkel almost a century ago, applies in particular to the single neuron. Which details of ionic conductances and morphology are relevant for particular aspects of its cell type–specific or individual dynamics? How do these dynamics contribute to the neuron's information processing? Identification of a fundamental computation performed by the neuron (Table 1 and Table 2) may help address these questions. Brain function, however, relies on the interplay of hundreds to billions of neurons that are arranged in specialized modules on multiple anatomical hierarchies. Even today, it remains unclear which level of single-cell modeling is appropriate to understand the dynamics and computations carried out by such large systems. However, only by understanding how single cells operate as part of a network (35) can we assess their coding and thus the level of detail required for modeling. For example, most network models use point-neuron models (65), whereas several aspects of brain function require multicompartmental models (24, 25).

It has thus become increasingly clear that a thorough understanding of single-neuron function can be obtained only by relating different levels of abstraction. Trying to incorporate every biological detail of the investigated neuron is likely to obscure the focus on the essential dynamics, whereas limiting investigations to highly abstract processing schemes casts doubt on the biological relevance of specific findings. Help may come from analyzing the transition between different modeling levels. Interesting connections have been drawn, for example, by transforming a Hodgkin-Huxley–type model (level III) into a phenomenological firing rate description (66) or a cascade on level IV (31). And the integrative properties of dendritic trees as elaborate as those of pyramidal cells can be captured by a two-layer feedforward network (i.e., an NLN cascade) (level IV), at least for stationary stimuli (67). For nonstationary stimuli, however, the cascade fails (Fig. 3). This underscores the need to alternate between different levels of single-neuron models in close connection with considerations about the neural codes of larger cell populations.

Fig. 3.

Single-neuron computation. The neuron in the center (86) can be approximated by an NLN cascade (left) for stationary inputs (67), or, more generally, by a compartmental model (right). The cascade (level IV) is equivalent to a two-layer feedforward network and shows that under a firing-rate assumption, a single neuron may perform the function of an entire artificial neural net. Electrical couplings within compartmental models (levels I and II) are bidirectional. The right model therefore corresponds to a feedback network and can exhibit persistent activity, hysteresis, periodic oscillations, and even chaos. These phenomena are impossible in feedforward systems and may support complex computations in the time domain. The relevance of either model depends on the statistics of synaptic inputs (i.e., on the neural code of the investigated brain area).
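
The left branch of Fig. 3 is simple to emulate. In the sketch below (illustrative weights and nonlinearities), synaptic inputs are pooled into sigmoidal subunits whose outputs are summed and passed through a somatic nonlinearity; redistributing the same total input across subunits then changes the response, which a single global summation could not do:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Two-layer feedforward stand-in for dendritic integration: inputs are
    # grouped into subunits, each with its own steep sigmoid nonlinearity.
    def two_layer_rate(inputs, subunit_size=4):
        subunits = inputs.reshape(-1, subunit_size)
        layer1 = sigmoid(3.0 * (subunits.sum(axis=1) - 2.5))  # dendritic subunits
        return sigmoid(layer1.sum() - 1.0)                    # somatic nonlinearity

    rng = np.random.default_rng(2)
    x = rng.random(16)                # 16 synaptic activations in [0, 1]
    clustered = np.sort(x)[::-1]      # same total drive, concentrated on few subunits
    print(two_layer_rate(x), two_layer_rate(clustered))       # outputs differ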

Deriving model parameters from experimental data brings about its own collection of problems: How should we deal with the cell-to-cell variability of parameter values? The common resort, population averaging, can be misleading because the dynamical behavior of single-cell models is, in general, not a monotone function of their parameters; the mean behavior within a class of models may strongly differ from that of a model with mean parameter values (68), and nearly identical dynamical characteristics may be implemented by rather different parameter combinations (69). With increasing model complexity, the number of parameters to be estimated increases to such an extent that they must be taken from different cells or even different preparations, further lowering the model's trustworthiness. Furthermore, models are often calibrated using in vitro data, yet they are designed to capture the neural dynamics and computations of behaving animals.

Conclusions

There is no general solution for any of these challenges. Iterating the loop of model prediction, experimental test, and model adjustment is an obvious strategy for stepwise progress. One should be aware, however, that elaborate single-cell models are not sufficiently constrained by data, nor is there any guarantee that crucial components of the real biological neuron are already included. Wrong models may therefore be falsely “verified,” and long-term progress may require many iterations of the model-experiment loop until an incorrect assumption is eventually falsified.

There is good news, too: The rapid development of experimental tools to study single neurons in vivo (70) will generate data urgently needed to advance quantitative models. With powerful computers tightly integrated in modern laboratories, advanced on-line techniques such as the “dynamic clamp” (71) will be used routinely in the future. In this technique, voltage-gated conductances that cannot be selectively blocked by pharmacological agents are counterbalanced by currents that are artificially generated using the neuron's present state. This approach has clarified, for example, the influence of persistent sodium currents on spike generation (72). To mimic in vivo input patterns during in vitro experiments, synaptic conductances can be inserted with a dynamic clamp (6, 36, 73). Adaptive stimulation with real-time data analysis can also be used to optimize recording times, allowing one to extend traditional concepts such as “best stimulus” to the information-theoretic level (62).
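
The control law at the heart of the technique fits in a few lines. The following sketch is a pure software stand-in, with a simulated leaky membrane playing the role of the recorded cell; in a real rig, v would be measured and the current delivered through the amplifier on every pass of a real-time loop. All values are illustrative:

    import math

    dt, tau, v_rest, e_syn = 1e-4, 0.02, -70.0, 0.0
    v, v_peak = v_rest, v_rest
    for step in range(int(0.3 / dt)):
        t = step * dt
        # Artificial excitatory conductance waveform switched on at t = 50 ms.
        g_syn = 0.05 * math.exp(-(t - 0.05) / 0.05) if t >= 0.05 else 0.0
        i_inject = -g_syn * (v - e_syn)          # dynamic-clamp law: I = -g(t) (V - E_rev)
        v += dt / tau * (-(v - v_rest) + 20.0 * i_inject)  # membrane stand-in
        v_peak = max(v_peak, v)
    print(f"peak depolarization: {v_peak - v_rest:.1f} mV")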

These developments show that the divide between experiment and theory is disappearing. There is also a change in attitude reflected by various international initiatives (74, 75): More and more experimentalists are willing to share their raw data with modelers. Many modelers, in turn, make their computer codes available. Both movements will play a key role in solving the many open questions of neural dynamics and information processing—from single cells to the entire brain.
