Report

Reproducibility and Variability in Neural Spike Trains

Science  21 Mar 1997:
Vol. 275, Issue 5307, pp. 1805-1808
DOI: 10.1126/science.275.5307.1805

Abstract

To provide information about dynamic sensory stimuli, the pattern of action potentials in spiking neurons must be variable. To ensure reliability, these variations must be related, reproducibly, to the stimulus. For H1, a motion-sensitive neuron in the fly's visual system, constant-velocity motion produces irregular spike firing patterns, and spike counts have a variance comparable to the mean, much as for cells in the mammalian cortex. But more natural, time-dependent input signals yield patterns of spikes that are much more reproducible, both in spike timing and in counting precision. Variability and reproducibility are quantified with ideas from information theory, and measured spike sequences in H1 carry more than twice the amount of information they would if they followed the variance-mean relation seen with constant inputs. Thus, models that may accurately account for the neural response to static stimuli can significantly underestimate the reliability of signal transfer under more natural conditions.

The nervous system represents signals by sequences of identical action potentials or spikes (1), which typically occur in an irregular temporal pattern (2). The details of this pattern may just be noise that should be averaged out to reveal meaningful signals (3). Alternatively, if the precise arrival time of each spike is significant, then temporal variability provides a large capacity for carrying information (4, 5). This issue has been debated for decades (6) and is receiving renewed attention (5, 7). In fact, different views of the neural code may be appropriate to different contexts—in an environment where signals vary slowly, the brain may neither need nor use the full information capacity of its neurons, but as sensory signals become more dynamic the demands on coding efficiency increase (5, 8). Here we show that in H1, a motion-sensitive neuron in the fly visual system (9), variability of response to constant stimuli coexists with extreme reproducibility for more natural dynamic stimuli, and that this reproducibility has a direct impact on the information content of the spike train.

Figure 1 shows results of an experiment in which a fly (Calliphora vicina) views a pattern of random bars that moves across the visual field at constant velocity (10). After a transient, the H1 neuron settles to a steady state, spiking at a constant rate that depends on velocity. Such results are well known for H1 (9) and have parallels in many experiments on sensory neurons. Spike sequences appear irregular, and interspike intervals are distributed almost exponentially (Fig. 1D), so that the coefficient of variation (CV) is near unity (11). If we count the spikes in a fixed window of time during the steady response, then by repeating the stimulus many times we can measure both the mean count and the variance across trials. Figure 1E shows that, counting spikes at different stimulus strengths and in time windows of different sizes, the variance grows almost in proportion to the mean, both for H1 and for cells in the mammalian visual cortex (12). There is also a tendency for excess variance in large time windows (13).
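The counting analysis behind Fig. 1E can be sketched in Python as follows (our illustration; the data format and names are assumptions, not the authors' code): spike counts are tallied in a fixed window across repeated trials, and the mean and variance are compared for several window sizes.

```python
# Minimal sketch of the spike-count variance vs. mean analysis (Fig. 1E).
# Assumed input: a list of 1-D arrays of spike times (ms), one array per trial.
import numpy as np

def count_statistics(spike_times_per_trial, t_start, window_sizes_ms):
    """Return (mean, variance) of spike counts across trials for each window size.

    t_start         : start of the counting window (ms), within the steady response
    window_sizes_ms : iterable of window lengths, e.g. (3, 10, 30, 100, 300, 1000)
    """
    stats = []
    for T in window_sizes_ms:
        counts = np.array([
            np.sum((spikes >= t_start) & (spikes < t_start + T))
            for spikes in spike_times_per_trial
        ])
        stats.append((counts.mean(), counts.var()))
    return stats

# For a Poisson process the (mean, variance) points would fall on the identity line.
```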

Fig. 1.

Spike statistics for constant stimuli. (A) A random bar pattern (10) moves across the visual field at constant speed (0.022°/s) and in the H1 neuron's preferred direction. (B) Fifty response traces to the stimulus in (A), each lasting 1 s, and taken 20 s apart. The occurrence of each spike is shown as a dot. The traces are taken from a segment of the experiment where transient responses have decayed. (C) The peristimulus time histogram (PSTH; bin width 3 ms, 96 presentations), which describes the rate at which spikes are generated in response to the stimulus shown in (A). The fluctuations are due to finite sampling. (D) Interval histogram describing the probability density, P(τ), of finding an interspike interval of length τ. (E) Scatter plot of spike count variance as a function of mean count. Open circles are data for the fly's H1 neuron, stimulated with a wide-field pattern moving at several constant velocities (0°, 0.007°, 0.014°, 0.022°, 0.029°, and 0.058°/s). For each velocity, spikes are counted in windows of different sizes (3, 10, 30, 100, 300, and 1000 ms). The variance of these counts is plotted against the mean for each combination of velocity and window size. Points obtained at the same velocity are connected by lines. The data plotted here are for average rates below 80 spikes per second. For large counting windows, the variance grows faster than the mean. The filled circles [redrawn from Tolhurst et al. (12)] are data from simple cells in cat visual cortex analyzed in the same way (but with either 250- or 500-ms counting windows). Comparison of the data shows that for constant stimuli, the neurons from fly and cat are very similar in their counting statistics. Furthermore, they both approximately follow the Poisson behavior, variance = mean, given by the dashed line.

In Fig. 2 we show the spike trains generated when the fly views the same pattern of random bars, but now moving along a dynamic, and presumably more naturalistic (14), trajectory. This stimulus modulates the spike rate rapidly over a wide range (Fig. 2C). Integrating the rate over a fixed time window gives the mean spike count (5), and we also measure the variance of the spike count in that window. If we do this for all possible locations of the window (with 1-ms resolution), we obtain, by analogy with Fig. 1E, the relation between variance and mean (Fig. 2, E and F). In 100-ms windows, mean counts up to 15 occur with a variance close to unity. In 10-ms windows, the variance drops to nearly zero for windows that contain one or two spikes on average. Spikes are discrete events, so there must be variation from trial to trial if, for example, the average count is 0.5. The variance is minimized if half the trials have one spike and the other half have none, in which case σ² = 0.25. Generally, if the mean count is an integer plus a fraction f, the minimum variance is σ_min² = f(1 − f). The plot of minimum variance versus mean is scalloped, repeating with period one. Figure 2E shows that the data points cluster near this curve of minimum variance (15), far from the relation variance ≈ mean found with static stimuli.
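The scalloped lower bound can be checked with a few lines of Python (our illustration, not part of the original analysis): for a mean count whose fractional part is f, the variance is smallest when every trial contains either the integer part or one spike more, which gives f(1 − f).

```python
# Small check of the minimum-variance argument for discrete spike counts.
import numpy as np

def min_count_variance(mean_count):
    """Scalloped lower bound on spike-count variance for a given mean count."""
    f = mean_count - np.floor(mean_count)   # fractional part of the mean
    return f * (1.0 - f)

# Example: mean count 0.5 -> half the trials have 1 spike, half have 0,
# so the variance cannot be smaller than 0.5 * 0.5 = 0.25.
assert np.isclose(min_count_variance(0.5), 0.25)
```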

Fig. 2.

Spike statistics for dynamic stimuli. (A) The fly views the same spatial pattern as in Fig. 1A, but now moving with a time-dependent velocity, part of which is shown. The motion approximates a random walk with diffusion constant D ≈ 14 degrees²/s. For illustration, the waveform shown is low-pass filtered. In the experiment, a 10-s waveform is presented 900 times, every 20 s. During the second half of this 20-s period the fly sees the same pattern, but now for each trial we draw a new—independent—velocity waveform from the same distribution. (B) A set of 50 response traces to the repeated stimulus waveform shown in (A). (C) Averaged rate (PSTH) for the same segment. The rate is strongly modulated, but its time average is very close to that in Fig. 1C. (D) Interval histogram for the nonrepeating part of the experiment. It is clearly nonexponential, with CV = 1.94, and very different from the interval distributions in Fig. 1D. (E and F) Scatter plots of variance versus mean count. Here, in contrast to Fig. 1E, each figure shows the mean and the variance for only one size of counting window—10 ms in (E), 100 ms in (F). Each point is a variance-mean combination for counts across all 900 trials in a fixed time window relative to the onset of the repeated stimulus. The first window starts 100 ms after onset of the repeated waveform, spanning 100 to 110 ms in (E) or 100 to 200 ms in (F). Successive windows overlap as they are stepped in 1-ms increments [for example, 101 to 111 ms, 102 to 112 ms, … and so on for (E)], and altogether 9000 time windows are analyzed. For comparison, the variance for the Poisson distribution is given by the dashed lines.

Spike counts in response to dynamic stimuli have smaller variances than those in response to static stimuli, but interspike intervals seem more variable (see Figs. 1D and 2D). Interspike interval distributions, however, confound variations across time with variations across trials. To characterize the reproducibility across trials, we measure the distribution of interspike intervals that bracket a fixed time in the stimulus; typically, these “stimulus-locked” interval distributions have a CV ∼ 0.1. This indicates that, although the responses to dynamic stimuli are variable across time, they are reproducible from trial to trial.
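One way to carry out this across-trial measurement is sketched below in Python (our illustration; the function name is hypothetical): for each trial we locate the interspike interval that brackets a chosen stimulus time and then compute the coefficient of variation of these intervals across trials.

```python
# Sketch of the "stimulus-locked" interval analysis.
# Assumed input: a list of sorted 1-D arrays of spike times (ms), one per repeated trial.
import numpy as np

def stimulus_locked_cv(spike_times_per_trial, t0):
    """CV of the interspike intervals that contain stimulus time t0, across trials."""
    intervals = []
    for spikes in spike_times_per_trial:
        i = np.searchsorted(spikes, t0)
        if 0 < i < len(spikes):                 # t0 must fall between two spikes
            intervals.append(spikes[i] - spikes[i - 1])
    intervals = np.array(intervals)
    return intervals.std() / intervals.mean()   # ~0.1 for the dynamic-stimulus data
```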

The spike patterns seen, for example, in Fig. 2B, are complex: Short interspike intervals come in bursts, a specific event in the stimulus may fail to elicit a spike on some trials, and isolated spikes may occur with low probability. It might be interesting to understand how each feature arises, but here it is more important to ask whether all these different features can be quantified in the same units, summarizing the variability and reproducibility of the spike train. Shannon proved that the only measure of variability consistent with certain intuitive requirements is the entropy (16). We need two different entropies, each of which can be estimated directly from experiment (17): the total entropy of the spike train, which quantifies the variations across time and sets the capacity of the spike train to carry information, and the noise entropy, which measures the irreproducibility from trial to trial. Both quantities depend on the size of the time windows T and on the time resolution Δτ with which we observe the spike train.

To observe the full range of temporal variability, we deliver a stimulus chosen from the same probability distribution as in the experiments of Fig. 2, but continuing for 9000 s without repeating. In time windows of size T we digitize the spike train with a precision Δτ, so that possible spike trains are labeled by K-letter “words,” with K = T/Δτ (Fig. 3); a complete analysis requires that we explore a range of T and Δτ (17). Searching through the entire experiment we estimate the probability P(W) of each possible word W and then compute the entropy of this distribution,

S_total = −Σ_W P(W) log₂ P(W)    (1)
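As a concrete illustration of Eq. 1, here is a minimal Python sketch (ours; the function and variable names are hypothetical) that builds K-letter words from a binned spike train and estimates the total entropy from their observed frequencies.

```python
# Sketch of the total-entropy estimate: tabulate K-letter words from the long,
# nonrepeated spike train (binned at dt_ms) and compute the entropy of their frequencies.
from collections import Counter
import numpy as np

def spike_words(binned, K):
    """Yield K-letter words (tuples of per-bin spike counts) from a binned spike train."""
    for start in range(len(binned) - K + 1):
        yield tuple(binned[start:start + K])

def entropy_bits(counts):
    """Entropy in bits of a word-frequency table, S = -sum_W P(W) log2 P(W)."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def total_entropy(binned, T_ms=30, dt_ms=3):
    K = T_ms // dt_ms                       # e.g. 30 ms / 3 ms = 10-letter words
    return entropy_bits(Counter(spike_words(binned, K)))
```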

To assess the reproducibility of the responses, we return to the experiment in which a single dynamic stimulus waveform is presented many times and examine the probability of occurrence P(W | t) for words W at a particular time t relative to the stimulus. These distributions (one for each t) also have entropies, and the average of these entropies over time is the noise entropy,

S_noise = 〈 −Σ_W P(W | t) log₂ P(W | t) 〉_t    (2)

where 〈 ··· 〉_t denotes the average over all possible times t, with resolution Δτ (18). The average information I that the spike train provides about the stimulus is precisely the difference between these two entropies, I = S_total − S_noise. This characterization of variability, reproducibility, and information transmission is independent of any assumptions about which features of the stimulus are being encoded or about which features of the spike train are most important in the code (17, 19).
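A companion sketch (again ours, with hypothetical names, and repeating the same entropy helper so it stands alone) estimates the noise entropy of Eq. 2 from the repeated-stimulus trials; the transmitted information is then the difference of the two entropies.

```python
# Sketch of the noise-entropy estimate and the information I = S_total - S_noise.
from collections import Counter
import numpy as np

def entropy_bits(counts):
    """Entropy in bits of a word-frequency table (same helper as in the sketch above)."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def noise_entropy(binned_trials, T_ms=30, dt_ms=3):
    """S_noise of Eq. 2: average over start times t of the entropy of P(W | t).

    binned_trials : 2-D array (trials x bins) of spike counts at dt_ms resolution,
                    aligned to the onset of the repeated stimulus.
    """
    K = T_ms // dt_ms
    n_bins = binned_trials.shape[1]
    entropies = []
    for start in range(n_bins - K + 1):
        words = Counter(tuple(trial[start:start + K]) for trial in binned_trials)
        entropies.append(entropy_bits(words))
    return float(np.mean(entropies))

# The transmitted information is then total_entropy(...) - noise_entropy(...),
# with S_total estimated from the long nonrepeated spike train as sketched above.
```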

Fig. 3.

Word frequency distributions and information transfer. (A) Two segments from 100 response traces of H1, starting at about 600 and 1800 ms, respectively, after onset of the repeated stimulus of Fig. 2. (B) Construction of local word frequencies. We start with a set of spike trains in response to a repeated random velocity sequence. Beginning at 600 ms these spike trains are divided into 10 contiguous 3-ms bins, as indicated by the array of vertical lines. For each trial, the spikes in each of the 10 bins are counted, and this set of 10 numbers forms a word, W. Here almost all words are binary strings, as two spikes occur only very rarely within 3 ms. This procedure gives us as many words as there are trials (here 900). From this set we compute the probability for each word, and the resulting distribution is depicted in the histogram, P(W | t = 600 ms), where the words are ordered according to their probability. (C) As in (B), but now starting at 1800 ms. (D) Distribution, P(W), of all words throughout the experiment. Words are defined in the same way as in (B) and (C). However, here they are taken from the long (900 times 10 s) nonrepeated part of the stimulus sequence in order to obtain a large number of independent stimulus samples. Thus, stepping in 3-ms bins, ∼3 × 10⁶ words are sampled, and the distribution shown here describes their ranked frequencies. In these windows, by far the most likely word is 0000000000, and roughly 1500 different words are observed.

With windows of T = 30 ms—comparable to the behavioral reaction times (14)—and a time resolution of Δτ = 3 ms, we find S_total = 5.05 ± 0.01 bits and S_noise = 2.62 ± 0.02 bits. Thus, the average information about the stimulus conveyed in 30 ms is 2.43 ± 0.03 bits, and this is increased slightly if we sample with Δτ = 1.5 or even 0.7 ms (20). Hence, down to millisecond time resolution, half of the total variability of the spike train is used to provide information about the stimulus (21).

Information transmission is clearly enhanced by rapid modulations of the spike rate (Fig. 2C). Are these rapid rate variations the only important feature of the response? Consider a model neuron that has the correct dynamics of the firing rate, but follows the variance-mean relation observed in response to static stimuli. If the variance-mean relation is given by the dashed line in Fig. 1E, then neural firing is a modulated Poisson process (5, 22). We simulate spike trains that result from a Poisson process with the rate modulations observed in Fig. 2C and then repeat the analysis of Fig. 3. The total entropy (S_total = 5.17 bits) is almost the same as that of the real spike trains, whereas the noise entropy (S_noise = 4.22 bits) is substantially larger: Real spike trains are almost as variable as possible given the mean spike rate, but they are much more reproducible than Poisson trains. H1 thus transmits more than twice as much information (2.43 versus 0.95 bits in a 30-ms window) about these stimuli as would be the case if the neuron exhibited the noisiness found with constant inputs (23).
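The comparison can be sketched in Python as follows (our own illustration, not the authors' code): surrogate spike trains are drawn from a Poisson process whose rate follows the measured PSTH, and these surrogates are then passed through the same word-entropy analysis as the real data.

```python
# Sketch of the rate-modulated Poisson benchmark.
import numpy as np

def simulate_modulated_poisson(rate_hz, dt_s, n_trials, rng=None):
    """Binned surrogate spike trains: counts in each bin are Poisson with mean rate*dt.

    rate_hz : 1-D array, instantaneous firing rate (spikes/s) per time bin (the PSTH)
    dt_s    : bin width in seconds
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = np.asarray(rate_hz) * dt_s                       # expected count per bin
    return rng.poisson(lam, size=(n_trials, lam.size))     # trials x bins
```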

Several mechanisms may contribute to the reproducibility of responses. First, to achieve millisecond precision in the spiking of H1, the fly's visual system must resolve events in the motion stimulus on this time scale; more detailed analysis suggests that this is close to the limit set by photoreceptor noise. Second, neural computation and encoding must be adaptive in order to follow rapid modulations of the stimulus over a wide dynamic range (24). Finally, refractoriness regularizes spike trains at high firing rates (11), enforcing a more deterministic relation between stimulus and response (25).

In summary, during dynamic stimulation H1 makes efficient use of its capacity to transmit information. This efficiency is achieved by establishing precise temporal relations between individual action potentials and events in the sensory stimulus. These observations on the encoding of naturalistic stimuli cannot be understood by extrapolation from quasistatic experiments, nor do such experiments provide any hint of the timing and counting accuracy that the brain can achieve. Just as H1 resembles cortical neurons in its noisy response to static stimuli, many systems may resemble H1 in their reproducible response to dynamic stimuli (26).

REFERENCES AND NOTES

