Research Article

Discovery of Brainwide Neural-Behavioral Maps via Multiscale Unsupervised Structure Learning


Science  25 Apr 2014:
Vol. 344, Issue 6182, pp. 386-392
DOI: 10.1126/science.1250298

Optogenetic Insights

Mapping functional neural circuits for many behaviors has been almost impossible, so Vogelstein et al. (p. 386, published online 27 March; see the Perspective by O'Leary and Marder) developed a broadly applicable optogenetic method for neuron-behavior mapping and used it to phenotype larval Drosophila, yielding a reference atlas. As optogenetic experiments become routine in certain fields of neuroscience research, creating even more specialized tools is imperative (see the Perspective by Hayashi). By engineering channelrhodopsin, Wietek et al. (p. 409, published online 27 March) and Berndt et al. (p. 420) created two different light-gated anion channels to block action potential generation during synaptic stimulation or depolarizing current injections. These new tools not only improve understanding of channelrhodopsins but also provide a way to silence cells.

Abstract

A single nervous system can generate many distinct motor patterns. Identifying which neurons and circuits control which behaviors has been a laborious piecemeal process, usually for one observer-defined behavior at a time. We present a fundamentally different approach to neuron-behavior mapping. We optogenetically activated 1054 identified neuron lines in Drosophila larvae and tracked the behavioral responses from 37,780 animals. Application of multiscale unsupervised structure learning methods to the behavioral data enabled us to identify 29 discrete, statistically distinguishable, observer-unbiased behavioral phenotypes. Mapping the neural lines to the behavior(s) they evoke provides a behavioral reference atlas for neuron subsets covering a large fraction of larval neurons. This atlas is a starting point for connectivity- and activity-mapping studies to further investigate the mechanisms by which neurons mediate diverse behaviors.

Nervous systems can generate a wide range of motor outputs, depending on their incoming sensory inputs and internal state. A comprehensive understanding of how behavioral diversity and selection is achieved requires the identification of neural circuits that mediate many distinct motor patterns in a given nervous system. Mapping a functional circuit for one behavior in a given organism is difficult enough, but doing so for many behaviors has been almost impossible. The first step in mapping a circuit that mediates a behavior is to identify neurons whose activity is causally related to the behavior. Such a list of neurons provides a starting point for identifying the connectivity patterns between the relevant neurons. Thus, to map circuits underlying many behaviors, one would need a comprehensive neuron-behavior atlas of the nervous system that would list all neurons causally related with each behavior.

Generating such neuron-behavior maps has been difficult for several reasons. First, the experimental tools to selectively manipulate small sets of neurons while simultaneously observing natural behavior were lacking. Fortunately, recent advances in genetic toolkits allow reasonably selective manipulation of neuron types in genetic model organisms, such as Drosophila (1–3). Advances in behavior tracking methods allow high-resolution monitoring of the effect of such manipulations (4, 5). Neural manipulation screens can therefore be coupled with high-resolution monitoring of motor outputs to causally link complex behaviors to correspondingly complex neural circuits.

Second, establishing the causal links between neural manipulations and the resulting time-varying behavioral responses is a daunting computational statistics challenge. Existing supervised machine-learning methods can detect only predetermined behaviors (6); moreover, they are limited by the speed with which humans can annotate training data sets. An alternative approach uses unsupervised clustering of the multidimensional time series. However, the high-content and high-throughput nature of the time-varying behavior data presents both computational and statistical challenges.

We developed a methodology for data-driven neuron-behavior mapping and applied it to larval Drosophila. The nervous system of larval Drosophila consists of a well-developed brain and nerve cord containing only about 10,000 neurons, rendering it simple enough to permit a relatively comprehensive characterization. Moreover, there exist more than 1000 genetic GAL4 lines in Drosophila larvae with recently characterized sparse neuronal expression patterns that together cover most of the 10,000 neurons in the larval nervous system (http://flweb.janelia.org/cgi-bin/flew.cgi) (3).

Optogenetic Neural Activation Screen

We designed an optogenetic neural activation screen (see supplementary materials) to obtain a neuron line–behavior atlas of the larval nervous system that would contain causal links between neuron lines and the motor patterns they control. We used 1049 distinct GAL4 lines to selectively target channelrhodopsin-2 (ChR2) (7) to sparse distinct subsets of neurons, with each line activating 2 to ~15 neurons. Because these lines essentially span the entire set of larval neurons, some lines activate sensory and motor neurons as well as many neurons involved in decisions and action selection. We included four positive control lines in the screen that drive expression in nociceptive, mechanosensory, and proprioceptive neurons, previously determined to reliably mediate distinct behaviors (8–10), as well as one negative control line in which no neurons were optogenetically activated (2, 10), for a total of 1054 lines and 37,780 animals tested. In each experiment, we exposed dishes of larvae to 470-nm light stimuli (one exposure of 30 s followed by four exposures of 5 s, with a 30-s interval after the long exposure and a 10-s interval between the short exposures) to optogenetically activate ChR2-expressing neurons; we captured video before, during, and after stimulation (Fig. 1A). The Multi-Worm Tracker (MWT) software (4) tracked time-varying, two-dimensional closed contours of larvae and sketched eight time-varying features that collectively characterize larval shape and motion (Fig. 1B). Streaming and sketching reduced the data complexity by a factor of more than 200,000, enabling a compressive yet expressive representation of the data. These reduced data served as the input into the multiscale unsupervised structure learning methodology to reveal data-driven behavior types (Fig. 1C). Each behavior type was then linked to the subset of lines that mediate it (Fig. 1D).
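To make streaming and sketching concrete, the following minimal Python illustration reduces each per-frame larval contour to a few scalar shape features and stacks them into a time series. This is a hypothetical sketch only; the actual eight MWT features differ and are defined in the supplementary materials.

import numpy as np

def sketch_frame(contour):
    # contour: (n_points, 2) closed outline of one larva in one video frame.
    x, y = contour[:, 0], contour[:, 1]
    # Enclosed area via the shoelace formula.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter: summed lengths of the closing segment chain.
    segs = np.diff(contour, axis=0, append=contour[:1])
    perimeter = np.linalg.norm(segs, axis=1).sum()
    # Elongation: ratio of principal axes of the contour point cloud.
    eigvals = np.linalg.eigvalsh(np.cov(contour.T))
    elongation = eigvals[-1] / max(eigvals[0], 1e-12)
    return np.array([area, perimeter, elongation])

def sketch_track(contours):
    # Stack per-frame features into an (n_frames, n_features) sketch.
    return np.stack([sketch_frame(c) for c in contours])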

Fig. 1 Experimental design and methodology for obtaining neuron line–behavior maps.

(A) Optogenetic activation screen of 1054 lines while digitally recording high-dimensional larval responses. (B) Streaming extracts the contours of each larva from each video frame; sketching extracts eight time-varying features from the contours that characterize the shape and motion of each animal. (C) Machine-driven behavioral phenotyping learns phenotype categories (called behaviotypes) from the sketches via multiscale unsupervised structure learning. (D) Manifold testing discovers which neuron lines evoke sets of behaviors that are different from negative controls, which facilitates associating each such line with some number of behaviotypes.

Discovery of Behavior Types via Multiscale Unsupervised Structure Learning

As a first step, we sought to discover a large, inclusive, and nonpredetermined set of statistically distinguishable behavioral responses performed by the 37,780 animals during the first (30-s) optogenetic activation period. Recently developed methods for multiscale unsupervised structure learning (11–13) can be thought of as generalizations of manifold learning techniques, in that they can learn structures more general than manifolds, such as unions of manifolds. We adopted iterative denoising tree (IDT) methodology (11, 14), which offers demonstrated utility across several domains (15, 16).

The input to IDT is the collection of all 37,780 larval sketches, irrespective of which line generated each sketch (Fig. 2A). IDT consists of five key steps collectively resulting in a hierarchical clustering tree. In step one, IDT computes a dissimilarity between all pairs of sketches (Fig. 2B). The choice of dissimilarity can have a drastic computational and inferential impact; choosing one that captures signal variability and can be computed efficiently is key to the success of IDT. This is important because the 37,780 observations yield more than 1.4 billion dissimilarities. We used a smoothed distance between the sketches of the data (17) as the dissimilarity function (see supplementary materials).
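As one hedged illustration of step one (not necessarily the smoothed distance of ref. 17), the sketch below smooths each feature channel with a moving average and then takes a Frobenius distance. It assumes all sketches share a common length (a fixed stimulation window); with n = 37,780 the full n × n matrix holds ~1.4 billion entries, so real code must batch or parallelize the quadratic loop.

import numpy as np
from scipy.ndimage import uniform_filter1d

def smoothed_distance(a, b, window=9):
    # a, b: (n_frames, 8) sketches; smooth each feature over time, then
    # compare with a Frobenius (entrywise Euclidean) norm.
    a_s = uniform_filter1d(a, size=window, axis=0)
    b_s = uniform_filter1d(b, size=window, axis=0)
    return float(np.linalg.norm(a_s - b_s))

def dissimilarity_matrix(sketches):
    # All-pairs dissimilarities; O(n^2) in the number of animals.
    n = len(sketches)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = smoothed_distance(sketches[i], sketches[j])
    return D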

Fig. 2 IDT detected 29 distinguishable behaviotypes.

(A) The input to IDT is the collection of all 37,780 larval sketches. Structure is not obvious. Blue shading represents the period of photostimulation in all figures. (B) Dissimilarity matrix between all pairs of animals. Only ~1.4 million of the ~1.4 billion pairwise dissimilarities are shown. (C) A total of 29 distinguishable behavior clusters (behaviotypes) were identified using IDT to learn a data-adaptive clustering of the high-content responses. (D) Mean and standard error of the responses of each of the 29 behaviotypes identified in (C), using the same color code as in (C). (E) Mean and standard error of the responses of the eight behaviotype subfamilies. (F) The same tree as in (C) but with post hoc human labels assigned to the automatically detected behaviotype families and subfamilies.

In step two, IDT transforms the interpoint dissimilarity matrix into a set of n relatively high-dimensional Euclidean vectors, via an approach such as multidimensional scaling (18). Once each data point is in Euclidean space, an extensive toolkit from classical statistical machine learning is applicable (19).
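A minimal sketch of this step, with scikit-learn's MDS standing in for the embedding of ref. 18 (the target dimension d is a tuning choice, not a value from the paper):

import numpy as np
from sklearn.manifold import MDS

def embed(D, d=20, seed=0):
    # D: (n, n) symmetric dissimilarity matrix from step one.
    mds = MDS(n_components=d, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(D)  # (n, d) Euclidean coordinates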

Step three consists of iterating, once per tree depth, the following substeps: (i) select a subset of the dimensions from each cluster, (ii) cluster all the nodes of the partition tree obtained thus far, and (iii) check for convergence at each resulting cluster. For each cluster at each scale, the number of dimensions to select and the number of subclusters to generate are decided in a data-driven adaptive fashion. The final result is a hierarchical tree characterizing families and subfamilies of behavioral responses (Fig. 2C).
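The recursion can be rendered schematically as below. This is a hypothetical simplification: variance-based dimension selection and BIC-selected Gaussian mixtures stand in for IDT's actual data-adaptive choices (11, 14), and a node is declared converged when it is small or does not split.

import numpy as np
from sklearn.mixture import GaussianMixture

def idt(X, min_size=50, max_k=4, depth=0, max_depth=5):
    # X: (n, d) embedded points at this node of the tree.
    node = {"n": len(X), "depth": depth, "children": []}
    if len(X) < min_size or depth >= max_depth:
        return node  # leaf: a candidate behaviotype
    # (i) Data-driven dimension selection: keep the highest-variance dims.
    dims = np.argsort(X.var(axis=0))[::-1][: max(2, X.shape[1] // 2)]
    Xs = X[:, dims]
    # (ii) Choose the number of subclusters by BIC over k = 1..max_k.
    fits = [GaussianMixture(k, random_state=0).fit(Xs) for k in range(1, max_k + 1)]
    best = min(fits, key=lambda g: g.bic(Xs))
    if best.n_components == 1:
        return node  # (iii) converged: the node does not split
    labels = best.predict(Xs)
    for k in range(best.n_components):
        node["children"].append(
            idt(X[labels == k], min_size, max_k, depth + 1, max_depth))
    return node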

To visualize the phenotype categories learned by IDT (or nodes/clusters of the tree), called behaviotypes, Fig. 2D shows, for each cluster, the time-varying means and standard errors for each of the eight sketched time-varying features. Each behaviotype is well separated from at least some other cluster for at least some of the time for at least some features. This provides an intuitive validation of the uniqueness of the behaviotypes, thereby demonstrating the efficacy of IDT.

IDT Identified Both Previously Described and Novel Behavioral Sequences

Inspection of the time-varying means of the responses of each behaviotype cluster (Fig. 2D) and of the videos of animals at the center of each cluster (movies S1 to S58) revealed that IDT clustered many of the larval behavioral responses into categories similar to those a human expert would have identified. We could label the nodes of the learned tree post hoc (Fig. 2, E and F). For example, the first division revealed by IDT differentiates between slow and fast families. The fast family is subdivided into turn-avoid and escape-crawl subfamilies. IDT further distinguished between the right turn–left turn–avoid and the symmetric left turn–right turn–avoid sequence (fig. S1) and between two types of escape crawl that are preceded by more or less hunching and wiggling (fig. S2). The slow family is subdivided into still or backup and turner; the still or backup is further subdivided into still (no movement) and backup (lots of backward crawling) (fig. S3), and the turner is subdivided into turn-turn-turn (continuous turning) and turn-slow-crawl (fig. S4).

After level three of the behaviotype tree, subjective visual inspection of the videos failed to detect differences between related behaviotypes. Nonetheless, they do have distinct properties for some of the features for some of the time (Fig. 2D). Behaviotypes 17 and 18 represent animals with tracking errors in which the MWT streaming and sketching software inverted the front and back of the animals (fig. S6 and movies S33 to S36). IDT assigned animals with this type of tracking error to two separate behavioral clusters, conveniently isolating such errors.

Many of the optogenetically evoked, automatically detected behaviotype subfamilies are similar to previously described larval responses to various natural stimuli (10, 20, 21). IDT also detected behavioral categories not previously documented. This is unsurprising because only a relatively small portion of the larval stimulus space has been explored in previous studies.

Biased Probabilistic Relationship Between Neuron Activation and Behaviotypes

We used the prior results that nociceptive, mechanosensory, and proprioceptive sensory neurons reliably mediate distinct behaviors to assess the validity of this behavior discovery approach. The optogenetic screen included lines that targeted ChR2 expression to the nociceptive [ppk and R38A10 (10, 22)], mechanosensory (iav) (9), and proprioceptive (R11F05) neurons. Activating distinct sensory neurons mostly biased the behaviotype probability toward distinct subfamilies: (i) Nociceptive stimulation tended to evoke escape behaviotypes; (ii) mechanosensory stimulation tended to evoke turn-avoid behaviotypes; and (iii) proprioceptive stimulation tended to evoke slow behaviotypes (Fig. 3A). In contrast, activating the same nociceptive neurons with two distinct GAL4 lines evoked similar behaviotype probability distributions. The response profiles of the negative controls (pBDPU-ChR2) (2, 10) that did not express ChR2 likely represented larval reactions to blue light alone and appeared different from those of animals with optogenetically activated nociceptive, mechanosensory, or proprioceptive neurons (Fig. 3A). Finally, analysis of the 1049 previously unexplored neuronal test lines (Fig. 3A; top rows show 30 examples) demonstrated that distinct lines bias the probability toward distinct behaviotypes. Optogenetic stimulation of different animals of the same line (that is, activating the “same” neurons in different animals) did not always evoke the same behavior; rather, it biased the probability toward a few possible behaviotypes.

Fig. 3 Learning neuron line–behaviotype maps via manifold testing.

(A) Behaviotype probability vectors for a number of lines. Each row shows the percentage of animals performing a behaviotype for optogenetic activation of a particular line: 30 randomly sampled test lines (top), negative controls (pBDPU-ChR2) with no optogenetically activated neurons (middle), and positive controls—nociceptive (ppk-ChR2, noci 1; R38A10-ChR2, noci 2); mechanosensory (iav-ChR2); and proprioceptive (R11F05-ChR2) neuron lines with known effects on behavior (bottom). (B) Line dissimilarity matrix showing pairwise P values for all 1054 lines computed via the manifold test. The first entry is the negative control. Remaining entries are sorted according to P value for the comparison with the negative control, from lowest to highest; 455 lines (4 positive controls + 451 test lines) were significantly different from the negative control (hit lines), and 598 lines were not (nonhit lines). (C) The P values between all pairs of known somatosensory neuron lines and negative controls are “correct”: Those that should be significantly different according to previous studies are, and those that should not be are not. This panel is a subset (the top left 5 × 5 submatrix) of (B). (D) Behaviotype probability distributions for all 451 significantly distinct test neuron lines, sorted according to their maximum-probability behaviotype. (E) The clusters of lines that bias the probability toward the same behaviotypes are reproducible: in all four repeated trials, the observed adjusted Rand index (ARI) exceeds the mode of its empirical null distribution (black curve; all P values < 0.01).

We also applied IDT jointly to the responses from the four additional identical repeats of the 5-s optogenetic activation stimulus of the same organisms and analyzed the behaviotype distributions of the same individuals on different trials (fig. S5A). We found that even repeated activation of the same neurons in the same individual did not always evoke the same behaviotype; rather, it biased the probability toward a few possible behaviotypes (fig. S5A). However, the responses of individual animals were significantly more similar to each other than to the responses of distinct individuals (fig. S5B).

Screening Neuron Lines via Manifold Testing

Each neuron line is characterized by the empirical probability of larvae performing each behaviotype and is encoded by a 29-dimensional vector with nonnegative entries that sum to unity. This encoding enables direct testing of each pair of lines by choosing an appropriate test statistic and applying a standard test. However, the choice of test statistic for these multivariate probability vectors is not obvious; moreover, existing tests do not sufficiently address the multiple dependent hypothesis–testing problem. We therefore devised the following manifold test (see supplementary materials).
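In code, this encoding is simply a normalized histogram over behaviotype assignments (names illustrative):

import numpy as np

def line_probability_vector(behaviotype_labels, n_types=29):
    # behaviotype_labels: 0-based behaviotype index for each animal of one line.
    counts = np.bincount(behaviotype_labels, minlength=n_types)
    return counts / counts.sum()  # nonnegative entries summing to unity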

The main idea of the manifold test is to jointly and nonlinearly embed each experiment into a lower-dimensional representation. The test first computes the Hellinger distance between all 1054² pairs of experiments and then uses multidimensional scaling (MDS) (18) to obtain a low-dimensional Euclidean embedding. In this space, one can easily compute the difference between any pair of trials via an “out-of-sample” extension to this embedding methodology (23). Moreover, via bootstrap, the test computes the significance of those differences (24). Then it adjusts the P values to account for multiple correlated hypotheses (25) and other batch effects (26). Using this manifold test, we identified a large fraction of lines (4 positive controls and 451 test lines) as being significantly different (hit lines) from the negative control line (Fig. 3B; see supplementary data set 1 for the list of all such lines).
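The core pieces can be sketched as follows, under stated simplifications: Hellinger distances between probability vectors, an MDS embedding, and a permutation stand-in for the significance step. The out-of-sample extension (23), bootstrap calibration (24), and corrections for multiple correlated hypotheses (25) and batch effects (26) are not reproduced here.

import numpy as np
from sklearn.manifold import MDS

def hellinger(p, q):
    # Hellinger distance between two probability vectors.
    return np.sqrt(0.5) * np.linalg.norm(np.sqrt(p) - np.sqrt(q))

def embed_lines(P, d=5, seed=0):
    # P: (n_lines, 29) behaviotype probability vectors.
    n = len(P)
    D = np.array([[hellinger(P[i], P[j]) for j in range(n)] for i in range(n)])
    return MDS(n_components=d, dissimilarity="precomputed",
               random_state=seed).fit_transform(D)

def permutation_p_value(animals_a, animals_b, n_perm=1000, seed=0):
    # animals_a, animals_b: (n_animals, 29) one-hot behaviotype indicators
    # for two lines; the mean of the rows is a line's probability vector.
    rng = np.random.default_rng(seed)
    pooled = np.vstack([animals_a, animals_b])
    n_a = len(animals_a)
    observed = hellinger(animals_a.mean(axis=0), animals_b.mean(axis=0))
    null = np.empty(n_perm)
    for t in range(n_perm):
        idx = rng.permutation(len(pooled))
        null[t] = hellinger(pooled[idx[:n_a]].mean(axis=0),
                            pooled[idx[n_a:]].mean(axis=0))
    return (1 + np.sum(null >= observed)) / (n_perm + 1)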

The positive control lines with different neurons activated (nociceptive, mechanosensory, and proprioceptive) were all significantly distinct from the negative controls and from each other (Fig. 3C). The two positive control lines with the same nociceptive neurons activated (ppk and R38A10) were not significantly different from one another (Fig. 3C). The known somatosensory neuron controls lend additional credence to this manifold test.

A Neuron Line–Behaviotype Atlas

Given the list of lines significantly distinct from the negative controls, we sought to identify which lines bias the probability toward which behaviotypes, thereby generating a neuron line–behaviotype atlas. Many lines biased the probability of behavior primarily to one behaviotype, with the remaining probability distributed to a few related behaviotypes (Fig. 3D). We confirmed that the line clusterings are reliable by applying IDT independently to the four additional repeats of the 5-s optogenetic activation stimulus of the same organisms and then comparing how similar the line clusters from each of these additional trials were to the first trial. The identified line clusters were indeed reproducible (P < 0.001 via an adjusted Rand index permutation test, Fig. 3E).
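The reproducibility check can be sketched as a standard ARI permutation test; the exact null construction used in the paper is given in the supplementary materials.

import numpy as np
from sklearn.metrics import adjusted_rand_score

def ari_permutation_test(labels_trial1, labels_repeat, n_perm=10000, seed=0):
    # Compare two clusterings of the same lines; shuffle one labeling to
    # build the null distribution of the adjusted Rand index.
    rng = np.random.default_rng(seed)
    observed = adjusted_rand_score(labels_trial1, labels_repeat)
    null = np.array([
        adjusted_rand_score(labels_trial1, rng.permutation(labels_repeat))
        for _ in range(n_perm)])
    p = (1 + np.sum(null >= observed)) / (n_perm + 1)
    return observed, p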

For most behaviotypes, activation of at least one line biased the probability toward that behaviotype significantly more than the negative controls (see supplementary data set 1 for significant line–behaviotype probability distribution numbers and P values). Collectively, these significance results constitute a reference atlas that associates each neural line with a set of behaviors it mediates (and vice versa, associating each behaviotype with a set of neural lines that mediates it). Images of neuronal expression patterns of all of these lines are available from http://flweb.janelia.org/cgi-bin/flew.cgi.

Figure 4 shows examples of behaviotype probability vectors (Fig. 4A) and neuronal expression patterns of a few significant lines that bias the probability toward turn-avoid (Fig. 4B), backup (Fig. 4C), turn-turn (Fig. 4D), or escape subfamilies (Fig. 4E). We analyzed the projections of individual neurons from some of these lines at higher resolution using a single-cell flip-out method (27) and named the identified neurons. A number of the significant lines drove expression in only single pairs of larval neurons, yet they still biased the probability toward specific behaviotypes. For example, activation of the pair of PVL019 neurons in the basal posterior region of the larval brain (with the line R15E06) or of the pair of SeIN161 neurons in the subesophageal ganglion (with the line R15A04) evoked turn-turn-turn behaviotypes in a large fraction of animals. This indicates that single pairs of cells can exert drastic control over behavior, consistent with findings in other systems (28, 29).

Fig. 4 Examples of significant neuron lines and candidate neurons involved in distinct behaviotypes.

(A) Behaviotype probabilities of negative controls and a few example lines with sparse expression patterns that were significantly different from controls. (B to E) Z-projections of confocal stacks of larval nervous systems illustrating neuronal expression (gold) of selected lines. Neuropil is stained for reference with antibody to N-cadherin (blue). High-resolution images of candidate neurons from these lines are shown beneath the lower-magnification images of the entire nervous system. These lines biased the probability toward (B) turn-avoid, (C) backup, (D) turn-turn, and (E) escape behaviotype subfamilies. In some cases, two or more lines that drive in the same neuron biased the probability toward the same behaviotype. R15E06 and R44G12 both drive in PVL019 (red arrows) and biased the probability toward turn-turn behaviotypes 25, 26, 28, and 29. Ppk, R38A10, and R27H06 drive in the same nociceptor neurons and biased the probability toward escape behaviotype 13. (F) R82E12 projections (green) overlap tightly with nociceptive axon terminals (red) in a single 0.5-μm confocal section, likely forming synaptic contacts. These two lines both biased the probability toward the same escape behaviotype 13 and likely belong to the same anatomical and functional neural circuit.

In some cases, we found that lines that contribute to the same behaviotypes contain neurons with anatomically overlapping projections that may constitute presynaptic and postsynaptic partners. For example, the motor neurons in R31A11 and the interneurons in R12A09 both strongly evoked turn-turn-turn behaviotype 28, and their projections occupy the same motor region of the nerve cord and likely overlap. Similarly, R27H06 and R82E12 (Fig. 4E) biased the probability toward escape behaviotype 13 and drive expression in the nociceptive neurons, and in a class of ascending projection neurons (A08n), respectively. Double labeling of the nociceptive and the A08n neurons confirmed that their arbors do indeed tightly overlap, likely forming synaptic contacts (Fig. 4F).

Discussion

An important step toward understanding the structure and function of neural circuits is to identify comprehensive lists of neurons whose activation is causally related to a comprehensive set of motor patterns. This requires statistical approaches for identifying structure in high-dimensional behavior data.

Methods for systematic, automated, and unsupervised clustering are rarely applied to high-dimensional behavioral data. A recent study used unsupervised learning to detect behavioral motifs defined as sequences of four “eigenworm” positions in mutant and wild-type freely moving Caenorhabditis elegans in the absence of any stimulation (30). Here, we developed methods for clustering entire behavioral response sequences during a stimulation period and made use of discovered behavioral clusters to detect significantly distinct experiments from a large-scale screen, providing statistical validation that the learned clusters are meaningful. We applied these methods to discover neurons causally related to behaviors, providing a proof of principle that it is possible to classify neurons in terms of their activation of motor outputs in a fully automated way.

The collection of neuron line–behaviotype relationships collectively constitutes an atlas. Starting with a list of lines whose activation is sufficient to evoke each behaviotype, the relevant individual neurons from each line can readily be identified. Whereas some lines drive expression in a single pair of neurons, most drive in the range of two to five candidate neuron types. In cases where lines drive in more than one neuron, intersectional strategies can be used to target individual neurons and test the effect of their activation on behavior (2).

This reference atlas provides a valuable starting point for understanding how distinct behaviors are selected and controlled. Large-scale connectomics (31–33) and functional brain imaging methods (34, 35) will soon provide similarly comprehensive views of the structure of neural circuits and of the activity patterns within those circuits. However, a connectome by itself does not carry information about which neurons mediate which behaviors. Similarly, a brain-activity map alone shows the flow of information through the network, but does not reveal causal relationships between neurons and behavior. Together, the neuron-behavior map, the neuron-activity map, and the connectome complement one another, laying the groundwork for a brainwide understanding of the principles by which brains generate behavior.

The statistical methods presented here are generally applicable to discovery of scientifically meaningful structure from big data—a pressing problem in the information age.

Supplementary Materials

www.sciencemag.org/content/344/6182/386/suppl/DC1

Materials and Methods

Figs. S1 to S6

Movies S1 to S58

Supplementary Data Sets 1 and 2

References (36–40)

References and Notes

Acknowledgments: We thank G. M. Rubin, B. D. Pfeiffer, A. Nern, and B. Condron for fly stocks; B. D. Mensh for exceptionally helpful comments on the manuscript; A. Cardona, J. H. Simpson, and K. Branson for helpful discussions; C. Sullivan and A. Mondal for help with editing; H. Li and the Fly Light Project Team at Janelia HHMI for images of neuronal lines; Janelia Fly Core for setting up the fly crosses for the activation screen; and Janelia Scientific Computing for help with data processing and storage, especially E. Trautman, R. Svirskas, and D. Olbris. Supported by the Larval Olympiad Project and Janelia HHMI, the XDATA program of the Defense Advanced Research Projects Agency administered through Air Force Research Laboratory contract FA8750-12-2-0303, and a National Security Science and Engineering Faculty Fellowship. All raw data, data derivatives, and code are freely available from http://openconnecto.me/behaviotypes.