Report

Schema cells in the macaque hippocampus

Science  08 Feb 2019:
Vol. 363, Issue 6427, pp. 635-639
DOI: 10.1126/science.aav5404

Abstract concepts in the primate brain

Do primates have neurons that encode the conceptual similarity between spaces that differ by their appearance but correspond to the same mental schema? Baraduc et al. recorded from monkey hippocampal neurons while the animals explored both a familiar environment and a novel virtual environment that shared the same general structure as the familiar environment but displayed never-before-seen landmarks. About one-third of hippocampal cells showed significantly correlated firing for both familiar and novel landscapes. These correlations hinged on space or task elements, rather than on immediate visual information. The functional features of these cells are analogous to human concept cells, which represent the meaning of a specific stimulus rather than its apparent visual properties.

Science, this issue p. 635

Abstract

Concept cells in the human hippocampus encode the meaning conveyed by stimuli over their perceptual aspects. Here we investigate whether analogous cells in the macaque can form conceptual schemas of spatial environments. Each day, monkeys were presented with a familiar and a novel virtual maze, sharing a common schema but differing by surface features (landmarks). In both environments, animals searched for a hidden reward goal only defined in relation to landmarks. With learning, many neurons developed a firing map integrating goal-centered and task-related information of the novel maze that matched that for the familiar maze. Thus, these hippocampal cells abstract the spatial concepts from the superficial details of the environment and encode space into a schema-like representation.

The human hippocampus is home to concept cells that represent the meaning of a stimulus—a person or an object—rather than its immediate sensory properties (1). This invariance involves an abstraction from the percept to extract only relevant features and attribute an explicit meaning to them (2, 3). Whereas concept cells are emblematic of the human hippocampus, place cells, which fire when the animal is in a particular place, are typical of the rodent hippocampus (4). Place and concept cells share properties, such as stimulus selectivity. Concept cells are specific to one person or object, and place cells are selective to one position within an environment. Furthermore, place cells identified in one environment are silent in a different environment (5). Exceptions to this likely stem from resemblance or common elements across spaces (6–10). However, in humans and rodents, it is unknown whether hippocampal cells can represent a spatial abstraction. We tested this possibility in monkeys, in which hippocampal neurons develop high-level spatial representations (11). We hypothesized that spatial abstraction involves elementary schemas (12, 13), extracting commonalities across experiences beyond superficial details to signify interrelations among elements (14, 15). We accordingly trained macaques to explore a virtual maze with a joystick in search of an invisible reward whose location had to be triangulated with respect to visible landmarks (Fig. 1, A and B, and fig. S1). After monkeys were proficient in this familiar maze (more than 90% correct, fig. S2), they were tested in an isomorphic novel maze bearing never-before-seen landmarks (Fig. 1B), presented either before or after the familiar maze in each session. Thereupon, animals displayed flexible spatial inference and rapidly reached good performance [figs. S2 and S3 and (11)], indicating that they had constructed a schema of the task (fig. S3) (16) rather than a series of stimulus-response associations (learning set). We tested whether this process results in environment-specific memories or in a single schema for both spaces.

Fig. 1 Hippocampal neurons show correlated activity for two functionally equivalent environments.

(A) Task setup. The animal viewed the screen in stereopsis through projector-synchronized shutter goggles and virtually moved in the three-dimensional maze via a joystick. (B) Familiar and novel mazes. Each day, the monkey searched for the hidden reward (depicted as a blue drop of liquid) in a familiar maze (left, green outline) and a novel maze (right, cyan outline). Landmarks of a novel environment were present in only one session. (C) Anatomical MR images showing the location of the recording sites (green dots) in monkeys S and K. Left panels: Sagittal sections at 13 mm lateral to the midline showing the recording sites throughout the hippocampus. Right panels: Coronal sections at four antero-posterior locations with respect to the interaural line, in mm. (D) Correspondence between position and state space. The state-space graph is laid out so that a state position relative to the center aligns with the direction of the current field of view of the animal. Dashed lines indicate passive return paths. Colored arrows on the right indicate the possible rotations of the animal in the center of the maze, viewed in the physical space (top) or in the state space (bottom). (E) Number of cells with significant IC in familiar, novel, or both environments, as a function of decoding space. (F) Four schema cells showing similar activity maps in the familiar (left, green outline) and novel environments (right, cyan outline) in position (top) and state-space (bottom) coordinates. Color scale ranges from blue (minimum activity) to yellow (peak activity). For each cell, the peak firing rate is indicated. White circles represent reward location. In both environments, cell 1 fires more at the extremities of the arms, cell 2 fires more on the northwest arm, cell 3 fires more on the second turn toward the reward goal, and cell 4 has a high activity for left turns in the center. (G) Correlations between neural maps in the familiar and novel environments. (Left) About a third of neural maps were correlated between familiar and novel mazes (hatching marks cells with map correlations insensitive to rotation). (Middle and Right) Distribution of map correlation coefficients in position and state space (dark orange and dark blue). The histogram highlights significant positive correlations (light orange and light blue) and negative correlations (black). The gray outline indicates the distribution of correlations for the random surrogate datasets (scale on right).

We compared the activity of 101 cells active in both environments out of 189 hippocampal cells (Fig. 1C). We first examined neural activity (17) mapped in two ways: as a function of animal position and as a function of task-related state (Fig. 1D). The latter parses navigation behavior into all the possible action trajectories (rotation, translation) as a function of the virtual head orientation. This conjunctive variable better captures the coding of hippocampal cells than any single parameter such as position, direction, or point of gaze (11).
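
As a concrete illustration of these two map types, the following minimal Python sketch (ours, not the authors' code; variable names, bin sizes, and sampling conventions are assumptions) builds occupancy-normalized firing-rate maps from a spike train and concurrently sampled behavior, either over (x, y) position or over a discrete task-state label. Inputs are assumed to be numpy arrays, with the state label an integer index into the nodes or edges of the state-space graph.

    import numpy as np

    def position_map(spike_times, t_beh, x, y, n_bins=20, extent=2.0):
        # occupancy-normalized firing-rate map over (x, y) position
        dt = np.median(np.diff(t_beh))                  # behavioral sampling step (s)
        edges = np.linspace(-extent, extent, n_bins + 1)
        occ, _, _ = np.histogram2d(x, y, bins=[edges, edges])
        occ = occ * dt                                  # time spent per spatial bin (s)
        # assign each spike to the closest preceding behavioral sample
        idx = np.clip(np.searchsorted(t_beh, spike_times, side="right") - 1,
                      0, len(t_beh) - 1)
        spk, _, _ = np.histogram2d(x[idx], y[idx], bins=[edges, edges])
        with np.errstate(invalid="ignore", divide="ignore"):
            return spk / occ                            # rate (Hz); NaN/inf where unvisited

    def state_map(spike_times, t_beh, state, n_states):
        # occupancy-normalized firing rate over discrete task states
        dt = np.median(np.diff(t_beh))
        occ = np.bincount(state, minlength=n_states) * dt
        idx = np.clip(np.searchsorted(t_beh, spike_times, side="right") - 1,
                      0, len(t_beh) - 1)
        spk = np.bincount(state[idx], minlength=n_states)
        return spk / np.maximum(occ, 1e-9)              # rate (Hz) per task state

In this sketch, how rotations, translations, and view orientation are parsed into the state label is left open; the point is only that both map types reduce to spike counts divided by occupancy.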

Overall, 70 cells (69.3%) exhibited significant information content (IC) in at least one environment (familiar or novel) and map type (position or state space). Of these, 30 cells coded only one environment, whereas a proportion significantly higher than chance (N = 40) coded both the familiar and novel environments in at least one map type (P < 10⁻⁴, chi-square test; Fig. 1E). Thus, although many cells discriminated between the different environments, consistent with observed remapping in the rodent (5, 18), others participated in the representation of both environments. This afforded us a systematic comparison of their activity profiles across mazes.
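
One way to set up such a chance-level comparison is a chi-square test of independence between coding the familiar and coding the novel environment. The sketch below uses a hypothetical 2-by-2 table whose margins match the reported totals (40 cells coding both, 30 coding a single environment, 101 cells overall); the even split of the 30 single-environment cells is our assumption, not a reported number.

    import numpy as np
    from scipy.stats import chi2_contingency

    # rows: significant IC in the familiar maze (yes/no)
    # cols: significant IC in the novel maze (yes/no)
    # the 15/15 split of single-environment cells is assumed for illustration
    table = np.array([[40, 15],
                      [15, 31]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.2e}, "
          f"cells expected to code both by chance = {expected[0, 0]:.1f}")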

A subpopulation of cells active in both environments showed similar firing when their maps were realigned with respect to reward position. Figure 1F illustrates the individual maps of four cells in position and state spaces. Unlike cells exhibiting a discrete place field, these cells displayed a complex pattern of activity in position and state space that was similar in both environments. We term these neurons “schema cells,” because reward-centered map similarity reflects a conceptual knowledge of space structure: Reward was not signaled by specific cues but had to be triangulated from landmarks. Similarly, cell activity was not anchored to specific stimuli or events but organized by knowledge of the maze layout. Thus, in both environments, cells fired in corresponding parts of the animal’s trajectory in the task. We quantified these task-situated similarities by computing spatial correlations between the maps. Because processes linked to reward expectation and consumption could affect our analysis, we excluded spikes recorded in the period from −500 to +1500 ms around reward delivery. For neurons with significant IC in at least one of their neural maps, familiar and novel maps were significantly correlated in 32.3% (N = 20) of cells when computed in state space and in 30.7% (N = 19) of cells when computed in position space (all P < 10⁻⁴, Fig. 1G). To further investigate the stability of the schema, we recorded a subset of cells while one animal learned two successive novel environments. Position and state-space neural maps were also significantly correlated between the two different novel environments (fig. S4).
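
A minimal sketch of this comparison (ours, not the authors' code): drop spikes around reward delivery, correlate the two maps over their jointly visited bins, and build chunk-shuffled surrogate spike trains from which a null distribution of correlations can be derived, in the spirit of the surrogate procedure described in the notes. Input names and the exact shuffling scheme are assumptions.

    import numpy as np

    def exclude_reward_window(spike_times, reward_times, pre=0.5, post=1.5):
        # drop spikes from -500 ms to +1500 ms around each reward delivery
        keep = np.ones(len(spike_times), dtype=bool)
        for t in reward_times:
            keep &= ~((spike_times > t - pre) & (spike_times < t + post))
        return spike_times[keep]

    def map_correlation(map_a, map_b):
        # Pearson correlation over bins visited in both maps
        a, b = map_a.ravel(), map_b.ravel()
        ok = np.isfinite(a) & np.isfinite(b)
        return np.corrcoef(a[ok], b[ok])[0, 1]

    def chunk_shuffled_spikes(spike_times, t0, t1, chunk=5.0, rng=None):
        # surrogate spike train: permute 5-s chunks of the session while keeping
        # each spike's offset within its chunk (a simplified chunk-shifting scheme)
        rng = np.random.default_rng() if rng is None else rng
        n = int(np.ceil((t1 - t0) / chunk))
        order = rng.permutation(n)
        k = np.clip(((spike_times - t0) // chunk).astype(int), 0, n - 1)
        return np.sort(spike_times + (order[k] - k) * chunk)

Rebuilding the maps from, say, 999 such surrogates gives a null distribution against which an observed familiar-novel correlation can be ranked.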

We next examined alternative explanations. First, if cells were responsive to visual elements (paths, grass, or sky) shared between environments, then those responses, even though not informative for reward position, could cause the observed homologies. Because these elements obeyed a fivefold symmetry, correlations should be high for maps rotated by any multiple of 72°. Although Fig. 1F illustrates such an example (cell K-40226/11), observing significant correlations for all rotated versions of a neural map (even without Bonferroni correction) was actually rare (see hatching in Fig. 1G). Therefore, the familiar-novel correlations went beyond superficial geometric similarity and instead reflected the reward location relative to the other cues.
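
The rotation control can be sketched as follows (again a simplification, assuming square position maps centered on the maze): correlate the familiar map with the novel map rotated by each multiple of 72°. A cell driven only by the fivefold-symmetric scenery should correlate at every rotation, whereas a schema cell should peak only at the reward-aligned orientation.

    import numpy as np
    from scipy.ndimage import rotate

    def rotation_profile(map_fam, map_nov):
        # correlation of the familiar map with the novel map rotated by k * 72 deg
        rs = []
        for k in range(5):
            rot = rotate(np.nan_to_num(map_nov), angle=72 * k,
                         reshape=False, order=1)
            a, b = map_fam.ravel(), rot.ravel()
            ok = np.isfinite(a) & np.isfinite(b)
            rs.append(np.corrcoef(a[ok], b[ok])[0, 1])
        return rs  # rs[0]: aligned maps; rs[1:]: the four rotated controls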

Next, to further ascertain whether the correlations derived from a common spatial concept rather than from apparent visual cues, we analyzed cell activity as a function of the animal’s instantaneous point of regard (point-of-gaze maps; Fig. 2, A and B). The neurons with significant IC in their point-of-gaze neural maps did not encode both environments more often than expected by chance (P = 0.27, chi-square test, Fig. 2E). Moreover, their maps were less similar across environments: Only 15.4% of the cells displayed a significant correlation between the familiar and novel maps (P = 0.0016, Fig. 2F). Furthermore, the greater specificity of the firing patterns for each environment was not caused by a systematic difference in visual exploration behavior in the novel compared with the familiar environment: The behavioral gaze-point density maps were comparable in the two environments, both overall (Fig. 2C) and when broken down by behavioral epoch (Fig. 2D). This lower correlation of neural point-of-gaze maps is an additional indication that visual elements shared across the two environments did not drive the map correlations. More fundamentally, it is consistent with our interpretation of schema cells, because spatial concepts do not depend on view.
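
To give an idea of the gaze-density comparison, here is a minimal sketch (ours; bin size and spatial extent are arbitrary) that turns gaze samples into a normalized density map for each environment and correlates the two densities.

    import numpy as np

    def gaze_density(gx, gy, n_bins=40, extent=4.0):
        # probability of the point of gaze falling in each bin of the top view
        edges = np.linspace(-extent, extent, n_bins + 1)
        d, _, _ = np.histogram2d(gx, gy, bins=[edges, edges])
        return d / d.sum()

    def density_similarity(d_fam, d_nov):
        # similarity of visual exploration across the two environments
        return np.corrcoef(d_fam.ravel(), d_nov.ravel())[0, 1]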

Fig. 2 Neural activity maps with respect to point of gaze showed little correlation across environments, even though gazing behavior did.

(A) Geometrical projection used for the point-of-gaze maps. (Top) Definition of the animal’s point of gaze (red dot) in three-dimensional space. Brown rectangles indicate landmark positions. When directed above the ground and not on a landmark, the point of gaze was ascribed to an invisible wall at landmark distance from the maze center. Two different positions of the monkey are illustrated. (Bottom) Projected view from above used for plotting the gaze-referred neural maps (B). (B) Neural activity maps of the same four cells as in Fig. 1F, as a function of point of gaze in each environment (familiar, green outline; novel, cyan outline). Open white rectangles, landmark locations; red bar, path leading to the reward from the maze center. Other graphical conventions as in Fig. 1F. (C) Density of gaze position in familiar or novel environments. Color coding is the same as in (B) but applied to point-of-gaze density instead of firing rate. Gray rectangles, landmarks. (D) Density of gaze position at the time of landmark appearance in the field of view, separated by behavioral epoch (or edge on state-space graph) and side of landmark appearance for familiar and novel environments. The left column illustrates the situation when landmarks appear on the left. Color coding is the same as in (C). White rectangle, appearing landmark. (E) Number of cells with significant IC in familiar or novel environment. (F) Distribution of the correlation coefficients. The significant positive correlations are shown in light brown. Other graphical conventions are the same as in Fig. 1G.
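
As a geometric illustration of the projection described in (A), the sketch below is our reconstruction under assumed conventions: eye position and gaze direction are given in maze coordinates, the ground plane sits at z = 0, and a wall_radius parameter stands in for the landmark distance from the maze center.

    import numpy as np

    def point_of_gaze(eye_pos, gaze_dir, wall_radius=3.0):
        eye_pos = np.asarray(eye_pos, dtype=float)
        gaze_dir = np.asarray(gaze_dir, dtype=float)
        gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
        if gaze_dir[2] < 0:                      # gaze directed below the horizon
            t = -eye_pos[2] / gaze_dir[2]        # intersection with the ground plane
            hit = eye_pos + t * gaze_dir
            if np.hypot(hit[0], hit[1]) <= wall_radius:
                return hit[:2]                   # point of gaze on the ground
        # otherwise ascribe the gaze to an invisible wall at landmark distance:
        # solve |(eye + t * dir)_xy| = wall_radius for t > 0
        a = gaze_dir[0]**2 + gaze_dir[1]**2
        if a < 1e-12:                            # looking straight up: no wall hit
            return np.array([np.nan, np.nan])
        b = 2.0 * (eye_pos[0] * gaze_dir[0] + eye_pos[1] * gaze_dir[1])
        c = eye_pos[0]**2 + eye_pos[1]**2 - wall_radius**2
        t = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
        hit = eye_pos + t * gaze_dir
        return hit[:2]                           # (x, y) used for the top-view map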

We also pursued the possibility that correlations were artifacts of overlooked elements of the virtual world or of the experimental setup. If so, correlations should be strong from the start of the session and should not evolve with learning. Alternatively, if neural map similarity reveals a common space concept, learning should be accompanied by an increase in map correlation. We binned trials chronologically across the novel session, starting from the moment performance rose above chance (fig. S2). The structure of these neural maps tended to converge across trials toward the familiar map (Fig. 3A, four example cells in position or state spaces). Over the population of neurons encoding both environments in either position or state space, the correlation between familiar and novel maps significantly increased across trials (left panel of Fig. 3B, and Fig. 3C). A similar increase was not observed in cells that coded both environments in point of gaze, as expected if these cells represented either some elements common to the two mazes (which, by design, bore no spatial information) or some maze-specific feature. Cells that significantly encoded only one environment also displayed no such trend (right panel of Fig. 3B), ruling out a nonspecific change. This convergence of representations was not due to an increase of spatial information (Fig. 3D), nor was it mirrored in a significant increase of the success rate (Fig. 3D).
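
A minimal sketch of this learning analysis (ours): given the familiar-session map and a series of novel-maze maps built from successive blocks of trials after the performance criterion, track how the familiar-novel correlation evolves.

    import numpy as np

    def similarity_across_learning(map_fam, novel_block_maps):
        # novel_block_maps: maps from successive trial blocks of the novel session,
        # starting when performance rises above chance
        rs = []
        for m in novel_block_maps:
            a, b = map_fam.ravel(), m.ravel()
            ok = np.isfinite(a) & np.isfinite(b)
            rs.append(np.corrcoef(a[ok], b[ok])[0, 1])
        return rs  # for schema cells, this profile should rise across blocks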

Fig. 3 Correlation between familiar and novel mazes increased as a function of learning.

(A) Activity maps of four example cells, computed at three different stages of learning (cyan outline) and for the whole familiar session (green outline). N, number of successful trials used for each map; r, familiar-novel correlation coefficient for each learning stage. (B) Average cross-correlation for cells with a significant IC in familiar and novel environments (left) and for cells with a significant IC in only one environment (right), as a function of trial after reaching performance (perf.) criterion. Error bars indicate confidence intervals. *P < 0.05; **P < 0.01; ***P < 0.001. (C) Distribution of correlation coefficients for cells with significant IC in both environments, early and late in learning. The difference is shaded. Colors identify decoding space, as in (B). (D) IC as a function of relative trial number in the novel environment, for cells with significant IC in both familiar and novel environments. Colors identify decoding space, as in (B). Error bars indicate confidence intervals. Gray dashed line indicates success rate (scale on the right). bit/sp., bit per spike.

Finally, we explored whether the map correlations reflected (i) only a reward-centric self-positioning or (ii) an isomorphism between more elaborate, task-informed representations. In contrast to the position maps, the state-space maps differentiate activity at the central choice point of the maze according to view orientation, rotation direction, and task context (an initial center rotation versus a later one). The higher correlation of state-space maps (fig. S5; Wilcoxon test, P = 0.024) suggests that a task-situated representation of space tended to be shared across environments. Thus, this schematic representation probably encompassed dimensions other than physical space and assimilated task-related internal variables, such as present and future action.
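
The paired comparison between decoding spaces can be sketched as follows (ours; the inputs are per-cell correlation values), using a Wilcoxon signed-rank test across cells.

    import numpy as np
    from scipy.stats import wilcoxon

    def compare_decoding_spaces(r_state, r_position):
        # r_state, r_position: familiar-novel correlation per cell,
        # computed in state space and in position space (same cells, same order)
        stat, p = wilcoxon(np.asarray(r_state), np.asarray(r_position))
        return stat, p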

In sum, our data show that whereas the majority of primate hippocampal neurons displayed environment-specific activity [“remapping,” (19)], about a third of hippocampal neurons displayed components that generalized across environments. This pattern is apparent when activity is analyzed in position or state space, but less so when analyzed according to point of gaze. Thus, activity was not dominated by what the animal fixated but rather by an awareness of its situation with respect to the goal, that is, a schema.

The role of the hippocampus in schema formation and updating has been shown in the rat by lesion and inactivation studies (14, 20, 21) during learning of spatially arranged odor-location pairs. Now, our results indicate that generic spatial or task schemas extracted across repeated experiences are encoded by primate hippocampal cells. The convergence of schema cell activity through learning is consistent with the role of schemas in organizing past experiences (12, 13) and in promoting knowledge acquisition (14, 15). That animals needed months of training to master the familiar environment but thereafter needed only a dozen trials to reach a similar proficiency in novel mazes attests to the advantage offered by such conceptual generalization. This generalization goes beyond the contextual modulation of rat place cells depending on trajectories and/or goal (22–24). When present in rodents, spatial invariance across spaces stemmed from sensory similarity (8, 9, 18). Although rat hippocampal cells may encode schemas in the sense of relational representations (10, 25), their coding for a schema of spatial structure across environments (that is, spatial abstraction) remains to be demonstrated.

In humans, conceptual coding in the hippocampus has been shown for social or object categories in single cells (3) and for categorical or relational domains via functional magnetic resonance imaging (26–28). Capture of higher-order structure across experiences may involve a recurrence of information through an entorhinal-hippocampal loop (29) or a hippocampal-prefrontal interaction (30). In the spatial domain, the human hippocampus codes morphological homologies between layouts (31) but also differentiates environments containing common elements through pattern separation (32). Our results shed light on the neural mechanisms underlying these observations: The monkey hippocampus simultaneously represents both environment-specific items (episodic memory) via cells that express remapping, and schemas that apply to these different episodes via schema cells. Such complementarity offers representational advantages (data compression) while facilitating adaptive behavior. Indeed, schema cells do not signal simple geometry per se but encode an abstract, behaviorally relevant structure of space. In this respect, they are the closest documented example of a nonhuman primate analog to human concept cells (3). Spatial abstraction thus seems to exist in a species lacking both symbolic language and any proven ability for symbolic representation of space. The activity of these schema cells likely participates in translating the recognition of a space context into a possible array of actions, thereby encoding the functional importance of the present environment. This link in the memory chain connecting a spatiotemporal sensorimotor context to behavioral decisions would not require symbolic associations.

Supplementary Materials

www.sciencemag.org/content/363/6427/635/suppl/DC1

Materials and Methods

Figs. S1 to S5

References (33–35)

References and Notes

  17. For each neuron, 999 surrogate datasets were created by randomly shifting spiking chunks of 5 s; these datasets allowed us to normalize variables and derive nonparametric statistics.
  19. Results were obtained in virtual reality; direct comparison with remapping in the rodent must therefore be made with care.
Acknowledgments: The authors thank A. Planté for help with data collection and animal training; O. Zitvogel and S. Pinède for task and set-up programming; and L. Parsons for editing the English. Funding: LABEX-CORTEX of the University of Lyon, www.labex-cortex.com (grant number ANR-11-LABEX-OO42); Marie Curie Reintegration Grant, https://erc.europa.eu/ (grant number MIRG-CT-21939); Centre National de la Recherche Scientifique, www.cnrs.fr (grants PEPII and PEPS to P.B.); and Agence Nationale pour la Recherche, www.agence-nationale-recherche.fr (ANR-08-BLAN-0088-Brain-GPS to J.-R.D. and S.W. and ANR-17-0015-Navi-GPS to S.W.). Author contributions: Conceptualization: S.W. and J.-R.D. Data curation: S.W. Formal analysis: P.B. Funding acquisition: S.W. Investigation: S.W. and P.B. Methodology: P.B. and S.W. Project administration: S.W. Resources: S.W. and J.-R.D. Software: P.B. Supervision: P.B. and S.W. Validation: P.B. and S.W. Visualization: P.B. and S.W. Writing, original draft: P.B. and S.W. Writing, review and editing: P.B., J.-R.D., and S.W. Competing interests: The authors declare no competing interests. Data and materials availability: Data are available at http://crcns.org/data-sets/hc/hc-21.