Report

Reach Plans in Eye-Centered Coordinates


Science  09 Jul 1999:
Vol. 285, Issue 5425, pp. 257-260
DOI: 10.1126/science.285.5425.257

Abstract

The neural events associated with visually guided reaching begin with an image on the retina and end with impulses to the muscles. In between, a reaching plan is formed. This plan could be in the coordinates of the arm, specifying the direction and amplitude of the movement, or it could be in the coordinates of the eye because visual information is initially gathered in this reference frame. In a reach-planning area of the posterior parietal cortex, neural activity was found to be more consistent with an eye-centered than an arm-centered coding of reach targets. Coding of arm movements in an eye-centered reference frame is advantageous because obstacles that affect planning as well as errors in reaching are registered in this reference frame. Also, eye movements are planned in eye coordinates, and the use of similar coordinates for reaching may facilitate hand-eye coordination.

To reach toward an object, information about its location must first be obtained from the retinal image. Early visual cortical areas contain topographic maps of the retina, and as a result the target is originally represented in eye-centered coordinates. However, targets for reaches should ultimately be represented in limb coordinates that specify the direction and amplitude the limb must move (motor error vector) to obtain its goal. Thus, for the brain to specify an appropriate reach command, coordinate transformations must take place. Transformation of signals from eye to limb coordinates requires information about eye, head, and limb position. These signals could be combined all at once to accomplish this transformation or in serial order to form intermediate representations in head-centered coordinates (by adding eye position information) and body-centered coordinates (by adding eye and head position information) (1). At some point in this process a plan to make the movement is formed; knowing how reach plans are represented in the brain can tell us much about the mechanisms and strategies the brain uses to generate reaches.
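The serial transformation scheme described above amounts to vector addition at each stage: adding eye position yields a head-centered target, adding head position yields a body-centered target, and subtracting current hand position yields the motor error vector. A minimal sketch, with all 2-D positions invented purely for illustration:

```python
import numpy as np

# Hypothetical 2-D positions (degrees of visual angle); none of these
# values come from the experiment.
target_retinal = np.array([12.0, -3.0])   # target in eye-centered coordinates
eye_in_head    = np.array([-5.0,  2.0])   # eye position relative to the head
head_on_body   = np.array([10.0,  0.0])   # head position relative to the trunk
hand_in_body   = np.array([ 4.0, -8.0])   # current hand location, body frame

target_head = target_retinal + eye_in_head   # intermediate head-centered form
target_body = target_head + head_on_body     # intermediate body-centered form
motor_error = target_body - hand_in_body     # limb-coordinate reach vector
```

The alternative discussed in the text, combining all position signals at once, would collapse the three additions into a single step with no intermediate representations.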

The major anatomical pathway for visually guided reaching begins in the visual cortex and passes through the posterior parietal cortex (PPC) to the frontal lobe. Different regions of PPC have recently been shown to be specialized for planning different types of movements (2, 3), including areas specialized for saccadic eye movements [the lateral intraparietal area (LIP)], for reaches [the parietal reach region (PRR)], and for grasping (the anterior intraparietal area). In other words, at this level of the visual-motor pathway the pattern of neural activity reflects the outcome of a movement selection process. Because PPC is partitioned into planning regions for different actions, it has been proposed that each subdivision should code its respective movement in the coordinate frame appropriate for making the movement (4). This proposal predicts that targets for reaches should be coded in limb coordinates in PRR. Here we demonstrate that the responses of reach-specific neurons in PRR are more consistent when reach targets are described in eye coordinates than in arm coordinates, showing that, at least in PRR, early reach plans are coded in terms of visual space rather than in terms of the limb.

Single cell recordings were made in PRR (5). We tested neurons in four conditions; in two conditions different reaches were performed to targets at the same retinal location, and in the other two conditions identical reaches were made to targets at different retinal locations (6). This paradigm allowed us to observe independently the effects on PRR neurons of manipulating target location in eye and limb reference frames. A reach-specific neuron tested in these four conditions is shown in Fig. 1. The effect of varying the initial hand position is shown in Fig. 1, A and B; the cell's spatial tuning is similar in the two conditions, showing that the cell is largely insensitive to changes in the limb-centered positions of the targets. The effect of changing the direction of gaze is shown in Fig. 1, C and D. The cell's spatial tuning changes markedly between these two conditions, even though the arm-centered locations of the targets do not change. In all conditions, the cell's preferred reach end point is constant relative to the direction of gaze—down with respect to fixation. This neuron is selectively active for reaches and encodes target location in an eye-centered reference frame.

Figure 1

Behavior of a PRR neuron in the coordinate frame task. (A to D) Responses of the cell for reaches made during one of the four task conditions. Icons depict behavioral conditions at the beginning of a trial: initial hand position and fixation are represented by solid black and gray circles, respectively; other target button locations are represented by open circles. Below each icon, spike density histograms (28) are plotted at positions corresponding to the target button locations on the board (11 locations in A, B, and D; 10 locations in C). Initial hand position and fixation position are indicated by H and E, respectively. Histograms are aligned at the time of cue onset, indicated by the long tic on the time axis. The cue was illuminated for 300 ms; its duration is marked in (C). Tic marks = 100 ms.

This neuron exemplified the population of 74 neurons from three monkeys tested in this experiment. The data from all neurons are summarized by correlation analysis in Fig. 2A (7). Each point represents one neuron; a point's position along the horizontal axis represents the correlation between the cell's two tuning curves measured with targets at the same retinal location (conditions shown in Fig. 1, A and B). The position along the vertical axis represents the correlation between that neuron's tuning curves measured with targets at the same limb-centered location (conditions shown in Fig. 1, C and D). Eighty-four percent of the neurons lie below the line of equal correlation (8), showing a better correlation in an eye-centered reference frame than in a limb-centered reference frame. A second test was used in which the two tuning curves measured with the same initial hand position but with different eye positions were shifted into alignment in eye-centered coordinates (Fig. 2B). With this analysis, 81% of neurons had a correlation that was greater when the tuning curves were shifted into eye-centered alignment than when they were not shifted. Thus, the responses of most PRR neurons were better correlated for identical reach targets in eye coordinates than for identical reach targets in arm coordinates. For most neurons, spatial tuning was also more consistent in eye coordinates than in head- or body-centered coordinates; although target locations in the latter two reference frames were invariant across the four task conditions, neural responses varied with the direction of gaze.
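The per-cell comparison behind Fig. 2A can be sketched as follows, using hypothetical firing rates (one value per target button) rather than the recorded data: each neuron contributes two Pearson correlations, and a cell is counted as more consistent in eye coordinates when the eye-frame correlation exceeds the limb-frame correlation.

```python
import numpy as np

def pearson_r(curve_a, curve_b):
    """Pearson correlation between two tuning curves (firing rate per button)."""
    return float(np.corrcoef(np.asarray(curve_a, float),
                             np.asarray(curve_b, float))[0, 1])

# Hypothetical tuning curves for one cell. Conditions with targets at the
# same retinal locations (Fig. 1, A and B) give similar profiles; conditions
# with targets at the same limb-centered locations (Fig. 1, C and D) do not.
same_retinal_1 = [5.0, 12.0, 30.0, 55.0, 28.0, 10.0, 4.0]
same_retinal_2 = [6.0, 14.0, 33.0, 50.0, 25.0, 9.0, 5.0]
same_limb_1    = [5.0, 12.0, 30.0, 55.0, 28.0, 10.0, 4.0]
same_limb_2    = [40.0, 35.0, 18.0, 8.0, 6.0, 12.0, 25.0]

r_eye = pearson_r(same_retinal_1, same_retinal_2)   # horizontal axis of Fig. 2A
r_limb = pearson_r(same_limb_1, same_limb_2)        # vertical axis of Fig. 2A
classification = "eye-centered" if r_eye > r_limb else "limb-centered"
```

A cell like this one plots below the diagonal line of equal correlation in Fig. 2A.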

Figure 2

Reference frame analysis for the population of reach neurons. (A) For each neuron (○), the correlation between the two tuning curves that have a common initial hand position (Fig. 1, C and D) is plotted on the vertical axis, and the correlation between the two tuning curves that have a common eye position (Fig. 1, A and B) is plotted on the horizontal axis. Seventy-four neurons are shown. Diagonal line represents equal correlation in limb-centered and eye-centered coordinates. Solid circle represents the neuron shown in Fig. 1. (B) Vertical axis is the same as in (A); horizontal axis is the correlation for the tuning curves collected with the same initial hand position but shifted into the same eye-centered alignment (data in Fig. 1C correlated with data in Fig. 1D shifted two buttons to the left).
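The shift analysis of Fig. 2B can be sketched with a small helper that displaces one tuning curve by the gaze displacement (in button units) and correlates only the overlapping buttons. The curves below are hypothetical, constructed so that the response profile moves with the eyes, as for the cell in Fig. 1:

```python
import numpy as np

def shifted_correlation(curve_a, curve_b, shift):
    """Correlate curve_a against curve_b displaced by `shift` buttons,
    keeping only the buttons present in both curves after the shift."""
    a, b = np.asarray(curve_a, float), np.asarray(curve_b, float)
    if shift > 0:
        a, b = a[shift:], b[:-shift]
    elif shift < 0:
        a, b = a[:shift], b[-shift:]
    return float(np.corrcoef(a, b)[0, 1])

# Hypothetical curves: same initial hand position, gaze displaced by two
# buttons between conditions, so the tuning profile shifts with the eyes.
fixate_left  = [4.0, 9.0, 25.0, 60.0, 30.0, 11.0, 5.0]
fixate_right = [25.0, 60.0, 30.0, 11.0, 5.0, 4.0, 3.0]

r_unshifted = shifted_correlation(fixate_left, fixate_right, 0)
r_aligned   = shifted_correlation(fixate_left, fixate_right, 2)
```

For an eye-centered cell, `r_aligned` exceeds `r_unshifted`, which is the pattern reported for 81% of the neurons.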

An eye-centered representation of a reach plan potentially may be disrupted if the eyes move before the reach can be executed, particularly if the reach is to a remembered location in the dark. Other brain areas involved in movement planning have been shown to update their spatial representations across saccadic eye movements and head movements (9). To test whether PRR can compensate for a saccade, we trained animals to make a saccade while planning a reach [the intervening saccade (IS) task] (10). The reach target was presented outside of or on the edge of the response field, and then, after the target was turned off, a saccade was instructed that brought the reach goal into the center of the response field. Figure 3C shows a neuron tested in this task. Before the monkey makes a saccade, the neuron's response is low, indicating the target is out of the response field (Fig. 3A). After the saccade, the neuron responds at a higher rate, similar to its response when the target actually appeared in the response field (Fig. 3B). A neuron was deemed to exhibit compensation for saccades if its response after the saccade was significantly greater (Mann-Whitney test, P < 0.05) than its response in the task where the target is presented out of the response field and no saccade is made (as in Fig. 3A) (11). All 34 PRR neurons that we tested showed compensation for saccades (Fig. 3D). Thus, PRR compensates for saccades to preserve correct encoding of reach targets in an eye-centered reference frame.

Figure 3

(A to C) Behavior of one neuron tested in the intervening saccade experiment. The three spike density histograms show the response for reaches to the same target in the three tasks. The position of the response field is indicated by the gray region. (A) Condition of the CF task with gaze directed so that the target is out of the response field. (B) Condition of the CF task with the target in the response field. (C) IS task. The eye movement carries the reach goal into the neuron's response field. Below each histogram is a trace of the horizontal component of eye position during one trial. Bars above histograms, timing of cue. Histograms are aligned on time of cue presentation. (D) Population analysis. Index is [(after saccade) − (target out)]/[(after saccade) + (target out)] where after saccade is the mean firing rate in the IS task during the 500-ms epoch from 100 ms after the saccade to the go signal, and target out is the mean firing rate in the CF task condition with the target out of the response field (A) during the 500 ms before the go signal. The index value for the cell in (A) to (C) is indicated by the arrow.
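The index defined in the Fig. 3D caption is a standard normalized contrast; the formula below is taken directly from the caption, while the firing rates are hypothetical values for a strongly compensating cell:

```python
def compensation_index(rate_after_saccade, rate_target_out):
    """Index from the Fig. 3D caption:
    [(after saccade) - (target out)] / [(after saccade) + (target out)].
    +1 means the cell fires only after the saccade brings the goal into its
    response field; 0 means no modulation; -1 means it fires only when the
    target is out of the field and no saccade is made."""
    return (rate_after_saccade - rate_target_out) / (rate_after_saccade + rate_target_out)

# Hypothetical mean rates (spikes/s) over the epochs described in the caption
index = compensation_index(40.0, 8.0)
```

A population in which every cell compensates for saccades, as reported here, would show index values clustered well above zero.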

Psychophysical studies have provided evidence for a number of extrinsic coordinate frames for reach planning, including eye-, head-, and shoulder-centered coordinates (12). Presumably the studies that find eye-centered effects are probing early planning stages in areas like PRR, which codes in eye coordinates.

There is suggestive evidence that PRR may work in conjunction with other areas to specify reach plans in eye coordinates. Lacquaniti et al. (13) found some area 5 neurons with reach activity that was more closely linked to the spatial location of the goal than to the direction of limb movement, although the paradigm they used did not allow them to determine the reference frame used by these cells. Although cells with response fields that are spatially invariant when the direction of gaze changes have been found in the ventral intraparietal area (VIP) [which has been suggested to play a role in head movements (4)], this is true of only about half the cells in VIP (14); the other cells in this area code in an eye-centered frame. Even in the premotor cortex, where limb-centered (15) and other nonretinal (16) response fields are found, about half the cells are still modulated by eye position (17), although it has yet to be established whether the response fields are in eye coordinates.

Recent studies have emphasized that two largely nonoverlapping circuits, distributed through multiple brain regions, are responsible for eye movements (18) and reach movements (19). We propose that there is an initial stage in the multiarea reach circuit in which reaches are coded in eye-centered coordinates (Fig. 4). Response fields in a variety of areas in this presumed reach network, which includes PRR, area 5, and premotor cortex, are gain modulated by eye, head, and limb position signals. These gain fields can provide the mechanism necessary for the transformation (20) to later effector-centered reference frames such as limb-centered coordinates (Fig. 4). The few cells we found in PRR that were better correlated in a limb reference frame than in an eye reference frame, along with the cells with nonretinotopic and limb-centered fields in other reach areas, could reflect these later stages of movement processing. A prediction of this model, borne out by this study, is that for the transformation to operate correctly, neurons with eye-centered response fields must compensate for IS because eye position gains will necessarily change after the saccade.
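The gain-field mechanism invoked above can be illustrated with a toy unit: an eye-centered Gaussian response field whose amplitude, but not location, is scaled by eye position. All parameters below are hypothetical; this is a sketch of the general mechanism, not a fit to any recorded neuron:

```python
import numpy as np

def prr_like_response(target_eye, eye_pos, preferred=0.0, sigma=10.0, slope=0.02):
    """Toy gain-modulated unit (all parameters hypothetical).
    `target_eye`: target location in eye-centered coordinates (deg).
    The Gaussian tuning peaks at `preferred` regardless of eye position;
    a planar eye-position gain scales the response amplitude, which
    downstream units can read out to recover head- or limb-centered
    target location."""
    tuning = np.exp(-(target_eye - preferred) ** 2 / (2.0 * sigma ** 2))
    gain = max(1.0 + slope * eye_pos, 0.0)   # rectified planar gain field
    return tuning * gain

# The field peaks at the same eye-centered location for any eye position...
peak_left  = prr_like_response(0.0, eye_pos=-20.0)
peak_right = prr_like_response(0.0, eye_pos=+20.0)
# ...but its amplitude differs with gaze, carrying the information needed
# for the transformation to effector-centered coordinates.
```

This also makes concrete why compensation for intervening saccades is required: after a saccade, `eye_pos` changes, so the gains change, and the eye-centered response fields must be updated for the downstream readout to remain correct.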

Figure 4

Summary of pathways for sensory-motor control. Putative flow of information is from bottom to top.

In summary, our model proposes that initial plans to reach or make a saccade to a target are formed within distinct networks in eye-centered coordinates; these plans are updated if disrupted by IS; and finally, later stages of reach processing in head, body, and limb coordinates are achieved through gain modulations of the eye-centered representation.

There are several advantages to making reach plans in eye coordinates. First, natural scenes are cluttered with many potential reach goals as well as obstacles to reaching. If every object had to be converted to limb coordinates before formation of a planned reach, considerably more computation would be required than if the initial planning were performed in visual coordinates (21). Second, reach movements can be modified in flight by visual cues and cortical motor activity is correlated with these modifications (22). Because the hand is usually visible during reaching, it would be most parsimonious to make corrections to the reach plan in the same coordinates as on-line visual error signals. Third, the reach system is plastic, as has been demonstrated in adaptation experiments in which the visual feedback during reaching is perturbed with prisms (23). Clower et al. (24) have shown that the parietal cortex is uniquely involved in prismatic adaptation for reaches. Again the errors detected for adaptation are in eye coordinates, and this would be a most natural coordinate frame in which to recalibrate reach plans. Finally, planning reaches in eye coordinates may facilitate hand-eye coordination. Even in simple tasks, there is a complex orchestration of eye and hand movements, with the eyes and hands often moving independently to different locations (25). Nearby parietal area LIP is involved in planning eye movements and shares many similarities with PRR including eye-centered response fields, compensation for IS, and gain field modulation by eye position (Fig. 4). These two areas may use a similar encoding of space to enable fast and computationally inexpensive communication between them for simultaneous, coordinated movements of the eyes and arms. The above four considerations lead to the conclusion that the findings of this study, which at first glance appear quite surprising, are perhaps not so surprising after all.

  • * Present address: Howard Hughes Medical Institute and Department of Neurobiology, Stanford University School of Medicine, Fairchild Building, Room D209, Stanford, CA 94305, USA.

  • Present address: Washington University School of Medicine, Department of Anatomy and Neurobiology, Box 8108, 660 South Euclid Avenue, St. Louis, MO 63110, USA.

  • To whom correspondence should be addressed. E-mail: andersen@vis.caltech.edu

REFERENCES AND NOTES
