Research Article

Direct Cortical Control of 3D Neuroprosthetic Devices


Science  07 Jun 2002:
Vol. 296, Issue 5574, pp. 1829-1832
DOI: 10.1126/science.1070291


Three-dimensional (3D) movement of neuroprosthetic devices can be controlled by the activity of cortical neurons when appropriate algorithms are used to decode intended movement in real time. Previous studies assumed that neurons maintain fixed tuning properties, and the studies used subjects who were unaware of the movements predicted by their recorded units. In this study, subjects had real-time visual feedback of their brain-controlled trajectories. Cell tuning properties changed when used for brain-controlled movements. By using control algorithms that track these changes, subjects made long sequences of 3D movements using far fewer cortical units than expected. Daily practice improved movement accuracy and the directional tuning of these units.

Ever since cortical neurons were shown to modulate their activity before movement, researchers have anticipated using these signals to control various prosthetic devices (1, 2). Recent advances in chronic recording electrodes and signal-processing technology now open the possibility of using these cortical signals efficiently in real time (3, 4).

However, many neurons may be needed to predict intended movement accurately enough to make this technology practical. Estimates range from 150 to 600 cells or more being necessary (4, 5), based on open-loop experiments that recreate three-dimensional (3D) arm trajectories from cortical data offline (6). Here, we compare this approach to a closed-loop paradigm in which subjects have visual feedback of the brain-controlled movement. We then incorporate a movement prediction algorithm that tracks learning-induced changes in neural activity patterns.

Rhesus macaques made real and virtual arm movements in a computer-generated, 3D virtual environment by moving a cursor from a central-start position to one of eight targets located radially at the corners of an imaginary cube. The monkeys could not see their actual arm movements, but rather saw two spheres (the stationary “target” and a mobile “cursor”) with motion controlled either by the subject's hand position (“hand-control”) or by recorded neural activity (“brain-control”) (see supplementary material).

We examined the effect of visual feedback on movements derived from cortical signals by comparing “open-loop” trajectories, created offline from cortical signals recorded during hand-controlled cursor movements, with “closed-loop” trajectories made by the cursor under real-time brain control. In the closed-loop case, subjects saw the cursor movements created from their cortical signals in real time. In the open-loop case, the trajectories were created offline, after the experiment, from the cortical activity recorded during the movement blocks where the cursor was under hand control (7). Therefore, the subject had no knowledge of these offline brain-predicted trajectories. In both the open- and closed-loop cases, the same cortical decoding algorithm was used to generate trajectories. This decoding algorithm assumed that the cells' tuning functions remained constant under both conditions.
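A decoder of this kind can be illustrated as a population-vector sketch: each unit is assumed to fire at baseline + modulation × cos(angle between the movement and the unit's preferred direction), and the decoded direction is the sum of preferred directions weighted by each unit's rate deviation from baseline. The unit count, tuning values, and function names below are invented for illustration; this is not the paper's actual implementation.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length (leave the zero vector unchanged)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

def firing_rate(unit, direction):
    """Cosine tuning: rate varies with the cosine of the angle between the
    movement direction and the unit's preferred direction."""
    dot = sum(p * d for p, d in zip(unit["pref"], direction))
    return unit["baseline"] + unit["mod"] * dot

def decode_direction(units, rates):
    """Population vector: each unit votes along its preferred direction,
    weighted by its rate deviation from baseline."""
    vote = [0.0, 0.0, 0.0]
    for unit, rate in zip(units, rates):
        w = (rate - unit["baseline"]) / unit["mod"]
        for i in range(3):
            vote[i] += w * unit["pref"][i]
    return normalize(vote)

# Toy population: preferred directions at the 8 corners of a cube, mirroring
# the paper's 8 radial targets (baseline and modulation values are invented).
corners = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
units = [{"pref": normalize(list(c)), "baseline": 20.0, "mod": 8.0}
         for c in corners]

intended = normalize([1.0, -1.0, 0.5])
rates = [firing_rate(u, intended) for u in units]
decoded = decode_direction(units, rates)
```

With this symmetric toy population the decode is exact; with few, unevenly tuned real units and the assumption of constant tuning, prediction errors like those in the open-loop trajectories arise.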

This experiment was conducted with monkeys “L” and “M” for 32 and 40 days, respectively. In both subjects, about 18 cells were used to create open- and closed-loop trajectories. As expected, with so few cells, the open-loop trajectories were not very accurate. Although these trajectories went toward the correct targets more often than they would have by chance, they usually had at least one of the X, Y, or Z components pointing in the wrong direction.

Closed-loop trajectories ended in the target more often than did open-loop trajectories (Table 1). Both animals improved their closed-loop target hit rate over the course of the experiment, which suggests that the subjects learned to modulate their brain signals more effectively with visual feedback (8).

Table 1

Mean ± standard deviation of daily statistics from the open- versus closed-loop experiment. Percent time in the correct octant was calculated per movement as the percentage of time that the trajectory's X, Y, and Z components had the same signs as the target [based on a coordinate system with (0, 0, 0) at the center start position and each target located at an equal distance in the ±X, ±Y, and ±Z directions]. Differences between the open- and closed-loop values were significant in all six categories (P < 0.0001).
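The octant metric reduces to a per-sample sign comparison. A minimal sketch (treating a zero component as a mismatch, a convention the caption leaves open; names are illustrative):

```python
def pct_time_correct_octant(trajectory, target):
    """Percentage of trajectory samples whose X, Y, and Z components all have
    the same signs as the corresponding target components."""
    matches = sum(
        1 for point in trajectory
        if all((c > 0) == (t > 0) and c != 0 for c, t in zip(point, target))
    )
    return 100.0 * matches / len(trajectory)

# Hypothetical target in the (+X, +Y, +Z) octant and four trajectory samples.
target = (1.0, 1.0, 1.0)
samples = [(0.2, 0.1, 0.3), (0.4, -0.1, 0.5),
           (-0.3, -0.2, -0.1), (0.6, 0.7, 0.8)]
pct = pct_time_correct_octant(samples, target)  # two of four samples match
```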


Many of the cortical units we recorded were stable from day to day. Some were stable for more than 2 years. Other units showed significant changes in their waveforms and movement properties between days (9). The brain-control algorithm was adjusted daily to make use of the current properties of the recorded units. Therefore, subjects had to learn a slightly different brain-to-cursor-movement relation each day. We looked for trends within days that would indicate learning of each new relation. Paired t tests showed that subjects initially improved their target hit rate by about 7% from the first to the third block of eight closed-loop movements each day (P < 0.002).

Subjects had 10 to 15 s to move the cursor to each target—enough time to use visual feedback to make online error corrections in the closed-loop case. We tested a subject's ability to make more ballistic brain-controlled movements by continuing the experiment in monkey M for an additional 20 days with an increased brain-controlled cursor gain and the movement time constrained to 800 ms. As in the slow-movement case, the closed-loop trajectories still hit the targets more often than did the open-loop trajectories (42 ± 5% versus 12 ± 5% of targets hit; P < 0.0001). Again, there was significant improvement with daily practice (0.9%/day; P < 0.009) as well as an initial improvement of about 7% within each day (first to third block; P < 0.05) (10). Despite the shorter movement time, visual feedback still allowed the subject to learn from consistent errors in the brain-controlled trajectories.

In these experiments, the movement-prediction algorithms were based on fixed tuning properties obtained from neural activity recorded each day during a baseline set of hand-controlled cursor movements. This type of calibration cannot be carried out in movement-impaired patients. We have developed a “coadaptive” movement prediction algorithm that does not require physical limb movements or any a priori knowledge of cell tuning properties. By iteratively refining estimates of cell tuning properties as the subject attempted 3D brain-controlled cursor movements, we were able to track learning-induced changes in cell tuning properties.
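A minimal sketch of such a coadaptive loop, assuming noiseless cosine tuning and invented parameter names (the actual algorithm is described in the supplementary material): each iteration re-estimates a unit's preferred direction from the rates observed while the subject aims at known targets, then blends the new estimate into the running one.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length (leave the zero vector unchanged)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

# The 8 cube-corner target directions used during training.
TARGETS = [normalize([x, y, z])
           for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def estimate_pref(rates, baseline):
    """Least-squares estimate of a unit's preferred direction from the rates
    observed on the 8 known targets. For these symmetric targets the normal
    equations reduce to a scaled sum (X'X = (8/3) I)."""
    v = [0.0, 0.0, 0.0]
    for target, rate in zip(TARGETS, rates):
        for i in range(3):
            v[i] += (rate - baseline) * target[i] * 3.0 / 8.0
    return normalize(v)

def coadapt(initial_pref, true_pref, baseline=20.0, mod=8.0,
            blend=0.5, iters=10):
    """Iteratively refine the preferred-direction estimate: simulate rates
    from the (unknown) true tuning, re-fit, and blend into the estimate."""
    est = list(initial_pref)
    for _ in range(iters):
        rates = [baseline + mod * sum(p * t for p, t in zip(true_pref, tgt))
                 for tgt in TARGETS]
        new = estimate_pref(rates, baseline)
        est = normalize([blend * e + (1 - blend) * n
                         for e, n in zip(est, new)])
    return est

true_pref = normalize([0.3, -0.8, 0.5])
start = normalize([1.0, 0.0, 0.0])   # deliberately wrong initial guess
learned = coadapt(start, true_pref)
```

In the real task no "true" tuning is given; the regression targets are the known intended movement directions, so the loop needs no baseline arm movements and no a priori tuning knowledge.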

We tested this coadaptive method in two healthy macaques by restraining both arms during a brain-control task after first recording each day's baseline hand-controlled movements and calculating each cell's tuning properties. At the end of each day's experiment, tuning functions were also calculated directly from the cortical activity collected during the brain-controlled movements. Figure 1A shows a unit whose directional tuning differed significantly between the brain-controlled movements and the hand-controlled movements made earlier that day. Both well-isolated individual cells and inseparable multi-cell groups showed substantial changes in their preferred directions between the two tasks. On average, the magnitude of these changes increased over the course of the experiment (Fig. 1B), and the direction of these changes varied from cell to cell (Fig. 1C). By the last 2 weeks of the experiment, these individual shifts in preferred direction became consistent from day to day (Fig. 1D).

Figure 1

Changes in cortical activity between hand-control and brain-control tasks in subject M. (A) Cell with a 107° change in tuning direction between the hand-control (HC) and brain-control (BC) tasks (the unit waveform is shown in black). Each dot is the mean firing rate during one movement. HC rates are in the right column and BC rates are in the left column of each square. The eight squares correspond to the eight target directions (center four = distal; outer four = proximal). (B) Daily mean angles (thick lines) between hand- and brain-controlled preferred directions for all cells significantly tuned during both tasks (black = contralateral and gray = ipsilateral units to the arm moved during the hand-control task). The thin diagonal lines are linear fits with slopes significant at P < 0.006 (contra) and P < 0.0001 (ipsi). (C) Lines connecting hand-controlled preferred directions with brain-controlled preferred directions (circle ends) projected onto a unit sphere (day 28, only cells significantly tuned in both tasks; black = contra.; dotted = ipsi.). (D) Change in the X, Y, and Z components of the preferred direction unit vectors between the hand- and brain-controlled tasks plotted day-against-day for eight random pairs of days (days 27 or later, only units that were significantly tuned in both tasks on both days; 35 ± 3 units per pair of days) (see supplementary material).

Across days, the directional tuning of most units improved in the brain-control task versus the hand-control task (Fig. 2, A and B) (11). This increase in tuning quality was due, in part, to an improved fit of the units' firing rates to a cosine tuning equation under brain control (12). Although the control algorithm was designed to accommodate the most common deviations from cosine tuning (i.e., larger increases in rate with movements in the preferred direction than decreases in rate with movements opposite the preferred direction, as can be seen in Fig. 2C), the units still showed changes in their average tuning properties that were closer to a true linear function of the cosine of the angle between movement and preferred direction (Fig. 2D). These changes may have provided more uniform control and stability over the workspace (13). The daily improvement in cosine tuning was mirrored by a steady increase in the accuracy of the brain-controlled movements (Fig. 2E).
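The tuning-quality measure can be sketched as the R² of a simple linear regression of firing rate against cos Θ (the component of movement along the preferred direction). This is an illustrative reconstruction with invented rate values, not the authors' analysis code:

```python
def r_squared(xs, ys):
    """Coefficient of determination for a least-squares line ys ~ a + b*xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy) if sxx and syy else 0.0

# cos(angle to preferred direction) for a set of hypothetical movements
cos_theta = [-1.0, -0.5, 0.0, 0.5, 1.0]

# A unit that is a true linear function of cos(theta) fits perfectly ...
linear_rates = [20.0 + 8.0 * c for c in cos_theta]

# ... while a "rectified" unit (large rate increases toward the preferred
# direction, small decreases away from it) fits a single line less well.
rectified_rates = [20.0 + (8.0 * c if c > 0 else 2.0 * c) for c in cos_theta]

r2_linear = r_squared(cos_theta, linear_rates)
r2_rectified = r_squared(cos_theta, rectified_rates)
```

The shift reported in the text is from the rectified pattern toward the truly linear one under brain control, which raises R² in a single regression across the whole range of cos Θ.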

Figure 2

Changes in cell tuning quality and performance. (A) Daily average R² from regressing each cell's mean firing rate per movement against target direction. The dotted line shows hand-control R² values. The black line shows brain-control R² values. The closed circles indicate days when brain- and hand-control R² values were significantly different (paired t test, P < 0.05). (B) Difference in R² values (brain-control minus hand-control). The thin diagonal line is the linear fit (P < 0.0001). (C and D) Average normalized firing rates to each target plotted as a function of the component of the movement in each cell's preferred direction for day 39. The lines are linear fits to all points with cos Θ above zero (gray) or below zero (black). (C) Hand-controlled task (gray line, R² = 0.65; black line, R² = 0.04). (D) Brain-controlled task (gray line, R² = 0.67; black line, R² = 0.54). (E) Daily minimum (solid line) and mean (dotted line) target radii used to maintain a 70% target hit rate. The diagonal dotted line is the linear fit of the daily mean (P < 0.0001). The bottom horizontal line shows the minimum target radius allowed (1.2 cm) (see supplementary material).

This technique shows how we could train immobile patients to make 3D cursor movements by coadapting a prediction algorithm to their changing cell tuning properties. However, for patients to control useful prosthetic devices, they would need to use this prediction algorithm without continued adaptation of its parameters, and they would want to make a wider variety of movements than the ones practiced during the coadaptive training.

We tested these issues by following several days' coadaptation training with an additional movement task, the constant-parameter prediction algorithm (CPPA) task, which used fixed tuning parameters, added novel target positions, and required 180° changes in movement directions (Fig. 3).

Figure 3

Monkey M's brain-controlled trajectories in the CPPA task. Trajectories start from the exact center, go to an outer target (colored circles), and return to the center target (gray circle). Trajectories are color-coded to match their intended targets. The black dots indicate when the intended outer or center target was hit. The three letters by each target indicate Left (L)/Right (R), Upper (U)/Lower (L), and Proximal (P)/Distal (D) target locations. The dashes indicate a middle position. (A and B) show trajectories to the eight “trained” targets used in the coadaptive task. (C and D) show trajectories to the six “novel” targets.

There was no significant difference between the novel and trained target hit rates in either animal, and both monkeys improved their performance with daily practice (Table 2) (14). Movies of these brain-controlled movements are included in the supplementary material.

Table 2

Mean ± standard deviation of daily performance statistics during the coadaptive and CPPA tasks. “# Units recorded” includes “noise” units that were removed during coadaptation. Coadaptive target hit rates were recalculated with targets at various radii based on movements made after the algorithm had converged.


Our work shows that visual feedback combined with an algorithm that tracks changes in cortical tuning parameters improves the efficacy of cortical activity as a control signal for both fast and slow brain-controlled movements. Switching from the hand-controlled to the brain-controlled task caused global changes in the tuning parameters of the recorded neuronal population. How the rest of the population shaped and supported these changes is still an open question. The increased consistency of these changes across days combined with the improvement in performance suggests that the learning process settled on an effective set of parameters for the imposed control scheme.

During the first several days of the coadaptive experiment, our monkeys pushed methodically against the arm restraints in the direction they needed the cursor to move. However, this behavior quickly subsided as performance improved. Spot checks of electromyographic (EMG) activity on the well-adapted days showed suppression of EMG activity throughout the brain-control task. This indicates that it is possible to develop effective brain-control modulation patterns in the absence of physical limb movements or normal muscle activation patterns.

Although the healthy, arms-restrained animal model may not address all the issues related to retraining cortex altered from disuse after an injury or illness, recent magnetic resonance imaging research indicates that the underlying motor maps are maintained, even after years of paralysis (15). Additionally, there are a few cases where completely immobile or “locked-in” patients have had cortical electrodes implanted and have been taught to communicate by scrolling through a sequence of letters using the activity of a few motor cortex cells. These human case studies suggest that cortical cells can regain and maintain the level of activity needed to perform prosthetic control tasks—even after long periods of complete immobility (16). Our results show that neural activity can be reorganized within minutes and, with the proper algorithm, used to achieve brain-controlled virtual movements with nearly the same accuracy, robustness, and speed as normal arm movements.

* To whom correspondence should be addressed. E-mail: aschwartz{at}



