To Sleep, Perchance to Dream


Science  02 Mar 2007:
Vol. 315, Issue 5816, pp. 1219b-1220b
DOI: 10.1126/science.315.5816.1219b

In his Perspective “What do robots dream of?” (17 Nov. 2006, p. 1093), C. Adami provides an interesting interpretation of the Report “Resilient machines through continuous self-modeling” by J. Bongard et al. (17 Nov. 2006, p. 1118). Bongard et al. designed a robot with an algorithm that uses its stored sensory data to indirectly infer its own physical structure. The robot was able to generate forward motion more adaptively by adjusting its gait to compensate for simulated injuries. Adami equates this algorithm to “dreams” of prior actions and asks whether such modeling could extend to environmental mapping algorithms. If this were possible, then a robot could explore a landscape until it is challenged by an obstacle; overnight, it could replay its actions against its model of the environment and generate (or synthesize) new actions to overcome the obstacle (i.e., “dream up” alternative strategies). It could then return the next day with a new approach to the obstacle.

This work in robotics complements current findings on sleep and dreaming in humans. There is now strong evidence from human sleep research that performance on motor (1) and visual (2) tasks depends strongly on sleep, with improvements consistently greater when sleep occurs between test and retest. This is generally believed to be related to neural recoding processes that are possibly connected to dreaming during sleep (3). Human dreaming, however, is not a simple replay of daily scenarios. It contains complex, distorted images from a vast variety of times and places in our memory, arranged in a random, bizarre fashion (4). If we are to model such activity in robots, we would need some form of “sleep” algorithm that randomizes memory and combines it in unique arrays. This could be a way to generate unique approaches to scenarios that could then be simulated. Otherwise, how would scenario replay be an improvement over repeated trials in the environment?
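The “sleep” algorithm proposed above could be sketched very simply: fragments of stored experience are shuffled and recombined into novel sequences that never occurred, which could then be replayed against an environment model. The following toy sketch is purely illustrative; the function name, memory representation, and use of uniform random sampling are assumptions, not anything described in the Report.

```python
import random

def dream(memory_fragments, length, seed=0):
    """Recombine remembered fragments into a novel, jumbled scenario.
    Uniform random sampling stands in for whatever recombination
    process real dreaming might use (an assumption for illustration)."""
    rng = random.Random(seed)
    return [rng.choice(memory_fragments) for _ in range(length)]

# Hypothetical fragments of a robot's remembered actions:
memory = ["turn left", "climb", "reverse", "jump", "push obstacle"]

# A "dreamed" scenario: a bizarre juxtaposition of real memories
# that, as a sequence, never actually happened.
scenario = dream(memory, 4)
print(scenario)
```

Each dreamed sequence could then be scored against the robot's environment model, so that unusual but promising action combinations surface without costly physical trials.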


The study of human phenomena can be extremely difficult, and the study of sleep and dreaming is no exception (5). Robots would be ideal experimental subjects in many ways. Robots do not forget things, do not censor what they report, will not have problems sleeping, will not be bored by the tasks, are not going through life crises, and are not distracted by the laboratory or experimenter. Adami states that the discipline of experimental robot psychology may not be far off. I say, “Bring it on!”




Conduit discusses recent work by Bongard et al. in light of dream research. I argued in my Perspective that the periods of action synthesis interspersed with periods of physical testing of actions could be interpreted as “robotic dreams,” and speculated about a future discipline of experimental robot psychology. Conduit suggests that, rather than replaying the past day's events, human dreams consist of arrays of apparently randomly juxtaposed memories from different times and places, and that these unique experiences (which do not exist in reality) are perhaps the reason for the “creative leap” that sometimes follows restful sleep.


But the periods between physical actions in the algorithm of Bongard et al. are by no means just replays of the previous days' events. Rather, during those periods the robot is evaluating candidate models of itself and its ability to respond; that is, it is checking whether a particular physical action (say, “move leg forward”) is compatible with the remembered result (say, “tilt sensor 1 increases, all others the same”) given the robot's self-model. In other words, the robot is not rethinking the day's events, but rather imagining possible self-models in light of the day's events. Only after this phase does the robot look for actions that could discriminate between models. If we were to translate this algorithm into one in which a robot infers a model of its environment rather than of itself, it would be necessary to generate as wide a variety of candidate environments as possible, so that mental trials of actions would have a better chance of generating a response compatible with what is remembered. In such a case, perhaps the jagged and discontinuous nature of dreams could be viewed as a combinatorial algorithm designed to create as much diversity in environment models as possible. But to generate behaviors that discriminate between these potential models, we would have to imagine living and navigating in them. Which, it seems to me, we do, but only in our dreams.
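The two phases described above, evaluating candidate self-models against remembered outcomes and then choosing an action over which the surviving models most disagree, can be illustrated with a toy sketch. This is not the published algorithm of Bongard et al.; the models, scalar actions, and error measures below are simplifying assumptions made only to show the structure of the idea.

```python
# Hypothetical candidate "self-models": each maps an action magnitude
# to a predicted sensor reading (an assumed, drastically simplified form).
MODELS = {
    "long_legs":  lambda a: 2.0 * a,
    "short_legs": lambda a: 0.5 * a,
    "broken_leg": lambda a: 0.0,
}

def model_error(model, memory):
    """Phase 1: how badly a model's predictions match remembered results."""
    return sum(abs(model(action) - sensed) for action, sensed in memory)

def most_discriminating_action(models, candidate_actions):
    """Phase 2: the action over which the models' predictions diverge most,
    i.e., the physical test that would best falsify wrong models."""
    def spread(action):
        predictions = [m(action) for m in models.values()]
        return max(predictions) - min(predictions)
    return max(candidate_actions, key=spread)

# Remembered (action, sensor reading) pairs from the day's activity:
memory = [(1.0, 1.9), (2.0, 4.1)]

best = min(MODELS, key=lambda name: model_error(MODELS[name], memory))
probe = most_discriminating_action(MODELS, [0.5, 1.0, 3.0])
print(best, probe)  # "long_legs" fits memory best; action 3.0 separates models most
```

The same skeleton carries over to environment modeling: replace the self-models with candidate environment models and the loop becomes a search for the action that best distinguishes imagined worlds.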


The analogy between machine and human cognition suggests that the bizarre, random dreams people report may not be entirely random. The robot we described did not just replay its experiences to build consistent internal self-models and then “dream up” an action based on those models. Instead, it synthesized new brief actions that deliberately caused its competing internal models to disagree in their predictions, thus challenging them to falsify less plausible theories and, as a result, improving its overall knowledge of self. It is possible that the mangled experiences that people report as bizarre dreams correspond to an analogous unconscious search for actions able to clarify their self-perceptions. Many of the intermediate candidate models and actions developed by the robot (as seen in Movie S1 in our Supporting Online Material) were indeed very contorted, but were nonetheless optimized to elucidate uncertainties. Edelman (1), Calvin (2), and others have suggested the existence of competitive processes in the brain. Perhaps human dreams appear mangled and brief precisely because they are, as in the robot, “optimized” to challenge and improve these competing internal models?

Indeed, analogies between machines learning from past experiences and human dreaming are potentially very fruitful and may be applicable in both directions. Although robots and their onboard algorithms are clearly simpler and may bear little or no direct relation to humans and their minds, it may be much easier to test hypotheses about humans in robots. Conversely, ideas from human cognition research may help direct robotic research beyond merely serving as inspiration. Specifically, it is likely that as robots become more complex and their internal models are formed indirectly rather than being explicitly engineered and represented, indirect probing techniques developed for studying humans may become essential for analyzing machines too.


