Report

Programmable self-assembly in a thousand-robot swarm

Science  15 Aug 2014:
Vol. 345, Issue 6198, pp. 795-799
DOI: 10.1126/science.1254295

Large-scale robotic self-assembly

When individuals swarm, they must somehow communicate to direct collective motion. Swarms of robots must also cope with outliers, such as robots that move more slowly than the rest. Rubenstein et al. created a large swarm of programmed robots that cooperate using only local information. Each robot could communicate only with nearby members, within about three times its own diameter, yet the swarm was able to assemble into complex preprogrammed shapes. When the formation hit snags, whether because robots bumped into one another or because of an outlier, additional algorithms guided the swarm to rectify its collective movements.

Science, this issue p. 795

Abstract

Self-assembly enables nature to build complex forms, from multicellular organisms to complex animal structures such as flocks of birds, through the interaction of vast numbers of limited and unreliable individuals. Creating this ability in engineered systems poses challenges in the design of both algorithms and physical systems that can operate at such scales. We report a system that demonstrates programmable self-assembly of complex two-dimensional shapes with a thousand-robot swarm. This was enabled by creating autonomous robots designed to operate in large groups and to cooperate through local interactions and by developing a collective algorithm for shape formation that is highly robust to the variability and error characteristic of large-scale decentralized systems. This work advances the aim of creating artificial swarms with the capabilities of natural ones.

In nature, groups of thousands, millions, or trillions of individual elements can self-assemble into a wide variety of forms, purely through local interactions. Examples can be found across a wide range of physical scales and systems: at the molecular scale with self-assembly of crystals or rotary motors of bacterial flagella (1, 2), at the cellular scale with the development of multicellular organisms (3, 4), and at the colony level with ants creating structures such as rafts, chains, and nests (bivouacs) using only their interconnected bodies as building material (5, 6). Through collective shape formation, a group can dramatically change how it interacts with its environment. For example, the evolution of multicellular body plans enabled organisms to rapidly fill many ecological niches (7), and self-assembly of bridges and bivouacs allows army ant colonies to traverse difficult terrain while providing security and environmental regulation for the queen and brood (5). These examples of collective intelligence are fascinating to scientists across disciplines, as much of the global complexity arises from interactions among individuals that are myopic, sensing and interacting at scales many orders of magnitude smaller than the phenomenon itself.

In the field of robotics, researchers use inspiration from collective intelligence in nature to create artificial systems with capabilities observed in natural swarms. Researchers have designed tiny robots, inspired by bees and ants, that are envisioned to work together in large groups, even assembling together to cross difficult terrain (8–10). Similarly, using inspiration from multicellular development, several groups have developed self-assembling “cellular” robots that can reconfigure into many morphologies to complete different grasping or locomotion tasks (11), or can act as programmable matter that can rapidly assemble into user-specified tools such as a wrench or key (12, 13). However, there still exists a substantial gap between the conceptual designs and the realized systems. Increasing the scale of autonomous robotic systems remains a significant challenge for both algorithm and hardware design. Current systems are on the order of 10 to 50 robots, with only a few exceeding 100 robots (14, 15), in part because of cost and design choices that make manufacturing and operations difficult. This lack of large-scale physical systems has in turn hindered experimental investigation of algorithms for engineering self-assembly and other forms of complex collective intelligence. Although simulators are a valuable tool for systematically exploring algorithm behavior, they frequently involve simplifications for computational tractability. Such simulated systems can fail to faithfully reproduce the intricate physical interactions and variability that exist in real systems, and their fidelity to the real world is difficult to verify or improve without feedback from physical experiments.

We report a system that addresses both the physical and algorithmic challenges of robotic swarms. We demonstrate a thousand-robot swarm capable of large-scale, flexible self-assembly of two-dimensional shapes entirely through programmable local interactions and local sensing, achieving highly complex collective behavior. The approach involves the design of a collective algorithm that relies on the composition of basic collective behaviors and cooperative monitoring for errors to achieve versatile and robust group behavior, combined with an unconventional physical robot design that enabled the creation of more than 1000 autonomous robots.

In designing a large-scale robot swarm, the extent to which the robots can be fully autonomous (capable of computation, locomotion, sensing, and communication) must be balanced against the cost per robot. Mass production favors robots with fewer and cheaper components, resulting in lower cost but also reduced capabilities and reliability. Additionally, operations such as powering on and off, reprogramming, and charging must be addressed; at the scale of a thousand robots, there is a substantial practical need for group-level rather than individual-level operation (16). In general, how individual capabilities and reliability relate to collective capabilities and reliability remains an open theoretical question. Here, we demonstrate, both theoretically and experimentally, that the limited capabilities of our system are sufficient to permit a collective of 1024 robots (Fig. 1) to execute a complex self-assembly algorithm at the swarm level.

Fig. 1 Kilobot swarm robot.

(A) A Kilobot robot, shown alongside a U.S. penny for scale. (B) Each Kilobot has an onboard microcontroller for executing programs autonomously, two vibration motors for moving straight or turning on a flat surface, and a downward-facing infrared transmitter and receiver. Robots communicate with others within a range of 10 cm (roughly three robot diameters) by reflecting infrared light off the table below. Communicating robots can evaluate relative distance by measuring the strength of the received infrared signal, but they cannot sense relative bearing (angle). (C) A 2^10 = 1024 Kilobot swarm. The Kilobot design allows for all operations on the entire swarm (charging, programming, etc.) to take a constant time to complete, independent of the number of robots in the swarm.

We designed a simple low-cost robot called Kilobot, first presented in (17) with a small prototype swarm of 30 robots executing a suite of simple collective behaviors. Kilobot exploits several unconventional design choices, such as using vibration motors for sliding movement rather than traditional wheels (18) and reflecting infrared light off the table surface below for communication and distance sensing between robots. Vibration motors provide noisy locomotion without position feedback, preventing a single robot from traveling long distances with any precision. Robots can measure distance to a neighboring robot by sensing the infrared light intensity of received messages, although this measurement is noisy and provides no information about the relative angle between the two robots. In addition, the system as a whole is inherently decentralized and asynchronous; any collective behavior must be achieved purely through local interactions between neighboring robots. Some individual limitations can be overcome through cooperation; a robot can use distance measurements from a stationary neighbor as feedback to enable more precise motion, and errors in distance sensing can be reduced by comparing readings among neighbors.
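
To make the distance-feedback idea concrete, the following minimal sketch (in Python, for exposition only) shows a single edge-following control step driven purely by a noisy range reading to the nearest stationary neighbor. The desired distance, the Gaussian noise model, and the "turn toward/away" motor abstraction are illustrative assumptions rather than the Kilobot firmware.

```python
import random

DESIRED_DIST = 40.0   # mm; illustrative value, not taken from the paper

def noisy_range(true_dist, sigma=2.0):
    # Range estimates from received infrared intensity are noisy;
    # a Gaussian perturbation stands in for that noise here.
    return true_dist + random.gauss(0.0, sigma)

def edge_follow_step(measured_dist):
    """One control step using only distance (no bearing) to the nearest
    stationary neighbor: orbit the group by alternating turn directions."""
    if measured_dist < DESIRED_DIST:
        return "turn_away_from_group"   # drive the motor that veers outward
    return "turn_toward_group"          # drive the motor that veers inward
```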

We developed a collective algorithm that guarantees that a large group of robots, with limited capabilities and local communication, can cooperatively assemble into any 1-connected user-specified shape [with some restrictions (19)]. The self-assembly algorithm (Fig. 2) composes three primitive collective behaviors: (i) edge-following, where a robot can move along the edge of a group by measuring distances from robots on the edge; (ii) gradient formation, where a source robot can generate a gradient value message that increments as it propagates through the swarm, giving each robot a geodesic distance from the source; and (iii) localization, where robots can form a local coordinate system using communication with, and measured distances to, neighbors. The self-assembly algorithm links these three primitives with a finite-state automaton. We describe the algorithm at a high level here; see (19) for a detailed description of each aspect along with proofs of correctness and completion.
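
As a rough illustration of the gradient-formation primitive, the sketch below (Python, written synchronously and with global knowledge of positions purely for readability) has each robot repeatedly set its value to one more than the smallest value heard from neighbors within communication range, with the seed held at zero. The neighbor graph, round count, and function names are assumptions made for exposition; the real primitive runs asynchronously over local message passing.

```python
import math

COMM_RANGE = 100.0  # mm, roughly three robot diameters

def neighbors(i, positions):
    xi, yi = positions[i]
    return [j for j, (xj, yj) in enumerate(positions)
            if j != i and math.hypot(xj - xi, yj - yi) <= COMM_RANGE]

def form_gradient(positions, seed_ids, rounds=50):
    INF = float("inf")
    grad = [0 if i in seed_ids else INF for i in range(len(positions))]
    for _ in range(rounds):                       # repeated local updates
        new = list(grad)
        for i in range(len(positions)):
            if i in seed_ids:
                continue                          # the seed stays at 0
            vals = [grad[j] for j in neighbors(i, positions)]
            if vals:
                new[i] = min(vals) + 1
        grad = new
    return grad

# Example: four robots in a line, 60 mm apart, seed at index 0.
positions = [(0.0, 0.0), (60.0, 0.0), (120.0, 0.0), (180.0, 0.0)]
print(form_gradient(positions, seed_ids={0}))     # -> [0, 1, 2, 3]
```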

Fig. 2 Collective self-assembly algorithm.

Top left: A user-specified shape is given to robots in the form of a picture. Top right: The algorithm relies on three primitive collective behaviors: edge-following, gradient formation, and localization. Bottom: The self-assembly process by which a group of robots forms the user-defined shape.

All robots are given an identical program that includes the fixed self-assembly algorithm and an image of the specific target shape and size. The robots are initially tightly packed into an arbitrarily shaped group without any knowledge about their location in the environment (Fig. 2). To start the self-assembly process, a user places four specially programmed seed robots next to the initial group, marking the position and orientation for where the shape should be formed in the environment. The seed robots become the source of a gradient formation that propagates through the initial group. Robots along the outer edge of the initial group are able to move without being physically blocked. They can determine that they are on the outer edge by comparing their gradient values to those of their neighbors. These robots can then use edge-following around the stationary initial group to reach the seed. To avoid congestion that would occur if all robots along the outer edge moved at once, randomness is used to allow only a subset to start motion at any one time. This method creates the effect of the initial group “dissolving” from the outer edge inward, with all robots eventually edge-following to the seed.
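
The exact start rule and its tie-breaking are specified in (19); one simplified reading of it is sketched below, where a robot may begin edge-following only if no stationary neighbor reports a larger gradient value (i.e., it sits on the outer edge farthest from the seed) and a random lottery limits how many robots start at once. The lottery probability is an illustrative value, not a parameter from the paper.

```python
import random

START_PROBABILITY = 0.02   # illustrative; tuned to limit congestion

def may_start_moving(my_gradient, stationary_neighbor_gradients):
    # A neighbor with a larger gradient value is farther from the seed
    # and should peel off first, so wait for it.
    if any(g > my_gradient for g in stationary_neighbor_gradients):
        return False
    # Thin out simultaneous starters with a random lottery.
    return random.random() < START_PROBABILITY
```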

Robots have no direct information about their global position, only the distances to nearby neighbors. However, they can use this distance information to collectively construct a coordinate system. Initially, the stationary seed robots embody the origin of this coordinate system. As new robots arrive at the seed, they can use the localization primitive to determine their own position in this coordinate system; once this occurs, they can become reference points for the localization of additional robots. In this way, a coherent coordinate system can be built up through local interactions. The localization primitive works in the following way: Localized robots continually transmit messages that contain their (x, y) position in the coordinate system. A new robot listens and measures distances to all previously localized neighbors; if it can observe at least three stationary and noncollinear localized robots, it can compute its own (x, y) position using a distributed implementation of trilateration (Fig. 2) (19, 20). Moving robots continually compute both their location and gradient value as they move.
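
A minimal sketch of distance-only localization follows. Instead of a closed-form trilateration, the position estimate is nudged iteratively until its distance to each already-localized neighbor matches the corresponding measurement, a scheme that tolerates noisy readings; the step size, iteration count, and function names are assumptions for illustration rather than the robots' exact procedure.

```python
import math

def localize(neighbor_positions, measured_dists, guess=(0.0, 0.0),
             iterations=200, step=0.25):
    x, y = guess
    for _ in range(iterations):
        for (nx, ny), d in zip(neighbor_positions, measured_dists):
            dx, dy = x - nx, y - ny
            r = math.hypot(dx, dy) or 1e-9        # avoid division by zero
            # Move a fraction of the way along the line joining the
            # neighbor and the estimate, toward the measured distance d.
            x += step * (dx / r) * (d - r)
            y += step * (dy / r) * (d - r)
    return x, y

# Example: recover the point (30, 40) from exact ranges to three
# noncollinear anchors.
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
dists = [math.hypot(30 - ax, 40 - ay) for ax, ay in anchors]
print(localize(anchors, dists))                   # approximately (30.0, 40.0)
```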

Once a robot is localized, it can determine whether it is located inside the desired shape by comparing its position to the shape image. If it is not within the shape, the robot edge-follows the partially formed assembly until it detects that it has entered the shape. Once inside the shape, the robot continues to edge-follow until one of two conditions is met. If the robot is about to exit the shape, or if it is next to (less than a predefined distance from) a stationary robot that has the same gradient value as itself, it becomes stationary for the remainder of the process. This process of robots moving into and joining the assembly continues until the shape is filled with robots. The overall effect is that the shape builds up in successive layers (Fig. 2 and Fig. 3J).
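
The two joining conditions can be summarized in the short sketch below, which assumes, purely for illustration, that the target shape is carried as a small bitmap in the shared coordinate system and that the robot knows the range and gradient value of its nearest stationary neighbor; the names and the join distance are hypothetical.

```python
def inside_shape(pos, shape_bitmap, scale):
    # Map a coordinate in the shared frame to a cell of the shape image.
    col, row = int(pos[0] // scale), int(pos[1] // scale)
    return (0 <= row < len(shape_bitmap) and
            0 <= col < len(shape_bitmap[0]) and
            shape_bitmap[row][col] == 1)

def should_stop(my_pos, next_pos, my_gradient, nearest_stationary,
                shape_bitmap, scale, join_dist=35.0):
    """Stop if the next edge-following step would exit the shape, or if a
    stationary robot closer than join_dist shares this robot's gradient."""
    if inside_shape(my_pos, shape_bitmap, scale) and \
       not inside_shape(next_pos, shape_bitmap, scale):
        return True
    dist_to_neighbor, neighbor_gradient = nearest_stationary
    return dist_to_neighbor < join_dist and neighbor_gradient == my_gradient
```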

Fig. 3 Self-assembly experiments using up to 1024 physical robots.

(A, C, and E) Desired shape provided to robots as part of their program. (B and D) Self-assembly from initial starting positions of robots (left) to final self-assembled shape (right). Robots are capable of forming any simply connected shape, subject to a few constraints to allow edge-following (19). (F) Completed assembly showing global warping of the shape due to individual robot errors. (G) Accuracy of shape formation is measured by comparing the true positions of each robot (red) and each robot’s internal localized position (gray). (H to K) Close-up images of starting seed robots (H), traffic backup due to a slowly moving robot (I), banded patterns of robots with equal gradient values after joining the shape (robots in each highlighted row have the same gradient value) (J), and a complex boundary formed in the initial group (dashed red line) due to erosion caused by imprecise edge-following (K).

This seed-initiated self-assembly algorithm can form a large class of shapes, relying only on local interactions of the robots [see (19) for proof]. The algorithm resembles previous lattice-based algorithms for collective construction and self-assembly of modular robots (12, 13, 21–24) and uses conceptual primitive behaviors found in previous work, such as edge-following (23–25), gradient formation (23, 26), and localization (22, 23). However, unlike most previous methods, our algorithm allows components to be positioned freely in continuous two-dimensional space and comes with a provable guarantee that the shape will form correctly. The use of continuous space implies that the implementations of the primitive behaviors are substantially different because of continuous and noisy sensing and motion, and proving correctness is more complex because the shape may be self-assembled differently each time owing to differential packing of robots. However, avoiding grid restrictions allows for simpler robots and more robustness to imprecision in element positioning, which was instrumental in translating our algorithm onto a 1000-robot swarm.

Moving from abstract algorithms to physical robots requires solving several additional challenges. Mathematical models assume idealized behavior in order to achieve tractability; real robots, however, exhibit much more complex behaviors. For example, robots often have imprecise locomotion, noisy distance sensing, and message loss. Moving from tens of robots to more than a thousand introduces additional nonideal behaviors that are not easily observed in smaller groups and are therefore rarely incorporated into simulators for modeling large robot swarms. These include systematic errors and high variability in robot locomotion and sensing, which can be manually detected and easily corrected in small groups but are too cumbersome and time-consuming to correct in larger groups, so that the robots in a large swarm differ statistically from one another in behavior. Increasing group size also increases the occurrence of statistically rare events, such as a robot becoming completely immobilized because of motor failure or a robot being physically pushed by another. We observed that with a thousand physical robots running for tens of hours, high variability and rare errors are inevitable and capable of bringing such abstract algorithms to a complete halt.

To achieve robustness at the 1000-robot scale, we used several additional algorithmic strategies (19). For example, a continuous-space shape representation in our algorithm describes the desired shape as a boundary but does not dictate the exact location of robots within that shape. This enables the algorithm to tolerate a variety of packing patterns of robots within the shape, helping to prevent small position errors by the robots from propagating into large-scale self-assembly failures. One key technique for robustness was cooperative monitoring, in which a robot uses interactions with neighbors to detect and recover from faults that it cannot observe itself. For example, an immobilized robot can wreak havoc in an edge-following algorithm by creating a permanent blockage. Similarly, localized robots can be physically pushed so that they no longer have correct knowledge of their position in the swarm, and this error can then propagate through the rest of the assembling swarm. The robots lack the internal sensors to detect such rare events on their own; however, both errors cause the sensed distances between neighbors to change in predictable ways, so they can be detected through neighbor interaction. This allows for autonomous recovery: An immobilized robot can reset its motors and try again, or it can signal to other robots that they should move around it; similarly, a pushed robot can selectively reevaluate its position and correct for the unintended movement. In general, many faults can be detected through information exchange with neighbors; this cooperative monitoring was essential to enable large-scale swarm experiments without human intervention.
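
As one concrete example of cooperative monitoring, the sketch below shows a check that a stationary, localized robot might run: comparing freshly measured ranges to neighbors against the ranges implied by the neighbors' broadcast coordinates, and flagging itself for re-localization if they disagree persistently. The threshold and structure are illustrative assumptions, not the implementation detailed in (19).

```python
import math

PUSH_THRESHOLD = 10.0   # mm of disagreement tolerated; illustrative value

def was_pushed(my_xy, neighbor_xys, measured_dists):
    """Return True if measured ranges contradict the positions implied by
    the shared coordinate system, suggesting this robot was displaced."""
    for (nx, ny), d in zip(neighbor_xys, measured_dists):
        expected = math.hypot(my_xy[0] - nx, my_xy[1] - ny)
        if abs(expected - d) > PUSH_THRESHOLD:
            return True
    return False
```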

Using this algorithmic approach, we conducted several large-scale experiments (Fig. 3 and movies S1 to S4). We programmed the full swarm of 1024 robots to self-assemble into two shapes, each taking roughly 12 hours of execution time. We performed 11 additional experiments with smaller collectives to analyze the accuracy of shape formation and to investigate the consistency of the collective behavior (19). All experimental trials fully assembled the desired shape without human intervention (such as charging batteries or reprogramming); hence, the collective behavior is remarkably robust. To quantitatively measure the accuracy of shape formation, we measured the mean square error between the true position of each robot (as measured from a calibrated camera) and each robot’s internal localized position in the shared coordinate system (Fig. 3G). We observed that the final shape accuracy is high but not perfect; errors by individual robots during the assembly process can translate into global warping of the shape or small packing defects, but without halting the self-assembly process. To test consistency, we used approximately 100 robots to form the same rectangular shape 10 times. The results show that although the shape accuracy values are comparable between experiments, the packing pattern of robots can exhibit considerable variability, as is true in natural self-assemblages. These patterns are a manifestation of the natural variation in agent behavior in such large groups.
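
For reference, the reported accuracy metric reduces to a short computation: the mean square error between each robot's camera-measured position and its own internally localized position, with both expressed in the seed-defined coordinate frame. The sketch below uses hypothetical variable names.

```python
def shape_mse(true_positions, internal_positions):
    """Mean square error (in squared distance units) between the true and
    internally estimated positions of the same robots."""
    n = len(true_positions)
    return sum((tx - ix) ** 2 + (ty - iy) ** 2
               for (tx, ty), (ix, iy) in zip(true_positions, internal_positions)) / n
```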

Experiments also revealed several interesting emergent behaviors that occur during the formation process (Fig. 3). For example, “traffic jams” of edge-following robots formed behind particularly slow robots, similar to vehicle traffic. Another observed behavior was the complex “erosion” of the initial group, caused by edge-following robots that would accidentally push stationary robots and create sharp edges; this resulted in a positive-feedback process in which the sharp edges attracted more collisions, so that complex boundaries formed as the initial group dissolved. Idealized mathematical models of robot swarms do not predict these phenomena.

Collective behavior in nature often involves large numbers of independent individuals interacting to produce complex assemblies. Engineered systems such as DNA-based self-assembly (27) have replicated this ability in synthesized chemical systems. We have demonstrated this ability in a robotic system by creating and programming a large-scale autonomous swarm to achieve complex global behavior from the cooperation of many limited and noisy individuals. The large-scale experiments advance our ability to engineer complex robotic systems. This motivates new investigations into advanced collective algorithms capable of detecting malfunctioning robots and recovering from large-scale external damages, as well as new robot designs that, like army ants, can physically attach to each other to form stable self-assemblages.

Supplementary Materials

www.sciencemag.org/content/345/6198/795/suppl/DC1

Materials and Methods

Figs. S1 to S10

Tables S1 and S2

Movies S1 to S4

References (28, 29)

References and Notes

  19. See supplementary materials on Science Online.

Acknowledgments: Supported by the Wyss Institute for Biologically Inspired Engineering and by NSF grants CCF-0926148 and CCF-0643898.