Feature

Free agents


Science  13 Apr 2018:
Vol. 360, Issue 6385, pp. 144-147
DOI: 10.1126/science.360.6385.144


Summary

For more than half a century, U.S. government officials have considered disaster scenarios, such as the consequences of a nuclear bomb going off in Washington, D.C. Only now, instead of following fixed story lines and predictions assembled ahead of time, they are using computers to play what-if with an entire artificial society: an advanced type of computer simulation called an agent-based model. Today's version of the nuclear attack model includes a digital simulation of every building in the area affected by the bomb, as well as every road, power line, hospital, and even cell tower. The model includes weather data to simulate the fallout plume. And the scenario is peopled with some 730,000 agents. Each agent is an autonomous subroutine that responds in reasonably human ways to other agents and the evolving disaster by switching among multiple modes of behavior.

The point of such models is to avoid describing human affairs from the top down with fixed equations, as is traditionally done in such fields as economics and epidemiology. Instead, outcomes such as a financial crash or the spread of a disease emerge from the bottom up, through the interactions of many individuals, leading to a real-world richness and spontaneity that is hard to simulate otherwise.

The models tend to be big, computation-wise, forcing the agents to be relatively simple-minded. But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. In fields as diverse as economics, transportation, public health, and urban planning, more and more decision-makers are taking agent-based models seriously.
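The bottom-up mechanism described above can be illustrated with a toy epidemic simulation. This is a minimal sketch, not any model from the article: the agent states, contact counts, and infection probabilities below are illustrative assumptions. Each agent is an autonomous object that switches among behavior modes (susceptible, infected, recovered) in response to the agents it encounters, and the epidemic curve emerges from those interactions rather than from a top-down equation.

```python
import random

# Behavior modes an agent can switch among (illustrative choice).
SUSCEPTIBLE, INFECTED, RECOVERED = 0, 1, 2

class Agent:
    """One autonomous individual; its rules are local and simple."""

    def __init__(self, rng, infected=False):
        self.rng = rng
        self.state = INFECTED if infected else SUSCEPTIBLE
        self.days_infected = 0

    def step(self, contacts, infect_prob=0.05, recovery_days=10):
        """React to today's contacts and the agent's own condition."""
        if self.state == SUSCEPTIBLE:
            # Exposure to any infected contact may flip this agent's mode.
            if any(c.state == INFECTED for c in contacts):
                if self.rng.random() < infect_prob:
                    self.state = INFECTED
        elif self.state == INFECTED:
            self.days_infected += 1
            if self.days_infected >= recovery_days:
                self.state = RECOVERED

def simulate(n_agents=1000, n_days=120, contacts_per_day=8, seed=0):
    """Run the society day by day; the outbreak curve is emergent.

    Agents are updated in place during each day, so ordering effects
    exist -- acceptable for a sketch, not for a production model.
    """
    rng = random.Random(seed)
    agents = [Agent(rng, infected=(i < 5)) for i in range(n_agents)]
    daily_infected = []
    for _ in range(n_days):
        for agent in agents:
            contacts = rng.sample(agents, contacts_per_day)
            agent.step(contacts)
        daily_infected.append(sum(a.state == INFECTED for a in agents))
    return agents, daily_infected

if __name__ == "__main__":
    agents, curve = simulate()
    print("peak infected:", max(curve))
    print("recovered at end:", sum(a.state == RECOVERED for a in agents))
```

No single line of this code encodes an epidemic curve; the rise-and-fall pattern emerges from many simple agents interacting, which is the point the article makes about bottom-up modeling. Real models of the kind described, with hundreds of thousands of agents, buildings, and infrastructure, follow the same principle at far greater scale.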

M. Mitchell Waldrop is a journalist based in Washington, D.C.