Perspective: Ethics

Our driverless dilemma

Science  24 Jun 2016:
Vol. 352, Issue 6293, pp. 1514-1515
DOI: 10.1126/science.aaf9534

Summary

Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger. On page 1573 of this issue, Bonnefon et al. (1) explore this social dilemma in a series of clever survey experiments. They show that people generally approve of cars programmed to minimize the total amount of harm, even at the expense of their passengers, but are not enthusiastic about riding in such “utilitarian” cars—that is, autonomous vehicles that are, in certain emergency situations, programmed to sacrifice their passengers for the greater good. Such dilemmas may arise infrequently, but once millions of autonomous vehicles are on the road, the improbable becomes probable, perhaps even inevitable. And even if such cases never arise, autonomous vehicles must be programmed to handle them. How should they be programmed? And who should decide?