In Depth | Cybersecurity

Hackers easily fool artificial intelligences


Science  20 Jul 2018:
Vol. 361, Issue 6399, pp. 215
DOI: 10.1126/science.361.6399.215

Last week, at the International Conference on Machine Learning (ICML) in Stockholm, a group of researchers described a turtle they had 3D printed. Most people would say it looks just like a turtle, but an artificial intelligence (AI) algorithm that can normally recognize turtles saw it differently. Most of the time, it thought the turtle was a rifle. Similarly, it saw a 3D-printed baseball as an espresso. These are examples of "adversarial attacks"—subtly altered images, objects, or sounds that fool AIs without setting off human alarm bells. Impressive advances in AI—particularly machine learning algorithms that can recognize sounds or objects after digesting vast training data sets—have spurred the growth of living room voice assistants and autonomous cars. But these AIs are surprisingly vulnerable to being spoofed. At the ICML meeting, adversarial attacks were a hot subject, with researchers reporting novel ways to trick AIs as well as new ways to defend them. Somewhat ominously, one of the conference's two best paper awards went to a study suggesting protected AIs aren't as secure as their developers might think.
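Adversarial attacks like the turtle-as-rifle example typically work by nudging an input in the direction that most increases a model's loss. A minimal sketch of that idea is the fast gradient sign method (FGSM) applied to a toy logistic-regression classifier; the weights, inputs, and epsilon below are all hypothetical, chosen only to make the prediction flip visible, and are not from the study described above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm(w, b, x, y, eps):
    """Fast gradient sign method: shift each feature of x by +/- eps
    in the direction that increases the loss for true label y."""
    p = predict(w, b, x)
    # Gradient of binary cross-entropy w.r.t. the input is (p - y) * w
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical "trained" weights and a clean input the model
# confidently assigns to class 1.
w = np.array([3.0, -4.0, 2.0])
b = 0.1
x = np.array([0.5, -0.5, 0.5])

x_adv = fgsm(w, b, x, y=1.0, eps=0.6)

print(predict(w, b, x))      # confident class-1 prediction
print(predict(w, b, x_adv))  # drops below 0.5: prediction flips to class 0
```

The same gradient-sign trick scales to deep image classifiers, where a perturbation small enough to be imperceptible to people can change the predicted label, which is what makes the 3D-printed objects at ICML unsettling.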

Matthew Hutson is a science journalist based in New York City.
