# News this Week

Science, 17 Jul 2015: Vol. 349, Issue 6245, p. 218
## This week's section

### Plants in cold storage

In the race to preserve biodiversity even as species extinction rates soar, Smithsonian Institution scientists are starting with their own backyards. Plants from the U.S. Botanic Garden, the U.S. National Arboretum, and the Smithsonian Gardens—all located in Washington, D.C.—are the first targets of an initiative to capture the genomic diversity of half of the world's living plant genera in less than 2 years. On 8 July, the Smithsonian's National Museum of Natural History launched the effort, which will include a summer field team of students who will assist scientists with the museum's Global Genome Initiative in sampling plants from the gardens' holdings. The scientists will then preserve them in cryogenic vials and store them in liquid nitrogen. GGI, the Norway-based Svalbard Global Seed Vault, and the Royal Botanic Gardens' Millennium Seed Bank in the United Kingdom are three of the largest ex situ conservation efforts to preserve genetic material outside of its natural environment. Ex situ conservation is a strategy described in the Intergovernmental Panel on Climate Change's 2014 Fifth Assessment Report as an insurance policy against climate change and other potential sources of biodiversity loss.

### Seeing (black leopard) spots

Scientists keep track of the population density of leopards in the wild using camera traps, which snap pictures of the cats so that individuals can be identified by their characteristic spot patterns. But, due to a recessive gene that causes melanism, most of the leopards prowling the Malay Peninsula are black—even their spots—making it nearly impossible to distinguish them using the cameras. Now, there's a workaround, scientists report in the Journal of Wildlife Management. The spots are visible at infrared wavelengths, so by modifying infrared flash camera traps on the peninsula so that they were forced into night mode throughout the day, the researchers were ultimately able to make out distinct spot patterns and identify 94% of the animals along a wildlife corridor in Malaysia—crucial for keeping tabs on the region's population over time. The work not only represents the first leopard density estimate in Malaysia, the authors note, but also the first successful attempt to estimate population size using melanistic phenotypes—such as a black leopard's dark spots.

### IBM back on track with Moore's law

IBM announced last week that it has manufactured working versions of a new, ultradense computer chip that halves the area needed for a given amount of circuitry and has four times the capacity of the most powerful chips on the market. Currently, commercial chip technology is at the “14-nanometer manufacturing” stage, referring to the size of the chip's smallest features. The new IBM chips have 7-nanometer transistors, making it possible to pack more into the same space and increasing the capacity. The new chips were manufactured by using silicon-germanium in some regions rather than pure silicon, which the company says reduces power requirements and makes faster transistor switching possible. But IBM hasn't said when the new chips are likely to be commercially available.

“I've secretly been working on a lander.”

New Horizons principal investigator Alan Stern, at a press conference 14 July as the spacecraft made its closest approach to Pluto. Stern was joking with reporters about plans to return to the dwarf planet.

### By the numbers

15 million—Number of people receiving antiretrovirals for HIV in March 2015, according to a new UNAIDS report released this week.

834,000—Number of rabbits, nonhuman primates, and other regulated animals used in U.S. biomedical research last year—a drop of 6% from the previous year, and the lowest number since data collection began in 1972.

18.7%—Increase in global wildfire activity over the last 35 years, according to a study in Nature Climate Change.

## Around the world

### Washington, D.C.

“Cures” bill clears U.S. House

### A flightless, winged dino

Researchers have found the largest dinosaur yet known with full-fledged wings and feathers, they report online this week in Scientific Reports. The 125-million-year-old dino, called Zhenyuanlong suni and found in China, was about 1.65 meters long, a little longer than a modern condor. It belonged to a group of dinos called dromaeosaurs, which includes Velociraptor from Jurassic Park, and which was closely related to early birds. But although Zhenyuanlong's wings had multiple layers of birdlike feathers, they were very short compared with those of most other winged dinos, and thus it was probably unable to fly. That leaves the discovery team wondering what the wings were for. One possibility is that it evolved from ancestors that could fly and retained its wings mainly for sexual display. http://scim.ag/flightlessdino

## Artificial Intelligence

# The synthetic therapist

By John Bohannon

Some people prefer to bare their souls to computers rather than to fellow humans.

People have always noticed Yrsa Sverrisdottir. First it was ballet, which she performed intensively while growing up in Iceland. Then it was science, which she excelled at and which brought her to the stage at conferences. And starting in 2010, when she moved to the University of Oxford in the United Kingdom to study the neurophysiology of the heart, it was her appearance. With her Nordic features framed by radiant blonde hair, “I just stand out here,” she says. “I can't help it.”

After she arrived in the United Kingdom, she found that she no longer enjoyed the attention. She began to feel uncomfortable in crowds. Her relationships suffered. There had been some obvious stressors, such as the death of a parent. But the unease didn't let up. By 2012, she says, “I felt like I was losing control.” Then she met Fjola Helgadottir, one of the few other Icelanders in town. Helgadottir, a clinical psychology researcher at Oxford, had created a computer program to help people identify and manage psychological problems on their own. Sverrisdottir decided to give it a try.

The program, based on a technique called cognitive behavioral therapy (CBT) and dubbed CBTpsych, begins with several days of interactive questioning. “It was exhausting,” Sverrisdottir says. The interrogation started easily enough with basic personal details, but then began to probe more deeply. Months of back and forth followed as the program forced her to examine her anxieties and to identify distressing thoughts. CBTpsych diagnosed her as having social anxiety, and the insight rang true. Deep down, Sverrisdottir realized, “I didn't want people to see me.”

Then the program assumed the role of full-fledged therapist, guiding her through a regimen of real-world exercises for taking control. It sounds like a typical success story for clinical psychology. But no human psychologist was involved.

CBTpsych is far from the only computerized psychotherapy tool available, nor is it the most sophisticated. Ellie, a system built at the University of Southern California (USC) in Los Angeles, uses artificial intelligence (AI) and virtual reality to break down barriers between computers and humans. Originally funded by the U.S. military, its focus is on diagnosing and treating psychological trauma. Because patients interact with a digital system, the project is generating a rare trove of data about psychotherapy itself. The aim, says Albert “Skip” Rizzo, the USC psychologist who leads the effort, is nothing short of “dragging clinical psychology kicking and screaming into the 21st century.”

A 19 June editorial in The New York Times deemed computerized psychotherapy “effective against an astonishing variety of disorders.” The penetration of the Internet into far-flung communities could also bring mental health treatment to vast numbers of people who otherwise have no access.

But whether clinical psychologists will accept AI into their practice is uncertain. Nor is it clear that the tools of AI can carry computerized psychotherapy beyond its currently limited capacity, says Selmer Bringsjord, a cognitive scientist and AI researcher at Rensselaer Polytechnic Institute in Troy, New York. “It is incredibly ambitious.”

ALL OF TODAY'S VIRTUAL psychologists trace their origins to ELIZA, a computer program created half a century ago. Named after the young woman in Pygmalion who rapidly acquires sophisticated language, ELIZA was nothing more than a few thousand lines of code written by Joseph Weizenbaum and other computer scientists at the Massachusetts Institute of Technology (MIT) in the mid-1960s to study human-computer interaction.

ELIZA followed rules that determined how to respond during a dialogue. The most convincing results came from a rule set called DOCTOR that simulated a psychotherapist: By turning patients' statements around as questions, the program coaxed them to do most of the talking. For instance, in response to a patient saying, “I feel helpless,” the computer might respond, “Why do you think you feel that way?” (You can talk to ELIZA yourself at http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm.)
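The reflection trick is simple enough to sketch in a few lines. Here is a minimal, hypothetical ELIZA-style responder; the rules and wording are illustrative, not Weizenbaum's original DOCTOR script:

```python
import re

# First-person words swapped for second-person ones when echoing
# a patient's statement back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a pattern with a response template; the captured
# fragment is reflected and spliced into the reply.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you think you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    """Return a DOCTOR-style reply, or a default prompt that keeps
    the patient talking."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel helpless"))  # → Why do you think you feel helpless?
```

A handful of such rules is enough to sustain a surprisingly engaging exchange, which is precisely why AI researchers regarded the program as a trick rather than intelligence.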

People engaged readily with ELIZA, perhaps more for its novelty than its conversational skills, but AI researchers were unimpressed. “The idea that you could make a convincing AI system that didn't really have any intelligence was seen as cheating,” says Terry Winograd, a computer scientist at Stanford University in Palo Alto, California, who was a Ph.D. student down the hall from Weizenbaum. This was a wildly optimistic time for the field, with many researchers anticipating computers with human-level general intelligence right around the corner.

But work on artificial general intelligence didn't pan out, and funding and interest dried up in what has come to be known as the “AI winter.” It wasn't until the turn of the new millennium that mainstream interest in AI resurged, driven by advances in “narrow AI,” focusing on specific problems such as voice recognition and machine vision.

Conversational “chatbots” such as ELIZA are still viewed as a parlor trick by most computer scientists (Science, 9 January, p. 116). But the chatbots are finding a new niche in clinical psychology. Their success may hinge on the very thing that AI researchers eschew: the ability of an unintelligent computer to trick people into believing that they are talking to an intelligent, empathetic person.

THAT ISN'T EASY, as Rizzo is keenly aware. What most often breaks the spell for a patient conversing with Ellie isn't the content of the conversation, because the computer hews closely to a script that Rizzo's team based on traditional clinical therapy sessions. “The problem is entrainment,” he says, referring to the way that humans subconsciously track and mirror each other's emotions during a conversation.

For example, a patient might say to Ellie, “Today was not the best day,” but the voice recognition software misses the “not.” So Ellie smiles and exclaims, “That's great!” For an AI system striving to bond with a human patient and earn trust, Rizzo says, “that's a disaster.”

To improve entrainment, a camera and microphone track a patient's nonverbal signals: facial expression, posture, hand movement, and voice dynamics. Ellie crunches those data in an attempt to gauge emotional state.

The patterns can be subtle, says Louis-Philippe Morency, a computer scientist at USC who has led the development of the AI that underlies Ellie. For instance, he says, a person's voice may shift “from breathy to tense.” The team devised algorithms to match patterns to a likely emotional state. It's imperfect, he says, but “our experiments showed strong correlation with [a patient's] psychological distress level.”

Other patterns unfold over multiple sessions. For instance, the team's work with U.S. veterans suffering from post-traumatic stress disorder (PTSD) revealed that “smile dynamics” are a strong predictor of depression. The pattern is so subtle that it took a computer to detect it: Smiling frequency remained the same in depressed patients, on average, but the duration and intensity of their smiles was reduced.
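A rough way to see how such a measure could be computed is to segment a per-frame smile-intensity trace into episodes and compare their count, duration, and strength. This is a hypothetical sketch; the threshold, frame rate, and feature definitions below are assumptions, not the USC team's actual method:

```python
def smile_stats(intensities, fps=30, threshold=0.5):
    """From a per-frame smile-intensity trace, return
    (episode_count, mean_duration_seconds, mean_peak_intensity),
    where a smile episode is a run of frames at or above `threshold`."""
    episodes, run = [], []
    for x in intensities:
        if x >= threshold:
            run.append(x)          # still inside a smile episode
        elif run:
            episodes.append(run)   # episode just ended
            run = []
    if run:
        episodes.append(run)       # trace ended mid-smile
    if not episodes:
        return 0, 0.0, 0.0
    mean_duration = sum(len(e) for e in episodes) / len(episodes) / fps
    mean_peak = sum(max(e) for e in episodes) / len(episodes)
    return len(episodes), mean_duration, mean_peak
```

On two traces with the same number of smiles, the function returns equal counts but lower mean duration and peak intensity for the briefer, weaker smiles, which is the kind of pattern described above: frequency unchanged, dynamics reduced.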

Even if Ellie were to achieve perfect entrainment, Rizzo says, it “is really just an enhanced ELIZA.” The AI under the hood can only sustain about a 20-minute conversation before the spell breaks, which limits the system's usefulness for diagnosis and treatment of most psychological problems. Without sophisticated natural language processing and semantic knowledge, Ellie will never fool people into believing that they are talking to a human. But that's okay, Rizzo says: Becoming too humanlike might backfire. One counterintuitive finding from Rizzo's lab came from telling some patients that Ellie is a puppet controlled by a human while telling others she is fully autonomous. The patients told there was a puppeteer were less engaged and less willing to open up during therapy.

That's no surprise to AI researchers like Winograd. “This goes right back to ELIZA,” he says. “If you don't feel judged, you open up.”

Ethical and privacy issues may loom if AI therapy goes mainstream. Winograd worries that online services may not be forthcoming about whether there is a human in the loop. “There is a place for deceiving people for their own good, such as using placebos in medicine,” he says. But when it comes to AI psychology, “you have to make it clear to people that they are talking to a machine and not a human.”

If patients readily open up to a machine, will clinicians be needed at all? Rizzo is adamant that a human must always be involved because machines cannot genuinely empathize with patients. And Ellie, he points out, has a long way to go before being ready for prime time: The program does not yet have the ability to learn from individual patients. Rizzo envisions AI systems as a way to gather baseline data, providing psychologists with the equivalent of a standard battery of blood tests. “The goal isn't to replace people,” he says, “but to create tools for human caregivers.”

Helgadottir has a bolder vision. Although computers are not going to replace therapists anytime soon, she says, “I do believe that in some circumstances computerized therapy can be successful with no human intervention … in many ways people are not well suited to be therapists.” A computer may be more probing and objective.

Sverrisdottir's experience suggests that CBTpsych, at least, can make a difference. Under the program's tutelage, she says, “very slowly, I started to analyze myself when I'm amongst other people.” She identified a pattern of “negative thoughts about people judging me.”

She might have got there with a human therapist, she says. But in the years since she first started talking to a computer about the trouble swirling in her mind, Sverrisdottir says, “I have been able to change it.”

Correction (23 July 2015): This article has been corrected so that it accurately describes a stressor experienced by Yrsa Sverrisdottir.

## Artificial Intelligence

# Fears of an AI pioneer

By John Bohannon

Stuart Russell argues that AI is as dangerous as nuclear weapons.

From the enraged robots in the 1920 play R.U.R. to the homicidal computer HAL 9000 in 2001: A Space Odyssey, science fiction writers have embraced the dark side of artificial intelligence (AI) ever since the concept entered our collective imagination. Sluggish progress in AI research, especially during the “AI winter” of the 1970s and 1980s, made such worries seem far-fetched. But recent breakthroughs in machine learning and vast improvements in computational power have brought a flood of research funding—and fresh concerns about where AI may lead us.

One researcher now speaking up is Stuart Russell, a computer scientist at the University of California, Berkeley, who with Peter Norvig, director of research at Google, wrote the premier AI textbook, Artificial Intelligence: A Modern Approach, now in its third edition. Last year, Russell joined the Centre for the Study of Existential Risk at Cambridge University in the United Kingdom as an AI expert focusing on “risks that could lead to human extinction.” Among his chief concerns, which he aired at an April meeting in Geneva, Switzerland, run by the United Nations, is the danger of putting military drones and weaponry under the full control of AI systems. This interview has been edited for clarity and brevity.

Q: What do you see as a likely path from AI to disaster?

A: The basic scenario is explicit or implicit value misalignment: AI systems [that are] given objectives that don't take into account all the elements that humans care about. The routes could be varied and complex—corporations seeking a supertechnological advantage, countries trying to build [AI systems] before their enemies, or a slow-boiled frog kind of evolution leading to dependency and enfeeblement not unlike E. M. Forster's The Machine Stops.

Q: You've grappled with this issue for a long time.

A: My textbook has a section “What If We Do Succeed?” devoted to the question of whether human-level AI or superintelligent systems would be a good idea. [More recent causes for concern are] the rapid developments in AI capabilities such as the legged locomotion of Big Dog [the autonomous robot created by Boston Dynamics, recently acquired by Google] and progress in computer vision.

Q: What needs to be done to prevent an AI catastrophe?

A: First, research into the precise nature of the potential risk and development of technical approaches to eliminate the risk. Second, modification of the goals of AI and the training of students so that alignment of AI systems with human objectives is central to the field, just as containment is central to the goals of fusion research.

Q: But by the time we were developing nuclear fusion, we already had atomic bombs, whereas the AI threat seems speculative. Are we really at the “fusion” stage of AI research?

A: Here's what Leo Szilard wrote in 1939 after demonstrating a [nuclear] chain reaction: “We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief.” To those who say, well, we may never get to human-level or superintelligent AI, I would reply: It's like driving straight toward a cliff and saying, “Let's hope I run out of gas soon!”

Q: The intention with fission was to create a weapon. The intention with AI is to create a tool: intelligence on tap. Does that explain the reluctance to regulate AI?

A: From the beginning, the primary interest in nuclear technology was the “inexhaustible supply of energy.” The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks. In neither case will anyone regulate the mathematics. The regulation of nuclear weapons deals with objects and materials, whereas with AI it will be a bewildering variety of software that we cannot yet describe. I'm not aware of any large movement calling for regulation either inside or outside AI, because we don't know how to write such regulation.

Q: Should we start tracking AI research as we track fissile material? Who should do the policing?

A: I think the right approach is to build the issue directly into how practitioners define what they do. No one in civil engineering talks about “building bridges that don't fall down.” They just call it “building bridges.” Essentially all fusion researchers work on containment as a matter of course; uncontained fusion reactions just aren't useful. Right now we have to say “AI that is probably beneficial,” but eventually that will just be called “AI.” [We must] redirect the field away from its current goal of building pure intelligence for its own sake, regardless of the associated objectives and their consequences.