News this Week

Science  10 Oct 2014:
Vol. 346, Issue 6206, pp. 146


  1. This week's section

    Ancient cave art in Indonesia

    These hand stencils in an Indonesian cave may be among the oldest rock art in the world.


    At least 40,000 years ago, prehistoric artists were stenciling their hands and painting images of primitive pigs on the walls of a cave on the Indonesian island of Sulawesi, according to a study published in Nature this week. The new dates came from the decay of uranium in the mineral crusts that formed over time atop the paintings in seven caves. If the dates are accurate, this would be some of the oldest rock art in the world, challenging the view that Europeans were the first to paint figurative rock art. Eerily similar stencils of hands and paintings of animals in France date to almost the same time and have long been thought to be the first examples of such sophisticated figurative art. The finding suggests that humans might have created rock art independently on both continents at the same time—or that modern humans brought this sensibility with them as they swept out of Africa and spread around the globe in the past 60,000 years.

    Overheated oceans

    Each globe shows cumulative ocean warming (in red) for successive decades from 1955 through 2011.


    The oceans store more than 90% of the warming caused by greenhouse gases. But a new study argues scientists may have underestimated how much heat the upper oceans have actually absorbed over the past 40 years, due to the lack of ocean data in the Southern Hemisphere. The team compared a new analysis of heat content in the region, inferred from satellite data and models, to measurements from buoys and other devices in the water—and found a large inconsistency. Past global estimates of long-term warming could be 25% or more too low, they report this week in Nature Climate Change. The findings also suggest the oceans may be considerably more sensitive to greenhouse warming than thought.

    “Just because someone says something confidently doesn't mean it's true.”

    Psychologist Elizabeth Loftus of the University of California, Irvine, on a National Research Council report calling for a more scientific approach to eyewitness identifications of suspects in a lineup.

    Around the world


    Stem cell trial is off

    The Italian government won't proceed with plans for a human study of a stem cell treatment that has led to fierce debates over the past 3 years. On 3 October, an expert panel appointed by Health Minister Beatrice Lorenzin concluded that the treatment, developed by the Stamina Foundation in Turin and widely criticized by scientists, hasn't been shown to be safe and should not undergo the formal clinical trial ordered by law in 2013 (Science, 31 May 2013, p. 1028). The panel's verdict led Lorenzin to scrap the study immediately. “The decision has left me amazed,” says Stamina Foundation President Davide Vannoni, who says he will appeal the verdict at a court in Rome.

    Estevan, Canada

    First commercial CCS plant opens

    A Saskatchewan-based company has opened the first commercial-scale coal-fired power plant to include carbon capture and storage technology. SaskPower's CA$1.4 billion Boundary Dam project, hailed by proponents of “clean coal” technology, went online 2 October. The company plans to capture and sell as much as 90% of its emissions, amounting to 1 million tonnes of CO2 each year, to oil company Cenovus Energy. Cenovus will then transport the compressed gas via pipeline to an enhanced oil recovery site 66 kilometers away.

    Mauna Kea, Hawaii

    Construction on TMT begins

    With the untying of the maile lei and the turning of dirt with the O'o stick, construction of the $1.4 billion Thirty Meter Telescope (TMT) officially began this week on the sacred summit. TMT will comprise a primary mirror 30 meters across made up of 492 hexagonal segments, and two smaller mirrors; together, they will give about 10 times better resolution than the Hubble Space Telescope. Backed by an international consortium from the United States, Canada, Japan, China, and India, TMT is the third member of the extremely large telescope club; the 25-meter Giant Magellan Telescope and the 40-meter European Extremely Large Telescope are under construction in Chile. All three are scheduled to collect first light in 2022.

    [10 October 2014: At press time, the TMT groundbreaking ceremony looked like it would continue as planned, but several dozen protestors disrupted the event on 7 October, and it was canceled. It is not yet clear whether officials will reschedule the event.]

    Mexico City

    Students fight degree downgrade

    After days of protests, the Mexican government has agreed to reverse a controversial change to the degrees granted by the country's National Polytechnic Institute (IPN). On 24 September, IPN released new bylaws that downgraded some of its professional engineering degrees to technical degrees. Many students and professors felt the move was kowtowing to businesses that employ IPN graduates, because technicians cannot command the higher salaries engineers earn. On 3 October, interior minister Miguel Ángel Osorio Chong, appearing before thousands of student protesters, agreed to cancel the bylaw changes. “Like you, we're invested in making sure IPN continues to offer a quality education,” he said. IPN's director and secretary-general have both since resigned.


    Drug trial data to be shared

    After an 18-month saga, the European Medicines Agency on 2 October approved the details of a new system opening clinical trial data to public scrutiny. Proponents of open sharing praised the decision, which will allow scientists to download and reanalyze data. Earlier proposals would have allowed data to be viewed only on screen (Science, 23 May, p. 784). But campaigners still worry that information could be redacted before the reports are shared. To take effect in 2016, the new rules will apply to data submitted after 1 January 2015 as part of applications to market drugs. The policy will serve as a bridge until the rollout of a revamped E.U. clinical trial regulation, which will include new provisions for the release of clinical trial results.


    Three Q's


    Hong Kong is in the throes of a confrontation between the government and a prodemocracy movement over proposed electoral restrictions. University students are playing a leading role, boycotting classes to demonstrate. Peter Mathieson, a kidney researcher and the University of Hong Kong's vice chancellor, discussed the situation.

    Q: How is your university being affected?

    A: The students have become very significant figures in the deliberations with the government. I imagine this is going to have an effect for years to come in terms of student activism.

    Q: If democratic principles are compromised, would that affect your ability to attract faculty and students from abroad?

    A: It makes me concerned that people might be put off in the short term by a feeling that Hong Kong is now a place of great uncertainty.

    Q: Do you worry that as mainland China's influence grows in Hong Kong, academic freedom might be curtailed?

    A: At the moment, there is manifestly freedom of speech and freedom of association being practiced in the streets of Hong Kong. A critical part of my role is to do everything that I can to defend academic freedom and freedom of speech in the future.

  2. Meet your new co-worker

    1. John Bohannon*

    At the vanguard of human-robot interactions is Baxter, a bot quick on the uptake that even knows how to cheat.

    “We are pleased to confirm receipt of your order for fifteen thousand robots,” says the industrialist Harry Domin in the opening scene of R.U.R., a 1920 play written by Karel Čapek. Čapek coined the word “robot” from robota, which in Czech means forced labor or drudgery. He imagined a future in which the global economy runs on humanoid automatons. Things get stirred up when an activist named Helena appears at Domin's company, demanding equal rights for robots. He dismisses her concerns. Once in a while a robot working on an assembly line goes crazy, “smashing up all the moulds and models,” admits one of the company's human employees, but Domin assures Helena that the robots are soulless and have no desire for rights or freedoms.

    Nearly a century later, at least some of Čapek's vision is starting to come true. Machines are now capable of carrying out certain tasks on an assembly line—such as welding car frames or spray-painting parts—far more efficiently than humans. And they are now developing some of the dexterity and awareness needed to serve as pets and helpers in homes, as caregivers in hospitals, and as co-workers in factories (see p. 188). But for people to truly embrace bots in their daily lives, the machines will need social smarts. “These are exciting times for human-robot interaction,” says Brian Scassellati, a roboticist at Yale University.

    Baxter has a variety of “hands” adapted to specific tasks.


    Like many others in his field, Scassellati has adopted a robot, called Baxter by its manufacturer, as his research subject. The company behind Baxter, Rethink Robotics, is selling it to manufacturing companies. The hope is that assembly lines are poised for disruption by a versatile robot that can take over repetitive, mind-numbing tasks. In the meantime, Baxter is serving as a test subject for robot psychology. In the past, experiments in human-robot interaction were limited to custom-built bots that required careful supervision to avoid harming human or robot. Baxter was built to interact safely with humans. “I can even let my high school interns work with it unsupervised,” Scassellati says. And crucially, Baxter, by design, learns from people.

    WHEN I ASK CHRIS HARBERT if I can play with one of the $25,000 humanoids in the Rethink Robotics offices here, he doesn't flinch. “Go for it,” he says, taking a step back from Baxter. The big cartoon eyes on the robot's face—a computer screen mounted on a swivel neck—stare down at an unmoving conveyor belt cluttered with objects. Its hulking red arms hang open in a shrug, as if to say, “I'm ready for whatever.”

    Nearby is a box of white plastic widgets that look like giant nipples. The goal, I decide, is for Baxter to pick them up and place them on the conveyor belt. Baxter learns by example, so I grab one of the robot's arms and pull it into position over one of the widgets. The plastic-cased steel limb is as thick as a human leg, but moving it requires surprisingly little strength. The robot is paying attention: Its head turns, and its cartoon eyes track my hand. I lower Baxter's hand into position and press a button on the arm, making it seize the widget with its pincer grabber, which is reminiscent of the robot from the 1960s TV show Lost in Space.

    With a gentle tug, I guide the arm to the far side of the conveyor belt, lower the hand, press the button again, and Baxter drops the widget. Scaling up this task takes me a few more steps, with minimal guidance from Harbert. Holding Baxter's hand, I trace an imaginary square over the widgets and then over the destination, defining the pickup and drop-off areas. And then, with an actual nod of its computer-screen head, Baxter gets to work, mimicking my movements and—except for one dropped nipple—efficiently transferring them from box to conveyor belt.
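
    The teach-by-demonstration loop described above (guide the arm to a pose, press the button to grip or release, then let the robot replay the motion) can be sketched in a few lines. This is an illustrative model only; the class and method names are invented for the example, not Rethink's actual API.

```python
# Hypothetical sketch of kinesthetic teaching: record arm poses while
# a human guides the robot, then replay them. All names are invented.

class TeachableArm:
    def __init__(self):
        self.waypoints = []          # (pose, gripper_action) pairs

    def record(self, pose, gripper_action=None):
        """Store the pose the human dragged the arm to, plus any
        grip/release triggered by the button press."""
        self.waypoints.append((pose, gripper_action))

    def replay(self):
        """Mimic the demonstration: visit each recorded pose in order
        and repeat the recorded gripper actions."""
        actions = []
        for pose, gripper_action in self.waypoints:
            actions.append(("move_to", pose))
            if gripper_action:
                actions.append((gripper_action, pose))
        return actions

# A demonstration like the one above: grip over the widget box,
# then release over the far side of the conveyor belt.
arm = TeachableArm()
arm.record((0.2, 0.5, 0.1), "grip")
arm.record((0.6, 0.5, 0.1), "release")
print(arm.replay())
```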

    Humans can infer Baxter's intentions from “emotions” expressed on the bot's flat panel display “face.”


    As I will soon hear from Rodney Brooks, the legendary roboticist who founded the company, getting a robot even to this level of success on tasks like these normally takes an engineer weeks or even months of work. I did it in 5 minutes without an engineering degree. If I can teach Baxter, surely the average factory manager can, too. But would the average factory employee be comfortable with a robot co-worker?

    MAKING A ROBOT that collaborates with people is at least as much about human psychology as it is about robot engineering.

    Just the appearance of a robot can be a barrier—especially if it falls within the uncanny valley, a term introduced by Japanese roboticist Masahiro Mori in 1970. A zone of discomfort lies somewhere between robots that are obviously nonhuman—something cute like the late 1990s toy Furby—and robots like the replicants from the 1982 film Blade Runner, so humanlike that you can't perceive the difference. Robots that look almost-but-not-quite human creep us out, and no one knows exactly why; one theory holds that they remind us of corpses.

    A machine's movements are another barrier. The challenge is to create robots that are less robotic, says Guy Hoffman, a roboticist at the Interdisciplinary Center Herzliya in Israel. Just like the “user interface” that enabled the personal computer revolution, a movement-based “robot user interface” is needed, he says. In some circumscribed domains, progress has been impressive. For example, Hoffman created the world's first robot that can play improvisational jazz in an ensemble with humans. But to live and work side by side with humans, he says, a robot must understand your intentions and act appropriately, “looking you in the eyes, touching your shoulder, coming closer or shying away.”

    Baxter has none of those social graces. But the scientists using the robot in their labs are trying to create a road map to get there. Scassellati is working with a simple system. Baxter has only six facial expressions, ranging from sleepy and content to confused and surprised, each of which signals a different internal state. “The important thing is that you should be able to work with Baxter with as little special training as possible,” says Miri Bauman, the designer of Baxter's user interface.

    One ongoing experiment has generated surprising results. “We've been having Baxter play checkers with humans,” Scassellati says. At a certain point in the game, Baxter cheats. The social dynamics change abruptly. “People interact with the robot like a person,” he says. Without realizing that they're doing it, the subjects start talking to the robot, making eye contact, and maintaining human-appropriate social distance. “It's a powerful effect,” he says. “If you want people to treat a robot like a person, have the robot cheat.” Roboticists hope that more sociable tricks like joking and sarcasm will achieve the same effect.

    With some human subjects, no subterfuge is necessary to elicit profound effects. For example, Scassellati has been working with a 12-year-old boy with autism spectrum disorder. “He is high-functioning, but if you met him on the street, you'd immediately recognize him as autistic,” he says. The boy avoids eye contact, has difficulty communicating, and makes repetitive movements. “But we put him in a room with the robot and he changes.” The boy gazes right into Baxter's eyes, using his gaze to signal what he's thinking. “We take the robot away and the effect lasts maybe 15 minutes and it's gone,” Scassellati says. “We don't understand what's going on, but dozens of labs have replicated this effect.”

    Scassellati is also using Baxter to study the other side of the equation: getting robots to read our minds. In one experiment, Baxter works with people at a tabletop to assemble Ikea furniture. The robot knows the blueprint, but it also must anticipate what the human will do next. Fellow humans find it effortless to guess why you're looking for all the pieces of a particular shape, but that represents a steep challenge for Baxter. It requires what psychologists call the theory of mind, which has yet to be captured in computer code. “This is the frontier of human-robot interaction,” Scassellati says. At this point, humans are better off muddling through Ikea furniture assembly on their own.

    “I TAUGHT BAXTER a task,” I tell Brooks. “He's impressive.”

    “It's an it. Or if anything, a she,” Brooks says in his broad Australian accent. The original meaning of the word “baxter” is a female baker, he explains. “But everyone makes this mistake.” The misattribution isn't bad news—it indicates that Baxter's “robot user interface” is working—but Brooks worries that “robots shouldn't make promises they can't keep.” Baxter isn't designed to have much of a personality, let alone a gender.

    There is one promise that Baxter is programmed to keep, at all costs: It will do its utmost not to hurt you. (Unlike Čapek's robots, which rise up and exterminate humanity.) “Robots are extremely dangerous,” explains Brooks, whose lab at the Massachusetts Institute of Technology in nearby Cambridge has churned out a large proportion of today's leading roboticists, including Scassellati. The industrial robots on assembly lines today simply aren't safe to be around. “The robots work in their own room, people work in another, and the two don't mix,” he says. “Baxter is meant to bridge that gap.”

    Brooks's team has built several layers of safety into Baxter. It constantly searches for the presence of humans with a 360° sonar system, halting if anything gets too close. If a human insists on getting in the way of the arms, Baxter's plastic armor and control system—the key is a special spring called a series elastic actuator—soften the blow. “When I'm giving demonstrations I like to stick my head in the way to show that it doesn't hurt,” Brooks says. He knows that if Baxter were to cause a single serious injury, its days would be numbered.
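
    The outermost of those safety layers, the sonar halt, reduces to a simple guard in the control loop. The sketch below is purely illustrative; the threshold, units, and function names are assumptions, not Rethink's real control code.

```python
# Illustrative proximity guard like the sonar halt described above:
# if the 360-degree sonar ring reports anything closer than a halt
# distance, freeze all motion; otherwise pass the command through.
# Threshold and names are invented for the example.

def safety_filter(sonar_ranges_m, commanded_speed, halt_distance=0.5):
    """Return 0.0 (halt) if any sonar range is inside the halt
    distance; otherwise return the commanded speed unchanged."""
    if min(sonar_ranges_m) < halt_distance:
        return 0.0          # freeze until the person moves away
    return commanded_speed

print(safety_filter([2.1, 1.7, 0.3, 3.0], commanded_speed=0.8))  # 0.0
print(safety_filter([2.1, 1.7, 1.3, 3.0], commanded_speed=0.8))  # 0.8
```

    In the real robot this guard sits above a compliant layer (the series elastic actuators), so even a missed detection ends in a soft collision rather than a hard one.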

    And what about harming livelihoods? “It's not about taking people's jobs,” Brooks says before I even ask the question. Rather than replacing workers, he says, Baxter will free people to do higher level tasks. “Robots have a long way to go before they can completely replace someone working in a factory.” For a start, he says, robots lack “dexterous manipulation.” To demonstrate, Brooks stands up and pulls his keys out of his pocket. “What I just did is nearly impossible for a robot,” he says. Another impediment is robot vision (see p. 186). “And then there's the problem of mobility,” Brooks says. “Stairs are a nightmare for robots.”

    After chatting with Brooks, I walk down the hall to see how Baxter is doing. At the end of my robot tutorial, I'd asked Harbert to instruct Baxter to do something that would drive a human insane: After placing the widgets on the conveyor belt, put them back in the box, and repeat that in an infinite loop. I find Baxter toiling away, unsupervised, delicately moving widgets onto the belt, one at a time, and then putting them back. Like Sisyphus, but without the suffering.

  3. Minds of their own

    1. Robert F. Service*

    Novel neuromorphic chips and software should provide robots with unrivaled perceptual skills.

    EyeRover negotiates obstacles, guided by software that emulates the brain.


    Like a miniature Segway with eyes, a robot built from 3D printed plastic does laps around a cubicle here at Brain Corporation, having learned in a matter of minutes to avoid bumping into walls as it roams. As eyeRover scoots away, Peter O'Connor, a staff scientist at the robotics software company, steps into its route. Using a remote control unit, he briefly takes control of the foot-tall robot and steers it around his feet, much as a parent might help a toddler learn to avoid a coffee table. On the next lap, eyeRover whisks around the obstacle all by itself.

    EyeRover may look like a toy, but it's packed with some of the most advanced robotic technology ever devised, including a prototype computing platform designed to emulate the human brain. Unlike conventional computer chips and software, which execute a linear sequence of tasks, this new approach—called neuromorphic computing—carries out processing and memory tasks simultaneously, just as our brains do for complex tasks such as vision and hearing.

    Many researchers believe that neuromorphic computing is at the threshold of endowing robots with perceptual skills they've never had before, giving them an unprecedented level of autonomy. It “has the potential to be a real revolution” in robotics, says Michele Rucci, a robotics vision expert at Boston University. At the same time, robots could provide the perfect demonstration of the power of neuromorphic computing, helping persuade scientists in fields ranging from computer vision to environmental data analysis to embrace the approach. “We think robotics is the killer app for neuromorphic computing,” says Todd Hylton, Brain Corporation's senior vice president for strategy.

    IF YOUR EYES ROLL at yet another claim that we are on the cusp of a golden age of robotics, you're forgiven. The dream of autonomous robots predates even the dawn of computing. But it has never been realized because of the difficulty of programming robots to learn and adapt. Decades of work in artificial intelligence, computer architectures, and Bayesian statistics—a technique for weighing the likelihood of different outcomes to unfolding events—have failed to produce robots capable of managing more than a handful of mundane tasks in everyday environments.

    Building robots that can make sense of their surroundings is “a particularly tough computational problem,” Hylton says. Take vision, the primary way most of us analyze our environment. “Robotic vision is far behind what we promised 20 to 30 years ago,” Rucci says.

    Teams have come up with various strategies for enabling robots to process and react to what they see (see p. 186). Rucci, for one, has given robots the same type of tiny, involuntary eye movements that humans perform; by providing our brains with constantly shifting images, they help us judge depth and track objects. Still, Rucci's best flitting-eye bots are hampered by their brains, which process information 10 times slower than we do.

    The human eye works so well, in part, because of the sheer complexity of the computer inside our skulls. Our brains contain an estimated 100 billion neurons connected by 100 trillion synapses, and they can distribute different perceptual tasks to different groups of neurons and different brain regions, explains Brain Corporation CEO Eugene Izhikevich. In vision, for example, separate groups of neurons respond to vertical and horizontal features and pass those signals up the chain to other neurons that integrate the signals.

    Brain Corporation's neuromorphic software, called BrainOS, mimics that approach. Like our brains, the operating system segregates visual functions into different networks, analogous to the retina, the brain's lateral geniculate nucleus (a relay station for visual signals), and the layers of the visual cortex. Integrating the output of these networks results in a robotic visual system able to focus automatically on salient features that stand out from background, such as a brightly colored ball rolling across a gray carpet or the shoes of a person crossing a robot's path.
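
    The salience idea, picking out features that stand out from their background, can be illustrated with a toy center-surround computation. BrainOS's internals are not public, so this is a generic sketch of the concept, not its actual algorithm.

```python
import numpy as np

def salience_map(image, surround=5):
    """Center-surround salience: score each pixel by how much it
    differs from the mean of its local neighborhood. Distinctive
    features (the bright ball on the gray carpet) score high."""
    h, w = image.shape
    sal = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - surround), min(h, y + surround + 1)
            x0, x1 = max(0, x - surround), min(w, x + surround + 1)
            sal[y, x] = abs(image[y, x] - image[y0:y1, x0:x1].mean())
    return sal

# A uniform gray scene with one bright "ball": salience peaks there.
scene = np.full((20, 20), 0.5)
scene[10, 10] = 1.0
peak = np.unravel_index(np.argmax(salience_map(scene)), scene.shape)
print(peak)  # the bright pixel stands out from its surround
```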

    Another innovation of BrainOS is that it doesn't require a supercomputer to run. A previous neuromorphic software program, developed by IBM in 2012, did run on a supercomputer, using it to mimic the firing patterns of an animal brain containing 500 billion neurons and 100 trillion synapses. But even with an array of 1.5 million computer chips, the program couldn't produce those firing patterns in anywhere close to real time.

    BrainOS, on the other hand, is integrated into a coaster-sized computer circuit board called bStem (short for “brainstem”) that's powered by a Snapdragon mobile phone processor from Qualcomm. Unlike conventional computer chips, mobile phone chips minimize power use by distributing tasks, such as memory, graphics processing, and communication, to specialized subprocessors. That “distributed” architecture dovetails well with the neuromorphic approach.

    A truly neuromorphic architecture, with memory and processing elements distributed throughout the chip, may be coming soon. Qualcomm researchers who sit just upstairs from Brain Corporation have built a neuromorphic chip they call their Zeroth processor—named after science fiction writer Isaac Asimov's Zeroth Law of Robotics, which states that robots are not allowed to harm humans. M. Anthony Lewis, who heads neuromorphic computing efforts at Qualcomm, says the company is nearing commercialization of the processor, which they plan to integrate into their mobile chips to improve handheld devices' audio and visual processing skills.

    Qualcomm has plenty of competition. Just up Interstate 5, researchers at HRL Laboratories LLC in Malibu are working on their own neuromorphic chip, which they've recently shown can process visual data fast enough to pilot a palm-sized helicopter inside an office building and recognize and explore rooms it has never seen before. And in August, a team led by researchers at IBM's Almaden Research Center in San Jose reported a titanic neuromorphic chip, dubbed TrueNorth, that contains 5.4 billion transistors wired to behave like 1 million neurons connected by 256 million synapses (Science, 8 August, p. 614).

    These neuromorphic chips are inspired not only by the architecture of the brain but also by its energy efficiency—the brain, which bests supercomputers on many tasks, uses roughly 20 watts of power. Whereas conventional computer circuits regularly bleed electricity even when they're not sending a signal, neuromorphic circuits use power only when active. And by distributing memory and processing modules throughout the chip, they minimize the power required to send data back and forth during computations. TrueNorth, for example, uses only 1/1000 the energy of a conventional chip to carry out the equivalent visual perception tasks.
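
    The "power only when active" property comes from event-driven, spiking designs. A leaky integrate-and-fire neuron, the standard textbook model behind such circuits, shows why: between threshold crossings nothing is transmitted downstream. The parameters below are arbitrary, chosen only to make the behavior visible.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: membrane potential accumulates
    input and decays ('leaks') each step; a spike event is emitted
    only when the potential crosses threshold. Between spikes no
    signal travels, which is where event-driven hardware saves
    energy relative to clocked circuits."""
    v = 0.0
    spikes = []
    for t, x in enumerate(inputs):
        v = v * leak + x
        if v >= threshold:
            spikes.append(t)     # event: only now is anything sent
            v = 0.0              # reset after firing
    return spikes

# Sparse input: silence costs nothing; bursts produce events.
print(lif_neuron([0, 0, 0.6, 0.6, 0, 0, 0, 1.2]))  # [3, 7]
```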

    Which real-world applications for neuromorphic robots will emerge first? Last fall, Qualcomm engineers demonstrated that a meter-high neuromorphic robot called Dragon could adeptly clean up scattered toys, sorting blocks into one bin and stuffed animals into another. For his part, Izhikevich believes that neuromorphic hardware and software will at long last give robots enough perceptual skills to be valuable home companions: able to take out the trash, clean the house, and pick vegetables from a garden. Neuromorphically heightened perception, adds IBM's neuromorphic team leader Dharmendra Modha, will give robots the wherewithal to navigate hazardous environments, such as a damaged nuclear reactor, without guidance from a human operator, beaming back data on radiation and other conditions in real time.

    The energy efficiency of neuromorphic computing could open the way to new functions, Lewis says. Efficient chips can run complex—and normally power-hungry—algorithms that enable robots to learn. The Internet, in turn, will allow far-flung robots to share those lessons and skills. A robot that learns how to pick a strawberry without crushing it, for instance, could uplink that skill for the benefit of its kind around the globe. That means neuromorphic computing could offer robots something far more profound than enhanced perceptual skills. When humans collectively pass along life lessons, we call that culture.

    * In San Diego, California

  4. Getting a feel for the world

    1. Hassan DuRant,
    2. Jia You

    Robots fumble to make sense of their surroundings.

    To be fully autonomous, robots must be able to make sense of their surroundings. Forget joie-de-vivre sensations like Robert Browning's “yellow half-moon large and low” over a “warm sea-scented beach.” Designing mundane, practical sensors for robots has proven to be a formidable challenge.

    PHOTOS: (CLOCKWISE FROM TOP LEFT) Richard Greenhill and Hugo Elias/Shadow Robot Company; EPA/Kiyoshi Ota/Corbis; © Andrew Innerarity/Reuters/Corbis


    In 2003, a stroke left Henry Evans mute and quadriplegic. Last year, Georgia Institute of Technology researcher Charlie Kemp developed a robotic arm that Evans directs by moving his head. It uses tactile “skin” to feel its way without damaging the environment. Software tunes the robot's sense of touch, allowing it to gently make contact with an object and intelligently maneuver within clutter. Kemp says that giving robots the sensation of touch “humanizes” them.


    As we breathe, molecules of all sorts are drawn into the nasal passage, tickling receptors in our mucous membrane that match an odor to a learned scent. A Swedish creation called the Gasbot has the robot version of a nose, though it knows just one smell. Equipped with a laser beam tuned to the wavelength of light commonly absorbed by methane, Gasbot can detect gas leaks at concentrations as low as 5 parts per million.


    Alcohol doesn't give robots a thrill, but one in Spain has an electronic tongue that differentiates between six types of beer. A Japanese team has built a bot capable of distinguishing wines by bathing them in infrared light and analyzing the absorption patterns. The same robot also “tasted” human body odor and pronounced it to be closest to bacon.


    Roboticists once thought that of all the senses, sight would be the easiest to recreate. How wrong they were. Image acquisition is not the issue: Robots can “see” with far more acuity than the 20/20 standard by which we measure our fluid-filled eyeballs. The challenge is getting machines to make sense of what they are seeing. Two new powerful approaches—deep learning (p. 186) and neuromorphic computing, which emulates the human brain (p. 182)—could finally equip robots with the processing firepower to size things up.


    Imagine yourself at a noisy cocktail party, trying to tune in to the guy in the corner inveighing against an inevitable robot uprising. That's a cinch for our brains, which are wired to filter signal from noise. Japanese roboticists have hit upon a sound localization algorithm that disentangles up to four simultaneous sound sources. Our ears also filter out many sounds inside the body, such as breathing and heartbeat. Robot ears have to contend with the whine and whir of machinery.
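
    A much simpler cousin of that localization problem, finding the bearing of a single source from the difference in arrival time at two microphones, can be solved with cross-correlation. This is not the Japanese team's multi-source algorithm, just the textbook starting point.

```python
import numpy as np

def time_delay(mic_a, mic_b):
    """Estimate the sample delay between two microphone signals by
    cross-correlation; combined with the microphone spacing and the
    speed of sound, the delay gives the source's bearing."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    return np.argmax(corr) - (len(mic_b) - 1)

# The same click arrives 3 samples later at microphone B.
signal = np.zeros(64)
signal[20] = 1.0
delayed = np.roll(signal, 3)
print(time_delay(signal, delayed))  # -3: mic A heard it first
```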


    Humans effortlessly stay upright with continual, minute adjustments of our bodies. Researchers in Massachusetts have endowed Atlas, a humanoid robot, with a similar dynamic sense of balance. Moving its body with high-powered hydraulics, the robot can walk across unsteady debris and stay balanced on one leg when whacked with a 9-kilogram wrecking ball. But Atlas is still prone to falling, breaking its right ankle during a 2013 walking demonstration in Hong Kong.

    Correction (29 October 2014): The image credits have been added.

  5. Helping robots see the big picture

    1. John Bohannon*

    A computational approach called deep learning has transformed machine vision.

    If you want to see the state of the art in machine vision, says artificial intelligence researcher D. Scott Phoenix, “you should watch the YouTube video of the robot making a sandwich.” The robot in question is a boxy humanoid called PR2. It was built less than an hour away at Willow Garage in Menlo Park, California, one of the most influential robotics companies in the world. But Phoenix is being ironic. When PR2 finally manages to pick up a piece of bread, it drops the slice on the toaster; the bread caroms off, and a human rushes in to help. After stabbing a slice of salami with a fork, PR2 holds it in the air for what seems like an eternity. The sandwich does eventually get assembled, but it happens so slowly that the video is sped up 10-fold to make it watchable. And that was in an experimental kitchen in which “everything is carefully laid out,” Phoenix says. “There's exactly one plate. Exactly one knife.”


    Robots are clumsy because they struggle to make sense of all the data coming in from their cameras. “Vision is the biggest challenge,” Phoenix says. Depending on their angle of view, objects can appear to have millions of different shapes. Change the lighting and each of those millions multiplies again—and this is the simplest case. A cluttered scene with overlapping objects is a nightmare. Although machines easily surpass human ability for certain constrained visual tasks, such as identifying a face among thousands of passport photos, as soon as they venture out “in the wild,” as roboticists call the everyday human environment beyond the lab, machines flounder.

    Two years ago, a powerful new computational technique, called deep learning, took the field of machine vision by storm. Inspired by how the brain processes visual information, a computer first learns the difference between everyday objects by creating a recipe of visual features for each. Those visual recipes are now incorporated into smart phone apps, stationary computers, and robots including PR2, giving them the capability to recognize what is in their environment. But roboticists worry that deep learning can't give machines the other visual abilities needed to make sense of the world—they need to understand the 3D nature of the objects, and learn new ones quickly on the fly—and researchers are already looking beyond deep learning for the next big advance.

    PHOENIX CO-FOUNDED A STARTUP here called Vicarious, one of myriad companies trying to capture human sight in code, and their optimism is surging. Over the past 2 years, deep learning has propelled machine vision by leaps and bounds. Where computers once struggled to detect the presence of something like a dog in a photo, with the help of deep learning they can now not only recognize a dog but even discern its breed.

    “The theoretical side of deep learning was actually worked out decades ago,” says Yann LeCun, a machine vision researcher at New York University in New York City who was one of its pioneers. He traces back its inspiration to the research of David Hubel and Torsten Wiesel, who shared a Nobel Prize in 1981 for research on biological vision. Working mostly with anesthetized cats in the 1960s and 1970s, Hubel and Wiesel discovered a hierarchical system of neurons in the brain that creates representations of images, starting with simple elements like edges and building up to complex features such as individual human faces. Computer scientists set about trying to capture the essence of this biological system. After all, LeCun says, “the brain was the only fully functioning vision system known.”

    The deep learning architecture that emerged is called a deep convolutional neural network. Information flows between virtual neurons in a network. And like the real neurons in the brain's visual system, they are arranged in hierarchical layers that detect ever more complex features based on information from the previous layer. For example, the network would first break down a photo of a dog into edges between dark and light areas and then pass that information to the next layer for processing. By the time the last layer is reached, the system can apply a mathematical function to answer the question: Is this a dog or not a dog?
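    The layer-by-layer processing described above can be sketched in a few lines. This toy example is not any production network; the kernel values and the tiny image are illustrative. It shows the first step a convolutional layer performs: sliding a small edge-detecting kernel over an image to produce a feature map, which deeper layers would then build on.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel across the image, producing a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# First layer: detect dark-to-light (vertical edge) transitions.
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

# A toy 6x6 "photo": dark on the left half, light on the right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Apply the kernel, then a rectifying nonlinearity, as such networks do.
features = np.maximum(convolve2d(image, edge_kernel), 0)
print(features)  # strong responses only along the dark/light boundary
```

    A real deep convolutional network learns its kernel values from labeled data and stacks many such layers, but the mechanics of each layer are essentially this.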

    The problem was getting the dog-detecting function. “We just did not have the computers that we needed,” LeCun says, nor the data. A network needed to process millions of labeled images of a dog to learn what a generic dog looks like, and in the 1980s even supercomputers did not have the speed or memory to handle this training. So researchers turned away from deep learning. Machine vision improved only incrementally—until 2012.

    That year, a team led by computer scientist Geoffrey Hinton of the University of Toronto in Canada entered the ImageNet Challenge, an annual event in which competing computer programs must identify which objects—people, animals, vehicles—are present in thousands of photographs. Hinton's team used a deep convolutional neural network, trained on massive sets of labeled images. Unlike the computers of the 1980s, today's cheap computers have more than enough speed and memory for the calculations. Their system blew the competition out of the water.

    “That changed everything,” LeCun says. “The accuracy was so good that everyone in machine vision dropped what they were doing and switched to deep learning.” Since then, billions of dollars have flooded into deep learning research. Hinton now develops deep learning applications at Google, which is hoping to use the technique to create cars that can drive themselves. And at Facebook, where LeCun now heads the company's artificial intelligence efforts, a project called DeepFace aims to automatically identify any face in any photograph. (Whether the face's owner consents to being identified is another question.)

    The PR2 deftly navigates hallways, but it struggles with sandwich-making and other complex tasks.


    But amid the deep learning gold rush, some researchers are skeptical about the technique's prospects. Take PR2's epic struggle to make a sandwich. Deep learning enables the robot to recognize the objects around it—bread, salami, toaster—but it also needs to know exactly where those objects are in relation to its moving hand. Predicting where the bread will go when it drops requires physics. And what if the toaster is unplugged? The robot wouldn't have a clue why the bread wasn't toasting. “We have a long way to go before machines can see as well as humans,” says Tomaso Poggio, who studies both machine and biological vision at the Massachusetts Institute of Technology in Cambridge.

    The U.S. National Science Foundation (NSF) agrees. Poggio now heads an NSF-funded initiative called the Center for Brains, Minds and Machines. Most of the center's research is focused on understanding how human vision works and emulating it with computers. For example, Poggio says, “I can show a child a couple of examples of something and he will identify it again easily without having to train on millions of images.” He calls this trick object invariance—a representation of an object that allows humans to identify it in any setting, from any angle, in any lighting—and his research focuses on capturing it as a computer algorithm.

    Tech companies aren't waiting on the sidelines. Some are exploring new biologically inspired computer hardware (see p. 182). Phoenix and his colleagues at Vicarious are focusing on a software solution to visual intelligence. Last year, they announced that their algorithms had surpassed deep learning by cracking CAPTCHA, the visual puzzles made up of distorted letters that are used on websites to confound Web-crawling software. Vicarious has kept the details under tight wraps, but according to the company's co-founder, Dileep George, it is not based on deep learning at all. “We are working from how the human brain processes visual information.”

    When asked for something more tangible, George, like all entrepreneurs working furiously in secret, demurs. “It will be a few years before we pull it together,” he says. And what is the goal? “A robot with the visual abilities of a 3-year-old.” That would give robots the ability to do far more than make a sandwich.

    • * in San Francisco, California

  6. In our own image

    1. Dennis Normile*

    Will humans be more comfortable living with robots that look less like machines and more like pets—or ourselves?

    Hiroshi Ishiguro's strikingly humanlike Geminoids speak during a press preview at the Miraikan museum in Tokyo in June.


    The creators of Robovie, a boxy robot the size of a child, had a pretty good idea what would happen when they rolled it out as a conversation partner for the elderly at a Nara day care center in 2009. They expected the humanoid robot with its buglike eyes and mechanical voice would lift the seniors' spirits with cheery greetings, a sympathetic ear for their health woes, and encouraging words while they exercised. But a surprise came after the 14-week experiment ended: The seniors missed Robovie so much that they wanted to visit it in the lab.

    The seniors knew they were participating in an experiment, but they assumed Robovie was autonomous. In fact, a 29-year-old researcher was at work in a control room, steering the wheeled robot through the hallways, triggering its hand waves, and feeding Robovie its lines. The researchers, based at the Advanced Telecommunications Research Institute International (ATR) in Nara, were not engaging in deception for fun. Rather, says Hiroshi Ishiguro, an engineer who heads both the ATR team and a second robotics group at Osaka University, they were studying how humans will interact with the sophisticated robots of the future.

    For 2 decades, Ishiguro's teams have deployed various robots—some with vaguely human forms, others crafted to look indistinguishable from people—as customers in cafes, clerks in stores, guides in malls and museums, teachers in schools, and partners in recreational activities. The roboticists, who use both autonomous robots and ones under human remote control, have come to some startling conclusions. In some situations, people prefer to speak with an android instead of another person, and they feel that robots should be held accountable for mistakes and treated fairly. And as the seniors here showed, humans can quickly form deep emotional bonds with robots.

    Ishiguro's approach is “pretty brilliant,” says Kate Darling, who studies robot ethics at the Massachusetts Institute of Technology's (MIT's) Media Lab in Cambridge. Some find the implications of the work worrisome, however. If interactions with robots can substitute for interactions with other humans, says Peter Kahn, a psychologist who studies the relations between humans and technology at the University of Washington, Seattle, “we'll dumb ourselves down socially even as our technologies advance.” But Ishiguro and others believe robots are more likely to expand rather than narrow our horizons.

    The debate is no longer academic. Simpler robots are creeping into daily life, serving as pets, vacuuming houses, and comforting dementia patients. A new wave of more sophisticated social robots is about to hit the mass market. In the coming days, for example, semiconductor giant Intel is expected to start selling a $1600 bipedal robot kit called Jimmy, for which hobbyists can design a body and fabricate it on a 3D printer. Intel claims Jimmy will be able to fetch a beer from a fridge and play games with a kid.

    In February, a child-sized rolling bot called Pepper will go on the market in Japan. Able to recite stories to children and banter with adults, Pepper has microphones to track the direction of voices, an infrared sensor to measure distances, two cameras to recognize faces, touch sensors scattered over its body, nimble five-fingered hands, and Internet connectivity. And the $2000 price tag means “this is a robot for you,” said Masayoshi Son, CEO of technology conglomerate SoftBank in Tokyo, which developed Pepper, at its unveiling in June. He claimed that Pepper “marks a turning point for humankind.”

    LIKE THEIR MAKER, the Geminoid HI robots dress in black, wear tinted glasses, and are perpetually scowling. The androids bear an uncanny resemblance to Ishiguro, who has won renown among roboticists for his Geminoids. But the robot falls short of fully capturing Ishiguro's stern demeanor, as an experiment in Linz, Austria, showed. On different days, either Ishiguro or Geminoid HI-1 sat at a table in a cafe. Random passersby rated the android friendlier than they did the real Ishiguro.

    The latest Geminoids have 50 motors controlling facial expressions; head motions; and even subconscious movements like breathing, blinking, and shifting position. As they cannot walk, they are typically seated. And, as I learned when coming face to face with a female android, Geminoid F, in Ishiguro's Osaka lab, they can, for a brief unsettling moment, be taken for human—before their slightly unnatural movements and off-the-mark eye contact erase any doubts.

    It's out in the field, however, where Geminoids truly are conversation starters. In one experiment conducted over 2 weeks last fall, Ishiguro's group installed Geminoid F as a sales clerk in an Osaka department store. Operating autonomously, the android answered customer questions and made suggestions regarding a selection of $100 cashmere sweaters. Geminoid F handled 45 customers a day, versus 20 on average for the human sales clerks, in part because the android never took breaks and was a novelty. Some customers also sought out the android—apparently to avoid a human clerk's subtle pressure to make a purchase. “With the android it was easy to reject the [sales] offer,” Ishiguro says. (As a result, Geminoid F's sales success did not match that of the store's best human salespeople.) In an upcoming trial, Geminoid F will offer to help male customers choose gifts for wives and girlfriends.

    Dementia patients form emotional bonds with PARO, a robotic baby harp seal.


    Longer encounters can forge emotional bonds between human and machine. In the experiment at the elderly care center, some participants said it was more pleasant to converse with Robovie than with relatives. One remarked that the robot never talked back—unlike her grandchildren. For others, Robovie was a welcome substitute for grandchildren they rarely saw. “Even when I felt sad, I could feel brighter by talking with Robovie,” one senior told the researchers. When the trial ended, the seniors held a farewell party for Robovie, giving it a card with handwritten notes of thanks and best wishes for the future. A month later, they visited the robot in the ATR labs.

    Ishiguro's team has noted a similar phenomenon in dementia patients. Many people with brain impairments find it difficult to talk with other people, Ishiguro says, perhaps out of fear of what the healthy conversation partner may be thinking. “But they love to talk to robots,” he says, after observing dementia patients interact with a humanoid robot that his team developed called Telenoid, which has a minimalistic human form and is held on a lap like a small child.

    That's also the premise of PARO, a robotic baby harp seal intended to comfort dementia patients. Developed by Takanori Shibata of Japan's National Institute of Advanced Industrial Science and Technology in Tsukuba, PARO chirps when stroked and looks in the direction of a voice. Many reports from around the world indicate that interacting with it improves dementia patients' mood and social interactions and reduces agitated behavior.

    Such emotional ties with robots disturb some researchers. PARO and similar robots “push our Darwinian buttons, by making eye contact, for example, which causes people to respond as if they were in a relationship,” wrote Sherry Turkle, a sociologist at MIT, in a 2007 paper. Turkle, who studied PARO's use in a Boston-area nursing home, wondered whether it is ethical to encourage relationships based on such a “fundamentally deceitful interchange.”

    Still, widespread use of social robots as assistants is inevitable in many countries, including Japan, where more and more elderly will need care while the number of young people dwindles, says ATR roboticist Takayuki Kanda. Endowing these robot caregivers with social skills will improve acceptance and effectiveness, he says.

    THE RISE OF SOCIAL ROBOTS raises other ethical issues, including whether the bots should be accorded rights and responsibilities deemed irrelevant for less interactive machines. In one experiment probing the ethical frontier, Kahn's team had a remotely operated Robovie interact one-on-one with children in three age groups: 9-, 12-, and 15-year-olds. To break the ice, the kids and the robot exchanged greetings and chatted as they examined the contents of a room. Then Robovie thought of an object in the room and gave clues for the child to guess what it was. When the roles were reversed and it was Robovie's turn to guess, a moderator suddenly interrupted and, over the robot's objections, ordered it into a closet.

    As the team reported in Developmental Psychology in 2012, 89% of the children enjoyed spending time with Robovie, and a majority believed the robot was intelligent and had feelings. Most also said that the moderator acted unfairly in stopping the game when it was Robovie's turn, and 54% thought it improper to put the robot into a closet against its wishes.

    In a second experiment, Kahn's group invited undergraduates to participate in a scavenger hunt, in which they searched a room for specified items while Robovie kept count. Identifying seven netted a participant $20. But Robovie, as programmed, invariably miscounted and denied the prize to those who found seven objects. As the team reported at a robotics conference in Boston in 2012, 65% of participants held the robot “morally accountable” for its miscount. Whereas participants also said they would hold a fellow human more at fault for the same infraction, very few said they would judge a vending machine morally responsible for short-changing someone.

    According to Kahn, these experiments show that humans are already thinking of autonomous robots as being alive technologically—though not biologically—and according them moral rights and responsibilities they would never extend to nonhumanoid machines. Kahn thinks more work needs to be done to understand the implications of living and working with social robots.

    Darling, of MIT's Media Lab, agrees. Fears about an impact on human relationships are “an argument that is made with every new technology that comes along,” she says. As social robots become common, she says interactions with these new, artificial companions should become second nature—like using the telephone.

    • * in Nara and Osaka, Japan

  7. Humans need not apply

    1. Hassan DuRant,
    2. Jia You

    Machines are poised to take over as rescuers and reporters.

    It's hard enough these days landing a job as a flesh-and-blood human. Soon we may be competing with robots. In a 2013 report, the University of Oxford's Oxford Martin School estimated that 47% of U.S. jobs could be taken over by machines in the next decade or two. Here are a few careers where robots may have the inside track.


    PHOTO: © Stephen Barnes/Demotix/Corbis

    Tough, strong, and fearless, robots could make ideal first responders to a natural disaster. In December, 16 teams worldwide competed in a Defense Advanced Research Projects Agency Robotics Challenge trial that required the machines to complete eight tasks including driving into a disaster site, moving debris, breaking down walls, and turning valves. The 11 qualified teams will face off in a final round in June 2015.


    PHOTO: Peter DaSilva/The New York Times/Redux

    Don't be surprised if a robot greets you at a hotel lobby. The Aloft hotel in Cupertino, California, is testing Botlr, a robotic bellhop that can shuttle razors, snacks, and morning papers from the lobby desk to guest rooms in 2 to 3 minutes. Cameras and other sensors enable the bot to recognize opened room doors, and users can enter reviews of the service on the robot's flat panel display. The bot even does a small dance before trundling off.


    PHOTO: Getty Images

    Untroubled by writer's block or caffeine cravings, robots are already writing data-based stories in some newsrooms. An algorithm developed by Chicago, Illinois–based Narrative Science now churns out corporate earnings reports for Forbes, and the Los Angeles Times employs similar technology to cover earthquakes. A study earlier this year found that readers could not reliably tell the difference between robot-generated and human-written sports articles.


    PHOTO: Susan Merrell/UCSF

    Researchers estimate that medication errors by health practitioners or pharmacists kill 7000 people in the United States alone each year. In 2011, the University of California, San Francisco, Medical Center introduced a robotic pharmacy that took over all manual tasks associated with filling prescriptions. Computers receive medication orders electronically, then command machines to assemble and dispense barcoded prescriptions.


    PHOTO: Google

    Putting a robot instead of flesh and blood behind the wheel could save hundreds of thousands of lives a year by eliminating human error, which accounts for 95% of automobile accidents. Google is building at least 100 fully autonomous, two-seat test cars with a maximum speed of 40 kilometers per hour, which the company expects to be road-ready by early next year—and several states have already opened their roads to tests of autonomous cars.


    PHOTO: Aberystwyth University

    Robots already carry out repetitive lab tasks like pipetting uncomplainingly, but they can make discoveries, too. At the University of Manchester, a lab bot named Adam used algorithms to formulate hypotheses and design experiments to identify genes involved in yeast metabolism, without any human input. At Cornell University, another team devised a robot that observed the swinging of interconnected pendulums and deduced their “laws of motion.”

    Correction (29 October 2014): The image credits have been added.

  8. The accidental roboticist

    1. Adrian Cho*

    John Long wondered how life developed the capacity to evolve—so he unloosed a fleet of robot tadpoles.

    Biologist John Long began tinkering with simple swimming robots to try to explain the evolution of the backbone.


    The Tadro is a primitive creation. The robotic tadpole's palm-sized cylindrical body contains a few wires and resistors and a reprogrammable chip. An electric motor wags the tail, a simple plastic cone tapering to a square fin. On either side of the robot's head perch two light sensors, with a third in the center. That's it. Yet when placed in its environment—for now, a kiddie pool in a darkened locker room here at Vassar College—the Tadro does something that most complex machines cannot: Unlike your car or computer, it guides itself and behaves like a living creature.

    A flood lamp hangs a meter or so over the pool, supplying light that serves as Tadro's “food.” Tipping slightly forward, the robot churns toward the lamp, its motor squeaking weekee weekee weekee, and then circles under it, feeding on the glow. As the Tadro wriggles toward the light again and again, it's hard not to think it's alive.

    And like living things, Tadros evolve. With help from their makers, they change from generation to generation in response to a form of survival-of-the-fittest selection. They are the brainchildren of John Long, a Vassar biologist with a cheerful smile and a scholar's little round glasses who peppers his conversation with references to books and movies. (“Pay no attention to the man behind the curtain!”) He has used Tadros to study the evolution of backbones, testing the idea that by making ancient fish stiffer, backbones made them faster and hence better at collecting food or evading predators.

    Now, he and his team are gearing up for an even more ambitious effort. They plan to use Tadros to probe the font of all life's diversity: the ability to evolve, or evolvability. One key to that ability, he and his collaborators think, may be modular design, especially in the brain. In animals, distinct neural circuits control different functions, such as vocalization and vision. “The grand hypothesis is that modularity will enhance evolvability,” Long says. “It's this capacity for future change that we're trying to get our hands around.” If he is right, modularity itself should evolve within the Tadro's control circuits, under the right conditions.

    That may be a lot to ask of toy tadpoles, but others say that experiments with robots can lay bare the nuts and bolts of evolution in ways that observations with living things cannot. “You're able to set up and test hypotheses that couldn't be tested otherwise,” says Robert Pennock, a philosopher of science and evolutionary biologist at Michigan State University in East Lansing. A. E. “Gusz” Eiben, an evolutionary computer scientist at VU University Amsterdam, agrees. Long's work, he says, “deserves to be more widely known, especially among biologists.”

    IN THE YOUNG FIELD of evolutionary robotics, Long's research swims against the prevailing current. Most researchers use evolution as a tool to develop better robots. They construct otherwise identical robots with a variable trait—say, the length of a limb or some aspect of the robot's circuitry—which is specified in an abstract numerical “gene.” They set the robots loose in some environment to determine, according to some previously decided criterion related to their behavior, which ones are fitter and get to pass their winning traits to more offspring.

    There's no hot sex for the robots, however. Instead, mating occurs entirely within a computer using a “genetic algorithm” to determine the traits of the next generation of robots. To mimic biological reproduction, each numerical gene is divided by two, the algorithm randomly “mutates” the results, and they're stored in virtual eggs and sperm. The program then pairs up the virtual gametes—with fitter robots providing a larger share—to concoct “genomes” for the next generation.
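    One generation of that gamete-style scheme might be sketched as follows. The details here—pool size, mutation spread, a single gene per robot—are invented for illustration, not taken from any lab's actual code:

```python
import random

def next_generation(genomes, fitnesses, mutation_sd=0.05):
    """One round of a gamete-style genetic algorithm.
    Each genome is a single numerical 'gene' (say, tail stiffness)."""
    # Fitter robots contribute proportionally more gametes to the pool.
    total = sum(fitnesses)
    pool = []
    for gene, fit in zip(genomes, fitnesses):
        n_gametes = max(1, round(10 * fit / total))
        for _ in range(n_gametes):
            # A gamete carries half the gene's value, randomly mutated.
            pool.append(gene / 2 + random.gauss(0, mutation_sd))
    random.shuffle(pool)
    # Pair up gametes to concoct the next generation's genomes.
    return [pool[2*i] + pool[2*i + 1] for i in range(len(genomes))]

random.seed(1)
parents = [1.0, 2.0, 3.0]   # e.g., three robots' tail-stiffness genes
fitness = [0.1, 0.3, 0.6]   # robot 3 performed best
children = next_generation(parents, fitness)
print(children)  # offspring genes drift toward the fitter parents' values
```

    Over repeated generations, genes from the fittest robots dominate the gamete pool, while mutation keeps injecting the variation selection needs.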

    Often evolutionary roboticists don't build physical robots at all. Instead, computers simulate the robots and their behavior to determine which are fitter. Researchers quickly plow through thousands or millions of generations to optimize the design for a physical robot.

    Long, however, likes to keep his evolving robots real, immersed in a physical environment. He grew up in Rochester, Michigan, but, as a descendant of New England whalers, felt the call of the sea at an early age. Hoping to become a marine biologist, he attended the tiny College of the Atlantic—enrollment 362—in Bar Harbor, Maine, where he worked for Sentiel “Butch” Rommel, a bioengineer who would take students out to slice up beached whales.


    In graduate school at Duke University in Durham, North Carolina, Long studied fish, in particular the blue marlin, which can swim 80 kilometers per hour. He built a rig that would hold marlin backbones, begged from fish shops in Hawaii, and bend them back and forth. He expected that at higher bending frequencies, the backbones would become springier and absorb less energy. Instead he saw the opposite, suggesting the backbone works a bit like a shock absorber at high speeds.

    Long was still pondering backbones when he arrived at Vassar in 1991. He wondered how the chainlike spine of vertebrates evolved from the sinewy notochord of invertebrates—an innovation that has happened at least three times in evolutionary history. Stiffer than a notochord, a backbone may have helped early fish swim faster and gather more food than their floppier peers, he hypothesized. Long had started to build mechanical models of fish, but his collaborator Kenneth Livingston, a cognitive scientist at Vassar, suggested developing them into autonomous robots.

    To try to replicate backbone evolution, Long decided to mimic the tadpolelike larva of invertebrates of the genus Botrylloides, commonly called sea squirts. The larva resembles ancient invertebrate fish, such as the extinct eel-like Haikouichthys. Other researchers had traced the neural circuits that make sea squirt larvae spiral toward light, so the robots could be wired accurately and provide a reasonable model of the larva, which would stand in for the fish.

    Thus the Tadro—short for tadpole robot—was hatched, in 2004. Its simplicity hit the sweet spot for research with undergraduates, who can't afford to spend years constructing a complex robot. Long's team built the first Tadros out of food containers, says Nicholas Livingston, chief engineer in Long's lab and Kenneth Livingston's son. “I remember feeling both clever and silly when I went to the local grocery store and bought a lot of Tupperware and plastic wrap,” he says.

    COMPARED WITH COMPUTER MODELS or biological studies, robots have advantages for reprising evolution. Unlike a computer simulation, a real robot cannot leave out some subtle interaction with its environment or break the laws of physics. And robot experiments can be much faster than biological experiments, at least those with larger animals, says Jodi Schwarz, a bioinformaticist and collaborator at Vassar. “Even with mice, each generation is months,” she says, “so it would take you forever.”

    Even so, studying evolution with robots can be tricky. In his first experiment, Long varied the length and stiffness of the Tadro's tail, molding new tails for each generation out of a tunable polymer. Researchers ran trials with three Tadros programmed to find and circle a light. Because the robots did not actually live or die—the gold standard for measuring fitness—the researchers needed some other yardstick. They settled on a metric based on how fast and straight a Tadro swam, as assessed in a frame-by-frame video analysis.

    But over 10 generations they observed no clear trend toward longer, stiffer tails. Instead, the tails' characteristics varied almost randomly. Why? Long and company had stumbled into a pitfall of evolutionary robotics. They had defined their fitness metric to reward a Tadro for speed and penalize it for wobble. But in reality, faster Tadros wobbled more than slower ones—so, paradoxically, the fitness measure both rewarded and penalized a Tadro for being fast.
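    The pitfall is easy to reproduce with made-up numbers. In this sketch (the proportionality constant and noise level are illustrative, not the team's data), wobble rises almost in lockstep with speed, so the penalty cancels most of the reward and selection sees little more than noise:

```python
import random

random.seed(0)

def flawed_fitness(speed, wobble):
    """Reward speed, penalize wobble -- the trap: the two rise together."""
    return speed - wobble

# Simulate a population in which faster swimmers wobble more.
speeds = [random.uniform(0.1, 1.0) for _ in range(1000)]
wobbles = [0.9 * s + random.gauss(0, 0.05) for s in speeds]

fits = [flawed_fitness(s, w) for s, w in zip(speeds, wobbles)]

# Split the population into fast and slow swimmers and compare fitness.
fast = [f for s, f in zip(speeds, fits) if s > 0.55]
slow = [f for s, f in zip(speeds, fits) if s <= 0.55]
print(sum(fast) / len(fast), sum(slow) / len(slow))
# The speed reward is almost entirely canceled by the wobble penalty,
# so fast Tadros end up barely fitter than slow ones.
```

    With so weak a signal, tail traits drift from generation to generation instead of trending toward speed—much as the Vassar team observed.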

    Even after correcting for that error and reanalyzing their data, they found that in some generations, selection favored bendy, slower Tadros. So Long's hypothesis was wrong: The race for food alone probably did not create a need for speed and account for the evolution of vertebrae, as he explains in his 2012 book, Darwin's Devices: What Evolving Robots Can Teach Us About the History of Life and the Future of Technology.

    That negative result only spurred Long on. By 2007, he had revised his hypothesis. Perhaps vertebrae evolved not only to enable a fish to gather more food, but also to help it dash away from predators, he thought. To test that idea, Long and colleagues deployed a new version of Tadro, this time modeled after the extinct jawless fish Drepanaspis gemuendenensis, a vertebrate whose hard shell suggests that it, too, had to fend off predators. And this time researchers made two kinds of Tadros: predators and prey. The prey would still seek the light. However, they would also have infrared sensors that would trigger them to flee whenever a predator got too close.

    The results of the new experiment were more in keeping with expectations. To keep the experiment manageable, tail length was fixed, but prey Tadros' tails could have different numbers of vertebralike rings. More vertebrae meant more stiffness, which presumably meant more speed. The researchers ran two trials of five and 11 generations each. And the Tadros did indeed evolve to have more vertebrae, from a starting average of 4.5 to an average of 5.5 or more—bolstering Long's hypothesis.

    Both experiments were limited to a handful of generations and couldn't replicate the dramatic effects in the fossil record. Nevertheless, the results were clear, says Eiben, the computer scientist from VU University Amsterdam. “I'm amazed that they found such developments in just these few generations,” he says. He credits Long's group with not just observing a trend, but also digging deeper and “asking themselves why.”

    NOW, LONG AND COLLABORATORS aim to study the evolution of the Tadro's “brain,” hoping to induce the appearance of the distinct neural network circuits that they believe may aid further evolution. This time, the experiment also promises a practical payoff, says Josh Bongard, a computer scientist from the University of Vermont in Burlington who is collaborating with Long.

    Despite its stark simplicity, the Tadro exhibits complex and lifelike behavior.


    Bongard uses simulations to develop control circuitry for robots. But he has been frustrated to find that the process runs out of steam: The circuits become so interconnected that changing one connection requires rewiring the whole thing. Nature avoids that tangle by evolving modular circuits, and he hopes to learn how to do that, too. “The roadblock isn't that we don't have big enough computers,” he says. “The roadblock is intellectual—we're not simulating evolution in the right way.”

    The new Tadro's neural network is rudimentary (see figure, p. 193). Inscribed on a reprogrammable chip, it has two “neurons” that take input from the eyelike light sensors on either side of the head. Two output neurons control, independently, the tail's angle and flapping rate. Inputs connect to outputs through a “hidden layer” of neurons. The map of that circuitry will be the heritable trait, and researchers hope Tadros will evolve distinct circuits for controlling tail speed and angle.
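    The network just described can be sketched as a tiny feedforward net whose weight map is the heritable "genome." This is a sketch under stated assumptions, not the team's implementation: the hidden-layer size, the weight ranges, and the tanh activation are choices made here for illustration.

    ```python
    import math
    import random

    def make_genome(n_hidden=3):
        # The heritable trait: the map of connections from the two eyelike
        # light sensors, through a hidden layer, to the two output neurons.
        return {
            "w_in": [[random.uniform(-1, 1) for _ in range(2)] for _ in range(n_hidden)],
            "w_out": [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(2)],
        }

    def act(genome, left_light, right_light):
        # Feed the two sensor readings through the hidden layer.
        hidden = [math.tanh(w[0] * left_light + w[1] * right_light)
                  for w in genome["w_in"]]
        # Two independent outputs: tail angle and tail flapping rate.
        angle, flap = (math.tanh(sum(w * h for w, h in zip(ws, hidden)))
                       for ws in genome["w_out"])
        return angle, flap

    random.seed(0)
    g = make_genome()
    angle, flap = act(g, left_light=0.9, right_light=0.2)
    ```

    Evolution would then operate on the weight dictionary itself; distinct circuits for angle and flapping rate would show up as the two output neurons coming to rely on different subsets of the hidden layer.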

    Learning from previous work, Long's team will simplify the experiment. Each Tadro will ply the water alone, and fitness will be determined by how much light it collects as measured by the light sensor in the middle of the head. The researchers may eventually allow some aspect of the Tadro's body, such as its tail, to evolve, too, as modularity may arise from the interplay of body function and the environment, Bongard says.

    But first, the team must get the new Tadros running—a challenge for the four undergrads who will work for 2 years on the project. They had planned to make the Tadros entirely with a 3D printer over the summer. But the printed bodies leaked and sank, even after researchers tried to seal them with paint. “We had five different bodies painted with five different spray paints, and they all leaked,” says Jessica Ng, a biology major at Vassar. “So we just gave up.” The team is now using bodies machined from clear acrylic.

    Long's students also face the daunting task of taking the data during the school year. Long, a passionate teacher who arrives at work at 7 a.m. and likes to squeeze in instruction whenever he can—he regularly reads aloud to his wife and two teenaged daughters at dinner—says he strives to help students succeed on their own terms. “My line to them is ‘You're no good to us if you fail,’” he says. Instead of committing a number of hours per week to the project, the students will aim for monthly research milestones. If that's a recipe for periodic all-nighters, so be it, says John Loree, a physics major at Vassar: “Hey, it's college!”

    It's tantalizing to imagine that Long's modest robots will evolve wildly. If they do, they might even provide insight into deep philosophical issues about the genome, says Kenneth Livingston, the cognitive scientist. “The genome is a set of instructions to do something, but it's not pure because it's about a particular world,” he says. So, he says, the connection between the environment and the genome is “the beginning of meaning.”

    But it's also imaginable that things won't go so swimmingly. For example, the Tadros' evolution will depend in part on randomly tweaking connections within their neural networks. But most randomly wired networks may not work at all, leaving the Tadro dead in the water and evolution at a standstill—although simulations suggest that won't be a showstopper, Bongard says.

    “We really have no idea what will happen,” says Schwarz, the Vassar bioinformaticist. But that's part of the attraction of studying evolution with robots, says Pennock, the philosopher of science at Michigan State. “The thing that is underappreciated in this approach is that it's truly experimental,” he says. “You're often surprised.”

    * In Poughkeepsie, New York

  9. Q&A: Robots and the law

    1. Dennis Normile

    As robots take on societal roles that were once the province of humans, they are creating new legal dilemmas.

    Should driverless cars be allowed on the roads? Should robots capable of thought be accorded rights as sentient beings? Ryan Calo, a lawyer at the University of Washington School of Law in Seattle, tackles these and other questions in “Robots and the Lessons of Cyberlaw,” a paper that will appear in the California Law Review next spring. In a report for the Brookings Institution last month, he called for the creation of a Federal Robotics Commission that would oversee the integration of robotics technologies into U.S. society. Science caught up with Calo recently on the murky questions surrounding robo rights and responsibilities. His remarks have been edited for brevity.

    Q: The law tends to see a clear dichotomy between persons and objects. Do robots fall somewhere in between, and are new laws therefore needed?

    A: Robots tend to undermine this clean distinction between a thing and a person. For example, you get compensated differently when someone else's negligence results in injury to property than to a person. When property is involved, you get market value. With people, you have been deprived of that person's companionship. To the extent that people become heavily enmeshed socially with robots, the law will have to decide into what category to sort them.


    Q: Should robots be given legal status as “beings”?

    A: I don't think we'll have stand-alone rights for robots. Rather, the rights of owners may extend to robots in certain ways. Already lawmakers have struggled with cases where a software agent makes a deal on your behalf. Personally, I think we should hold people to contracts formed by their software unless something about the transaction makes it look objectively implausible.

    Q: Robots already generate speech. Does the First Amendment protect this speech? And what if a robot makes defamatory statements?

    A: If an artist creates an art bot that does surprising things, maybe we'll think about free speech as attaching to the creation of that art bot. There are news bots that wait for an event to happen and then report it without human intervention. What if it gets something wrong? Legal precedent requires not just that you intend defamation but that you have actual malice. You're not going to have defamation attach to news bots.

    Q: Will we need laws to protect robots from abuse the way we protect animals from abuse?

    A: I don't see laws actually protecting robots. But the link is so strong between animal abuse and child abuse that a number of jurisdictions have policies saying that if you respond to an animal abuse case and there are children in the same household, you immediately call child welfare services. You could imagine tweaking those policies to apply if you were to have reports that someone was kicking their robot dog.

    Q: If a robot inadvertently injures someone while following a command from its owner enabled by third-party software, who is liable?

    A: In the early days of computers, people did sue when the computer froze and they lost something of value. The courts were very quick to say it's just data, so we are going to limit liability to the cost of the computer or software. Once you have platforms that are physical, if they harm someone, victims will sue not just the user, and not just the app developer, but they'll sue the platform manufacturer. I think the solution will probably end up being statutory limitations on liability.

    Q: Do companion robots raise new privacy issues?

    A: Anything that collects, processes, and shares information is going to have an impact on privacy. No one really cares how you use your dishwasher. But how you ask your robot to interact with you has a great deal of sensitivity around it. Much privacy law is worded broadly enough to encompass these new problems.

    Q: Personal robots may send information on interactions with users back to developers to improve functionality. Could this be abused?

    A: What I worry about is the prospect of manipulating people in the interest of the company, for example getting someone really invested with a virtual girlfriend and then getting the user to buy things for her. We should think creatively about how to interrupt corporate incentives to exploit consumers.

    Q: Could robots be programmed to alert police if they witness their owner commit a crime?

    A: Absolutely! One thing to bear in mind is that anything a robot records could be subpoenaed. And then, what if a robot gets good enough that it can tell if you're doing something unlawful? It might have chemical sensors to help you cook. Could the government direct those sensors also to tell if someone is cooking meth? That's a question the Constitution doesn't obviously answer.