News this Week

Science  16 Nov 2007:
Vol. 318, Issue 5853, pp. 159
  1. AIDS RESEARCH

    Did Merck's Failed HIV Vaccine Cause Harm?

    1. Jon Cohen

    SEATTLE, WASHINGTON—A common cold virus has walloped the already ailing AIDS vaccine field.

    AIDS researchers, who are still staggering from the unexpected failure in September of the most promising vaccine candidate in clinical trials, met here last week to explore an even more alarming finding: The vaccine, made by Merck and Co., may actually have increased the risk of HIV infection in some study participants.

    Working with the academic-based HIV Vaccine Trials Network (HVTN) and the U.S. National Institutes of Health (NIH) in Bethesda, Maryland, Merck researchers stopped the multicountry study after an interim analysis revealed that the vaccine did not work (Science, 5 October, p. 28). Now further analysis suggests that the vaccine may have helped HIV infect a subset of participants who at the trial's start had high levels of antibody to adenovirus 5 (Ad5), which causes the common cold and is also a component of the vaccine. “This is the worst possible outcome in a vaccine trial,” said AIDS researcher Eric Hunter of Emory University in Atlanta, Georgia, one of the study sites.

    The finding is as befuddling as it is frightening, and its implications are far-reaching. The data presented here to some 500 attendees at an HVTN meeting on 7 November showed only a “trend” toward what's called “enhancement,” leaving investigators wondering whether the elevated number of infections in vaccinees who had high Ad5 immunity was due to chance, behavior, or a vaccine-induced problem. Despite intensive investigations, no biological mechanism has emerged to explain how preexisting immunity to Ad5 could make vaccinated people more susceptible to HIV. “The data are very complex, and trying to understand what they mean has required an enormous amount of work,” said Merck's Michael Robertson, a co-chair of the study.

    In the first full accounting of the trial results, Merck researchers and their partners reported that, as of 17 October, HIV had infected 83 people in the placebo-controlled trial. Of these, 49 were vaccinated and 34 received saltwater injections. The numbers clearly indicate that the vaccine does not protect against HIV, but the excess of infections among vaccinees is not statistically significant and may simply be due to chance.

    Double trouble.

    The vaccine clearly failed (left), but in men with high Ad5 antibodies (right), it may have increased their risk of infection. (Women were excluded from this analysis because only one became infected during the study.)

    SOURCE: MERCK; (INSET) THE STEP STUDY, HVTN

    The discovery of possible enhancement in the so-called Step Study also owes something to chance. The vaccine contains three HIV genes stitched into a modified Ad5 vector that infects cells, creating HIV proteins that teach the immune system how to attack the real AIDS virus. From the outset, investigators worried that high levels of preexisting Ad5 antibodies might attack the vector and cripple the vaccine. So when Step began in December 2004, they enrolled 1500 people at high risk of becoming infected with HIV who had low Ad5 antibody levels. When data then suggested that this concern had been overblown, they doubled the trial size in July 2005 to include people with high Ad5 immunity. Most participants were men who have sex with men, although 38% were women, many of whom were sex workers.

    The interim analysis in September that revealed the vaccine wasn't working looked only at the low-Ad5-antibody group. When the researchers subsequently examined the high-Ad5-antibody group, they were startled to find 21 infections in vaccinees versus nine in the placebo group.

    The statistical analysis is ambiguous. Typically, researchers deem a difference significant if the probability of seeing one at least that large by chance alone is below 5%—a P value of less than 0.05. By these standards, the finding, with a P value of 0.029, was significant. But Steven Self, HVTN's head statistician at the University of Washington (UW), Seattle, cautioned that this comparison merits a more stringent cutoff for significance, between 0.025 and 0.0025, because the study was not designed to assess potential harm, nor did investigators plan to evaluate a subset of the study population. Still, Self said this “trend” deserves close examination.
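
    For readers who want to see how such a subgroup comparison is checked, here is a minimal sketch, not the Step trial's actual analysis: a Fisher exact test on the 21-versus-9 split in the high-Ad5-antibody group. The infection counts come from the article; the arm sizes are illustrative assumptions, since the article does not report them, so the computed P value will not exactly match the reported 0.029.

    ```python
    # Minimal sketch of a subgroup significance check -- NOT the Step trial's
    # actual statistical analysis. Infection counts are from the article; the
    # arm sizes n_vaccine and n_placebo are hypothetical, for illustration only.
    from scipy.stats import fisher_exact

    infected_vaccine, infected_placebo = 21, 9   # reported infections, high-Ad5 subgroup
    n_vaccine, n_placebo = 500, 500              # assumed arm sizes (not reported)

    table = [
        [infected_vaccine, n_vaccine - infected_vaccine],
        [infected_placebo, n_placebo - infected_placebo],
    ]
    odds_ratio, p_value = fisher_exact(table)    # two-sided test by default
    print(f"odds ratio ~ {odds_ratio:.2f}, P ~ {p_value:.3f}")
    ```

    Whether a value such as 0.029 counts as significant then depends entirely on the threshold chosen in advance, which is why Self argued for a stricter cutoff than the conventional 0.05 for this unplanned subgroup comparison.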

    Several researchers described their recent efforts to make sense of the trial's results. UW's Juliana McElrath, an immunologist who directs HVTN's lab program, explored what many consider the most likely explanation: that people in the high-Ad5-antibody group were more vulnerable to HIV because of “immune activation.” Specifically, HIV establishes an infection by attaching to T cells that have surface receptors known as CD4 and CCR5. Natural infection with Ad5 creates memory banks of these very T cells, which expand and direct an attack if Ad5 shows up again. Theoretically, the vaccine vector could have activated these memory cells in the same way, creating more targets for HIV. But McElrath's preliminary work found no evidence for this scenario.

    Behavioral changes don't seem to provide an explanation: Study co-chair Susan Buchbinder of the San Francisco Department of Public Health said risk behaviors had decreased across the board and more so in the high-Ad5-antibody group. Buchbinder said investigators still are sorting out many variables related to HIV transmission, including circumcision, coinfection with other sexually transmitted diseases, and genetic factors.

    One thing is clear: The monkey studies that suggested that the vaccine could thwart the AIDS virus, fueling much excitement, misled Merck researchers. “Mice lie, monkeys sometimes lie, and humans never lie,” said Peggy Johnston, head of NIH's AIDS vaccine program. “Some monkeys have lied to us this time.” Other attendees stressed that Merck relied on a wimpy strain of the AIDS virus to “challenge” vaccinated monkeys and that challenges with stronger strains predicted that the vaccine would fail.

    Although the mechanism remains elusive, researchers struggled over whether to tell trial participants which they had received, the vaccine or the placebo. A more recently launched study of the same vaccine in South Africa was stopped and quickly “unblinded” after the Step results emerged, with everyone notified of their vaccine status (Science, 2 November, p. 729). After much debate here, Step's scientific steering committee recommended unblinding, and an oversight committee concurred on 13 November.

    The specter of enhancement also affects the AIDS vaccine field's next-best hope. This NIH-made vaccine uses a similar Ad5 vector and was slated to enter a $130 million trial this fall without screening people for Ad5 immunity. “Step's results demand that we reexamine and redesign our study,” said principal investigator and Step collaborator Scott Hammer of Columbia University.

    Merck's Mark Feinberg warned colleagues that “the whole field will come apart at the seams” if it doesn't properly investigate and respond to the Step results. “I've never seen more complicated data emerge from a study,” Feinberg said. “And this one focuses on as important a question as I've ever known.”

  2. EPIDEMIOLOGY

    Privacy Policies Take a Toll on Research, Survey Finds

    1. Jocelyn Kaiser

    A federal rule aimed at protecting patient data is hindering epidemiology research, adding costs and delays without enhancing confidentiality, according to a study this week in the Journal of the American Medical Association (JAMA). The survey responses, from more than 1500 epidemiologists, provide the first systematic analysis of a privacy rule that researchers have been complaining about for 4 years.

    The problems stem from the Health Insurance Portability and Accountability Act (HIPAA), passed by Congress 11 years ago to make it easier for people to transfer their health insurance. A so-called Privacy Rule, which took effect in April 2003 and requires health care providers to protect the privacy of medical records, also affects research. Investigators must get permission to use a patient's medical data, even to identify potential participants. If that is not possible, the researchers can try to get by with a data set stripped of identifiers, such as name and address, or they can seek a waiver from an institutional ethics board.

    These requirements have had a major impact on population-based health research, according to the survey, headed by epidemiologist Roberta Ness of the University of Pittsburgh in Pennsylvania. Survey invitations were e-mailed to more than 10,000 members of 13 epidemiology societies, and 1537 of them completed a Web survey. About 68% said the Privacy Rule has made research a great deal more difficult; half reported major delays; and nearly 40% faced much higher costs (see table). Only one-quarter said the rule has greatly improved confidentiality. Of those who modified a protocol to comply with HIPAA, two-thirds said it was much harder to recruit subjects.

    Overprotected?

    A rule meant to ensure the privacy of medical data is hampering research, according to a survey of epidemiologists.

    The results support anecdotal evidence that the Privacy Rule has slowed enrollment and threatened some studies, says Ness (Science, 9 July 2004, p. 168; 17 March 2006, p. 1547). For example, at the University of Michigan, researchers were required to obtain consent by mail rather than by phone for a survey of patients receiving heart disease care; the response rate dropped from 96% to 34%, and the sample skewed toward older, healthier, married participants. Ness's survey also suggests that U.S. surveillance of infectious diseases may be suffering because hospitals aren't sure what they can report.

    Three years ago, an advisory panel urged the Department of Health and Human Services (HHS), which administers the Privacy Rule, to ease the burden on researchers by revamping the rule. The agency never formally responded. But HHS and other organizations commissioned the U.S. National Academies' Institute of Medicine (IOM) to examine the issue broadly; one of the results is the JAMA survey. Researchers in other disciplines have told the panel of difficulties, too. For instance, clinical oncologist Richard Schilsky of the University of Chicago Medical Center says HIPAA has been “a huge problem” for studies involving tissue samples, among others. Ness says she and her colleagues “really are hoping” that the IOM panel will devise recommendations that produce action. Its report is due by early 2009.

  3. ENVIRONMENT

    Panel Calls for Pilot Program for National Indicators

    1. Erik Stokstad

    U.S. agencies that track the health of the environment should kick-start a pilot project to establish a national system of environmental indicators, a blue-ribbon review panel has recommended. In the same way that the gross national product tracks the state of the economy, environmental indicators, such as the area of wetlands, would monitor progress toward meeting environmental goals. The panel urged the agencies to start now and develop a plan for the remainder of the Bush Administration; it recommended water quantity stored in lakes, aquifers, and snowpacks as a test-bed indicator. Observers last week voiced support for a pilot project but stressed that users and contributors of data must be included in the design process if it is to be politically viable.

    Many agencies monitor aspects of the environment. In 2004, the Government Accountability Office recommended that the White House Council on Environmental Quality (CEQ) figure out how to coordinate federal efforts. After a yearlong series of meetings with states and nongovernmental organizations, CEQ in 2006 asked an interagency team to figure out how to proceed. The team's white paper, finalized in September, laid out several options and recommended the creation of an interagency council to set policy, another interagency team to manage the technical work, and an advisory panel. The Department of the Interior (DOI), one of five agencies involved, had asked the National Academy of Public Administration to evaluate the team's ideas.

    Chaired by Hermann Habermann, former chief statistician of the U.S. Census Bureau, the panel last month sent DOI an advance summary of its final report* due out in December. The panel agreed with the overall approach but stressed the need for immediate action. “What's needed at this juncture is not a new organizational chart but concerted leadership,” said project manager Don Ryan at a workshop held last week by the nonprofit National Council for Science and the Environment.

    A national indicator of water quantity would be a good place to start, Ryan explained, because this measure is relatively straightforward—compared to water quality, for example—and people care about supplies of water for drinking, irrigation, and wildlife. Ultimately, dozens of indicators might cover everything from air quality to biodiversity to outdoor recreation. The panel felt that the agencies should request funds for the pilot project in the 2009 budget; although it didn't come up with a figure, Ryan says “it's not a huge sum to get rolling.”

    Shrinking reservoir.

    Water scarcity, such as Atlanta is experiencing, could be a prototype national indicator of the state of the environment.

    CREDIT: JOHN BAZEMORE/AP PHOTO

    Reaction was mixed. Although the panel recommended that indicators be useful to high-level policymakers, such a “crosscutting indicator of water availability for the whole country doesn't make much sense,” said CEQ's Ted Heintz at the workshop. “Water use is local.” However, he and others emphasized that the main point of a pilot program would be to help federal agencies learn to work together better, and for that, water quantity might suffice.

    Robin O'Malley, who heads the Heinz Center's Environmental Reporting Program in Washington, D.C., and others say that building broad support for a national system of indicators will be especially crucial now, because the political leadership at the agencies will change with the new Administration in January 2009. “The action plan has got to be about making sure it's rooted in the community when there is no one at the transition to save this,” O'Malley said. “You take a big risk by keeping people out,” he added.

    Officials at DOI and CEQ are expected to decide in the next few weeks about whether and how to proceed.

    • *A Green Compass: Institutional Options for Developing a National System of Environmental Indicators.

  4. SCIENTIFIC WORK FORCE

    New Analysis Questions Push for More Degrees

    1. Yudhijit Bhattacharjee

    Academics, business leaders, and politicians have warned repeatedly that the United States risks losing its economic edge unless it produces more scientists and engineers. They also say that the country's system of science and math education is not up to snuff. But a new study* questions two basic tenets of that argument, concluding that work force data do not support claims of a looming labor shortage and that test scores indicate U.S. students are doing at least as well in science and math as their international counterparts are.

    The supposedly sorry state of STEM (science, technology, engineering, and mathematics) education was a driving force behind enactment this summer of the America COMPETES Act, which authorizes $44 billion for a cornucopia of research and education programs across several federal agencies (Science, 10 August, p. 736). The bill drew heavily on a 2005 U.S. National Academies' report, the title of which, Rising Above the Gathering Storm, refers to the impending economic crisis facing the United States unless it bolsters STEM education (Science, 21 October 2005, p. 423).

    But sociologist Harold Salzman of the Urban Institute and demographer B. Lindsay Lowell of Georgetown University, both in Washington, D.C., say that the academies' report paints a misleading picture and that its assumptions are leading to flawed STEM education policies. They note that from 1985 to 2000, the annual U.S. production of bachelor's, master's, and doctoral degrees in STEM fields averaged three times the annual growth in science and engineering jobs. They also point out that fewer than one-third of the 15.7 million workers with at least one STEM degree at any level hold jobs that require such training. Given those numbers, says Salzman, “expanding our production of scientists and engineers just defies market reality.” Last week, Salzman made his case twice on the same day, at a talk at the Urban Institute titled “Houston, Do We Really Have a Problem Here?” and in a hearing before the House Committee on Science and Technology on how globalization affects the U.S. science and engineering work force.

    The authors also say that U.S. students are learning more than critics give them credit for. For example, they note, math scores on the National Assessment of Educational Progress (NAEP) for students in eighth grade rose 15 points from 1973 to 2004. And contrary to popular belief that they trail the pack, says Salzman, U.S. students rank in the middle tier of countries on an international assessment of 15-year-olds in math and science.

    Norman Augustine, former CEO of Lockheed Martin and chair of the panel that produced the Gathering Storm report, does not buy their arguments. In an e-mail to other members of the panel, Augustine notes that “what the [new analysis] does not observe is that an undergraduate degree in [science or] engineering is a prized credential for those who wish to attend business school, law school, medical school or [go into] a number of other fields[.] … If the Gathering Storm report is incorrect, we will end up having devoted additional dollars to improving our children's education and to the discovery of new knowledge. On the other hand, if Drs. Lowell and Salzman are wrong, America may well face a serious growth in unemployment and a commensurate decline in its standard of living.”

    Against the grain.

    Harold Salzman (center) told Congress last week that the United States produces enough technical workers for the economy.

    CREDIT: COMMITTEE ON SCIENCE AND TECHNOLOGY, U.S. HOUSE OF REPRESENTATIVES

    Those who argue for strengthening U.S. science education say that NAEP is not the right yardstick for measuring what today's students need to know. “In a global economy with a global labor pool, it is insufficient to compare American students' past performance to American students' current performance,” says Bill Bates of the Council on Competitiveness, one of several groups that lobbied heavily for the COMPETES Act. Salzman and Lowell say that they are not arguing for the status quo but rather that any new policies should address the real problems in STEM education. For elementary and secondary schools, they call for more resources for the lowest performing students, many of whom are minorities. And within higher education, they say that scholarships should be based on market demand for workers trained in individual disciplines rather than across-the-board support. Salzman also recommends that universities put greater emphasis on teaching communications and teamwork skills. “The iPod's success has had more to do with its creative design than its technical guts,” he says.

    Augustine says Salzman and Lowell have raised some important issues but that he is worried their criticism could undermine efforts to boost the research and training budgets of federal research agencies slated for growth in the COMPETES Act. However, David Goldston, the top staffer on the House Science Committee before he retired from the government last year, doesn't think their paper will weaken the case for greater investments in science and engineering. “It's worthwhile to debate what the nature of the investments should be, what part of the social scale they should be targeted toward, and what competitiveness really comes from,” he says. If the new study sparks those discussions, Goldston adds, “that's all to the good.”

  5. U.S. STATE ELECTIONS

    New Jersey Rejects Bonds for Stem Cell Institute

    1. Constance Holden
    Bond slayer.

    Steven Lonegan (in necktie) pumped up voters' fiscal worries to help defeat the stem cell initiative.

    CREDIT: AFP

    A proposal to ratchet up stem cell research in New Jersey was defeated last week by an array of hard-working opponents and by overconfidence among its supporters. Despite recent polls showing that the $450 million bond issue would be approved, the state's voters rejected it by a margin of 53% to 47%. “[We] were a little bit too confident” and didn't shift into high gear until too late, says Martin Grumet, director of the W. M. Keck Center for Collaborative Neuroscience at Rutgers University in New Brunswick.

    The bond measure, which was on a ballot featuring contests for various local and state offices, would have supplied researchers at both public and private entities in New Jersey with an additional $45 million a year for stem cell research over the next 10 years. It was backed by a succession of New Jersey governors; the incumbent, Jon Corzine, even donated $150,000 of his own money to the effort to get it passed. Last year, Corzine signed into law an allocation of $270 million—New Jersey's share of a national tobacco settlement—for new stem cell research facilities. Of this, $150 million is for the Stem Cell Institute of New Jersey in New Brunswick, for which ground was broken last month.

    As recently as October, polls were predicting a comfortable win. But a combination of religious and fiscal conservatives carried the day. Catholic churches showed a video disparaging embryonic stem cell research, and Bishop John M. Smith of Trenton sent out a letter on 7 October urging Catholics—who make up 43% of New Jerseyites—to pray against the referendum. Antiabortion groups nicknamed the measure “Loan to Clone.”

    The measure also drew the ire of a group called Americans for Prosperity that wants to curb government spending. Steven Lonegan, former mayor of Bogota, New Jersey, who heads the group's state chapter, says the issue caught fire in the past few months. “We engaged more people on the ground than any campaign in the state in a long time,” he says. Citing the state's high tax rates and $30 billion debt, the group offered voters a simple message about three fiscal proposals before them: “Vote No on All Ballot Measures.”

    Rutgers neuroscientist Wise Young, co-founder of the stem cell institute, says that the referendum was fatally hurt by the record-low turnout of 26.6% of eligible voters. Participation was lowest in counties where support for stem cell research was highest, he added.

    Corzine says he will continue to press for more money for stem cell research, both from public funds and the private sector. New Jersey, the first state to direct funds to stem cell research, has spent $15.2 million since 2005, according to the New Jersey Commission on Science and Technology, with $10.7 million budgeted for the current fiscal year. Young says he hopes pharmaceutical companies, which have a heavy presence in the state, will pitch in.

    “We will have to take a different strategy, … but we're pushing ahead,” says Grumet. Supporters regard the defeat as an expression of taxpayer frustration rather than rejection of the research itself. Young says advocacy groups are already rallying around the idea of another referendum next November, to coincide with the presidential and congressional races. And they vow to be prepared this time. “It's going to be a huge fight all the way down the line,” says Young.

  6. CLIMATE CHANGE

    Scientists Say Continued Warming Warrants Closer Look at Drastic Fixes

    1. Eli Kintisch
    Chilling conclusion.

    Rapid arctic melting has stimulated interest in geoengineering.

    CREDIT: NASA

    CAMBRIDGE, MASSACHUSETTS—Should scientists study novel ways to alter Earth's climate to counteract global warming? Last week, a group of prominent researchers who gathered here gave a qualified “yes”—after agreeing that the road to understanding the science is fraught with booby traps and that deliberately tinkering with the climate could make the problem worse. Some even admitted to being surprised by their affirmative answer.

    “My objective going [into the meeting] was to stop people from doing something stupid,” says climate modeler David Battisti of the University of Washington, Seattle. But rising temperatures and carbon emissions, combined with little meaningful action by politicians, convinced him and his colleagues that it was time for mainstream climate science to look more closely at geoengineering. Even so, Battisti suspects that the participants share the hope of many of those who took part in the Manhattan Project to build the atom bomb: that society would never have to use the knowledge they provided. “It would be incomprehensible that we deploy this,” Battisti says, emphasizing the greater need to cut carbon emissions.

    Organized by the University of Calgary and Harvard University, the event allowed 50 elite climate, energy, and economics researchers to explore and debate geoengineering. For decades, the subject has been mostly confined to the pages of science fiction and unfunded by research agencies. But a 2006 paper in Climatic Change by Nobelist Paul Crutzen (Science, 20 October 2006, p. 401) served as an “enabler” to drive discussion among scientists of the once-taboo topic, says Harvard environmental chemist Scot Martin. Harvard geochemist Daniel Schrag and physicist David Keith of the University of Calgary in Canada then decided to organize the Cambridge event.

    One reason most scientists have been leery of probing the topic was the fear that if such technical fixes were taken seriously, public support for cutting carbon emissions would be even more difficult to achieve. “The very best would be if emissions of the greenhouse gasses could be reduced so much that the [geoengineering] experiment would not have to take place,” Crutzen wrote last year. “Currently, this looks like a pious wish.”

    A sea change?

    Some scientists have proposed creating white clouds over the oceans to help cool the globe.

    CREDIT: (ILLUSTRATION) JOHN MACNEILL

    Some scientists, however, have been thinking about geoengineering for quite some time. The field's roots lie in dueling Soviet and U.S. weather-modification programs of the 1960s. Since then, advocates have dreamed up schemes to fight warming by blocking sunlight with giant space shades or by creating sea clouds to increase the albedo of the ocean. In 1997, physicist and Star Wars stalwart Lowell Wood and colleagues affiliated with Lawrence Livermore National Laboratory suggested using aerosols to mimic the cooling effect of volcanoes, and a handful of modeling papers since have simulated that effect.

    One of Wood's central points is that the aerosol method is cheap. In 1992, recalls Harvard physicist Robert Frosch, a National Academies' panel on climate resisted his suggestion to include the cost of geoengineering options in a figure on possible solutions to global warming. One relatively simple option: Inject sulfur dioxide into the stratosphere to reduce the amount of solar energy reaching Earth's surface. “Nobody wanted to put the geoengineering line on the figure because it looked too [economically] easy,” Frosch told participants.

    That cost was a major factor behind the discussions here, with a number of preliminary technical studies hinting that the SO2 option could be deployed for a few billion dollars a year. That amount could make geoengineering attractive to politicians looking for radical fixes in a warming world. “The decision on whether to do this will not be made by this group,” Schrag told his colleagues sitting in the wood-paneled premises of the American Academy of Arts and Sciences. But what scientists can do, he said, is offset the input of groups driven by profit or ideology with solid research on the possible side effects of various geoengineering techniques.

    And to get started, the group certainly identified plenty of potential side effects. Atmospheric dynamicists attacked the few modeling studies that have simulated geoengineering efforts for downplaying details such as ocean currents or complex feedbacks. (Modelers defended their studies, which use simplified models, as preliminary.) Ecologists pointed out that artificial cooling could lead to serious drying in the tropics and that any fix that lowers Earth's temperature wouldn't address the problem of the steadily acidifying ocean.

    Modeler Raymond Pierrehumbert of the University of Chicago in Illinois warned that geoengineering could become a global addiction. “I don't actually work on geoengineering,” he told the group. “But now that the genie's out of the bottle, I feel I have to.” In one unpublished experiment, Pierrehumbert simulated a future scenario, presumably in the next century, in which the amount of atmospheric CO2 had quadrupled but Earth was kept cool by a yearly dose of geoengineering. His model showed that a halt in the geoengineering effort—“by, say, a war or revolution”—would result in a 7°C temperature jump in the tropics in 30 years. That rise, he says, would trigger unimaginable ecological effects.

    Sallie Chisholm, an MIT biological oceanographer, urged caution. She told Science that her colleagues are downplaying the difficulty of determining how “inherently unpredictable” biospheric feedbacks will react to “turning the temperature knob. … We cannot predict the biosphere's response to an intentional reduction in global temperature through geoengineering.”

    Other scientists were more willing to entertain the idea of studying climate manipulation but warned about a likely public backlash. Political scientist Thomas Homer-Dixon of the University of Toronto in Canada talked about street protests. “Some people may consider geoengineering to be an act of ultimate hubris,” he says. “It's going to provoke fear, anger, guilt, and despair.”

    Others, however, viewed public alarm about geoengineering as a potentially positive effect. “If they see us talking about this as a last-ditch effort, it might increase their alarm” and drive them to cut emissions, explained Harvard climate dynamicist Peter Huybers during one of the sessions. By the end of the 2-day event, participants were stunned that they had come so far. “In this room, we've reached a remarkable consensus that there should be research on this,” announced climate modeler Chris Bretherton of the University of Washington, Seattle. Nobody dissented.

    Mixed in with his new sense of “responsibility,” Battisti says, is dismay that the climate problem has grown so serious as to drive scientists to contemplate steps that, in theory, might lead to more serious problems than continued warming. After speaking on the phone with his wife from his hotel room, Battisti confessed, “I told her this meeting is terrifying me.”

    (For a discussion of the topic with some of the meeting participants, go to www.sciencemag.org/hottopics/geoengineering.)

  7. BEHAVIOR

    Robot Cockroach Tests Insect Decision-Making Behavior

    1. Elizabeth Pennisi

    Science-fiction writers have long envisioned societies in which the boundaries between humans and lifelike droids blur and man and machine freely intermingle. José Halloy has taken the first steps toward creating that world, at least for insects. His tiny, autonomous robots lack legs, wings, and antennae, but they nonetheless pass muster with cockroaches. Indeed, these wheeled machines are so well accepted by the household pests that the robots become part of the insects' collective decision-making process, Halloy, a theoretical biologist at the Free University of Brussels, Belgium, and his colleagues report on page 1155. The robots persuaded many of their insect “peers” to hide in an unconventional place.

    Halloy's innovative approach puts theories of collective behavior among insects into practice. “We can manipulate these behaviors very easily in a model, but doing so in experiments is often challenging,” explains ethologist Jerome Buhl of the University of Sydney, Australia. Others have used remote-controlled robots to study animal behavior but not autonomous ones that interact with animals on their own. “In many ways, [the work] is a big step in the study of collective behavior in animals,” says animal behaviorist Stephen Pratt of Arizona State University in Tempe.

    Halloy and his Brussels colleague Grégory Sempo picked cockroaches for these robot experiments in part because they had earlier found that cockroaches typically self-organize; within a few hours, for example, they settle together in one place, preferring darker spots when available. For those experiments, and the later ones with the robots, Halloy, Sempo, and their colleagues built a 1-meter-diameter arena with two “shelters,” the roofs of which were made of plastic discs covered by red filters. By adding layers of filters, Halloy and Sempo can make one shelter darker than the other.

    Based on observations of insects in this arena, Halloy and his colleagues developed a mathematical model that predicts which shelter a cockroach should pick depending on the level of darkness of the shelter and the number and activity of its fellow roaches. Halloy's group then used this model to program robots designed by him and Francesco Mondada and other engineers at the École Polytechnique Fédérale de Lausanne, Switzerland.
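
    The published model itself is not reproduced in this article, but the behavior it captures, that an individual's tendency to stay under a shelter rises with the shelter's darkness and with the number of individuals already resting there, can be illustrated with a toy simulation. The staying rule, parameter values, and group size below are assumptions made for illustration; they are not Halloy and colleagues' actual equations.

    ```python
    # Toy sketch of a collective shelter-choice rule of the kind described above.
    # NOT the published model: the staying rule and all parameters are illustrative.
    import random

    def stay_probability(darkness, occupants, k=3.0):
        """Chance an individual stays put this step: higher under darker shelters
        and when more groupmates are already present (saturating social attraction)."""
        social = occupants / (occupants + k)
        return min(0.95, 0.3 + 0.4 * darkness + 0.3 * social)

    def simulate(n_agents=16, darkness=(0.9, 0.5), steps=2000, seed=1):
        """Agents repeatedly choose between shelter 0 (darker) and shelter 1 (lighter)."""
        rng = random.Random(seed)
        where = [rng.randint(0, 1) for _ in range(n_agents)]
        for _ in range(steps):
            i = rng.randrange(n_agents)
            here = where[i]
            others = sum(1 for j in range(n_agents) if j != i and where[j] == here)
            if rng.random() > stay_probability(darkness[here], others):
                where[i] = 1 - here          # leave and try the other shelter
        return where.count(0), where.count(1)

    print("occupancy (dark shelter, light shelter):", simulate())
    ```

    In a rule like this, most agents end up aggregating under the darker shelter; reprogramming some of them to weight darkness differently, as the researchers later did with the robots, is what can tip the group's collective choice.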

    The roaches usually ran away from the robots but not if the machines smelled like the insects. For the experiments, Halloy and Sempo covered the robots with a filter paper containing the pheromone equivalent of one cockroach.

    Halloy initially programmed the robots to have the same darkness preference as the cockroaches, and they joined the cockroaches at whatever shelter the majority chose to rest in. Next, Halloy programmed the robots to prefer the lighter shelter. About 60% of the time, the robots tipped the group's preference in favor of the light shelter. “This is a true example of automated leadership,” says David Sumpter of Uppsala University in Sweden. “Instead of the robots rounding up the cockroaches like sheepdogs, they lead through social attraction.”

    Can't we be friends?

    Cockroaches seem to accept this robot as one of their own once it's coated with pheromone.

    CREDIT: ULB-EPFL

    But Coby Schal, an urban entomologist at North Carolina State University in Raleigh, has reservations about the effectiveness of the pheromone guise in convincing the roaches that the robot is just like them. He wonders if the physical presence of the robots made the lighter shelter more attractive simply by increasing the structural complexity of this hiding place. “In my view, the jury is still out” on whether the robots became part of the decision-making, says Schal.

    Nonetheless, roboticist Daniela Rus of the Massachusetts Institute of Technology in Cambridge calls the idea that robots can influence biological group behavior “very powerful.” She speculates that the work could have many applications, such as robots that aid pest control by luring insects into traps or that help herd livestock.

  8. SCIENTIFIC FACILITIES

    Oceanography's Third Wave

    1. Robert F. Service

    Underwater observatories linked by thousands of kilometers of fiber-optic and power cables aim to revolutionize oceanography. But will the big science projects also hamstring the future of ocean research?

    High definition.

    Cabled observatories will scrutinize ocean hot spots such as these black smokers.

    CREDIT: J. DELANEY AND D. KELLEY, SCHOOL OF OCEANOGRAPHY, UNIVERSITY OF WASHINGTON

    SEATTLE, WASHINGTON—When John Delaney first started using ships and submersibles to explore the underwater volcanoes off the coast of Washington state back in the early 1980s, the experience, he says, was exhilarating. Delaney was part of the team that in 1984 discovered the Endeavour vent, a 70-kilometer-long volcanic ridge where magma from Earth's mantle wells up between a pair of tectonic plates. As exciting as those research trips were, they were equally exasperating, says Delaney, an oceanographer at the University of Washington (UW), Seattle, who looks every inch the bearded sea captain and is fond of reciting T. S. Eliot, Ralph Waldo Emerson, and Robert Frost. “They offered snapshots,” he says. “We would get a ship and sub for a couple of weeks, come home, publish our results, and then write another set of grants. We would go and see something interesting. But it was 2 or 3 years before we could come back and see what was going on. That became very frustrating.”

    Delaney's frustration convinced him that there had to be a better way—a means to set up a sustained research presence in the ocean. Ocean buoys outfitted with sensors have, of course, continuously monitored conditions such as sea-surface temperature for decades. But doing the same 1000 meters or more below the ocean's surface was far harder: deepwater devices suffered from weak power supplies and either could transmit only small amounts of data back to shore or had to be retrieved after months of data collection.

    In 1987, Delaney hatched the idea of using underwater telecommunications cables to wire the sea floor. Such cables already crisscrossed the ocean carrying phone calls and computer data between continents. If researchers could tap into those cables, perhaps they could use them as both a power source and a conduit to link to a new generation of sensors, robots, and autonomous vehicles. The idea lay dormant for a few years but slowly started to gain steam. “It was like a snowball going downhill,” Delaney says. “People said, ‘If you are going to do that, then I can hook into it to look at fish stocks, tsunamis, pollution, and so on,’” he says. In 1998, Delaney successfully pitched the idea for a feasibility study to the National Oceanographic Partnership Program, which fosters collaborations among U.S. federal agencies, universities, and companies to promote ocean issues. At about the same time, researchers at other major oceanographic institutions such as the Scripps Institution of Oceanography (SIO) in San Diego, California, and the Woods Hole Oceanographic Institution (WHOI) in Massachusetts also began pushing the same notion.

    Now, after dozens of meetings, reports, and reviews, ocean scientists are setting up a handful of deep-water cabled observatories and are gearing up for a new wave of ocean research. Beginning in 2010, researchers from UW, SIO, WHOI, and Oregon State University (OSU) in Corvallis plan to string cables to sites scattered around an entire continental plate—the Juan de Fuca Plate off the coasts of Oregon and Washington state (see figure, p. 1057). Cable was recently laid for a Canadian arm of the project as well as for a deep-water cabled-observatory test bed off the coast of Monterey, California. Early next year, the European Union will link instruments to three separate cables currently being used to wire up underwater neutrino observatories, and its members are considering further dedicated cable systems down the road. And Japan and Taiwan have recently installed cabled systems, largely for seismic research.

    “It's really a dawning of a new age of how humans can explore the oceans,” Delaney says. Ship-based oceanography began in the 1870s, when the British ship Challenger conducted the first-ever prolonged oceanographic cruise, he notes. Satellites gave oceanographers a global reach about 100 years later. “What we're looking at now is a third phase,” Delaney says. It's one in which continuous power and data channels offer researchers the ability to develop a new array of instruments that can carry out tasks as wide-ranging as continuously monitoring the steady stream of microtremors at a midocean ridge or sequencing the DNA of underwater organisms on the spot and shipping the data back to researchers in real time. Until now, the ocean has been close to a black box, says Marcia McNutt, president and CEO of the Monterey Bay Aquarium Research Institute (MBARI) in Moss Landing, California. “I will be very surprised if we do not make startling new discoveries, because we will be there 24/7 and can study the ocean on our terms,” McNutt says.

    But although proponents of the system are quick to liken the utility of the new cabled observatories to the revolution satellites provided to oceanography, detractors say a more apt comparison is the international space station: a money sink that provides scientific value to relatively few researchers. The concern, they say, is that the cabled observatories will be so expensive to maintain and operate that they will inevitably siphon money from other areas of ocean sciences. “Ocean scientists are always of one mind when it comes to ships,” says Peter Niiler, a physical oceanographer who retired this summer from SIO. “We are usually of one mind when it comes to satellites. We are not of one mind on this.”

    Growing appetite

    While Delaney has spent decades waiting to complete his vision of a cabled underwater observatory, others have beaten him out of the gate. In 1992, researchers led by physical oceanographer Scott Glenn of Rutgers University in New Brunswick, New Jersey, launched a near-coast cabled observatory called LEO-15, which was built off the coast just north of Atlantic City, New Jersey, in water 15 meters deep and thus could be serviced and maintained by scuba divers. “It really whetted the appetite of the science community to do more,” McNutt says.

    Wired.

    Oceanographers plan to use telecommunications cable to supply power and data connections to instruments in the northeast Pacific.

    CREDIT: THE IMPLEMENTING ORGANIZATION FOR THE REGIONAL CABLED COMPONENT OF THE OCEAN OBSERVATORY INITIATIVE, UNIVERSITY OF WASHINGTON

    Researchers in Hawaii took the next step in 1998, when they were given use of an abandoned undersea telephone cable that runs from the Hawaiian island of Oahu to California. They made it the backbone of the first deep-water cabled monitoring system, known as the Hawaii-2 Observatory. Below 5000 meters of water, researchers inserted a junction box with eight ports for tapping into the cable's power supply and hooked up a variety of seismometers, pressure sensors, and a hydrophone. The network operated for only 5 years but helped persuade oceanographers to push for purpose-built cabled observatories.

    Such observatories, say Delaney and other proponents, have two big advantages. “What these provide is unlimited power and the ability to get data back to shore in real time. That's fabulous,” says McNutt. Ocean sciences instruments—including buoys at the surface and seismometers and other sensors on the sea floor—have traditionally worked with very limited power, supplied either by batteries or by small wave-powered generators and the like. Therefore, they typically haven't been able to record or transmit large amounts of data. That has tended to limit them to taking periodic measurements that were either sent back to shore by a low-bandwidth satellite connection or archived onboard for retrieval months later. “If you think how controlled we have been by the whims of the ocean, this takes away those limitations and will allow us to say for the first time what really happens during hurricanes, earthquakes, and other events,” McNutt says.

    McNutt, Delaney, and other proponents say that cables now being laid off the coasts of California and British Columbia will give researchers the first glimpse of oceanography's future. Off California's coast, the 52-kilometer Monterey Accelerated Research System cable—about the width of a garden hose—will carry fiber-optic data and 10,000 volts of electricity to science “nodes,” essentially underwater transformers that reduce the voltage and sit alongside eight ports. Instruments will connect directly to the ports or be linked to them by underwater extension cords.

    The instruments vary widely. One, called the Eye in the Sea, will use a video camera that amplifies stray photons to image deep-water bioluminescent organisms. Another, a robotic microbiology lab, will analyze DNA and RNA samples to determine which organisms are present and possibly even discover new life forms. Yet another—known as the benthic rover—is a robot the size of a riding lawn mower that will creep across the ocean floor taking measurements in an effort to sort out the longstanding mystery of just how much organic carbon drifts down from above and reaches the deep ocean floor.

    Because these and other such experiments are power-hungry and typically operate continuously, they represent science that can't be done with traditional instrumentation. Another advantage, says marine biologist Ken Smith of MBARI, who heads the team of scientists working on the benthic rover, is that if researchers spot something interesting in the data one day, the next day they can reprogram their instruments on the fly to monitor it. “I'd love to have cabled observatories all over the sea floor,” Smith says.

    Vigorous discussion

    Despite their obvious upside, cabled observatories have long proven a tough sell. The issue, Niiler says, stems from the wide diversity of disciplines in the ocean sciences, including geologists and geophysicists interested in understanding plate tectonics, fisheries biologists gauging fish stocks, and physical oceanographers interested in tracking how carbon dioxide moves between the atmosphere and the oceans. “Oceanography is not one science,” says Niiler. “It's a catchall for scientists who need a common facility that is unique, which is ships.”

    Bottom to top.

    Power will enable new deep-water robots as well as instruments working all the way up to the surface.

    CREDIT: THE IMPLEMENTING ORGANIZATION FOR THE REGIONAL CABLED COMPONENT OF THE OCEAN OBSERVATORY INITIATIVE, UNIVERSITY OF WASHINGTON

    In the United States, however, those communities essentially compete for one pot of money, as the lion's share of funding for basic ocean-sciences research comes from the National Science Foundation (NSF). This year, Congress gave NSF $5 million to begin construction of the $331 million Ocean Observatories Initiative (OOI); NSF hopes for a big funding spike in 2009–11 to finish the job. The largest portion of funds—estimated to be about $170 million—will go to build the deep-water cabled observatory off the Oregon and Washington coasts. When the idea for this expensive observatory first began to gain traction several years ago, the broader oceanography community balked until additional components—a few buoys designed to operate at high latitudes, a network of instruments off the Oregon coast, and a cyberinfrastructure component to handle the expected surge of data—were added. “In order to get community buy-in to the OOI idea, there were some compromises made,” McNutt says.

    However, although Congress agreed to pay for the new infrastructure, it didn't provide any extra money to run or maintain OOI's cables and instruments. That money—expected to total about $50 million a year—will have to come out of NSF's general budget of about $300 million a year for ocean sciences. Niiler and other critics argue that because the cabled observatories are fixed in place on the ocean floor, they will primarily benefit underwater geologists and geophysicists. Yet, because the operations and maintenance funding will come from the ocean-sciences community as a whole, the high price tag could force cuts in other areas. “I worry that small, individual principal investigator-driven science will be harmed in its breadth and depth by paying too much for these observatories,” says a U.S.-based oceanographer who asked not to be named out of concern that it could hurt his chances of acquiring future NSF funding.

    “We agree that operations and maintenance funding is the ongoing struggle of science right now,” says Adam Schultz, a geophysicist at OSU, who is currently serving as a program director for ocean sciences at NSF. However, he and others point out that geologists and geophysicists aren't the only ocean scientists getting new equipment these days. Other researchers have received, from NSF and other sources, a $120 million ship for arctic research, $100 million for a new drilling ship, submersibles, and a global ocean float network called Argo. “Now we are adding new capability that will not appeal to everyone. As an organization, we have to find a balance,” Schultz says. Julie Morris, director of NSF's ocean sciences division, adds that if Congress and President George W. Bush continue their push to double spending on physical sciences research and development, that could obviate much of the funding concerns.

    Creeper.

    This robotic rover will track how much organic carbon reaches the sea floor.

    CREDIT: KEN SMITH/MBARI

    For his part, Delaney is quick to deny that deep-water cabled observatories are of little interest to anyone but geologists. “I flatly disagree,” he says. “There is a tremendous amount of oceanographic science to be done in the waters overlying the Juan de Fuca Plate.” Niiler acknowledges that a cabled observatory offers advantages for studying ocean vents and the chemistry and organisms in the waters around them. But if you're interested in how the exchange of gases between the ocean and atmosphere affects climate change, he says, the cabled observatory has far less relevance. “They just don't go together,” Niiler says. “To pretend they do is just not right.”

    Critics also fear that once major new observatories are built, future research grants will tilt in favor of scientists who propose to work on them. “It will be hard for [funding agencies] to resist the temptation to feed this big facility,” says the anonymous U.S.-based oceanographer. “It's exactly like the [international] space station,” adds Russ Davis, another oceanographer recently retired from Scripps. “We will have to have guys use this thing once we put this together.”

    Delaney and others respond that all the scientific projects associated with the cabled observatories will be peer-reviewed, although Delaney readily acknowledges that the oceanography community is still coming to grips with the facilities. “It's a vigorous discussion, as it should be,” he says. Christoph Waldmann, a member of the European Sea Floor Observatory Network steering committee looking into a new European cabled-observatory system, says the dialogue under way in Europe and elsewhere is much the same. But he and others reiterate that proponents of cabled observatories are not out to do away with anyone else's research but rather seek to open dramatic new possibilities for ocean science. “The science community is saying this is a capability we need to have,” says Robert Detrick Jr., a marine geologist at WHOI who chaired a recent National Research Council report that favored building cabled observatories. “They provide a new capability for the community and approach ocean sciences in a different way.”

  9. PUBLIC HEALTH

    In the HIV Era, an Old TB Vaccine Causes New Problems

    1. Martin Enserink

    The only TB vaccine available can be deadly for HIV-infected children. That puts public health officials in a dilemma

    Double-edged sword.

    Experts stress that to prevent TB—which has infected this Nairobi girl—BCG use should continue in HIV-negative children.

    CREDIT: DAMIEN GUERCHOIS/REUTERS/LANDOV

    It's well-known that HIV and tuberculosis (TB) form dual, deadly epidemics that fuel each other. More than 13 million people, most of them in sub-Saharan Africa, are infected with the pathogens that cause both diseases. Recently, researchers have found that HIV-infected children—who need protection from TB more than anyone else—are also much more susceptible to side effects of a widely used anti-TB vaccine. The live vaccine, developed more than 80 years ago and known as Bacille Calmette-Guérin (BCG), can lead to a generalized infection that may be fatal in as many as 75% of cases.

    Based on these data, experts agree that no baby with HIV should be vaccinated against TB. But that's easier said than done; identifying infected infants isn't feasible in many countries with high HIV rates. What's more, researchers worry that discussing these side effects could give the vaccine a bad name and lead to a drop in overall vaccination rates. “It's really a terrible dilemma,” says pediatrician Elizabeth Talbot of the Dartmouth-Hitchcock Medical Center in Lebanon, New Hampshire.

    As vaccines go, BCG was always a mixed blessing. Researchers at the Pasteur Institute in Paris developed it by weakening a strain of Mycobacterium bovis, a cousin of M. tuberculosis, the agent that causes human TB. (The process was inspired by vaccinia, the smallpox vaccine most likely derived from cowpox.) BCG doesn't appear to do much for adults, but it protects children from the most serious forms of TB during the first 15 years of life, although efficacy varies somewhat in different parts of the world.

    BCG is the only TB vaccine that exists, and almost every country in the world uses it. Recently, many Western countries have started limiting its use to high-risk groups, such as immigrants from endemic countries, because dropping TB rates have made the vaccine less cost-effective. But in regions with high TB rates, it's still an important line of defense. In Africa, most children are vaccinated right after birth. Some studies have even shown that BCG reduces mortality from causes other than TB as well, perhaps because it boosts the immune system.

    Researchers have long known that BCG could cause adverse events in immunocompromised people, ranging from local reactions to disseminated BCG disease, a life-threatening infection. “But until now, we didn't have any solid data” on the magnitude of the problem, says T. Mark Doherty of the Statens Serum Institute in Copenhagen, Denmark. In areas with high HIV rates, babies are prone to so many other diseases—including TB, malaria, and gastrointestinal infections—that a vaccine-specific reaction is hard to notice. And diagnosing BCG disease requires culturing the bacteria from infected children—a time-consuming process—and then using a polymerase chain reaction test to determine that they are M. bovis, not M. tuberculosis.

    So when Anneke Hesseling of the Desmond Tutu TB Centre of Stellenbosch University near Cape Town, South Africa, started looking carefully in a hospital in the Western Cape province, where both HIV and TB are rampant, the outcome came as a shock to many. In a paper published in Vaccine in January, Hesseling concluded that disseminated BCG disease may occur in one in every 240 HIV-infected vaccinees at that hospital; that's more than 500 times the risk for healthy children.

    A new, unpublished study based on better surveillance in more hospitals suggests that the risk for HIV-infected children may be two times higher still, she adds. The concerns about BCG are corroborated by an as-yet-unpublished study in Argentina, presented at a 2005 meeting by Aurelia Fallo of Children's Hospital Ricardo Gutiérrez in Buenos Aires.

    The solution sounds easy: Children born to HIV-infected mothers should be tested, and if HIV-positive, should not get BCG. But in many African countries, mothers aren't tested for HIV to begin with. What's more, mother-to-child transmission of HIV occurs mostly around birth, so tests can't be reliably done until at least 6 weeks later. That would mean postponing vaccination of HIV-exposed children until at least that point.

    The Western Cape is considering a program to do just that, says Hesseling, but it's difficult enough in South Africa; in many other African countries, it's likely not feasible. Any change in standard vaccination schedules is a major undertaking, she says, and postponing vaccination carries the risk that some of the children will never come back. Even in countries with high HIV rates, a large majority of children are HIV-negative, she points out; if they missed the vaccine because of a policy change, “it would be a disaster.”

    Many are looking to the World Health Organization (WHO) for guidance. In May, after reports from two expert panels, WHO began advising against vaccinating HIV-positive babies. But the new recommendations are “not all that helpful,” says Hesseling. In an e-mail to Science, a WHO communications officer said that “there is concern that recommendations might become counter-productive if BCG use ceases or is discontinued in HIV-endemic populations.” WHO continues to recommend using BCG if nothing is known about a child's HIV status, she wrote.

    The agency is in a difficult position, Doherty says, as it will be blamed if overall vaccination rates drop. “In situations like this, there's always a tendency to err on the side of the status quo,” he says.

  10. ROBOTICS

    Robotic Cars Tackle Crosstown Traffic--and Not One Another

    1. Adrian Cho

    In DARPA's Urban Challenge, cars that drive themselves face off in a strange, soulless rush hour. Are human drivers about to go the way of the buggy whip?

    VICTORVILLE, CALIFORNIA—The Land Rover bristles with sensors like a mechanical porcupine. John Leonard, an engineer at the Massachusetts Institute of Technology (MIT) in Cambridge, ticks off the robot's features. On the roof spins a conical laser range finder called a lidar that sees in three dimensions. A dozen lidars that see in one direction, 15 radars, and six digital cameras look out every which way. Computers fill the back of the truck, and a generator supplies the 3.5 kilowatts of power they need. It's impressive. But all this so the truck can turn left across traffic by itself?

    The robot is one of nearly three dozen vying in the Urban Challenge, a competition sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA). It's the third and most demanding in a series that aims to spur the development of autonomous vehicles, which the U.S. military hopes to press into service by 2015. In 2004 and 2005, robots raced one at a time across open terrain. This time, they must navigate the streets of an abandoned air base here in the Mojave Desert without colliding with one another or with human-guided vehicles.

    The competition showcases some of the world's best talent in robotics. “We were drawn to the Urban Challenge because it requires real-time decision-making in a dynamic environment and in the presence of uncertainty,” Leonard says. It also serves the higher purpose of trying to save lives, as worldwide, 1.2 million people die each year in traffic accidents that robotic cars might help avoid.

    And yet the Urban Challenge is at least slightly absurd. It looks a bit like a real race. Engineers wear bright shirts emblazoned with the logos of sponsors—GM, Ford, Intel, Google. Teams have hauled in tractor trailers full of equipment and plastered their robots with decals. Besides the $2 million first prize, the appeal of the challenge is obvious. It's hard, and by pitting idea against idea and technology against technology, “it determines what technical DNA moves to the next generation” in the evolution of autonomous vehicles, says roboticist William “Red” Whittaker of Carnegie Mellon University in Pittsburgh, Pennsylvania.

    Still, the task the robots will attempt seems so ordinary. They must obey the California traffic laws (although if two collide, they won't have to exchange insurance information as human drivers are required to do). We're all here to watch traffic. But we won't see with our own eyes. Instead, we'll have to watch it on television. It's not even clear what DARPA gets out of this well-crafted media circus. The competition is meant to stimulate the development of cars that drive themselves—and it has—but DARPA does not require winners to reveal to the agency the details of their technologies.

    Comeback kid.

    After a delayed start, Carnegie Mellon's Boss cruises briskly to a victory.

    CREDIT: A. CHO/SCIENCE

    Go ahead, bend the rules

    At the decaying fighter base, across the road from the new federal prison here in Victorville, 11 teams have made the final competition. Three years ago, not one robot traversed more than a dozen kilometers of the 230-kilometer off-road course. A year later, four completed a similar course. And this year's robots are far more capable than last year's crop, possessing better sensors, more powerful computers, and, most important, more sophisticated programming. “Driving is a software problem, not a hardware problem,” says engineer Michael Montemerlo of Stanford University in Palo Alto, California. “At Stanford, we can't build a better car, but we can make a smarter car.”

    Computationally, this year's challenge is much more difficult than the first two, researchers say. In the desert races, the robots had only to identify obstacles in a static landscape and plot a safe path around them. This time, the vehicles will have to avoid other cars, including other robots, while at the same time obeying the relatively arbitrary traffic laws. To do that, each robot's computer must calculate the likely trajectories of all the objects around it and plan to miss them. Of course, a robot cannot know exactly where another car will go, so the machines generally employ layers of probabilistic algorithms to decide their next moves.
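
    To make the layered, probabilistic reasoning concrete, here is a small Python sketch that samples an oncoming car's possible future positions (Monte Carlo style) and scores each candidate maneuver by its estimated collision probability. It is only an illustration under simple assumptions (constant-velocity motion, Gaussian noise, invented numbers), not a description of any team's actual software.

```python
# Hypothetical sketch of probabilistic collision checking: sample where a
# tracked car might be, then estimate how risky each candidate maneuver is.
# The motion model, noise, and numbers are invented for illustration.
import math
import random

def predict(track, dt, noise=0.5):
    """Sample one possible future position of a tracked car, assuming
    roughly constant velocity plus Gaussian position uncertainty."""
    x, y, vx, vy = track
    return (x + vx * dt + random.gauss(0, noise),
            y + vy * dt + random.gauss(0, noise))

def collision_probability(ego_path, tracks, dt_steps=(1.0, 2.0, 3.0),
                          samples=200, radius=2.0):
    """Monte Carlo estimate of the chance that a candidate ego path comes
    within `radius` meters of any tracked vehicle at the given times."""
    hits = 0
    for _ in range(samples):
        for dt, ego_pos in zip(dt_steps, ego_path):
            for track in tracks:
                ox, oy = predict(track, dt)
                if math.hypot(ego_pos[0] - ox, ego_pos[1] - oy) < radius:
                    hits += 1
                    break
            else:
                continue   # no conflict at this time step; check the next
            break          # conflict found; count this sample once and stop
    return hits / samples

# Candidate maneuvers: ego positions (meters) at t = 1, 2, 3 seconds.
candidates = [
    ("turn left", [(2, 2), (4, 6), (5, 12)]),
    ("wait",      [(0, 0), (0, 0), (0, 0)]),
]
# One oncoming car, tracked as (x, y, vx, vy) in meters and meters/second.
tracks = [(30.0, 10.0, -8.0, 0.0)]

for label, path in candidates:
    p = collision_probability(path, tracks)
    print(f"{label:10s} estimated collision probability: {p:.2f}")
```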

    MIT has decked its robot, Talos, with the most sensors. The radars see distant objects, the lidars see at an intermediate range, and the cameras spot things close by, explains David Barrett, a team member from the Franklin W. Olin College of Engineering in Needham, Massachusetts. Talos depends mainly on its sensors to navigate, Barrett says. That's because the team assumed, incorrectly it turns out, that DARPA would not let robots use signals from the satellite-based Global Positioning System (GPS) all over the course.

    Researchers from Stanford, who won the 2005 competition, say that they focus on the algorithms programmed into their robot. Merely encoding the traffic laws can leave the robot stymied, says Stanford computer scientist Sebastian Thrun. For example, when two robots arrive at a four-way stop simultaneously, each may try to yield to the other endlessly. To avoid such deadlock, the team lets its robot skirt the laws. “Our car has a hierarchy it follows,” Thrun says. “At the top, it obeys strict rules. And if it gets stuck, it ignores more and more rules.” Fair enough. Why expect more from a robot than a human?
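
    The relax-rules-when-stuck idea can be sketched in a few lines of Python; the rule names, the stand-in planner, and the 10-second timeout below are hypothetical, meant only to illustrate the kind of hierarchy Thrun describes, not Stanford's actual code.

```python
# Hypothetical sketch of progressive rule relaxation at a four-way stop.
# Rules are listed from first-to-relax to last-to-relax; all details invented.
RULES = [
    "yield_to_vehicle_with_right_of_way",
    "keep_minimum_gap",
    "stay_in_lane",
]

def plan_move(active_rules):
    """Stand-in for a real motion planner. It fails while the strict
    right-of-way rule is active, mimicking two robots that keep
    yielding to each other forever."""
    if "yield_to_vehicle_with_right_of_way" in active_rules:
        return None
    return "creep_forward"

def decide(stuck_limit_s=10):
    """Plan under the full rule set; whenever no move is found for
    stuck_limit_s seconds, drop the next rule and try again."""
    active = list(RULES)
    waited = 0
    while True:
        move = plan_move(active)
        if move is not None:
            return move, active
        waited += stuck_limit_s       # pretend this long passed with no progress
        if active:
            dropped = active.pop(0)   # relax the next rule in the hierarchy
            print(f"stuck for {waited} s, relaxing rule: {dropped}")
        else:
            return "stop_and_wait", active

move, still_enforced = decide()
print("chosen move:", move, "| rules still enforced:", still_enforced)
```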

    Team AnnieWAY, one of two German squads in the final, has taken a minimalist approach to guiding its Volkswagen Passat. The team relies almost entirely on the $75,000 three-dimensional (3D) lidar, which Bruce Hall and colleagues at Velodyne Acoustics Inc. in Morgan Hill, California, developed to compete in the 2005 DARPA challenge. The sensor may be all you need, says Sören Kammel of the Karlsruhe Institute of Technology in Germany. “I think some teams have a lot of sensors because they have a lot of sponsors, and everybody wants their sensor on the car,” he says.

    Even that one sensor is beyond the means of Donald Harper and his six teammates from the University of Central Florida in Orlando. They've outfitted Knight Rider, a 1996 Subaru Outback that belonged to Harper's wife and has 99,257 miles (159,705 km) on it, with just enough gizmos to get around the course—they hope. Instead of the spinning 3D lidar, they use two lidars that see in one direction and rock them back and forth. “If just one wire falls off, something essential is not going to work,” Harper says. Still, the team made the final having invested only $130,000 in the project.

    Robots, start your engines!

    Race day usually brings the intoxicating smell of high-octane fuel and the electrifying scream of engines. But not here. At 8:00 a.m., the robots leave the starting area, one by one, like rental cars leaving a lot. There's a glitch. Interference from a jumbo TV screen knocks out the GPS receiver of the first qualifier, Boss, Carnegie Mellon's Chevy Tahoe. The team replaces the unit and has to wait 30 minutes to regain the signal. Meanwhile, Odin, a Ford Escape from Virginia Polytechnic Institute and State University in Blacksburg; Junior, Stanford's Volkswagen Passat; and the others head out, hesitating and swerving as if driven by octogenarians. After a half-hour, all 11 robots—plus their chase cars and 37 other cars—are on the road.

    There's only one curve from which to glimpse the robots, so DARPA has hired a helicopter and is televising the event on three huge screens in a vast tent. Jamie Hyneman and Grant Imahara of the geeky cable-television reality show MythBusters provide commentary. It's like watching a hybrid of a NASCAR race and the infamous O. J. Simpson low-speed police chase.

    After you.

    Stanford's Junior and Virginia Tech's Odin negotiate an intersection.

    CREDIT: DARPA

    Each robot has to complete three “missions” comprising six or seven “sub-missions,” such as parking in exactly the right space in a lot, traversing an off-road passage, or navigating between two places. After each mission, the robots return to the start area to download the specifications for the next, and each machine must travel 60 miles (97 kilometers) in less than 6 hours.

    At first, the action comes thick and fast. An hour into the race, TerraMax, the hulking vehicle entered by military contractor Oshkosh Truck Corp. in Wisconsin, turns toward a pillar and gets stuck staring at it. Forty-five minutes later, Central Florida runs straight toward a house. Caroline, the robot from Team CarOLO, the other German squad, collides with MIT's Talos and loses sensors. By 11:00 a.m., five robots have either failed or been disqualified.

    Then things settle down. The remaining robots' “personalities” emerge. Carnegie Mellon's Boss zooms confidently away from stops, a hard charger like team leader Whittaker. Stanford's Junior glides around smoothly, so much so you hardly notice it. MIT's Talos is aggressive in traffic—it also clips Cornell's Chevy Tahoe, Skynet—but skittish off-road, stopping and starting like a cat creeping down a steep slope.

    Around 1:30 p.m., three teams have nearly completed their missions, and spectators swarm back to the grandstands. At 1:42, Stanford cruises across the finish line, followed a minute and a half later by Carnegie Mellon. Upstart Virginia Tech cruises home third—even without the 3D lidar. “We knew we were good,” says Virginia Tech's Alfred Wicks. “We'd done our homework.” The University of Pennsylvania's Toyota Prius, Little Ben, straggles in an hour later. Sometime past 3:30 p.m., MIT slips in just before Cornell.

    The outcome seems obvious. Carnegie Mellon spotted Stanford and Virginia Tech a 20-minute head start and made up almost all of it. It seems the victory should be theirs. DARPA officials will make the final call, however. And, some participants grumble, DARPA never fully explains its judgments.

    Make it out to …

    But the next morning brings no surprises. Carnegie Mellon walks off with the win. Stanford takes second and $1 million; Virginia Tech takes third and $500,000. “There's tremendous satisfaction in what the whole field accomplished,” Whittaker says. “That was a day that stunned the world.” DARPA Director Anthony Tether also gushes. “Quite frankly, I watched these things and I forgot after a while that there wasn't anybody in there,” he says. “It's a historic day—'bot on 'bot for the first time!”

    Maybe there's something to the grandiose rhetoric. Now only a Luddite could doubt that soon cars will guide themselves, at least in a pinch to avoid collisions. In fact, the technology already seems ripe for low-risk applications, such as automating farm equipment, and the leading teams are pushing to commercialize their software. “I think it's going to come in bits and pieces,” says Charles Reinholtz, leader of the Virginia Tech team and an engineer at Embry-Riddle Aeronautical University in Daytona Beach, Florida.

    Ironically, the success of the Urban Challenge could reduce the chances that DARPA will stage another competition. “DARPA never finishes anything,” Tether says. “All we do is show that it can be done” in the hope that industry takes over and pushes further development. Clearly, when it comes to making robotic cars, the Urban Challenge has shown that it is possible.

    Still, many engineers are eager for another competition. Their robots aren't nearly ready for the open road, they say, and many already know what they would like to see in the next challenge: a contest for autonomous cars that must communicate and work together. Suddenly, that doesn't seem quite so absurd.

  11. BEHAVIORAL GENETICS

    Evidence Linking DISC1 Gene to Mental Illness Builds

    1. Jean Marx

    Animal studies add weight to the view that an important gene for brain development plays a role in diseases such as schizophrenia and depression

    CREDIT: SPENCER JONES/GETTY IMAGES

    Every clan has its misfits, but an extended family in northern Scotland is extraordinary. More than half have suffered from schizophrenia or some other form of mental illness. A group of Scottish researchers reported in 1990 that the affected people all carried the same genetic anomaly—a translocation, or swap, of two stretches of DNA on the long arms of chromosomes 1 and 11. With modesty, the investigators wrote that this “may be a promising area to examine” for genes that predispose people to mental illness.

    The area turned out to be very promising indeed. By the year 2000, it had led researchers to a gene called DISC1, which may be a key player in the chain of events leading to mental illness. The circumstantial evidence for assigning a major role to DISC1 (Disrupted-in-Schizophrenia 1) is strong. Several studies have linked the gene to schizophrenia, major depression, bipolar disorder, and autism; recent findings on DISC1's biological function appear to support the hypothesis.

    Animal studies have shown that the gene is needed for normal brain development both in the embryo and later in life and that blocking its function produces subtle abnormalities in brain structure resembling those seen in patients with schizophrenia. The protein encoded by the gene also turns out to be part of a nerve cell signaling pathway involved in learning, memory, and mood. “I think this gene is really the first big breakthrough in schizophrenia … and other mental diseases,” says Christopher Ross of Johns Hopkins University School of Medicine in Baltimore, Maryland.

    After decades of following false leads, researchers are cautiously optimistic that they are on the right track with DISC1. But the evidence isn't airtight. Except in the Scottish family, researchers haven't consistently linked any particular DISC1 variant to a mental disease. “There's no smoking gun,” cautions psychiatrist Daniel Weinberger of the National Institute of Mental Health in Bethesda, Maryland. But if the connection of DISC1 to mental disorders holds up, it might lead to better therapies for treating the conditions—especially schizophrenia, a devastating disease that is now poorly controlled at best.

    The hunt begins

    Gene hunters have had a hard time pinning down the genes involved in mental disorders mainly because the diseases are complex, meaning that several genes, as well as environmental factors, contribute to their development. That's why the Scottish family proved to be such a boon. The 1990 study, which was conducted by a team including David St. Clair, Douglas Blackwood, and Walter Muir of the University of Edinburgh, U.K., suggested that the region disrupted by the translocation seen in affected members of the Scottish family held one or more genes involved in the disorders.

    The gene search went slowly at first, but in 2000, a team led by David Porteous and Kirsty Millar, also at Edinburgh, identified two previously unknown genes on chromosome 1 that were interrupted by the genetic anomaly. Attention has focused on the first, DISC1, which normally produces a large protein whose structure suggests that it interacts with other proteins.

    Shortly after the identification of DISC1, a follow-up study by the Edinburgh workers buttressed a causative role for the gene in the family's mental disorders; the linkage analysis was highly statistically significant, with a LOD score of 7, where 3 is conventionally considered strong evidence. In this family, “we're as close to causality as you could get,” Porteous says. Even so, environmental influences may still be important, as a few members carry the translocation but remain unaffected.
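
    For reference, a LOD (logarithm of the odds) score compares the likelihood of the pedigree data under linkage with the likelihood under no linkage:

```latex
% L(\theta) is the likelihood of the family data assuming linkage with
% recombination fraction \theta; L(1/2) is the likelihood assuming no linkage.
\[
  \mathrm{LOD} = \log_{10}\frac{L(\theta)}{L\!\left(\theta = \tfrac{1}{2}\right)}
\]
```

    A LOD of 3 corresponds to odds of 1,000:1 in favor of linkage, the conventional threshold; a LOD of 7 corresponds to odds of 10 million to 1.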

    The Scottish family is unusual because so many members develop bipolar disorder and depression, as well as schizophrenia, and no one has detected a similar DISC1 abnormality in other families. In 2003, however, Leena Peltonen of the National Public Health Institute in Helsinki, Finland, and her colleagues reported a linkage between a particular set of three single-nucleotide polymorphisms (SNPs) in DISC1 and schizophrenia in a group of 458 Finnish families. “This is the first genetic evidence that DISC1 has something to do with the more common garden variety of schizophrenia,” Peltonen says.

    Held in place.

    An RNAi that blocks DISC1 synthesis prevents migration of neurons (green) to the upper layer of the cortex in mice (right micrographs); controls are on the left.

    CREDIT: KAMIYA ET AL., NATURE CELL BIOLOGY 7, 12 (2005)

    Other workers have picked up linkages between DISC1 variants and schizophrenia in a few U.S. and European families. And this year, the Peltonen team linked the gene to bipolar disorder and to autism in their Finnish population.

    Research on DISC1's normal function has strengthened the case that it is involved in mental disease. For starters, the gene is expressed in many tissues, but particularly in brain areas such as the hippocampus and cerebral cortex that are affected in schizophrenia. That puts the gene's protein product in the right locations to influence the development of the mental disease.

    In addition, as predicted from DISC1's structure, researchers have unmasked numerous binding partners for the protein. The current count stands at about 50, Porteous says, including “10 or 12 where the interaction influences function.” Several of these partners suggest a role for DISC1 in brain development and cognition.

    For example, about 5 years ago, three independent groups, led by Porteous, by Akira Sawa of Johns Hopkins University School of Medicine, and by Christopher Austin at Merck Research Laboratories in West Point, Pennsylvania, found that DISC1 binds to a protein called NUDEL (for NudE-like) that is needed for the neuronal migrations that occur during brain development. Several other partners of DISC1, including FEZ1, LIS1, dynein, and tubulin, are also involved in nerve-cell migrations.

    That suggests that brain development might go awry if DISC1's function is altered or missing. Evidence supporting that idea includes the demonstration about 3 years ago by the Sawa team that inhibiting DISC1 synthesis in mouse embryos with small interfering RNAs causes abnormal migration of neurons to the cerebral cortex.

    More evidence comes from animal models developed in the past year. This fall, two Johns Hopkins groups, one led by Sawa and the other by Mikhail Pletnikov, published reports on two similar mouse models produced by introducing a truncated version of the DISC1 gene into mice. Both lines showed similar changes. “The brain is superficially normal but isn't wired correctly,” says Ross, a member of the Pletnikov team.

    Outgrowth of neuronal projections called neurites, which help guide neuronal migrations, was reduced. In addition, interior brain spaces called ventricles were larger than normal—an alteration also seen in people with schizophrenia. And although it's not possible to diagnose mice as “schizophrenic,” the animals showed certain behavioral changes seen in human patients, such as hyperactivity and social and cognitive impairment. (The Sawa team's results were published online 3 August in the Proceedings of the National Academy of Sciences; those of the Pletnikov team appeared online in Molecular Psychiatry on 11 September.)

    In these mice, the mutant DISC1 protein exerted its effects in the embryos. Another Johns Hopkins group led by Hongjun Song has traced the gene's effects in the brains of adult mice. In these experiments, described in the 21 September issue of Cell, the researchers showed that inhibiting DISC1 expression in newly formed adult brain neurons produces effects opposite to those seen in the other mouse models. Neurite outgrowth increased rather than decreased, and neurons migrated farther than normal. “If you disrupt DISC1 function, everything goes faster,” Song says.

    Sawa notes that there are precedents for the same molecule having opposite effects depending on its context. Indeed, he is now working with Song and Pletnikov to identify the molecular change that can switch DISC1's activity from inhibiting to stimulating neuronal migration. But whatever the outcome, researchers have long thought that schizophrenia is the result of aberrant brain development, and the results with these models buttress the case that irregularities in DISC1 function contribute to that.

    Brain disruption.

    Putting a mutant human DISC1 gene in mice produces enlarged lateral ventricles similar to those of human schizophrenia patients.

    CREDIT: PLETNIKOV ET AL., MOLECULAR PSYCHIATRY 1, 14 (2007)

    A cAMP connection

    Although it may not be possible to help patients by correcting abnormalities in brain development, work by Porteous and Millar, in collaboration with Miles Houslay of the University of Glasgow, U.K., suggests another tack to take. Two years ago, this group identified an enzyme called phosphodiesterase 4B (PDE4B) as one of DISC1's many binding partners. This enzyme is a key regulator of cyclic adenosine monophosphate (cAMP), a second messenger that translates nerve signals into cellular responses, including those needed for memory formation.

    The PDE4B enzyme breaks down cAMP after it has done its job in the cell, and further work by the Edinburgh group indicates that DISC1 inhibits this activity until rising cAMP concentrations cause it to drop off the PDE4B molecule. Alterations in DISC1 structure that disrupt the normal DISC1-PDE4B interaction might therefore interfere with learning and memory, among other things. “This is very important work,” Sawa says. “Memory and cognition are both disturbed in schizophrenia and bipolar disease.”

    Additional evidence that disrupting the DISC1-PDE4B interaction can affect mental states comes from work on mouse models developed by Steven Clapcote and John Roder of Mount Sinai Hospital in Toronto, Canada, and their colleagues in collaboration with the Porteous team. (The results appeared on 3 May in Neuron.) By screening a library of mutant mice at the RIKEN research institute in Japan, the researchers identified two lines of mice, each with a different DISC1 mutation that reduces DISC1 binding to PDE4B.

    Behavioral studies further showed that mice with one mutation display symptoms construed as schizophrenia-like, including hyperactivity and impaired learning and memory. Those symptoms were reduced by treatment with the drug rolipram, a PDE4B inhibitor, and also by treatment with two drugs used to treat human schizophrenia. The other mouse strain, Porteous says, had more depression-like symptoms. For example, when placed in water, the animals quickly gave up trying to escape and simply floated. These animals responded to treatment with antidepressant drugs. Developing drugs to regulate an enzyme such as PDE4B might lead to better ways of treating schizophrenia and other mental disorders.

    The fact that DISC1 associates with so many different proteins might help explain the diversity of conditions to which it has been linked. “It seems that DISC1 acts as a scaffold around which other proteins cluster,” Porteous says. Thus, the symptoms that develop in a given individual might depend on which interaction is altered by a genetic variation in DISC1. And conversely, variations in any of DISC1's partners could also lead to abnormal brain development or function. Geneticists have begun hunting for linkages between these other proteins and various mental disorders.

    Neurobiologists are heartened by what they've learned so far about DISC1. At the very least, the work has tapped into what could be a very important pathway for regulating brain neuron activities. As Ross puts it, “the findings support the idea that schizophrenia is a brain disease and can be studied the same way as degenerative diseases.”

  12. Making Machines That Make Others of Their Kind

    1. Adrian Cho

    For decades, self-replicating robots have been a roboticist's dream—and a science-fiction writer's nightmare. Yet engineers haven't found a way to create 'bots that beget 'bots

    As any sci-fi fan knows, monkeying with robots ultimately leads to mass carnage. From R.U.R. (Rossum's Universal Robots), the play that in 1921 introduced the word “robot,” to the battles with the Daleks in the television show Doctor Who, to the Terminator movies, the tale has been told time and again. Humans (or humanoid aliens) foolishly make robots that reproduce. The self-replicating robots decide that people are a nuisance and set out to exterminate them. This scenario might seem less farfetched now that robots can make cars and microchips and stalk terrorists from the skies.

    Don't panic just yet. Vicious self-replicating machines resembling Arnold Schwarzenegger won't be breaking down doors anytime soon. Anyone mighty enough to kick a toy or topple blocks can overpower today's self-replicating robots, which actually need a lot of help to make something identical to themselves. Self-replication “is fundamental to nature and at the core of evolution, and yet we have no idea how to do it with synthetic systems,” says engineer Hod Lipson of Cornell University. “That's always been a sore point for robotics.”

    A handful of researchers are striving to change that. Working on shoestring budgets and with materials associated more often with child's play than research, they've developed simple robots that can make others like themselves out of a few relatively complex parts. They're defining more precisely what it means for a machine to self-replicate. And some are striving to emulate nature's knack for reproduction. Progress has been modest—stacks of blocks that stack other blocks won't conquer the world—but researchers are optimistic that, at the very least, they may soon better understand exactly what problem they're trying to solve.

    All agree that progress has been slowed by a lack of funding, as self-replicating robots serve no earthly purpose—although in theory, they could be useful in establishing a base on the moon or on Mars. “The field is, like, three people,” says mechanical engineer Gregory Chirikjian of Johns Hopkins University in Baltimore, Maryland. Researchers face conceptual barriers as well. “There is a great need to come up with the basic scientific principles” of self-replication, says aerospace engineer Pierre Kabamba of the University of Michigan, Ann Arbor. Still, researchers have taken intriguing steps toward making machines that build copies of themselves.

    On track.

    Engineer Gregory Chirikjian's robots must follow a specific path to replicate.

    CREDIT: © 2007 GREG SCHALER

    Easy, in theory

    The notion of self-replicating machines stretches back centuries. But the rigorous theory of self-replication emerged in the 1940s and 1950s, when mathematician John von Neumann, who also laid much of the groundwork for modern computing, analyzed the problem.

    Von Neumann considered a collection of automata: self-guided cell-like entities that interact according to specific rules. He wondered what tasks a clump of them would have to do to replicate from raw materials and basic parts. The thing would have to consist of at least three subunits, he figured: first, a set of instructions for making a device; then, a unit that reads those instructions to make a new device; and finally, one that copies the instructions, which von Neumann envisioned as a coded tape.

    This agglomeration would read the tape, make its progeny, and pass a copy of the tape to its offspring. The scheme bears a striking resemblance to biology, in which cells replicate by reading and copying tapelike molecules of DNA, the structure of which was discovered after von Neumann cooked up his ideas. Spurred by von Neumann's work, computer scientists and others have designed myriad programs that replicate within a computer—including viruses and worms.
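
    The simplest illustration of such self-replicating software is a quine, a program whose only output is its own source code. The short Python example below is the standard textbook construction (not any particular virus or worm): the string plays the role of von Neumann's coded tape, and the final line both reads the tape to rebuild the program text and copies the tape into the offspring.

```python
# A classic quine: running this program prints exactly its own source.
# The string s is the "tape"; printing s % s reads the tape to rebuild
# the program and, via %r, embeds a copy of the tape in the output.
s = 's = %r\nprint(s %% s)'
print(s % s)
```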

    But as a plan for making self-replicating machines, von Neumann's work left much to be desired. Like a true mathematician, he skipped over the practical difficulties a real machine would have in gathering parts. “He doesn't address the physics at all,” Lipson says. “Bringing in the materials, dealing with the errors—the physics is the difficult part.”

    Give a child a Lego set, and she will immediately dump the pieces on the floor and comb through them to find the ones she wants. That's precisely the task that stumps machines. “That's not just the hard part for self-replication, it's the hard part for robotics in general,” Chirikjian says. “The reason you don't have robots doing your dishes and walking your dog is that the world is very complicated, and it's difficult for a robot to handle it.”

    Picking up the pieces

    So some engineers give their robots a helping hand. Two years ago, Lipson and colleagues unveiled programmable blocks measuring 10 centimeters across. Each consisted of two pyramid-shaped halves that could swivel against each other, and each block could grip others using magnets on its faces. Wriggling like a drunken hula dancer, a stack of four blocks could assemble a second stack, if new blocks were fed in at the right places and times, the researchers reported in the 12 May 2005 issue of Nature.

    Although one stack of blocks does form another, it still seems a far cry from a fully self-replicating robot. Instead of some basic part, each cube is itself a fairly sophisticated robot. And the contorting tower requires plenty of human assistance to help it locate the additional blocks. To produce something truer to the spirit of self-replication, Lipson is now experimenting with simpler cubes measuring only 500 micrometers wide that jumble together randomly in a fluid. “What is the smallest building block from which we can make everything?” Lipson says. “That's the crucial question.”

    Chirikjian also began with robots that assembled others from a few complex chunks. Starting in 2002, he and his students began experimenting with robots made of Lego bricks. At first, they built remote-controlled vehicles that could be broken into a few components. When placed in a pen, one robot could push the components of another together—a crude form of self-replication, given that the guts of a robot lay mostly in the one component containing the computerized controller.

    Since then, Chirikjian and his students have striven to make their robots more autonomous and to assemble them from simpler parts. They developed a system of optical sensors that allows a robot to follow a colored stripe to find various parts. They have simplified the robots by replacing the central controller with cruder electronics distributed throughout the pieces. Recently, the researchers demonstrated a self-replicating robot made of six fairly simple modules, and Chirikjian and a grad student are working on one consisting of 100 pieces.

    Chirikjian's robots look more or less self-sufficient, but they do not truly forage for parts. Rather, they depend on a track to guide them. Chirikjian says that he is working to eliminate the track. But he notes that even biological systems depend on their environment to reproduce. “If you take the DNA out of the environment of the cell, it's no longer self-replicating,” he says.

    Doing what comes naturally

    Given the challenges of step-by-step, or deterministic, assembly, some researchers are opting for chaos instead. Rather than making their robots fetch pieces, they're relying on random collisions to bring parts to the robots in efforts that mimic the mingling of biomolecules in cells.

    Basic parts?

    A stack of Hod Lipson's cubes stacks more cubes, but each is itself a complex robot.

    CREDIT: CORNELL UNIVERSITY

    For example, as a graduate student at the Massachusetts Institute of Technology (MIT) in Cambridge, materials scientist Saul Griffith developed smart tiles that can latch onto one another as they glide and jumble on an air table. Whether two tiles latch depends on how they are already connected to other tiles. When the tiles were properly programmed, a chain of them could form another chain, Griffith and colleagues reported in the 29 September 2005 issue of Nature. “In many respects, self-replication is just a party trick,” says Griffith, now president of Makani Power in Alameda, California. “You don't even need much logic.”

    The random, or “stochastic,” approach may have a key advantage. Ironically, jumbling huge numbers of pieces together should be easier than putting them together one by one, says engineer Eric Klavins of the University of Washington, Seattle, who has developed a similar set of triangular tiles. “If you want to do self-replication with billions of parts, you're not going to get away with determinism,” he says. The stochastic approach presents its own challenges, however. For example, researchers must figure out how to form larger useful structures from the pieces while preventing them from glomming together in undesirable ways.
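
    As a toy illustration of the stochastic approach, the Python sketch below lets free parts collide at random and latch onto growing chains only while a local rule allows it, so assembly stops at a programmed length instead of glomming together indefinitely. The latching rule and every parameter are invented for this example; they are not taken from Griffith's or Klavins's tile systems.

```python
# Toy simulation of stochastic self-assembly: random collisions plus a
# purely local latching rule. All numbers here are arbitrary.
import random

def simulate(n_parts=1000, target_len=4, steps=20000, seed=1):
    """Free parts collide at random; a part latches onto a chain only if
    the chain is still shorter than target_len, so chains stop growing
    at the programmed length."""
    random.seed(seed)
    free = n_parts
    chains = []                          # each entry is the length of one chain
    for _ in range(steps):
        if free == 0:
            break
        if chains and random.random() < 0.8:
            i = random.randrange(len(chains))
            if chains[i] < target_len:   # local latching rule
                chains[i] += 1
                free -= 1
            # if the chain is already complete, the part simply bounces off
        else:
            chains.append(1)             # a collision seeds a new chain
            free -= 1
    complete = sum(1 for length in chains if length == target_len)
    return free, len(chains), complete

free, started, complete = simulate()
print(f"free parts left: {free}, chains started: {started}, "
      f"complete chains: {complete}")
```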

    A few molecular biologists are even pushing to develop artificial cells. For such research, the emphasis is a little different, says Jack Szostak of Harvard Medical School and Massachusetts General Hospital in Boston. In chemistry, self-replication is fairly common, as any chemical that catalyzes its own production qualifies. Szostak and colleagues are striving for something more. “What we're trying to do is to develop a self-replicating chemical system that can evolve,” Szostak says.

    For the membranes for his artificial cells, Szostak employs molecules called lipids, which can form fluid-filled shells. Within the shells, he hopes to store a length of DNA, RNA, or a related molecule that can store coded information, replicate, and mutate. The researchers have already shown that they can make the shells grow and divide—by forcing them through a small pore—and they are working on the material to store within the shells.

    Researchers have a long way to go, however. For example, molecular biologists have been searching for a strand of RNA called a ribozyme that can catalyze the replication of itself and other strands. Such a ribozyme would have to churn out strands a couple of hundred chemical letters long, but so far the best candidate can string together only about 20 letters. “Twenty years ago, I thought this would be a 20-year project,” Szostak says. “Maybe it still is.”

    Waiting for the Terminator

    Where research on self-replication will lead remains unclear. Some say that practical considerations will inevitably force researchers toward biomolecular systems. “Self-replicating robots are going to be made out of biomolecules long before bulldozers start copying themselves,” Griffith says. Others say it's not so clear that self-replication in synthetic biology is easier than in mechanical robotics. “You're comparing two very difficult things,” says molecular biologist David Bartel of MIT. “So which one is more difficult may not matter.”

    Meanwhile, some say that the concept of self-replication needs a rethink. Researchers have thought that a system is either self-replicating or it isn't, Lipson says. But given that even biological systems rely heavily on their environment, it seems there are different shades of self-replication. Both Lipson and Chirikjian have developed mathematical tools to quantify them. Using them, researchers might analyze a system to figure out how to make it more self-replicating, Lipson says.

    Of course, employing such scales, one might argue that self-replicating robots already exist. Machines are typically made by other machines these days, albeit with plenty of help and guidance from humans. So perhaps the entire industrial enterprise constitutes a swarm of self-replicating robots. That seems plausible. But it also seems to be a disappointingly long way from the grand vision of machines that don't need people. Maybe that's a good thing.

  13. Robots' Allure: Can It Remedy What Ails Computer Science?

    1. Benjamin Lester

    Faced with sagging enrollments in the field, school and university instructors are engineering a deus ex machina to turn things around

    When Elizabeth Sklar's class starts, the first thing her students do is reach for their Lego bricks. This isn't kindergarten; it's an introductory computer science course at Brooklyn College, one of 19 campuses of the City University of New York, and the Lego bricks are a far cry from the ones you built houses and towns from as a child. They're Mindstorms RCX, Lego's programmable robotics kit, and for Sklar, a computer scientist at the college, they are bait to get students hooked on computer science. In some ways, she says, it works too well: “When they get the robots in their hands, they don't want to do lectures, I can't get them to leave, and they want to take the robots home with them.”

    Her first assignment has the students, most of whom have never programmed before, code their wheeled robots to drive forward in a straight line for 6 seconds. By the end of the first lab, some are tracing spirals, and by the end of the robotics unit, a month and a half later, the robots are using sensors to see and feel their environment.

    The same things are happening in college and secondary school classrooms across the United States and in the Middle East, Europe, Asia, and South America. Everywhere the goal is identical: to attract more students to computer science.

    Over the past 30 years, undergraduate computer science enrollments at universities in the United States have followed a roller-coaster-like trajectory. After peaking during the PC revolution of the early 1980s and the dot-com boom of the late 1990s, the number of students pursuing a computer science major has fallen significantly. At Stanford University in Palo Alto, California, for example, the number of undergrads declaring a computer science major dropped from 171 in 2000–2001 to 86 in 2006–2007. Women, always a small minority in the field, have become even scarcer than before. The Higher Education Research Institute at the University of California, Los Angeles, reports that in 2004 fewer than 0.5% of female college freshmen were interested in computer science as a major—a low not seen since the early 1970s.

    Experts trace the drop in numbers to students' concerns about job prospects in the wake of the bursting of the dot-com bubble, increased offshoring, and the influx of cheap skilled labor from abroad. Many, however, think students are unduly pessimistic about their opportunities. And some, including Owen Astrachan, a computer scientist at Duke University in Durham, North Carolina, say the lack of interest has another key component: The classes are boring.

    Bots got game.

    In Botball tournaments, automata from around the world compete in miniature arenas.

    CREDIT: BOTBALL EDUCATIONAL ROBOTICS PROGRAM

    “The classic way to teach computer science is to [give] a really dry assignment like ‘Write a program to print the Fibonacci sequence,’” says Tucker Balch, a computer scientist at the Georgia Institute of Technology (Georgia Tech) in Atlanta. “Students don't get turned on by this.” In response, both Brooklyn College and Georgia Tech have spiced up their computer science classes with gaming, media manipulation, applications to other disciplines such as biology and economics—and robots.

    “We're using the sexiness of robots to lure students into computer science,” says Douglas Blank, who teaches introductory computer science at Bryn Mawr College, an all-women's college outside Philadelphia, Pennsylvania. Blank, Balch, and their colleagues at Bryn Mawr and Georgia Tech used a $1 million grant from Microsoft as seed money to set up the Institute for Personal Robotics in Education (IPRE) in 2006. Now, when the 180 students in Georgia Tech's introductory computer science course go to the bookstore at the beginning of the semester, each shells out $68.75 for a shrink-wrapped, lunch box-sized blue robot called the Scribbler.

    On the first day of class, students tackle their robots' Python programming language, naming the 'bots and driving them in circles. By mid-term at Bryn Mawr, robots named Henry Johnson, Borg, Igor, Mintyfresh, Myro-0001, TevBot, and RJ Nivram, among others, have progressed far beyond circles. Mr. Johnson (operated by freshman Danielle Pan, who has decorated him with stalked eyes and red-felt crab claws) and company are on a mission to seek out an orange pyramid placed in the middle of the Bryn Mawr computer lab, using the robots' cameras. Mission accomplished, the students' computer speakers erupt with customized sound files: the “Hallelujah Chorus,” lines from the classic comedy movie Airplane!, and simple, loud gloating.

    “I came here to be a German major, but now I'm really into computer science—definitely going to be a major,” says freshman Stephanie Viggiano. Freshman Rebecca Rebhuhn-Glanz had been planning a math major, but “now I'm thinking computer science—it's too much fun,” she says.

    Filling the pipeline

    Elsewhere, robots are enthralling even younger students. STEM (science, technology, engineering, and mathematics) advocates have been using them for years to reach out to students aged 6 to 18, often through leagues as competitive as any high school varsity sport.

    Botball is one such activity, run by the KISS (Keep It Simple, Stupid) Institute for Practical Robotics (KIPR), headquartered in Norman, Oklahoma. The 2007 competition pitted 301 teams of secondary school students from the United States and elsewhere against one another in 14 regional Botball tournaments, followed by a championship at the National (now Global) Conference on Educational Robotics in Honolulu, Hawaii. In 90-second rounds, contestants' machines scurried around a 1.2-by-2.4-meter playing field, vying to save a “village” (sections of PVC pipe, felt balls, and drink umbrellas) from a “volcano” (a tower spewing bright orange plush balls) by moving and sorting objects using a variety of scoops and pincers. Each team had 7 weeks last winter to design, build, and test two fully autonomous robots using Lego bricks, gears, motors, and sensors—all controlled by a KIPR robot controller called the XBC, whose brain is a Game Boy Advance. The game stresses innovative design and programming in the XBC's Interactive C language.

    Scenes from all over.

    (Top, left to right) Assembling a robot at the U.S. FIRST high school competition; a British entry takes on “Rack and Roll.” (Bottom, left to right) Botballers from Qatar; a crab-'bot at Bryn Mawr College.

    CREDITS (TOP TO BOTTOM): ADRIANA M. GROISMAN; CS CARNEGIE MELLON, QATAR; B. LESTER/SCIENCE

    The competitors included several teams from the Middle East. Botball spread to the region after the rulers of Qatar invited U.S. universities to set up branch campuses on a new 1000-hectare complex called Education City (Science, 5 December 2003, p. 1652). One taker was Carnegie Mellon University (CMU) in Pittsburgh, Pennsylvania. Charles Thorpe, a CMU computer scientist, moved out to become dean and took along his son, Leland, a former Botball player taking a year off before beginning college. With assistance from the university, the Thorpes set up a Qatari Botball league. Interest exploded—from four teams in 2004 to 18 at the 2007 competition, including three each from the United Arab Emirates and Kuwait, says Mohamed Mustafa, who took over as Botball's Middle East coordinator when Leland Thorpe returned to the United States.

    The 2007 competition included four all-female teams, one of which, a group of middle-school students from the Al Maha English School for Girls in Doha, placed fourth. Mustafa estimates that the 2008 tournament will see 36 teams, including eight from Bahrain and Egypt. As hoped, he says, some Botballers from years past have ended up at the CMU Qatar campus.

    The Hawaii regional, held at the end of April, was won this year by a team of 10- to 15-year-olds from Earl's Garage in Kamuela on the Big Island. Whereas most Botball teams are organized by schools, Earl's Garage is a community club specifically geared to promote STEM. “There's this perception that science is hard or uninteresting,” says Earl's Garage founder and Wizard of Wonder Michelle Medeiros. “Robots are a great way to remove some of that fear.” After 4 years of Botball competitions, Medeiros is organizing a team to enter a more grueling contest: the FIRST Robotics Competition (FRC).

    If Botball is the robotics equivalent of a fencing match, then FRC is more like a really foul-tempered game of ice hockey. The brainchild of Dean Kamen, the inventor of the Segway, FRC is high-energy and heavily sponsored. Whereas Botball robots are small and the focus is on programming, FRC uses 70-kilogram metal robots and concentrates on engineering. After a short “autonomous” period at the beginning of each round, the 'bots are remote-controlled.

    Kamen started FIRST (For Inspiration and Recognition of Science and Technology) in 1992, with 28 teams at a single event in New Hampshire. In 2007, 1307 teams comprising 32,500 high school students from Brazil, Canada, Israel, Mexico, the Netherlands, the United Kingdom, and the United States competed in “Rack and Roll”—culminating in world championships at the Georgia Dome in Atlanta. The game tasked two teams of robots with hanging red or blue pool inner tubes on pegs on a large circular tower in a kind of three-dimensional ticktacktoe while trying to fend off 'bots from the opposing team. FIRST calls its high school competition “a varsity sport of the mind.” It has spawned two competitions for younger students.

    Both Botball and FIRST are expensive to play. Botball materials and registration cost each team $2300, and an FRC team costs between $6000 and $8000. Both organizations offer financial aid to encourage participation from poorer schools, funded by corporations and governments concerned about STEM shortages.

    Besides Microsoft's $1 million grant to IPRE, the U.S. National Science Foundation (NSF) recently announced $6 million in grants to revamp computing education at 25 schools nationwide. FIRST counts GM, Motorola, Xerox, and the Central Intelligence Agency among its 2000 sponsors, and half of all Botball teams receive financial aid through KIPR sponsors that include NASA and the Naval Research Laboratory.

    Without a doubt, Botball, FIRST, and a slew of other competitions from around the world—including the World Robot Olympiad, the International Robot Olympiad, and the India Robot Olympiad—have succeeded in funneling students into the STEM pipeline. A Brandeis University study found that FIRST participants were twice as likely to pick college science or engineering majors as were non-FIRST participants who took similar high school math and science courses.

    “We need to inspire the best and the brightest to go into computing,” says Jeannette Wing, assistant director of NSF's Computer and Information Science and Engineering Directorate. And for better or worse, the way to students' hearts seems to be through their robots.
