News this Week

Science  21 Sep 2007:
Vol. 317, Issue 5845, pp. 1660

    Continuing Indonesian Quakes Putting Seismologists on Edge

    1. Richard A. Kerr
    A milder shock.

    A surprisingly small quake generated only small tsunami waves.


    How predictable are earthquakes? Most seem to barge in unannounced, but many seismologists have long seen evidence that—at least in some places—the same faults break over and over, producing similar quakes at roughly similar intervals. This regularity, they say, may offer a chance of forecasting earthquakes. The recent run of large quakes off the Indonesian island of Sumatra, which began with a megaquake and tsunami in 2004 and continued last week, is providing fodder for both sides in the debate over whether earthquakes behave consistently enough to be reliably anticipated.

    Both sides now have their eyes on one section of the fault offshore of the coastal Sumatran city of Padang. If that part of the fault follows the pattern some geologists expect, a tsunami-generating magnitude-8 or larger quake could well strike there in the coming months. “It's really looking ominous to me,” says geologist Kerry Sieh of the California Institute of Technology (Caltech) in Pasadena, who is watching developments while on sabbatical in nearby Singapore.

    The great Asian quake and tsunami of December 2004 certainly caught researchers by surprise. But once alerted, they knew where to look for the next big one—just to the south of the December break. The earlier rupture would have transferred stress down the fault to the next 350-kilometer-long segment, which last broke in 1861. Sure enough, a magnitude-8.7 quake reruptured that segment in March 2005. If segments of the offshore Sunda fault continued chatting in this “language of stress,” the researchers believed, the next quake should break all or part of the adjacent 700-kilometer segment to the south. That segment hadn't ruptured since 1833 (in the south) and 1797 (in the north).

    Second-generation quakes.

    The 2004 great earthquake (red star and light-blue zone) on the Sunda Trench fault (red line) triggered the 2005 quake where the fault ruptured in 1861, but the next rupture came far to the south, leaving the fault off Padang as a lingering threat.


    Last week's quakes off Sumatra offer some support for the idea that similar earthquakes repeat on the same fault segment at similar intervals, perhaps after feeling a nearby quake. The classic characteristic quake repeatedly struck Parkfield, California, from the 19th century into the 20th century. But as seismologists were reminded by Parkfield's misbehavior in its last appearance (Science, 8 October 2004, p. 206), fault behavior and communication can be frustratingly subtle, even perverse. Last week, on 12 September, a magnitude-8.4 quake did indeed strike south of the 2005 break. But it wasn't the quake called for in the stress communication script. The sequence of 2004, 2005, and now 2007 quakes skipped over the segment that last broke in 1797. Instead, much of the 1833 segment broke farther to the south.

    Even if the latest quake ruptured the same part of the fault as before, that fault slipped only a couple of meters, Sieh says, not the 10 meters it slipped in 1833. That made the quake one-twentieth the size in terms of energy released. So the falling dominoes from the north for some reason passed over a fault segment that had accumulated 200 years' worth of stress. And a more distant fault did fail, but not in the characteristic quake that might have been expected.
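    The "one-twentieth the size" comparison can be checked against the standard Gutenberg-Richter energy-magnitude relation, under which radiated energy scales as 10 to the power of 1.5 times the moment magnitude. The sketch below is illustrative only; the magnitudes plugged in are assumptions for the example, not figures reported in the article.

```python
import math

def energy_ratio(m_small: float, m_large: float) -> float:
    """Ratio of seismic energy released between two events, using the
    Gutenberg-Richter energy-magnitude relation log10(E) ~ 1.5*Mw + const.
    The constant cancels when taking the ratio."""
    return 10 ** (1.5 * (m_large - m_small))

# A factor-of-20 difference in released energy corresponds to a gap of
# log10(20) / 1.5, roughly 0.87 magnitude units -- e.g. the 2007
# magnitude-8.4 event versus a hypothetical ~9.3 for the 1833 rupture.
gap = math.log10(20) / 1.5
print(round(gap, 2))                        # 0.87
print(round(energy_ratio(8.4, 8.4 + gap)))  # 20
```

    In other words, an event that releases one-twentieth the energy sits almost a full magnitude unit lower on the moment-magnitude scale, consistent with the much smaller slip Sieh describes.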

    Some seismologists see the magnitude-8.4 quake and an accompanying pair of magnitude-7 quakes as another sign that quakes mostly pop off in a range of sizes and break a largely unpredictable assortment of fault segments. “This sure looks like a large distribution of magnitudes” generated by a random process, says geophysicist Eric Geist of the U.S. Geological Survey in Menlo Park, California, who doubts that the 2005 quake could have reached this far south to trigger the next one.

    Others are more open to a link to the previous rupture. “It's hard for me to imagine that after 170 years of no [larger-than-magnitude-8] earthquakes, the 2004, 2005, and 2007 earthquakes aren't somehow involved in triggering” subsequent earthquakes, says Sieh, who, with colleagues, published a forecast for a post-2005 quake. His Caltech colleague, seismologist Jean-Philippe Avouac, agrees. Stress from the 2005 rupture may have propagated farther but more slowly along the deep parts of the fault to trigger last week's quake, he says.

    However this fault is working, last week's events are both good news and bad news for people in the region of Padang, says seismologist John McCloskey of the University of Ulster in Coleraine, U.K. Good news in that the magnitude-8.4 quake relieved some of the stress on the fault; the next quake could be in the low 8s rather than pushing a 9. But bad news in that last week's activity could only have increased the stress on the segment off Padang, making another rupture more likely. The threat of another huge quake has receded there, says McCloskey, “but in terms of an 8, it's still the most likely place on the planet.”


    Tsunami Warning System Shows Agility—and Gaps in Indian Ocean Network

    1. Dennis Normile

    Within 15 minutes after a magnitude-8.4 earthquake rocked Indonesia's Sumatra Island last week, tsunami warnings were blaring from mosque loudspeakers in local villages. By then, many residents had already fled. “We tell people that if the shaking is strong, don't wait for a warning, evacuate,” says Patra Rina Dewi, executive director of the Komunitas Siaga Tsunami (Kogami), a nonprofit in Padang, Indonesia. For several hours after the quake, small tsunamis hit Sumatra's west coast, causing minor damage but no casualties.

    Indonesia's warning and similar alerts across the region were the fruits of a nascent Indian Ocean Tsunami Warning and Mitigation System. Officials are pleased with the performance of the partly deployed operation, which relies on a network of sensors. But the experience also reinforced the notion that technology is just one facet of a strategy, along with boosting awareness and developing evacuation and response plans. The need for tsunami preparedness was driven home by the 26 December 2004 magnitude-9.3 earthquake off Sumatra that triggered a monster tsunami. Many of the 230,000 deaths could have been prevented had coastal dwellers been warned of the impending waves.

    In response, Indian Ocean nations, buttressed by the United Nations Educational, Scientific and Cultural Organization's Intergovernmental Oceanographic Commission and donor countries, started stitching together the Tsunami Warning System, a $130 million network of seismometers, buoys for measuring swells, sea-bottom pressure sensors, and tide gauges, as well as mechanisms for sharing data (Science, 9 December 2005, p. 1602). Only a handful of the deep-ocean sensors are in place, and countries are still installing seismometers and tide gauges.

    Relying solely on seismic data to determine the magnitude and location of the 12 September earthquake, Indonesia's Meteorological and Geophysical Agency issued a tsunami warning to authorities near the epicenter “within 5 minutes of the earthquake,” boasts Fauzi, an official in charge of tsunami warnings, who, like many Indonesians, has only a given name. Residents fled low areas on their own initiative—a good response, says Kogami's Dewi. But she would like to see accurate and timely follow-up advice based on an analysis of sensor data on whether a tsunami is actually on its way. Fauzi says that's his agency's goal.

    Other countries, with the advantage of distance from the epicenter, were able to forecast the threat more accurately. The Indian Tsunami Warning System, at India's National Centre for Ocean Information Services in Hyderabad, picked up the quake on its seismometers and put authorities on the Andaman and Nicobar Islands on alert but stopped short of issuing a warning. After weighing data from tide gauges and recently placed ocean-bottom pressure sensors, the Hyderabad staff concluded that neither the islands nor the mainland faced a tsunami. Three hours after the earthquake, India issued an “all clear.” The network “performed admirably,” says Vinod Menon of the National Disaster Management Authority in New Delhi. But Menon admits that poor communications left people in some coastal regions in a panic. “Messaging and outreach really needs to be fine-tuned,” he says.

    DARTs anyone?

    Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys in the Indian Ocean aid tsunami warning efforts.


    Thailand's National Disaster Warning Center also concluded early on that the chance of a tsunami was small. It alerted officials to stand by but issued no public statements. Smith Dharmasaroja, chair of the National Disaster Warning Administration committee, says the center worried about losing public trust if it broadcast warnings unnecessarily. But many residents heard from the news that warnings had been issued in other countries and evacuated on their own. The center has revised its procedures and, when the next big quake strikes, will issue bulletins to keep the public informed.

    Sri Lankan officials, meanwhile, played it safe and evacuated people from areas hardest hit by the 2004 tsunami. Writing in Sri Lanka's Sunday Times on 16 September, Dulip Jayawardene, former director of the Geological Survey Department, called on the government to “train seismologists and geophysicists in interpretation of seismic data” for more accurate tsunami forecasts.

    The latest earthquake to rattle South Asia provided “the best drill ever,” says Tavida Kamolvej, a disaster management expert at Thammasat University in Bangkok. Now, she says, it's time to find the gaps in the system and fill them.


    Arecibo Advocates Agitate to Reverse Planned NSF Cut

    1. Andrew Lawler

    Supporters of the world's largest radio telescope in Arecibo, Puerto Rico, gathered last week in the shadow of the U.S. Capitol in hopes of keeping the observatory alive. But time is running short, and the advocates face an uphill struggle to keep the 40-year-old observatory, whose receiving dish extends 305 meters across a natural sinkhole, operating into the next decade.

    Last fall, the National Science Foundation (NSF) endorsed the recommendations of an independent panel to reduce the astronomy division's support for Arecibo from $8 million to $4 million to free up funds for new projects such as the Atacama Large Millimeter Array being built in Chile (Science, 10 November 2006, p. 904). All NSF funding for the telescope would cease after 2011 if outside donors can't be found, although the observatory's planetary radar could close as early as October 2009, says Robert Brown, director of the National Astronomy and Ionosphere Center (NAIC), based at Arecibo and operated by Cornell University.

    The radar is especially important for imaging planets such as Venus and computing exact trajectories of near-Earth objects that might pose a hazard. NASA decided in 2001 to halt funding for the radar, which now costs $1 million a year to operate, in order to focus on space-based observatories.

    Brown, who helped organize last week's meeting in Washington, D.C., argues that the independent panel's recommendations are outdated because their premise—that NSF's astronomy budget would remain static—now seems unlikely, given the strong political support for increased federal spending in the physical sciences. “The plan … should be rethought,” he says. One New York legislator who sits on the powerful appropriations committee and is a leader of the Hispanic community is already convinced. “The Arecibo Observatory is an important scientific tool for our nation,” says Representative José Serrano (D-NY). “I will be monitoring its funding situation and making sure that the correct decisions are being made.” Serrano declined to say how and when he might push for more money.

    That appeal to Congress doesn't sit well with NSF officials. “We commissioned a panel to determine scientific priorities,” says Wayne van Citters, who heads the agency's astronomy division. “To involve Congress in one aspect of it is not a productive way to go.” Van Citters fears that legislators might shift money from newer, more promising projects. “We have to recognize [there are] limited funds,” says Garth Illingworth, a University of California, Santa Cruz, astronomer at Lick Observatory, who also serves as chair of the Astronomy and Astrophysics Advisory Committee that advises NSF, NASA, and the Department of Energy. “Focusing on whole new wavelength areas may be more appropriate.”

    Venus finder.

    This radar image of Venus is made up of data from both NASA's Magellan spacecraft and Arecibo, which peered beneath the planet's total cloud cover.


    One participant at the meeting, held in a building near the Capitol, complains that the NSF panel underestimated the importance of the radar. Steven Ostro, an astronomer at the Jet Propulsion Laboratory in Pasadena, California, says that Van Citters “effectively stonewalled” the radar's advocates because its mission fell outside NSF's purview. But Van Citters says that NAIC and Cornell proposed to cut the radar to cope with the decreasing budget, without prodding from NSF.

    Brown says that the Puerto Rico government, private individuals, and commercial companies have expressed some interest in supporting the telescope's operations, including the radar. And although Van Citters says he's optimistic that a white knight will appear, he warns that any plan to keep Arecibo going “has to be sustainable.”


    A New Body of Evidence Fleshes Out Homo erectus

    1. Ann Gibbons

    The long-legged human ancestor Homo erectus is known for breaking records: It has been seen as the first globetrotter, the inventor of stone hand axes, and the first human to dramatically expand its brain and to reach the height of people today. But such views of the body of H. erectus rely heavily on a single partial skeleton of a strapping youth from Nariokotome, Kenya. Now the discovery of incredibly rare trunk and limb bones of early H. erectus shows that the species wasn't always so tall and brainy—and, according to some interpretations, suggests that it may have emerged in Asia, not Africa.

    In this week's issue of Nature, researchers unveiled 32 postcranial bones from three adults and a teenager who lived 1.77 million years ago at Dmanisi, Georgia. The hominids resembled the Nariokotome Boy but would have stood only as high as his shoulders. “All of the individuals are small—they are not NBA players,” says team leader David Lordkipanidze of the Georgian National Museum in Tbilisi. Although their feet and body proportions are modern, the Dmanisi skeletons had more primitive shoulders and arms and are considered the most primitive members of H. erectus yet found.

    But not everyone agrees. The bones are so primitive that a few researchers aren't even sure they are members of Homo. “They are truly transitional forms that are neither archaic hominins nor unambiguous members of our own genus,” says paleoanthropologist Bernard Wood of George Washington University in Washington, D.C.

    The debate reflects how little is known about the murky period at the dawn of our genus, partly because there are so few fossils of postcranial bones. The famous partial skeleton of Lucy offers a view of Australopithecus afarensis, which lived 3.6 million to 3 million years ago. But the next good window into body anatomy doesn't appear until 1.55 million years ago, with the 12-year-old Nariokotome Boy from Kenya. He had a dramatically bigger brain and would have stood about 180 centimeters tall had he survived to adulthood. “We've got Lucy's body and then Nariokotome, and this gap in the middle with a lot of scrappy stuff in between,” says paleoanthropologist Susan Antón of New York University. The earliest of those in-between fossils have been called H. habilis, which is something of a grab bag species for specimens too small or primitive to be considered H. erectus.

    The remarkably well-preserved Dmanisi fossils, among the earliest members of H. erectus found anywhere, fall into that gap. The postcranial bones, some of them articulated with each other, fit nicely with four previously published small skulls. The skeletons suggest that the Dmanisi people ranged from 145 to 166 centimeters tall and weighed between 40 and 50 kilograms—bigger than an australopithecine but on the very low end of the range for modern humans. These small specimens also fit with a tiny H. erectus skull from Kenya published last month (Science, 10 August, p. 733). But another African H. erectus skull is both 225,000 years older and larger, so the species now encompasses a wide size range, perhaps because of sexual dimorphism or adaptations to varied habitats.

    Short people.

    Skeletons from Dmanisi, Georgia, shown on a forested landscape, are surprisingly short-statured.


    The Dmanisi skeletal bones also have other primitive traits: The bone of the upper arm is straight rather than twisted, and the shoulder blades might have been closer to the sides rather than the back. Those traits are seen in australopithecines and also in the tiny 18,000-year-old H. floresiensis from Indonesia (Science, 19 May 2006, p. 983). Nariokotome Boy's upper arm bone is incomplete but looks relatively straight. In any case, the Dmanisi bones suggest that a dramatic reorganization of the orientation of the upper arm and shoulder, which allows overhead throwing (and piano playing), came relatively late in the evolution of humans. Yet the small-brained Dmanisi people were adept at using their archaic arms to butcher meat with stone tools, says co-author Christoph Zollikofer, a neurobiologist at the University of Zurich, Switzerland.

    The fossils' small size might suggest they belong to H. habilis or a new species, but their more modern traits, such as long legs and modern body proportions, place them in H. erectus and show they were adapted for long-distance locomotion, says Zollikofer. Their feet are also quite modern, including a big toe that was not grasping, as in apes and australopithecines.

    Lordkipanidze thinks the fossils were either very early H. erectus or “the best candidates to be the ancestors of H. erectus.” He suggests that they arose in Asia from an early Homo that was part of a very early radiation out of Africa. Some of the Dmanisi fossils' descendants returned to Africa while others spread out later into Asia as full-fledged H. erectus. Paleoanthropologist Alan Walker of Pennsylvania State University in State College doesn't buy that scenario. He and Antón prefer a model in which the species arose in Africa and continued to evolve separately on different continents—including at Dmanisi—giving rise to variation as it adapted to different habitats. Either way, “the real story here is variation, variation, variation,” says co-author Philip Rightmire of Harvard University.


    Questions Remain on Cause of Death in Arthritis Trial

    1. Jocelyn Kaiser

    An investigation into the death of a 36-year-old woman in a gene therapy trial has revealed a complex tragedy but reached no firm conclusion on whether the experiment was to blame. At a meeting this week, the National Institutes of Health's Recombinant DNA Advisory Committee (RAC) largely discounted several hypotheses, including a theory that a vector used in the trial multiplied out of control. Participants noted that the patient apparently died because she had a severely compromised immune system and succumbed to a fungal infection.

    Gene therapy, which has been blamed for two deaths in the past 8 years, did not get charged with a third. Some concern had focused on the popular vector used in this experiment, adeno-associated virus (AAV). Results presented at the meeting did not challenge the consensus that it is relatively safe. However, questions remain about how well the patient was informed and how she was selected for the trial; her disease was not life-threatening. Concluded Arthur Nienhuis, president of the American Society of Gene Therapy: “We see areas in which we need to be concerned.”

    The patient, Jolee Mohr of Springfield, Illinois, had suffered from rheumatoid arthritis since she was 21. Through her rheumatologist, she enrolled in a phase I/II safety trial of a new treatment for arthritis sponsored by Targeted Genetics Corp. in Seattle, Washington. The study involved injecting into joints AAV that carried a gene coding for a protein that inhibits a proinflammatory cytokine called tumor necrosis factor α (TNF-α). Mohr received an initial injection in her right knee on 26 February and a second on 2 July.

    After the second injection, she developed flulike symptoms. Ten days later, she was admitted to the hospital and was later transported to the University of Chicago Hospital. She died there after massive organ failure on 24 July. The Food and Drug Administration immediately put the trial on hold (Science, 3 August, p. 580).

    The main cause of death was apparently infection with Histoplasma capsulatum, a common fungus, University of Chicago doctors said at the RAC meeting. Mohr's liver, lungs, and other tissues were “loaded with” the organism, which was not picked up in lab tests during her hospitalization, reported pathologist John Hart of the University of Chicago Medical Center. In addition, she had a large abdominal hematoma, or a blood clot.

    Tests done after Mohr died show that she already had a mild Histoplasma infection on 2 July. Because she also tested positive for herpes simplex virus, some experts have suggested that herpes proteins, along with wild AAV, could have helped the AAV vector to replicate and weaken her immune system. But although the AAV vector did escape from the injected joint to other tissues, the levels were extremely low, making it unlikely the virus was replicating, RAC found.

    Medical mystery.

    It's not clear whether gene therapy contributed to the death of 36-year-old Jolee Mohr, shown with her family.


    A more likely culprit is an immune-suppressing drug she was taking. This drug, called Humira, is a TNF-α blocker like the gene therapy product and has been associated with Histoplasma infections. One possibility is that the combination pushed Mohr over the edge. The RAC members hope an experimental assay can tease apart levels of gene product and drug in Mohr's blood.

    More definitive results should be available by the next RAC meeting in December, said chair Howard Federoff of Georgetown University in Washington, D.C. Meanwhile, however, Federoff and others say other questions remain unanswered—such as whether Mohr realized that patients weren't expected to benefit from this study.


    Panel Gives U.S. Program Mixed Grades

    1. Richard A. Kerr

    An expert panel says the Bush Administration deserves “a pat on the back” for advancing the science of climate change. But the scientists assembled by the National Academies' National Research Council (NRC) have serious concerns about the management, funding, and emphasis of the $1.7-billion-a-year Climate Change Science Program (CCSP).

    President George H. W. Bush created the U.S. Global Change Research Program in 1990 to bring under one umbrella the government's efforts to understand climate change. In 2002, his son reshuffled the climate deck to create CCSP. Last week, the NRC panel took the first outside look at that program and concluded, says chair V. Ramanathan, an atmospheric scientist at the Scripps Institution of Oceanography in San Diego, California, that its efforts to understand how and why climate has changed and to make predictions are “proceeding well.”

    At the same time, the 15-member panel identified serious shortcomings. The country's ability to monitor climate is being eroded, it says, by a projected one-third reduction in the number of operating instruments by 2010. “The loss of existing and planned satellite sensors is perhaps the single greatest threat to the future success of the CCSP,” the report concludes.

    Wants more.

    NRC committee chair V. Ramanathan sees progress and shortcomings in U.S. effort.


    The report also criticizes the program's management structure. “Interagency working group members often have little budgetary authority to implement the research directions that they define,” the panel wrote, because the program only controls the $1.4 million needed to manage its nine-person office. Ramanathan says CCSP often can make progress only when its interests happen to coincide with those of the 13 participating agencies.

    Ramanathan says he's troubled by the program's limited success in “assessing [climate change] impacts on human well-being and adaptation capacities.” Those assessments would require reliable forecasts of climate change at the regional if not the local level, Ramanathan notes, a capability the world's climate modelers are still struggling with. But gauging impacts on humans and figuring out how humans might adapt to climate change will take far more than the $20 million per year now spent within the program on social science studies, the committee said. It will also take better communication between the program and business interests, other agencies, and the general public. For starters, 21 synthesis and assessment reports were due from CCSP by now, but only two have been delivered.

    John Marburger, the president's science adviser, called the review “thoughtful” and said the Administration would “take their recommendations very seriously.” The same NRC committee is already working on a second report, due out early next year, on how to improve the program.


    Australian Scientist to Head California Stem Cell Effort

    1. Constance Holden

    California has landed one of the biggest fish in the Pacific to head its stem cell venture. Late last Friday, the board of the California Institute for Regenerative Medicine (CIRM) approved the appointment of Alan Trounson, one of Australia's premier researchers, as the institute's president.

    Trounson, director of stem cell research at Monash University in Melbourne, Australia, thus joins the international roster of big-name researchers who have been lured by California's $3 billion stem cell program, launched in 2004. “Things are booming in California right now,” says former CIRM president Zach Hall, who says Trounson's varied experience makes him “almost uniquely qualified for the job.”

    Trounson says things happened fast. “I was passing through California 6 weeks ago” when CIRM's board chair Robert Klein, whom he had met in Melbourne last spring, encountered him at a dinner and asked if he was interested in the job. “I thought he must have been joking,” says Trounson.

    Trounson, 61, began his career working on sheep and cattle reproduction but quickly made a name for himself in human in vitro fertilization. He was neck and neck with James Thomson of the University of Wisconsin, Madison, in the race to cultivate the first human embryonic stem cell lines in 1998. He is also well-versed in managerial and entrepreneurial issues as founder of a number of Australian biotech companies and co-founder of the Singapore-based ES Cell International.

    Big fish.

    Alan Trounson expects to start by year's end.


    Those who know Trounson say his personal skills should make for smooth relations with Klein, who has more than once ruffled colleagues with his propensity to take unilateral actions. “Alan is charming, … a lovely human being,” says stem cell researcher Evan Snyder of the Burnham Institute in San Diego, California. Johns Hopkins University researcher John Gearhart says he has “political savvy” on top of a “very broad base of knowledge.” Trounson himself likes to talk about “partnerships”—in this case with Klein, whom he calls a “visionary financier.”

    Trounson is one of several stem cell scientists Australia has lost to the United States, including Martin Pera, now at the University of Southern California in Los Angeles, and Paul Simmons, now at the University of Texas, Houston. Trounson has made it clear he regards the CIRM presidency as the capstone of his career. “I thought, would another two or three Science papers make a difference, or would taking this job really make a difference? It's a no-brainer to come here and do this.”


    The Geneticist's Best Friend

    1. Elizabeth Pennisi

    Dogs are helping to hunt down more than foxes and lions: Researchers are increasingly relying on them to track down genes and pathways involved in canine and human diseases


    Talk about blind faith. Twenty years ago, Gustavo Aguirre and his colleague Gregory Acland were struggling to understand a common cause of inherited blindness in dogs. They had bred affected and unaffected individuals and traced the inheritance patterns in the offspring, but “there was no hope of finding the gene,” recalls Aguirre, a veterinarian at the University of Pennsylvania's School of Veterinary Medicine in Philadelphia. At the time, researchers hadn't even assigned numbers to the canine chromosomes, let alone begun to map the locations of genes. Nonetheless, “I decided that in the future, someone somewhere would come up with [a way] to come up with the gene,” he says. So they banked blood from their dogs and waited.

    Their patience paid off. A decade later, their freezers provided the raw material for a linkage map of the dog genome and, eventually, the discovery of the long-sought gene for progressive rod-cone degeneration. With that map as a starting point, researchers have built a community that has proven the value of dog genetics not just for veterinarians and dog breeders but also for human geneticists.

    Dogs are a geneticist's dream. Pure breeds, as the name implies, are often highly inbred for specific traits. They have large families and well-documented genealogies, all of which greatly simplifies the task of tracking down mutations that cause disease or genes that underlie traits such as size, coat color, or even behavior. And the link to humans can be direct: The top 10 diseases in dogs include cancer, epilepsy, allergy, and heart disease—disorders that affect many millions of people. Also, because dogs live in the same environment as people, they share some of the same environmental risk factors. As a result, more and more researchers, including a consortium about to be announced in Europe (see p. 1670), are turning to the dog for clues to human genetics. “All of a sudden, people from a wide range of disciplines can see the value, power, and practicality of genetic studies in dogs to shed light on issues of concern to them,” says Acland, a geneticist now at Cornell University.

    Pooch politics

    It wasn't always that way. In fact, it took years of work by a small but dedicated band of researchers for the dog's scientific value to be appreciated. Jasper Rine got the ball rolling almost 20 years ago. A yeast geneticist at the University of California, Berkeley, he recognized that dogs were bred for specific behaviors and that those behaviors probably had a very strong, and perhaps easily identifiable, genetic basis. Rine crossed Border collies with Newfoundlands to see if he could pinpoint the genes underlying the former's predilection for herding and the latter's love of swimming. But he lacked a key tool: a map of genetic markers—known stretches of DNA prone to variation—that he could track from parents to offspring to determine which were passed along with the best swimmers and which were associated with herding. The markers associated with a particular behavior should lie near genes that contribute to that behavior. To develop such a tool, Rine conceived the idea of a dog genome project.

    Dogged pursuit.

    Elaine Ostrander helped jump-start canine genomics.


    For Elaine Ostrander, the timing was fortuitous. She had just come to Berkeley in 1990 as a postdoc to study plant genetics, and to tide her over until her new fellowship kicked in, she took a temporary job with Rine. Her assignment: to begin building the dog map. She never made it to the plant lab. When she left in 1993 for the Fred Hutchinson Cancer Research Center in Seattle, Washington, she was a complete convert, as comfortable soliciting blood samples from breeders at dog shows as she was poring over gels. Her puppy, a Border collie named Tess, became her lab's mascot, providing Ostrander and her students with welcome distractions and, once, posing for the cover photo on Mammalian Genome. Ostrander's home became a meeting place for colleagues interested in promoting dog genetics. “She drove this whole canine genome initiative,” says Matthew Breen, a cytogeneticist at North Carolina State University's College of Veterinary Medicine in Raleigh.

    Eventually, Rine closed down his dog-behavior studies for lack of funding and to avoid being hassled by animal-rights activists. Ostrander persisted on the map, but progress was slow until 1996, when she, Acland, and Aguirre joined forces. “We figured out we needed to check our egos at the door and pool our efforts,” Ostrander recalls. Because Aguirre and Acland had samples from a large number of dogs with known genealogies spanning several generations, Ostrander and colleagues were able to determine the relative order of scores of markers by following which ones were inherited together in these dog families. A year after they began working together, they produced the first map showing the positions of 150 of these markers on the dog genome. With this tool, they quickly narrowed down the location of the gene for progressive rod-cone degeneration to a region of chromosome 9, although it would take several more years to get to the gene itself.

    Very quickly, that map was superseded by a much more comprehensive one, a joint effort by Breen, Ostrander, and Francis Galibert, a human geneticist at the University of Rennes, France. Galibert, who became convinced of the value of dogs for gene hunting after hearing Ostrander give a talk at a Cold Spring Harbor Laboratory meeting, had located markers along the genome in a so-called radiation hybrid map. Ostrander and Breen added their markers to it to create a detailed atlas for tracking genes. With it, researchers “could pull DNA samples from their pedigrees out of their freezers and begin to do genome scans,” Ostrander says.

    Dog DNA deciphered.

    Researchers have sequenced this boxer's genome.


    Ostrander and Aguirre had long suspected that dog studies could lead to the discovery of genes important to humans. “But at the time, we were fighting the perception that dogs couldn't tell you anything,” recalls Aguirre. That changed in 1998, when Emmanuel Mignot of Stanford University in Palo Alto, California, and his colleagues tracked down a gene that causes narcolepsy in dogs. After a decade of work, they found causative mutations in the Hcrtr2 gene in narcoleptic dachshunds, Labrador retrievers, and Doberman pinschers, a discovery that clued researchers in to a new molecular pathway involved in sleep. With these results, “it was crystal clear that by studying dog genetics, we were going to learn things we couldn't learn in mice,” says Ostrander.

    That realization helped Ostrander achieve her ultimate goal: a complete genome sequence of the dog. She and her colleagues had their appetites whetted when J. Craig Venter, then president of Celera Genomics in Rockville, Maryland, turned his sequencing machines on his pet poodle, churning out a very rough sketch of its genome in 2001. An analysis was published 2 years later (Science, 26 September 2003, p. 1898). About the same time, the National Human Genome Research Institute (NHGRI) at the National Institutes of Health in Bethesda, Maryland, began supporting the deciphering of other vertebrate genomes, and Ostrander and her collaborators wasted no time writing up a white paper arguing that the dog be given high priority. She convinced the Broad Institute of the Massachusetts Institute of Technology and Harvard University in Cambridge, Massachusetts—a leader in high-throughput genome sequencing and analysis—to sign on. In 2002, the dog beat out the cat and the cow for a spot in the sequencing pipeline.

    Community outreach.

    NHGRI's Heidi Parker gets DNA from a dog-show competitor.


    Ostrander put out a call to dog breeders for a highly inbred candidate. (The more inbred the individual, the more similar the animal's two sets of chromosomes and the easier it would be to piece the genome together.) After studying hundreds of samples of dog DNA submitted by owners, Ostrander and her colleagues chose a boxer named Tasha as the first canine to have its genome deciphered.

    As part of that sequencing effort, researchers at the Broad Institute compared Tasha's genome with the rough draft of the poodle genome and DNA sequences from nine other dog breeds and five wild canids and came up with 2.5 million single-nucleotide polymorphisms (SNPs): points on the genome where a change in a single nucleotide frequently occurs. The analysis, published in the 8 December 2005 issue of Nature, “is a masterpiece,” says geneticist Greg Barsh of Stanford University. Broad's Kerstin Lindblad-Toh, who led the dog genome sequencing project, has since worked with Affymetrix in Santa Clara, California, to develop a SNP chip, a microarray that allows researchers to screen samples rapidly for these variations across the genome. Dog genetics studies were primed for takeoff. These tools, says Aguirre, “put us into the 21st century.”

    The power of inbreeding

    The dog's power in tracking genes comes largely from inbreeding. “Each [breed] is a mini Iceland or Finland,” explains Ostrander, now at NHGRI. Many breeds stem from a few individuals whose progeny were interbred, not because of geographic isolation but to select for specific features and behaviors. As a result, members of a given breed have extremely long stretches of identical DNA in common—millions of bases long compared to the typical tens of thousands of bases in humans. In humans, “there's a lot of background cackling, but when you look at the dog genome, the message is loud and clear, without a lot of background noise,” Breen explains. That means researchers can track down recessive genes involved in disease using many fewer animals. They also need an order of magnitude fewer SNPs than are needed for human studies.

    The inbreeding in dogs is especially valuable in studies of genetic risks in complex diseases. In people, different mutations, even different genes, may be at fault in the same disease. Dozens of genes contribute to the risk of common cancers, for example, with no single one jumping out as key. But because each dog breed is an isolated, inbred population that typically dates back just a few hundred years, not a lot of time has passed for the dogs in any one breed to develop multiple mutations for the same disorder, or for several mutations to have been introduced from outbreeding. “In dogs of one breed, you will have exactly the same mutation in the same gene,” geneticist Mark Neff of the University of California, Davis, explains. Breeds derived from a common ancestral breed might share a mutation, but distantly related breeds will not, offering the potential of revealing other genes, perhaps in the same pathway involved in a disease.

    Born to point.

    Even as puppies, German shorthaired pointers live up to their names.


    To help sort out how different breeds are related to each other, Ostrander, Heidi Parker in her lab, and their colleagues have looked for genetic variants that are shared across breeds: The more shared variants, the more closely the breeds are related. They initially studied 85 breeds, and the results helped them home in on a gene for short stature (Science, 6 April, p. 112). They have now expanded the work—called the PhyDo Project—to 130 of the 155 breeds recognized by the American Kennel Club. In work in press at Genome Research, they report that breeds cluster into five genetically defined groups, with some subdivisions within each group.
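The shared-variant logic behind this kind of breed clustering can be illustrated with a minimal sketch. The breed names, toy genotypes, and similarity function below are invented for illustration; the actual PhyDo analysis uses thousands of markers genotyped across 130-plus breeds and far more sophisticated clustering.

```python
# A minimal sketch of the shared-variant idea behind breed clustering.
# Genotypes here are invented toy data: one representative allele per
# marker position per breed.

def shared_variant_fraction(genotype_a, genotype_b):
    """Fraction of marker positions at which two breeds carry the same allele."""
    assert len(genotype_a) == len(genotype_b)
    matches = sum(a == b for a, b in zip(genotype_a, genotype_b))
    return matches / len(genotype_a)

breeds = {
    "collie":  "AAGTCCGTA",
    "border":  "AAGTCCGTG",   # hypothetical close relative of collie
    "terrier": "GCTTACGAA",   # hypothetical distant breed
}

sim_close = shared_variant_fraction(breeds["collie"], breeds["border"])
sim_far = shared_variant_fraction(breeds["collie"], breeds["terrier"])

# The more shared variants, the more closely the breeds are related.
print(sim_close > sim_far)  # True
```

Repeating this comparison over all breed pairs yields a similarity matrix, from which clusters of related breeds fall out.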

    As these researchers will report, they have used these relationships to track down a gene causing collie eye anomaly, which is the canine equivalent of a human birth defect in which part of the eye does not form properly. In 2003, they had narrowed the gene's location down to a 1.7-million-base stretch of dog chromosome 37. Then they went further: “By comparing different herding breeds with the same disease, we whittled down [the possibilities] to four genes,” Ostrander explains.

    A close look revealed the likely culprit: a gene called NHEJ1 missing 7800 bases. They found the same mutation in two more distantly related breeds, Nova Scotia duck tolling retrievers and longhaired whippets, which likely had farm collies in their family trees. But unrelated soft-coated wheaten terriers, which also get this disease, don't have the mutation. “We think we will be able to play that trick [of looking for mutations in closely related breeds] again and again,” says Nathan Sutter, who recently left Ostrander's lab for Cornell University.

    PhyDo is also helping Ostrander's group make sense of bladder cancers. Five breeds tend to develop this cancer. The most susceptible, with a 30-fold increase in risk, is the Scottish terrier. Three other at-risk breeds are terriers, which likely share the same mutation. But PhyDo data verify that the fifth susceptible breed, the beagle, is not at all related, so its bladder tumor risk probably stems from a different genetic abnormality.

    Dog fanciers

    Ostrander's group is working with Carlos Bustamante of Cornell University to make SNP profiles of individual breeds available in a public database called “CANMAP.” It should provide researchers and breeders with powerful tools to track down the traits in which they are most interested. Already, researchers are intrigued by the possibilities.

    The data set includes wolves, the common ancestor of all dogs, which should give evolutionary biologists a “wonderful perspective on the evolution of the dog,” says collaborator Robert Wayne, an evolutionary biologist at the University of California, Los Angeles. Since the 1980s, he's wanted to find the genes underlying the evolution of dogs and dog breeds. Back then, “it was clear we didn't have the technology to address the basic questions,” he recalls. Now, says Sutter, by comparing SNP profiles of dogs and wolves, “we will know what changes occurred in the dog during domestication.”

    Stanford's Barsh has a different quarry in his sights. Over the past 15 years, he has worked on the genetics of coat color in mice, but recently he's focused on dogs and is well on his way to understanding the genetic basis of the stripes in the brindle coat of Great Danes. Like Barsh, Danielle Karyadi in Ostrander's lab is also a recent convert to dog genetics. She has spent much of her career chasing down genes that make people more susceptible to prostate cancer. Now, she has turned her attention to squamous cell carcinoma, a cancer found only in solid black dogs, such as poodles. “It will be really exciting when we can identify genes in dog cancers” and use them to understand human cancers, says Karyadi.

    Lindblad-Toh has also been won over to the hounds. She specialized in human genetics when she came to Cambridge but by chance got roped into running the mouse genome sequencing project and eventually all mammalian sequencing projects at the Broad Institute. Working with European collaborators, Lindblad-Toh's group has two papers in press demonstrating the power of genomewide association studies in dogs to pin down genes. In one, they used fewer than two dozen boxers to track down the gene for white coat color. The gene is the same one that is mutated in humans with Waardenburg syndrome, an inherited disease characterized by hearing loss and skin and pigmentation abnormalities, and in several mouse pigmentation disorders. A second study on Rhodesian ridgebacks has led the group to a mutation possibly involved in neural tube defects; that study required fewer than two dozen animals as well. “It shows how little material you need,” says Åke Hedhammar, a geneticist at the Swedish University of Agricultural Sciences in Uppsala.

    He and Lindblad-Toh expect to make progress understanding more complex diseases as well. For example, boxers, bull terriers, and West Highland white terriers are prone to atopic dermatitis, a skin disease that in both dogs and humans has environmental and genetic components. Genetic studies are under way in these breeds to find the genes that influence this risk.

    Common ills.

    Dogs and humans suffer from many of the same diseases, such as atopic dermatitis (left).


    A window on behavior

    The rapid progress in dog genetics is prompting some researchers to get back to studies that motivated a canine genome project in the first place: tracking down genes associated with behavioral traits. Neff has teamed up with Illumina Inc. in San Diego, California, to use a microarray to look for SNPs associated with “pointing.” About 40 breeds point—freezing and lifting a paw in the direction of a rabbit or other quarry. “I finally feel we have a chance to understand the behavior,” says Neff, who worked with Rine in the 1990s.

    Even with the best genetic resources, however, the work is still challenging. “The problem is identifying the phenotype and separating what is learned versus what they are born with,” says Parker. Adds Barsh: “We know whether [a dog] is yellow or black, but sometimes it's not so easy to tell how good a dog is at herding versus retrieving.” In Ostrander's lab, Tyrone Spady—who used to study visual behavior in fish—depends on DNA from dogs that live in different places, with different upbringings and lifestyles. So Spady asks owners to fill out questionnaires: how frequently the dog chases other animals, what quarry gets it running, and so on. By pooling surveys from many people, he corrects for bias in the owners' responses and uses the data in genomewide association studies.

    At the Norwegian School of Veterinary Science in Oslo, Frode Lingaas is taking a similar tack in looking into “cocker rage.” In this syndrome, generally amiable pets turn on their owners, exhibiting frighteningly aggressive behavior. He and his European colleagues assess the dogs' personalities through interviews with the owners and questionnaires. Several hundred samples will come from English cocker spaniels, but a few will come from English springer spaniels, which are also prone to this mental disorder. These dogs should get the researchers close to the gene, and a comparison with golden retrievers, which can also be four-legged Jekylls and Hydes, should get them within striking distance. Because rage is often a symptom in schizophrenia, bipolar disease, obsessive-compulsive disorder, and other mental problems, finding this gene could help researchers understand why it develops in patients with mental illness.

    Swedish researchers have a leg up on Spady and Lingaas. In Sweden, breeders often evaluate their young dogs using a standardized personality test. By looking at aggression, boldness, shyness, and sociability, they are able to better assess which ones should be trained as working dogs and for what jobs. So they have a large data set to work with and are moving forward with those studies, says Hedhammar. And in Russia, a long-term breeding program in foxes has yielded docile, doglike animals whose genes might yield insights into what makes dogs so affectionate and loyal.

    Although Ostrander is curious about the genetic basis of why dogs get sick, grow tall, or excel at hunting, the intimacy she and others share with their canine pets is not something she cares to be intellectual about. Just the thought of the death of her lab's mascot a year ago still brings tears to her eyes. “There's some genetics buried in that,” she says, “but I am going to leave that to someone else.”


    Europe Going to the Dogs

    1. Elizabeth Pennisi
    Best of Show.

    Olle Kämpe, Göran Andersson, Kerstin Lindblad-Toh, Åke Hedhammar, and Leif Andersson (left to right) have teamed up to tackle human diseases through dog studies.


    While U.S. researchers were beginning to piece together the first genetic maps for dogs in the 1990s, University of Copenhagen pig researcher Merete Fredholm was wringing her hands in frustration at the lack of funds to support a European consortium with the same goal. But now, she and collaborators from more than 20 institutions are poised to make a champion showing in this field, thanks to a pending European Union award for about $16 million.

    Named after the legendary wolf who nourished the founders of Rome, the LUPA consortium plans to get DNA samples and health histories from 8000 dogs and hunt down genes for 18 diseases, including four cancers, four inflammatory disorders, and three heart diseases. “We have decided to focus on certain areas and to standardize the clinical characterization of these diseases,” explains Leif Andersson, a geneticist at Uppsala University in Sweden. In this way, they use the DNA from animals across different countries, as well as across different breeds, to find genes. Once they have found the gene, they plan to see what role it plays in humans, says LUPA coordinator Michel Georges of the University of Liege in Belgium.

    Georges is a recent convert, having never studied dogs before. But joining him are key players in canine genomics. Kerstin Lindblad-Toh, who headed the dog sequencing project at the Broad Institute of Massachusetts Institute of Technology and Harvard University in Cambridge, Massachusetts, is now spending three-quarters of her time at Uppsala University. “Her recognition, competence, and working capacity will change [European dog genomics],” says collaborator Åke Hedhammar, a long-term dog researcher at the Swedish University of Agricultural Sciences in Uppsala, who has used kennel club records and records from companies that provide health insurance for pets to sort out which individual dogs and breeds are best suited for particular gene searches.

    The European effort promises to push the field to warp speed. “We're going to learn a lot more about a lot of diseases,” says Elaine Ostrander, a geneticist at the National Human Genome Research Institute in Bethesda, Maryland. Until recently, much of the high-profile work on dog genetics came from Ostrander and her collaborators, but already Europe is churning out a slew of key papers. With LUPA, “there will be a lot of competition,” says Fredholm.


    Is Mars Looking Drier and Drier for Longer and Longer?

    1. Richard A. Kerr

    The more closely planetary scientists scrutinize the Red Planet, the less wet and weathered its surface appears to be, even in the solar system's earliest days

    In recent years, “water, water everywhere” might have been the motto for Mars exploration. Shallow, salty seas ruled on early Mars, and water has been gushing down gullies in the geologically recent past, some even in the past few years. But the tide is now receding, at least a ways.

    As highlighted in the 21 September issue of Science and elsewhere, more leisurely consideration of observations from rovers and orbiters and the unprecedented detail afforded by Mars Reconnaissance Orbiter (MRO)—the latest arrival at Mars—are bringing into question many earlier geologic interpretations involving surface water. Mars has been “a desolate place for a long time,” concludes geochemist Scott McLennan of Stony Brook University in New York state.

    The drier touch is the result of terabytes of data coming down from MRO's High Resolution Imaging Science Experiment (HiRISE), which can resolve features as small as half a meter, such as boulders. With that powerful new resolution, HiRISE principal investigator Alfred McEwen of the University of Arizona, Tucson, and his team went gully hunting.

    On page 1706, they report imaging the bright-looking deposits that appeared in two gullies between passes by the Mars Orbiter Camera (MOC) onboard the now-defunct Mars Global Surveyor. From the way the bright material appeared to have flowed, the MOC team concluded that it had been charged with a fluid, presumably water. HiRISE has now discovered four other distinctively bright gullies, but “we've seen nothing to confirm” the presence of water, says McEwen. “We could change our minds, but for now, there's nothing here that isn't explained just as well by a dry flow” of loose debris down Mars's steepest slopes.

    Not so wet?

    Water may have had nothing to do with bright gully deposits (middle, above) or bouldery lowlands (right).


    New gullies aren't the only young, watery features that the HiRISE team is calling into question. In a paper in press at Geophysical Research Letters, Frank Chuang of the Planetary Science Institute in Tucson, Arizona, McEwen, and colleagues report seeing a slight hollowing out of dark streaks running down martian slopes. A few researchers thought water might have caused such streaks (Science, 30 March, p. 1788), but the relief detected through late-day shadowing suggests dry dust avalanches instead, the group says.

    Further back in geologic time, the HiRISE team is questioning some of its own aqueous interpretations. Most researchers believed that water, gushing from a great crack in the ground, sliced through lava flows to form Athabasca Valles (Science, 30 November 2001, p. 1820). McEwen still thinks that the original valleys were probably cut by water, but now with HiRISE's greater resolution, the valleys and the sediments deposited there—possibly containing traces of subsurface life—appear to be buried beneath several meters of lava, as Windy L. Jaeger of the U.S. Geological Survey in Flagstaff, Arizona, and HiRISE colleagues report on page 1709.

    The observation might help explain why so many Mars landers put down on putative lakebeds and outflow channels have found nothing but lava, says McEwen. It would also flatly contradict the contention that expanses of flat plains—where waters from Athabasca Valles supposedly drained millions of years ago—are still a “frozen sea” of ice (Science, 4 March 2005, p. 1390).

    Finally, new HiRISE data from the northern lowlands of Mars cast doubt on one hypothesis that there was an ancient ocean there. Lingering shorelines have been reported around the broad, flat basin. But HiRISE images reveal ubiquitous boulders up to 2 meters in size strewn across the northern lowlands where a deep, long-lived ocean should have left nothing as large as a grain of sand. Either an ocean sediment layer is far thinner than proposed, say McEwen and colleagues, or there was never any ocean.

    Not that Mars has dried out entirely. Sixty-kilometer Mojave crater looks for all the world as though drenching rains had gouged its slopes and dumped the debris in great conical fans (Science, 9 April 2004, p. 196). With a closer look through HiRISE, McEwen and his team think they see a deeper source for the water. Rather than water falling from the sky, they think an impactor may have hit a water-rich crust, creating muddy debris that flowed down to form fans. And HiRISE imaged gullies possibly millions of years old that are “most easily explained as flowing water,” says McEwen.

    But the tendency of HiRISE observations to diminish the role of liquid surface water may reinforce recent trends. Ice and snow have obviously moved around the planet when Mars periodically tilted on its axis, but observations from the Opportunity and Spirit rovers reveal excruciatingly slow rates of erosion (Science, 8 April 2005, p. 192). Running water could not have been much of a player in the past 3 billion years or more, researchers are concluding. And the rust-colored martian visage long attributed to moist weathering seems to be only a thin patina on fresh rock untouched by the eons (Science, 22 August 2003, p. 1037). A slight dampness may have sufficed.

    Even the early days of Mars—often called “warm and wet”—are looking drier. The apparent remains of “shallow, salty seas” where Opportunity roved have shrunk to ephemeral puddles in a hyperarid desert (Science, 5 January, p. 37). When planetary scientists gather next month to narrow their landing site options for the next Mars rover, their search for signs of martian water will need to be well focused indeed.


    Read My Slips: Speech Errors Show How Language Is Processed

    1. Michael Erard*
    1. Michael Erard is the author of Um…: Slips, Stumbles, and Verbal Blunders, and What They Mean.

    Researchers are analyzing spoonerisms and other slips of the tongue to help understand how humans—and even apes—can comprehend and use language

    Say what?

    Panbanisha ponders lexigrams.


    Kanzi, a 27-year-old bonobo, knows the difference between a blackberry and a hot dog. But sometimes, when researchers asked him to touch the abstract visual symbol, called a lexigram, that means blackberry, he touched the lexigram for hot dog, blueberries, or cherries instead.

    Kanzi's errors weren't random mistakes, nor an indication of apes' language limitations, says Heidi Lyn, a comparative cognitive scientist at the University of St. Andrews in Fife, U.K. Rather, they show the complex way in which his mind had organized the lexigrams. For example, if Kanzi made a mistake when asked for “blackberry,” he was more likely than chance to choose a lexigram for another fruit, much as you or I might say “red” instead of “black,” says Lyn, whose paper on Kanzi's mistakes was published online in Animal Cognition in April and will appear in print later this year or early next.

    Analyzing errors for insight into the covert mental processes of animals is a new direction for a technique that language scientists have used for 40 years to study language processing in humans. For all its power, human language remains something of a scientific mystery. Researchers are still struggling to understand exactly how humans hear, comprehend, and produce words and sentences. Slips of the tongue, or linguistic mistakes made inadvertently by speakers who do know the correct form, offer potent clues about language processing in the brain. Speech error research is currently on the upswing with new methods and theories and increased attention to groups such as children and users of sign language—and, now, animals. “We have a long way to go before we understand how to put the multiple pieces of language systems together in the seamless way that we experience it,” says psycholinguist Merrill Garrett of the University of Arizona, Tucson, who has studied slips of the tongue since the 1970s. “Error profiles that arise during spontaneous conversation are going to be an important part of the agenda.”


    German signers can catch more of their mistakes than speakers can.


    Barn doors and darn bores

    Early in the 20th century, collecting speech errors was chiefly a hobby, especially for people who found Freud's emotional explanations lacking. (Psychoanalysis had no way to account for the diverse, often mundane slips of the tongue that people make.) In the 1960s, Noam Chomsky sparked a wave of grammatical theorizing that transformed speech errors into theoretical gold. Linguist Victoria Fromkin, among others, argued in the late 1960s that speech errors showed that abstract mental units of sounds and words were also concrete symbols in speakers' minds.

    Using speech errors as scientific data posed some problems: Waiting for speakers to make an error required an inordinate amount of time, and some questioned the reliability of what listeners heard. But the field got a boost in the 1970s when researchers created ways to elicit many (but not all) types of speech errors in the lab. One method involved giving people word pairs like “duck bill,” “dart board,” and “dust bin,” then asking them to say “barn door.” About 10% of the time, subjects said “darn bore.” By eliciting speech errors, researchers can control for higher frequency sounds (in English, “s” is more frequent than “k”) and words (“latrine” is more frequent than “tureen”). Words used more frequently are less likely to be involved in speech errors. For example, more errors occur with content words (“cat,” “hat”) than grammatical words (“the,” “in”), because grammatical words are used more frequently. The effect of frequency also implies that what one usually talks about affects how one slips.

    Lyn was the first to apply the study of errors to bonobos. Kanzi and a female bonobo, Panbanisha, who now live at the Great Ape Trust in Des Moines, Iowa, can comprehend instructions and descriptions in spoken English, and they can respond by using 384 lexigrams, which they touch on a keyboard. From 1990 to 2001, researchers tested the bonobos thousands of times, showing them a photo or lexigram or saying an English word. The bonobos then had to select the matching lexigram. The apes chose correctly 12,157 times and made 1497 incorrect choices, although no one thought to consider the errors as data until now.

    Lyn found that Kanzi and Panbanisha have arranged hundreds of lexigrams in their minds in a complex, hierarchical manner based mainly on their meaning. She coded the relations between all 1497 sample-error pairs along seven dimensions, including whether the lexigrams looked alike, had English words that sounded alike, or referred to objects in the same category. She found that the errors were not random but patterned. If the lexigram stood for “blackberry,” the error was more likely than chance to sound like blackberry, be edible, be a fruit, or be physically similar. Errors were also more likely to be associated with more than one category. For example, “cherries” are both edibles and fruits, and the word sounds like the correct one, “blackberries.” All this indicated to Lyn that mental representations of the lexigrams must be stored not as simple one-to-one associations but in more complex arrangements. This suggests that, given the chance, bonobos and other apes can acquire systems of meaning that are closer than anyone has thought to what humans do, and that some aspects of language acquisition are not unique to humans. “We begin to see that the biological or species variable is far less important than we thought,” says Susan Savage-Rumbaugh of the Great Ape Trust.
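The core statistical move in this kind of error analysis can be sketched simply: ask whether an error lexigram falls in the same category as the target more often than random selection would predict. All the counts below are invented placeholders, not Lyn's actual data; her analysis also coded each error pair along seven dimensions rather than one.

```python
# A rough sketch of a chance comparison in error analysis: do errors
# cluster by meaning? All counts here are invented for illustration.

n_errors = 100        # errors observed for fruit targets (invented)
same_category = 35    # of those, the chosen lexigram was also a fruit (invented)
n_lexigrams = 384     # symbols on the bonobos' keyboard
n_fruit = 20          # fruit lexigrams on the keyboard (invented)

observed_rate = same_category / n_errors
chance_rate = n_fruit / n_lexigrams   # rate expected from random picking

# If errors were random, same-category choices would match the chance rate;
# a much higher observed rate suggests lexigrams are organized by meaning.
print(observed_rate > chance_rate)  # True
```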

    Out of the mouths of babes

    Lyn's analysis is not the first to study errors in creatures that haven't mastered all the complexities of human speech: For about 20 years, researchers have also used speech errors to study language acquisition in children. Kids do say the darnedest things, but by definition, the true errors are the ones they make with linguistic levels and units they know, explains linguist Jeri Jaeger of the University at Buffalo in New York state, who in 2005 published a book that capped 20 years of collecting kids' slips, many of them from her three children. It was the first study of the same children's speech errors over a long period, allowing her to match their errors with their stages of language development. Jaeger's collection is “unique,” says linguist Annette Hohenberger of Middle East Technical University in Ankara, Turkey, and shows how slips change over time.

    Kid talk.

    Children make slips in stages, as they acquire language.


    Distinguishing true slips took a linguist's ear and a mother's patience. Jaeger's youngest daughter's exclamation that “She already showed me tomorrow!” wasn't a true slip, because she didn't yet know the meaning of “yesterday.” On the other hand, at 16 months, her eldest daughter said “one two three, one two three, one tuwee”—a fusion of “two” and “three,” which was a true slip because she knew the two words were distinct and had regularly pronounced them correctly. This anchors Jaeger's point that children only make slips with what they know.

    Analysis of such speech errors can provide a novel perspective on how children acquire language. Linguists have debated, for instance, whether children need syntactic knowledge to speak in two-word clumps. Jaeger says no. Her data show that when children begin to combine words, at about age 2, they don't blend phrases or confuse intonations; such slips require a mature knowledge of syntax. Not until children speak in sentences of three or more words do syntactic errors, such as “sit down this immediately!” (a blend of “sit down this minute” and “sit down immediately”), appear.

    It's long been known that children make more speech errors than adults, but it wasn't known whether or how aging affects error rates. In 2006, Janet Vousden and Elizabeth Maylor at the University of Warwick in the U.K. published the first study tracking speech errors across the life span and reported no significant increase in total errors between young and older adults. However, compared to children, adults made proportionately more errors in which a sound segment was anticipated (frive frantic fat frogs) rather than perseverated (five frantic fat fogs).

    That fits with a widely used model of speech errors developed in the 1980s by cognitive scientist Gary Dell of the University of Illinois, Urbana-Champaign. Most linguists think that words and sounds are stored in a kind of network in the brain, connected by variables such as how they sound, their parts of speech, and their meaning. Dell proposed that when sounds or words stored in such a network are selected, this also strengthens or “activates” neighboring words or sounds, which may then be mistakenly selected in place of the intended ones. In his model, people forced to speak quickly make more errors not because they have more opportunities to do so but because the activation of neighboring units has less opportunity to fade. Dell also proposes that practice tends to activate present and future units more than past ones. As a result, the more practice a speaker has, the higher the proportion of anticipatory errors, although overall errors decrease. “Whatever makes you more error-prone makes your errors more perseveratory,” explains Dell. Caroline Palmer, a psychologist at McGill University in Montreal, Canada, has found the same effect (among others) in piano performances.
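    Dell's qualitative account—selection primes neighboring units, and practice skews that priming toward upcoming units—can be caricatured in a few lines of code. The sketch below is a toy illustration only, not Dell's published model; the function, parameter names, and numbers are all assumptions chosen to mirror the behavior described above.

```python
# Toy spreading-activation sketch (an illustrative assumption, not
# Dell's actual model). Units in a planned sequence carry activation:
# the current unit is strongest, upcoming units get anticipatory
# priming that grows with practice, and past units keep a fading residue.

def activations(position, length, practice, decay=0.5):
    """Activation of each of `length` sequence units while the unit
    at `position` is being produced."""
    acts = []
    for i in range(length):
        if i == position:
            acts.append(1.0)  # the intended unit
        elif i > position:
            # anticipatory priming of future units, scaled by practice
            acts.append(practice * decay ** (i - position))
        else:
            # perseveratory residue of already-produced units
            acts.append(decay ** (position - i))
    return acts

novice = activations(position=2, length=5, practice=0.2)
expert = activations(position=2, length=5, practice=0.8)

# Ratio of next-unit priming to previous-unit residue: the higher it is,
# the more an erroneous selection skews anticipatory, not perseveratory.
print(novice[3] / novice[1], expert[3] / expert[1])
```

    With a higher practice value, the upcoming unit's priming grows relative to the residue of past units, so the errors that do slip through skew anticipatory—consistent with the finding that practiced adult speakers make proportionately more anticipatory slips than children do.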

    Language need not be spoken, and linguists have long been interested in whether speech and sign are processed the same way. German linguists Hohenberger and Daniela Happ and Helen Leuninger at the University of Frankfurt used a newer method for eliciting slips from German speakers and signers of Deutsche Gebärdensprache (DGS, or German Sign Language), in the first slip study of signers in a language other than American Sign Language. In a series of papers, the most recent published in 2007, they asked speakers and signers to narrate a series of pictures under various stress conditions, such as putting pictures out of order.

    They found that all types of slips found in spoken German are also present in DGS, although in different frequencies. The slips also occur with the same basic units. This indicates that signs and words are both stored in the brain as clusters of primary elements that can be flexibly recombined, and it underscores that humans possess a single language faculty regardless of how they deploy it, says Hohenberger.

    But there are some differences. For instance, both signers and speakers catch and repair utterances that include mistakes. But signing is relatively slower, so signers catch more errors involving exchanges of individual signing elements, such as hand shapes or location of the sign.

    Because of this, Hohenberger speculates that slips of the hand may next contribute to an emerging question in slip-of-the-tongue research. Based on ultrasound studies of speakers' tongues as they make sound exchanges (better known as spoonerisms, such as “jeef berky” instead of “beef jerky”), phonetician Marianne Pouplier of the University of Munich, Germany, has suggested in several recent papers that speakers don't substitute one whole sound segment for another as was previously thought. Rather, they attempt to pronounce the two sounds at the same time. This way of thinking about speech errors—as a collision of motor commands rather than as substitutions of mental symbols—might be more reliably investigated in slips of the hand, Hohenberger says, because researchers can capture the slower hand movements more clearly than tongue movements.

    Although error studies offer intriguing data, their implications are not always clear. Take the bonobo findings. The apes confused fewer target-error pairs sharing a part of speech—both nouns, say, or both verbs—than chance would predict, implying that they don't take note of grammatical categories. “This result argues against the claims made elsewhere that Kanzi has spontaneously developed an elementary grammar,” says primatologist Robert Seyfarth of the University of Pennsylvania.

    But Lyn says the error results don't directly address the question of grammar and don't contradict earlier findings in which bonobos appeared to prefer certain semantic sequences. Instead, she says, “the results support the idea that [apes'] representation of semantic information is much more complex than has been shown to date.”

    Still, the study of bonobo errors does rebut two frequent criticisms of ape language research: that the apes have simply been trained to respond, and that researchers may inadvertently shape the bonobos' responses. Errors can't be trained, nor can patterns of errors be deliberately produced. And if researchers were subtly guiding the apes by eye gaze or body posture, Kanzi and Panbanisha might have made far more errors based on simple proximity in the keyboard.

    Lyn plans to continue analyzing the error data for other insights into the bonobos' conceptual world. “For me, the error analysis was not to just study one aspect of their symbolic representation,” Lyn says, “but to get a glimpse of how it all hangs together.” Such a big question hasn't been answered for human language, either, but speech errors will likely be central to the search. Says the University of Arizona's Garrett: “We have most certainly not reached the limits of that kind of research.”


    Barroso's Brainchild

    1. Martin Enserink

    Thanks to the influence of José Manuel Barroso, Europe may soon have a new European Institute of Technology. But few scientists are celebrating

    “Just think what Europe could be. … Think of its untapped potential to create prosperity and offer opportunity and justice for all its citizens.” Thus began a February 2005 memorandum from José Manuel Barroso, president of the European Commission—the European Union's (E.U.'s) executive branch—aimed at salvaging the Lisbon strategy. This ambitious plan to create economic growth and jobs in Europe was faltering and, Barroso wrote, “immediate action” and “a new start” were needed.

    Next week, the European Parliament (EP) is slated to vote on one of the remedies Barroso presented in his memo: the European Institute of Technology (EIT). As Science went to press, one major issue—where to find the €300-plus million now budgeted for the project—remained unresolved, and the commission and the Parliament were in frenetic discussions. A commission spokesperson said that a solution was “imminent.”

    Yet few in the European research community are elated. Envisioned as an engine for Europe's economy, EIT was supposed to bring together the smartest minds around top-notch research that would lead to new industries. Its name echoes that of the Massachusetts Institute of Technology (MIT) in Cambridge to show that the commission is aiming high. A series of reports, however, has argued that the plan will do very little for innovation in Europe.

    The League of European Research Universities, for instance, concluded in a study last year that the EIT plan was “misconceived and doomed to failure,” and Euroscience, a Europe-wide movement of scientists and policy experts, called it “a politically motivated idea, starting from a wrong premise.” EIT, says former U.K. science adviser Robert May, “is based on a misunderstanding” about innovation: namely, that it can be bought with government money.

    And 2 years of political wrangling have watered down the EIT proposal to the point that critics say it will only add a layer of bureaucracy to the E.U.'s funding system. Indeed, the real MIT could be forgiven for not recognizing itself in its European competitor. EIT won't have a campus; it will be a virtual institute made up of scientists based at universities, research labs, and companies across the continent. It also won't award diplomas, as was originally proposed. And although more money is supposed to flow from various sources, the amount promised through 2013 boils down to about €50 million a year, less than 3% of MIT's annual budget and about 13% of what Barroso had proposed.

    Homeless institute

    EIT is meant to address what is sometimes called the European paradox. Although Europe's scientific output, measured in papers, is bigger than that of the United States—and arguably of the same quality—Europe seems far less able to turn knowledge into thriving industries. The Lisbon strategy, launched in 2000, aimed to reverse the trend and make Europe the world's most competitive and dynamic “knowledge-based economy”—that is, one based on high tech. But a 2004 report by Wim Kok, former prime minister of the Netherlands, concluded that Lisbon was stalled—hence Barroso's call for action.

    Despite widespread skepticism, the plan had political legs—apparently because it had been presented by Barroso himself, European policy watchers say. “He put his entire weight behind it,” says Helga Nowotny, vice-chair of the European Research Council (ERC) and another critic of the idea. “Governments just couldn't say no.”

    The EIT plan has changed considerably, however. Initially, the commission entertained the idea of a brick-and-mortar, degree-granting institute somewhere in Europe. Poland and other new E.U. members began lobbying for the prize, offering significant financial incentives. After a wide consultation made clear that countries were unlikely to agree on a site—and that universities didn't like a new competitor—EIT went virtual. In the commission's formal proposal in October 2006, EIT became a small governing board plus six or more Knowledge and Innovation Communities (KICs), independent networks that would each focus on a different field and employ scientists at existing institutes and companies around the E.U. Its budget, however, would still have been a healthy €2.36 billion for the period 2008-2013.


    Laws and new initiatives such as this begin with the European Commission, but they need to be approved by the Council of the E.U.—in which ministers of member states meet—and by the Parliament. Insiders say that in the council, several countries, including the United Kingdom and Germany, were very skeptical about the EIT plan. Several Parliament committees were divided as well. But intense diplomacy by the German government, which chaired the E.U. during the first 6 months of this year, led to a compromise in June. A key Parliament committee endorsed EIT in July.

    Negotiations pared down its scope, however, leaving a 4-year pilot phase and a gradual start with just two or three KICs. (Climate change, energy, and information technology were suggested as their themes.) They also took away EIT's right to give out diplomas, replacing it with an EIT label on existing university degrees. To top it off, “Innovation” was added to EIT's name—although the acronym was left intact.

    The money issue remains unresolved. Even if the commission does find the €308 million, it's a long way from what Barroso wanted, and it's not clear what other sources will be tapped. The KICs can seek aid from Europe's Framework Programme 7 (FP7), the E.U.'s main source of research funding. But Parliament members want guarantees that the centers won't get preferential treatment. “We don't want them to cannibalize” other programs, says Reino Paasilinna of Parliament's Industry, Research and Energy committee.

    Barroso's plan for EIT has always anticipated a significant financial contribution from industry—the intended beneficiary of the institute's output—but that support has not yet materialized. The problem is that industry cannot commit to an institute that doesn't exist, says Horst Soboll, chair of the now-defunct European Research Advisory Board and a former director of technology at DaimlerChrysler. That might change once the KICs take shape and concrete business opportunities come up, he says.

    To EIT critics, the final compromise isn't an institute but a funding mechanism, and one that looks remarkably like existing ones. Most of the FP7 money also goes to topic-specific Networks of Excellence; other E.U. funding streams favor large collaborative efforts as well. As to innovation, such scattered organizations are unlikely to generate strong ties with industry, says Peter Tindemans, an independent consultant who studied the commission's plan together with Luc Soete of the United Nations University in Maastricht at the request of EP. “There's not a shadow of a chance that this will have any major impact,” says Tindemans.

    But a senior official at the commission's directorate-general for education—who, in accordance with commission rules, declined to be named—says that analysis is wrong. Already, there has been an “unending procession” of companies telling the commission that EIT is a “wonderful idea” and that they would like to participate, says the official. Still, Nowotny predicts KICs will have trouble hiring excellent staff, because researchers with thriving careers are unlikely to jump to an organization whose future is so uncertain.

    Solving a paradox

    Is there a better alternative? There's no consensus among the innovation experts, although there are some other proposals. In their study, Tindemans and Soete launched the idea of a Cluster EIT, a series of smaller institutes with 300 to 500 researchers, each focusing on a specific field and big enough to attract company interest. But Parliament has done nothing with the idea.

    Frank Gannon, former director of the European Molecular Biology Organization in Heidelberg, Germany, believes the answer is even simpler. His solution is to bet more on ERC, the funding agency launched in February that picks investigator-initiated projects based on merit without any political or geographical considerations (Science, 2 March, p. 1205). “If we had more high-quality research, I think the European paradox would just disappear,” says Gannon, who is trying the same recipe in his new job at the helm of the Science Foundation Ireland.

    May, meanwhile, believes the problem isn't in the science funding at all—it's Europe's “culture of coziness and aversion to risk” that makes people reluctant to invest in new technologies. To him, changing tax and bankruptcy laws to reward entrepreneurship might be a better solution.

    For now, the research community has largely resigned itself to the idea of EIT. Whether the slimmed-down version of the grand idea proposed in 2005 can help make Europe a “beacon of economic, social, and environmental progress to the rest of the world,” as Barroso hoped—well, they're not holding their breath.