News this Week

Science  27 Feb 2004:
Vol. 303, Issue 5662, pp. 1268

    Georgia Backs Off a Bit, But in Other States Battles Heat Up

    Constance Holden

    Georgia school officials took a big step back from opening the door to creationism last week. They provisionally restored evolution and some other key scientific concepts to the state's proposed curriculum standards, after dropping them from earlier drafts. But although science educators see it as a victory, the Georgia dispute is just one of several ongoing battles over the teaching of evolution in the nation's schools.

    The flurry of fights at both local and state levels reflects the pervasiveness of resistance to evolutionary theory, says biologist Randy Moore of the University of Minnesota, Twin Cities. “It's relentless. It comes up just about everywhere. And it's not going away,” he says. Eugenie Scott, director of the National Center for Science Education (NCSE) in El Cerrito, California, believes that the timing is not a coincidence. “It's an election year,” she says, meaning that there is a heightened awareness of hot-button issues among both politicians and the public.

    The current battle lines are the result of a 1987 decision by the U.S. Supreme Court that creationism is religion and can't be taught in science class. Since then, the antievolution movement has gathered adherents under the rubric of “intelligent design” (ID). Instead of going to court, ID supporters are trying to build grassroots support. And their success, says Moore, is premised on the perception that, “on its face, ID is not linked with religion.”

    On 19 February, the Georgia Board of Education approved proposed curriculum standards consistent with support of evolution after initially proposing standards that not only left out the word “evolution” but omitted major concepts in both physical and biological sciences. The ensuing uproar (Science, 6 February, p. 759) drove State Superintendent Kathy Cox to restore the “e” word. Scientists continued to press for restoration of key features such as plate tectonics and the age of Earth, however, and last week the board approved a version that contains most of the omitted material. A final vote is set for June.

    Coast to coast.

    Proposals to encourage teaching creationism and “intelligent design” have been advanced in 37 states since 2001.


    Evolutionary theory has been suffering an exceptionally bumpy ride in a number of other states this year. In Ohio, where ID promoters were beaten back 2 years ago, the state Board of Education this month voted 13–4 to approve a chapter called “Critical Analysis of Evolution” in the model teaching guide for 10th grade biology. Critics have complained that the chapter relies heavily on a popular ID text, Jonathan Wells's Icons of Evolution, and refers students to Web sites that promote the concept. A final vote is scheduled for next month.

    The issue has also raised its head in neighboring Michigan, where Grand Blanc school officials are weighing proposals that would add both creationism and Bible study to the curriculum. A petition asking for equal time for creationism and evolution was presented to the school board by a high school student who is also the daughter of a board member. In Darby, Montana, a nasty dispute has broken out over a proposal by a local minister, Curtiss Brickley, to encourage teachers to look at evidence for and against various scientific theories, evolutionary theory in particular. “We've been told that fights have actually broken out on the school grounds,” says Skip Evans of NCSE, which monitors the issue.

    A religious group, The Alliance Defense Fund in Scottsdale, Arizona, has offered to defend the school district if it is sued. “This could turn out to be the test case that the ID advocates want,” says Evans, noting that a victory could validate the argument that opponents of evolution deserve a chance to air “scientific” criticisms of Darwin's theory. There's a lot of support out there for this view, says Scott: “The ‘Teach the strengths and weaknesses of evolution’ language appeals to the spirit of ‘fairness’ in American culture.”

    State legislators have also been busy this month. Missouri Representative Wayne Cooper has introduced a bill, HB911, that would require “equal treatment” for ID and evolution, starting in 2006, and would sack teachers who refuse. An Alabama bill, SB336, would protect teachers from getting into trouble for teaching creationism. “I think there is a tremendous ill balance in the classroom,” says the bill's sponsor, Democratic Senator Wendell Mitchell.

    Moore says the issues are percolating in many other states. In Minnesota, for example, the latest state science teaching standards may be weakened if the legislature chooses to include a minority report authored by ID supporters. The current commissioner of education, Cheri Yecke, believes the decision on whether to teach creationism should be left up to local school districts. And in Texas, a citizens' group this week alleged that antievolution members of the state board of education have been ordering textbook publishers to correct “errors” identified by creationist groups.

    Scientists should not underestimate the threat to science from such grassroots efforts, says Moore: “In every survey that I've seen data for, 15% to 20% of high school biology teachers teach creationism. University faculty have no idea what is happening in high school classrooms across the country.”


    Climate Plan Gets a Qualified Go-Ahead

    Richard A. Kerr

    It took two tries, but the Bush Administration has now gotten its research plan for solving the climate puzzle basically right, according to a review released last week by a committee of the National Research Council (NRC). But the broadened, highly ambitious Climate Change Science Program (CCSP) “will require a concomitant expansion in funding” beyond the $1.7 billion per year the U.S. government currently spends on climate research, the committee cautioned. No one sees where the additional funding will come from.

    The White House plan for the CCSP* “articulates a guiding vision, is appropriately ambitious, and is broad in scope,” says the NRC report, all qualities that the same committee said the draft version of the plan lacked (Science, 7 March 2003, p. 1494). “Our feeling is that this is a perfectly good framework for moving forward with the research in the plan,” says committee chair Thomas Graedel, a professor of industrial ecology at Yale University.

    In the manner of such critical reviews, the NRC committee did not dwell on the plan's best attributes. Reviewers did note that the CCSP had responded “constructively” to their earlier criticisms. In particular, they added, the plan now includes a clear focus on understanding not just how climate might change but also the effects of changing climate on ecosystems and human systems. It also emphasizes research into how climate change might be prevented or how humans might adapt to it. And it now has a strategic management framework “that could permit it to effectively guide research on climate and associated global changes over the next decades,” the report says.


    This climate plan “should be implemented with urgency,” advises NRC.


    The plan is looking so good that it “should be implemented with urgency,” said Graedel. The committee has lingering concerns, however. One is the four-level management structure intended to oversee CCSP and its 13 participating agencies, which range from the Smithsonian Institution to the Department of Defense. Above CCSP, which is directed by a politically appointed Department of Commerce assistant secretary, is an intermediary committee composed of departmental under- and deputy secretaries, and above that is a committee of cabinet members. Topping it all is policy and program review within the Office of the President. This proposed oversight structure “is very complex, will require significant interagency cooperation, and is essentially untested,” the committee warns.

    A management structure rich with political appointees has its advantages, notes committee member Anthony Janetos of the Heinz Center in Washington, D.C.: “Scientists and the agencies can talk to people who can acquire the resources needed.” On the other hand, even a perception of political influence could jeopardize the scientific independence and credibility of the program, he adds. The committee “still believes (as in its first report) that establishing a standing advisory body charged with independent oversight of the entire program will be more effective,” the report says.

    And then there's the question of the bottom line. “If you're doing new things, and you're not going to give up anything, it's clear it's going to cost more,” says Janetos. As the report observes, “there is no evidence in the plan or elsewhere of a commitment to provide the necessary funds.”

    No one is saying where the additional funding would come from in these times of fiscal constraint, but “the first thing if you want money is to have a good plan,” says Conrad Lautenbacher, administrator of the National Oceanic and Atmospheric Administration and the CCSP director's boss. The planned program “will compete for resources well,” he says. As for the management structure, “they ought to have a lot of confidence in it,” he says. “It's a pretty direct-line structure.” And independent review of the program and its products “was always envisioned,” he adds.

    Graedel is philosophical about prospects for the CCSP plan. If it “were implemented as discussed, we would be quite pleased,” he says. “Programs seldom are.”


    FDA's Popular Chief to Take on Medicare

    Jennifer Couzin

    After just 15 months as Food and Drug Administration (FDA) commissioner, Mark McClellan is departing for a new federal job. President George W. Bush last week announced plans to nominate McClellan for a politically high-profile post: head of the Centers for Medicare and Medicaid Services, the $500 billion agency that is implementing sweeping new regulations that expand prescription drug coverage. FDA Deputy Commissioner Lester Crawford will take McClellan's place while the Administration hunts for a successor.

    McClellan's exit marks a return to limbo for FDA, which went for 20 months with no chief before McClellan took over. The 40-year-old economist and physician, who hails from a prominent Texas Republican family (his younger brother Scott is the president's press secretary), was a generally popular figure among drug regulators and pharmaceutical companies.

    Leaving so soon?

    FDA Commissioner Mark McClellan.


    “What a great loss this is to the agency and the industry,” says Kenneth Kaitin, director of the Tufts Center for the Study of Drug Development in Boston. McClellan “set a lot of things in motion,” adds Kaitin. His actions included encouraging drug companies to integrate pharmacogenomics into drug studies, focusing on food safety, and introducing a risk-management program for new drugs. The White House's stringent 2005 budget rewarded FDA with a proposed increase of almost 9%, to $1.8 billion.

    McClellan leaves before resolving one of the touchiest issues of his tenure: whether to allow over-the-counter sales of “morning-after” contraceptives. In December, an FDA advisory panel recommended approving the pill, but FDA has postponed action until May.


    Light From Most-Distant Supernovae Shows Dark Energy Stays the Course

    Charles Seife

    The brightest objects in the universe are revealing the darkest mystery of the cosmos. In a press conference on 20 February, astronomers announced observations of distant supernovae that hint at the properties of the baffling “dark energy” force that pushes galaxies apart and stretches the very fabric of spacetime. The new observations, which include six of the seven most distant supernovae yet discovered, give the first glimpse of how a key property of dark energy is changing over time.

    “Dark energy is about 70% of the universe, and we don't have a clue what it is,” says Mario Livio, a theorist at the Space Telescope Science Institute (STScI) in Baltimore, Maryland. “What is the strength of the repulsive force?”

    To tackle that question, Adam Riess of STScI and colleagues used the Hubble Space Telescope to study exploding stars known as type Ia supernovae. Because their brightness is known, these supernovae act as cosmic yardsticks that tell researchers how far away distant galaxies are; meanwhile, their color reveals how fast the galaxies are speeding away. These two bits of information allow scientists to measure how fast the universe expanded during different eras of its 13.7-billion-year history, and that tells them how quickly that expansion is speeding up because of the push of dark energy.
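    The "cosmic yardstick" logic can be sketched in a few lines of standard astronomy arithmetic (a simplified illustration, not the team's actual analysis; the magnitudes and redshift below are hypothetical, and the velocity formula uses the low-redshift approximation v ≈ cz, ignoring relativistic and cosmological corrections):

    ```python
    # Distance modulus relation: mu = m - M = 5 * log10(d / 10 pc),
    # so the distance in parsecs is d = 10^((mu + 5) / 5).
    def luminosity_distance_pc(apparent_mag, absolute_mag):
        mu = apparent_mag - absolute_mag
        return 10 ** ((mu + 5) / 5)

    # Low-redshift approximation: recession velocity v ~ c * z,
    # where z is the redshift inferred from the supernova's color/spectrum.
    def recession_velocity_km_s(z, c_km_s=299_792.458):
        return c_km_s * z

    # Type Ia supernovae all peak near the same intrinsic brightness
    # (roughly M ~ -19.3), which is why a measured apparent magnitude
    # pins down the distance. Hypothetical example:
    d_pc = luminosity_distance_pc(apparent_mag=22.7, absolute_mag=-19.3)
    v_km_s = recession_velocity_km_s(z=0.5)
    ```

    Comparing distance (how long ago the light left) against recession speed across many such supernovae is what maps the expansion rate at different cosmic epochs.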

    Speed traps.

    Stars that exploded in the same way billions of years apart (lower panels) gauge the universe's expansion.


    The latest group of supernovae, 16 in all, give a sense of how “squishy” the dark energy is—how much force it exerts at different pressures, a property known as dark energy's equation of state—and whether that squishiness has changed over time. That information is critical for determining which of several rival models best describes how the universe has evolved so far and where it is headed. Riess says the observations are consistent with an unchanging dark energy force—a cosmological constant. “If it's changing, it's not changing very quickly,” he says. But he cautions that “these are very crude measurements.”
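    In the notation cosmologists use, the "squishiness" described above is the equation-of-state parameter w, the ratio of dark energy's pressure to its energy density (standard background, not drawn from the article itself):

    ```latex
    w = \frac{p}{\rho c^{2}}, \qquad
    w = -1 \ \text{(cosmological constant, unchanging)}, \qquad
    w(z) \neq \text{const} \ \text{(quintessence-like models)}
    ```

    Riess's statement that the data are "consistent with an unchanging dark energy force" amounts to saying the measurements are compatible with w = -1 at all observed epochs.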

    Just as important, Riess and colleagues see a moment, about 5 billion years ago, when the cosmological constant began to win a cosmic tug of war with the force of gravity. As galaxy clusters got more and more distant from one another, the force of gravity played a smaller role in shaping the universe, allowing dark energy to dominate. The slowing effect of gravity gave way to the accelerating expansion of dark energy. “We see the turnaround point very well,” says Riess. “About 5 billion years ago, the universe temporarily coasted when it changed from slowing down to speeding up.”

    As important as these observations are, Livio says, they're too preliminary to rule out theories such as quintessence, which posits a changing strength of dark energy, or something even more exotic: “There's still a lot of wiggle room for there to be a varying field there.” Further studies by the Hubble or its planned successors should narrow the possibilities, theorists say.


    Testing Pesticides on Humans Given Qualified Endorsement

    Jocelyn Kaiser

    Federal regulators should be allowed to use data from controversial studies in which people are deliberately dosed with pesticides and other chemicals, an expert panel has concluded. The National Research Council (NRC) report* says that even though some consider such tests morally wrong because they expose healthy volunteers to risks, they may be acceptable under certain conditions. The panel recommends that the Environmental Protection Agency (EPA) hold human testing to strict new standards, including review by a new agency ethics panel.

    Environmental groups charged that the report is full of loopholes and urged EPA to place a moratorium on exposing humans to pesticides and other toxicants. The report's recommendations may “sound good,” says Richard Wiles, senior vice president of the Environmental Working Group (EWG), “but that's not how they'll play out.”

    Pesticide makers funded human exposure studies in response to child protection rules in a 1996 law: It requires EPA to consider lowering the official safe level of pesticide exposure on the theory that children are especially vulnerable. EPA already sets levels at 1% of the safe level established by animal studies. Companies feared that an additional 10-fold reduction would be imposed to protect children, and to head this off they began sending EPA data from human testing.
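    The safety-factor arithmetic behind that fear can be sketched as follows (the animal-study value below is invented for illustration; the 100-fold and additional 10-fold factors come from the article):

    ```python
    # Hypothetical no-adverse-effect exposure level from an animal study,
    # in mg per kg of body weight per day:
    animal_safe_level = 50.0

    # EPA's standard practice: set the human reference level at 1% of the
    # animal value, i.e. a 100-fold safety factor.
    standard_factor = 100
    reference_level = animal_safe_level / standard_factor  # 0.5

    # The feared additional 10-fold reduction to protect children would
    # put the allowed exposure 1,000-fold below the animal value.
    child_factor = 10
    child_protective_level = reference_level / child_factor  # 0.05
    ```

    Human dosing data were attractive to companies precisely because they could argue against stacking that extra 10-fold factor on top of the standard 100.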

    EWG issued a report slamming several such studies in the United Kingdom, and EPA put a hold on using data from them. However, a 12-member advisory panel concluded, with two dissents, that certain studies should be allowed, and the Bush Administration moved to lift the hold in 2001. Activists protested, and EPA asked NRC to take a look at all human dosing studies, including those exposing volunteers to air and water pollutants (Science, 17 January 2003, p. 327). The stakes were raised last year when a U.S. court ordered EPA to consider the human tests on a case-by-case basis.

    Green light.

    An expert panel says EPA should consider studies that expose human volunteers to pesticides.


    The NRC panel, chaired by ethicist James Childress of the University of Virginia in Charlottesville, found that these tests are not inherently unethical: Improving the science behind a regulatory decision “constitutes a societal benefit that can justify the conduct of a human dosing study,” the report says. EPA should accept such data, however, only if they are scientifically valid, the information can't be obtained in other ways, and even private studies comply with federal ethical standards known as the Common Rule.

    In addition, one kind of test—feeding chemicals to people to reduce the uncertainty in animal data—should be accepted only if subjects will experience no harmful effects, the report concludes. In general, this would allow metabolism studies in which volunteers are given tiny doses that cause no symptoms but result in changes in enzyme activity that can be detected in blood or urine.

    The report recommends creating guidelines and a strict approval process, however. It says EPA, which now has no ethics board of its own, should set up a Human Studies Review Board to offer companies advice before they sponsor a study and help EPA decide which studies to accept.

    As for the 19 pesticide tests already submitted to EPA, many of them metabolism studies, they could be used, the report says, if they are found to meet the report's standards. According to Patrick Donnelly of the pesticides group CropLife America, which praised the NRC report, “they all will [meet the standards].” But toxicologist Jennifer Sass of the Natural Resources Defense Council disagrees. She examined several and says, “None of them has any [scientific] validity.”


    Look, Up in the Sky! It's a Threatening Asteroid!

    Richard A. Kerr

    It was all over in 6 hours, but the commotion triggered last month when a threatening asteroid popped up still has astronomers buzzing. The episode revealed that they have little idea how to respond when they detect an object that might hit Earth within days. “Things worked out right, but it was more or less good luck,” says planetary scientist Clark Chapman of the Southwest Research Institute in Boulder, Colorado.

    As Chapman explained to an impact hazard meeting on 23 February, NASA funds a search for potential civilization killers—objects 1 kilometer and larger in size—that are almost certain to be detected years if not decades before impact (Science, 19 September 2003, p. 1647). The International Astronomical Union (IAU) has a formal process through which discoveries of these large near-Earth objects (NEOs) would be evaluated over the weeks and months following discovery. But “the system isn't designed to search for imminent impacts,” notes Chapman.

    Intended or not, an automated telescopic search first discovered a small, fast-moving NEO, now named 2004AS1, on the night of 12 to 13 January. Search operators routinely passed those observations along to IAU's Minor Planet Center (MPC) in Cambridge, Massachusetts. There, staff posted a notice on a public Web page predicting where a half-dozen newly discovered objects should soon be, in the hope that others could relocate them that night and refine their orbits.


    Asteroid 2004AS1 is probably half the size of the rock that devastated part of Siberia in 1908.


    A German amateur astronomer was the first to realize that the 2004AS1 predictions implied an imminent impact, news he passed to a Web chatroom. From there, a semiretired professional astronomer passed the word to other NEO professionals, triggering a flurry of orbital calculations into the night at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, and back at MPC. Calculations at JPL were giving 2004AS1 a 25% chance of hitting somewhere in the Northern Hemisphere in a few days.

    That was more than enough to set astronomer David Morrison of NASA's Ames Research Center in Mountain View, California, wondering, “Who do I call?” Morrison is chair of the IAU Working Group on NEOs, but this “was an event none of us was prepared for,” he says. A 25% chance seemed like enough to prompt a call to somebody, though, perhaps soon. Brian Marsden, director of MPC, saw the risk differently. “There was an enormous range of possibilities depending on what you thought the uncertainties [in the observations] were,” he says. “Under these circumstances, we have to be sure it's going to hit us” before calling anyone in authority. Lacking a plan, no one knew whose perception of the risk should prevail.

    Late that night, a Colorado amateur astronomer averted an embarrassing false alarm by failing to find 2004AS1 on its predicted collision course. That hasn't resolved matters of NEO risk perception, but many agree with the IAU working group's 19 February statement that “the interested community should decide how cases like [2004AS1] should be handled in the future.”


    Interferon Shows Promise in Monkeys

    Martin Enserink

    During last year's outbreak of the deadly respiratory disease SARS, doctors tried all kinds of treatments, from antiviral drugs and antibiotics to compounds that slow down or jazz up the immune system to traditional Chinese medicine. Many think some of these may have done more harm than good. But during the emergency, researchers were unable to set up rigorous clinical trials that could have separated wheat from chaff.

    Now, a team led by Albert Osterhaus at Erasmus University in Rotterdam, the Netherlands, reports the first animal evidence that a well-known antiviral drug called interferon-α may work against SARS—if given in time. Interferon-α is already registered as a drug for the treatment of hepatitis C and several cancers; a trial in human patients could start almost immediately if SARS were to return.

    Eager to do something—anything—for their SARS patients, doctors early on started treating them with an antiviral drug, ribavirin, as well as with steroids, which dampen the immune response and are often used in other pulmonary infections. The combination quickly became the standard of care in many countries. But although some studies suggested that patients responded well, “they lacked robustness to draw firm conclusions,” says Simon Mardel, a medical officer working on SARS at the World Health Organization (WHO) in Geneva, who lauds the Rotterdam study.


    A small trial during the Toronto outbreak first suggested that interferon-α helped SARS patients.


    Interferon-α, which comes in more than a dozen different varieties with different potencies, not only blocks the replication of several viruses but also activates the immune system. It was first tried in some 30 of the earliest patients in the Chinese province of Guangdong but appeared ineffective, a group of Chinese researchers reported recently in the Journal of Medical Microbiology. A small trial in Toronto, published in December in the Journal of the American Medical Association, suggested some benefit. In that study, nine patients received a different version, called alfacon-1, in addition to corticosteroids, while 13 did not.

    In the Rotterdam study, published online this week in Nature Medicine, cynomolgus macaques that were given so-called pegylated interferon-α (a form designed to last longer in the bloodstream) 3 days before infection with the SARS virus excreted far less virus from their throats, and their lung damage was reduced by some 80%. When the animals were given the compound 1 and 3 days after exposure, lung damage was also reduced, although not as much. That would probably make the compound most effective as a prophylactic for, say, patients' family members or health care workers who were exposed to the virus, says Jindrich Cinatl of Frankfurt University Medical School in Germany. How well the drug would work on people with full-blown SARS remains to be seen, he says. Cinatl says he'd also like to see animal tests of interferon-β, a drug his own work has shown to be a more potent inhibitor of SARS replication in the test tube.

    The lead author of the small Canadian study, Eleanor Fish of the Toronto General Research Institute, says that she's “encouraged” by the new results. Already, Health Canada has approved a protocol for a trial with alfacon-1, in case SARS reemerges, that does not include steroids or ribavirin. Several other Western countries are considering the same course, but researchers in China generally have a more favorable opinion of steroids, Mardel says.

    Meanwhile, other approaches are emerging as well. In a paper published in the Proceedings of the National Academy of Sciences earlier this month, a group led by Wayne Marasco of the Dana-Farber Cancer Institute in Boston showed that human monoclonal antibodies against the so-called spike protein, selected from a vast antibody library, can inhibit virus replication in cell cultures. Additional, as-yet-unpublished work with researchers at the U.S. National Institute of Allergy and Infectious Diseases shows that the antibodies block viral replication in a mouse model of SARS as well, Marasco says.

    Because evidence is so scant, WHO does not make recommendations on how to treat patients or even which drugs to test first, says Mardel. But it does promote the kind of controlled, randomized trial that can give more solid answers—while hoping there will never be an opportunity to conduct them.


    Potent HIV Blocker Identified in Monkeys

    Jon Cohen

    AIDS researchers have long known that HIV cannot copy itself in monkeys, but they had only a vague idea why. Now, a critical piece of this puzzle has fallen into place: Monkey cells make a protein that specifically derails HIV. Aside from helping solve a long-standing mystery, the finding may lead to new strategies for drugmakers, as well as an improved monkey model to test anti-HIV drugs and vaccines.

    Researchers led by virologist Joseph Sodroski of the Dana-Farber Cancer Institute in Boston report in the 26 February issue of Nature that they fished a protein from monkey cells, TRIM5-α, that powerfully restricts HIV's ability to establish an infection. Although HIV can easily enter monkey cells, the virus must convert its RNA into DNA before it can weave itself into the host's chromosomes and copy itself. TRIM5-α blocks this reverse transcription of RNA into DNA, the same part of the viral life cycle that AZT and several other anti-HIV drugs interrupt. It's “a natural preventive to HIV,” says Sodroski.

    Nearly a dozen other labs have hunted for this blocking agent, and some close competitors are swallowing hard as they're applauding. “I'm sick as a dog, because it's a beautiful piece of work,” says Paul Bieniasz of the Aaron Diamond AIDS Research Center in New York City. “Whether it can be harnessed to combat HIV is uncertain at present, but even if it is just a novel basic research finding, it's a completely new system of antiviral cellular activity.” Stephen Goff, whose lab at Columbia University reported the discovery of a rat cellular protein with antiviral activity (Science, 6 September 2002, p. 1703), says he's particularly excited by the Sodroski lab's finding. “It opens up all sorts of new things,” says Goff, who wrote a Nature News and Views article that accompanies the report. “The evidence is strong that this is a major restriction gene.”

    Genetic block.

    TRIM5-α disrupts the shedding of HIV's capsid (purple), preventing the release of its genetic material.


    Sodroski and co-workers uncovered the protective protein by putting a wide variety of monkey genes into clones of human cells that they knew could support HIV's growth. Two of the clones they engineered became resistant to HIV, and they had only one monkey gene in common: the one coding for TRIM5-α. Subsequent experiments demonstrated that TRIM5-α targets HIV's capsid, a sheath of proteins that protects the viral genetic material and “uncoats” after the virus enters a cell, a key step in the conversion of its RNA into DNA. By some unknown mechanism, TRIM5-α appears to disrupt the proper uncoating of the capsid. To show that TRIM5-α is not just sufficient but also necessary to block HIV, the Sodroski group did another experiment in which they shut down production of the TRIM5-α protein, which allowed HIV to again establish a robust infection in their “resistant” clones.

    Translating basic research findings into drugs is always a long shot, but Sodroski has started pursuing the possibility. “We have a very clear example of something extremely potent and specific,” he says.

    Sodroski also has begun to explore whether his lab can engineer an HIV capable of bypassing the monkey TRIM5-α, which could provide an extremely valuable new animal model. Currently, the main monkey model used by AIDS researchers relies on infecting rhesus macaques with SIV, a simian cousin of HIV, or SHIV, a hybrid that Sodroski helped concoct that has HIV's shell and SIV's core. Most anti-HIV drugs have little activity against either SIV or SHIV, and AIDS vaccine developers also would welcome a test virus that more closely resembles the human version. Although researchers suspect that TRIM5-α is but one of several factors that stop HIV from replicating efficiently in monkeys, removing this block still could tip the scales in favor of the human virus. “Animal models are never perfect imitations of the real thing, but if we had an HIV-like virus that could infect macaques, it would allow us to test hypotheses with respect to pathogenesis and vaccine development that you can't do with other models,” says Sodroski. “It's certainly a reasonable goal, and it's much more achievable now.”


    Aging Research's Family Feud

    Jennifer Couzin

    Lenny Guarente and his former postdoc David Sinclair can dramatically extend the life span of yeast. They're battling over how this works, and competing head-to-head to grant extra years to humans.

    BOSTON AND CAMBRIDGE, MASSACHUSETTS—At 34, David Sinclair is a rising star. His spacious ninth-floor office at Harvard Medical School boasts a panoramic Boston view. His rapidly growing lab pulled off the feat of publishing in both Nature and Science last year, and it made headlines around the world with a study of the possible antiaging properties of a molecule found in red wine. In a typical day, he fields calls from a couple practicing a radical diet to extend life span, and from an actor hunting for antiaging pills and the chance to invest in Sinclair's new company.

    There is, however, another side to this glossy picture of success. Sinclair is engaged in a tense and very public battle with his mentor, a renowned scientist based across the Charles River at the Massachusetts Institute of Technology (MIT). Leonard Guarente, 51, is an undisputed leader in the field of aging, an author of major discoveries about genes that prolong life. For 4 years, Sinclair all but lived in Guarente's lab as a postdoctoral fellow, and the two grew extremely close.

    Thanks in part to a glowing recommendation from Guarente, Sinclair nabbed a tenure-track spot on Harvard's faculty in late 1999. He then made clear that his old professor's pet theories weren't off limits. At a meeting at Cold Spring Harbor Laboratory in late 2002, Sinclair surprised Guarente by challenging him on how a key gene Guarente discovered extends life in yeast. That sparked a bitter dispute that crescendoed this winter, when the pair published dueling papers.

    Researchers who study aging are finding the quarrel both intellectually provocative and a lively source of gossip. And the reverberations extend well beyond that community. At its core, the argument involves one of the hottest topics in longevity research: how cutting calories may increase life span and how its effects can be translated into antiaging therapies. But the dispute, which remains unresolved, involves molecular biology so intricate that many scientists are uncertain how to assess it. It's also not clear how it applies to other organisms, such as mammals.

    Guarente and Sinclair are also rivals in business. Both have bold ideas for translating yeast studies into ways to stall mammalian aging and treat age-related diseases. Sinclair recently announced plans to launch a biotechnology company that he says will compete directly with one co-founded by his mentor. “They're doing exactly what we're doing, and it's a race,” says Sinclair, clearly relishing the prospect.

    “They're doing exactly what we're doing, and it's a race.”

    —David Sinclair


    The two have spoken little since their Cold Spring Harbor falling-out. Some who know the pair say the clash is unsurprising; despite the generation gap, Guarente and Sinclair share many traits common among successful scientists. Both are deeply ambitious, relentlessly competitive, and supremely self-confident. Both savor the limelight. Both love science and show little interest in other pursuits.

    Still, they've retained something of the parent-child relationship that can shape interactions between senior researchers and their students. Like a father dismayed when his son joins a punk rock band, and then dismayed further when it attracts devoted fans and favorable reviews, Guarente exhibits a mix of pride, anger, and disappointment when the conversation turns to his former postdoc. At the same time, Guarente confesses that Sinclair's choices aren't altogether startling. “There's a side of me that identifies with him,” he says. “The young Lenny Guarente was not all that different.”


    Guarente, a native of Massachusetts, has built his academic life around two of the most high-powered institutions in the country. He attended MIT as an undergraduate—where biology at first “felt squishy to me”—and completed graduate and postgraduate work at Harvard. Then he moved two subway stops back up Massachusetts Avenue and settled again into MIT, where he has remained ever since.

    With tenure under his belt at 34, Guarente began thinking beyond the mainstream assignments to which he had gravitated early on, like studies of gene regulation. In the early 1990s, two of his graduate students, Brian Kennedy and Nick Austriaco, sat down with him to discuss a project they wanted to pursue: dissecting the causes of aging.

    At the time, aging was considered fringe science, a topic few reputable researchers would touch. But “Lenny likes a challenge,” says Kennedy, now at the University of Washington, Seattle. “He said, ‘You've got a year to learn something.’”

    The clock ticking, Kennedy and Austriaco focused on yeast, a single-celled organism that lends itself to laboratory manipulation. They began hunting for mutant yeast cells with abnormally long life spans, measured by the number of daughter cells they produced. (A mother cell typically produces a daughter every 1 to 4 hours; an average cell generates about 20 daughters.) In those early days, when Kennedy and Austriaco were testing hundreds of strains, they organized round-the-clock vigils to track their yeast cells; one of them was always there, gazing at the cells under a microscope and delicately counting off the daughters.

    It quickly became obvious that some strains lived unusually long, which piqued Guarente's interest. He hovered nearby, checking in with the students two or three times a day and grilling them on what they'd found. The news was good: About 1 in 1000 strains both produced an abundance of daughter cells and survived well under the stress of a chilly refrigerator, underscoring known links between longevity and stress tolerance. Of these, one mutant caught their attention. It was sterile—unable to mate with other yeast cells—and lived 50% longer than normal.

    Guarente canceled golf games with Kennedy to spend more time in the lab. Gradually, the group homed in on a trio of genes called SIR genes; deleting them seemed to shorten a yeast cell's life.

    It was around this time, in early 1996, that 26-year-old David Sinclair boarded a plane in his hometown of Sydney, Australia, and flew halfway around the world to Cambridge. “Before I even knew I got the [postdoctoral] fellowship, I said to Lenny, ‘I'll take out a loan, I'll sell my car’” to come to the lab, says Sinclair. His wife-to-be, a German biologist, was moving to Australia on a fellowship of her own, but Sinclair couldn't pass up the opportunity to study with Guarente, whom he idolized. Still, he knew that the science Guarente favored was then heretical: “The idea that you could use yeast to study human aging was a joke.”

    Just as heresy had not deterred Guarente, it didn't stop Sinclair. He already knew yeast: As a graduate student at the University of New South Wales in Sydney, Sinclair had studied genetic regulation in yeast—he and Guarente first met after a genetics conference in Australia—and Sinclair's dissertation was the thickest in the lab. And he had already earned a reputation for pushing limits. He racked up traffic violations in his red sports car, regularly skating close to losing his license and once having it confiscated altogether. “You're allowed to get 12 points, and at one stage he had 14,” recalls Geoff Kornfeld, the lab manager in Ian Dawes's lab at New South Wales, where Sinclair studied.

    The atmosphere in Guarente's lab when Sinclair joined was electric. Other students were building on the yeast work by Kennedy and Austriaco, and they pinpointed a specific SIR gene, called SIR2, that could dramatically extend the cells' life span. Extra copies of SIR2 also enabled worms to live longer.

    “This has run me through so many emotions, some of which I didn't know I had.”

    —Lenny Guarente


    Furthermore, Guarente's lab had found that activity of the protein generated by SIR2, called SIR2p, depended on a coenzyme called NAD. All cells carry NAD, which helps govern metabolism. SIR2p, it appeared, could sense the metabolic state of a yeast cell.

    This was significant because it dovetailed with what scientists had observed for decades in animals: Curtailing calories and altering metabolism dramatically extends life span. Biologists theorize that, in the face of scarce resources, organisms trade off reproduction, which is hindered by calorie restriction, for survival. No one knows, though, precisely how a meager environment slows aging.

    Guarente turned the connections over in his mind. Like calorie restriction, SIR2 lengthened life span. Its connection with NAD and metabolism suggested something more: Maybe it was a gene that could explain why cutting calories slowed aging. Furthermore, when manipulated, perhaps SIR2 could mimic the antiaging effects of calorie restriction without a near-starvation diet.

    SIR2 wasn't the only project stirring excitement in Guarente's lab. A graduate student, David Lombard, had just cloned the mouse gene for Werner syndrome, a rare disease that mimics accelerated aging. Along with postdoc Robert Marciniak, Lombard was trying to understand how the Werner's protein behaved in mouse cells. “There were ideas and debates flying through the air constantly,” says Brad Johnson of the University of Pennsylvania in Philadelphia, who was then a postdoc in the lab. Guarente, never one to coddle his students, pushed for results.

    Sinclair proved to be a brilliant and prolific researcher, often the first to arrive, at 8:00 a.m., and the last to leave, at 12:30 a.m., running to catch the final subway train of the night. He quickly became a favorite of Guarente. But many lab members began regarding him warily, especially after the 1997 publication of a Cell paper by Sinclair and Guarente. The paper reported that buildup of ribosomal DNA, a kind of repetitive DNA sequence, in yeast cells caused them to age (Science, 2 January 1998, p. 34). Although Sinclair had conducted the experiments himself, Johnson had also proposed similar studies. When lab members learned that Johnson was not a co-author on the paper, they began guarding their work more closely.

    Sinclair has a ready reply. He says that when he first thought of the Cell experiments, he was concerned that others might accuse him of poaching the ideas. So, to prove that the concept was his, he wrote and mailed himself a letter describing the experiments before they'd been done. Sinclair still has that letter, its seal intact.

    Sinclair does agree that he was unpopular in Guarente's lab, but he explains it differently. “Lenny didn't hide his favoritism” for his Australian postdoc, says Sinclair.

    Molecular elixirs

    Guarente's arms are sore from shoveling after a December blizzard dumped nearly 2 feet of snow on Boston, but that doesn't stop him from gesticulating to underline his side of the story. “This has run me through so many emotions, some of which I didn't know I had,” he says of the falling-out with Sinclair.

    Some facts are not in dispute: Calorie restriction extends life in nearly every species tested so far. In yeast, restricting calories boosts activity of SIR2 proteins, and extra SIR2p slows aging. The critical question on which Guarente and Sinclair disagree is how calorie restriction makes the SIR2 protein more active.

    Guarente believes the answer lies in the ratio of NAD to a related molecule, NADH. Like NAD, NADH is found in species from yeast to humans, and it helps cells translate food into energy. Metabolic reactions in cells convert NAD to NADH, and vice versa. In 2000—soon after Sinclair joined Harvard's faculty—Guarente's lab reported in Science that with less NAD than normal, calorie-restricted yeast don't outlast regular yeast (22 September 2000, p. 2126).

    Then, in 2002, a Nature paper by Guarente's lab pulled NADH into the picture. It described how knocking out electron transport prevented calorie-restricted yeast cells from living longer. Electron transport is governed in part by NADH. “That really made it look like the NAD:NADH ratio would be a good candidate” for stimulating SIR2p activity, says Guarente, “but there was still no evidence one way or the other.”

    Sinclair's announcement at Cold Spring Harbor Laboratory that his data diverged from this theory took Guarente by surprise. In his talk, Sinclair presented an alternative model. His candidates were not NAD and NADH. Instead, he focused on a vitamin B precursor called nicotinamide, which is also a breakdown product of NAD, as well as a gene, PNC1, that converts nicotinamide into another molecule, nicotinic acid. Nicotinamide was already known to inhibit SIR proteins.

    Sinclair found that without PNC1, calorie-restricted yeast didn't live longer than normal. Furthermore, adding copies of PNC1 to yeast receiving normal amounts of glucose extended their life span. They behaved, in other words, as if they were on a low-calorie, low-glucose diet, even though they weren't.

    Sinclair explains his model this way: PNC1 senses when yeast cells are exposed to low glucose. That boosts the gene's expression, which depletes nicotinamide, which boosts activity of SIR2p, which extends life span. Mammals don't have a PNC1 gene. But Sinclair believes that nicotinamide and genes that deplete it may guide SIR2p and related proteins in those organisms. PNC1, he notes, could also explain why other stressors such as heat shock extend life in yeast: The gene is upregulated by many environmental stresses, not just low glucose.
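    The chain of events Sinclair proposes can be sketched as a toy model. The Boolean logic below is purely illustrative—the real pathway is quantitative and only partly mapped—but it captures the order of events he describes, including the pnc1-null and extra-copy experiments:

    ```python
    def sinclair_model(low_glucose: bool, pnc1_copies: int = 1) -> bool:
        """Return True if life span is extended under Sinclair's proposed chain:
        low glucose (or extra PNC1 copies) -> PNC1 expression up ->
        nicotinamide depleted -> SIR2p more active -> life span extended.
        """
        # PNC1 expression rises under low glucose; extra gene copies mimic
        # that boost even when glucose is normal.
        pnc1_up = pnc1_copies > 0 and (low_glucose or pnc1_copies > 1)
        # PNC1 converts the SIR2p inhibitor nicotinamide into nicotinic acid,
        # so high PNC1 activity depletes nicotinamide and de-represses SIR2p,
        # which in this toy model is equivalent to extending life span.
        return pnc1_up

    print(sinclair_model(low_glucose=True))                  # calorie restriction: True
    print(sinclair_model(low_glucose=True, pnc1_copies=0))   # pnc1-null yeast: False
    print(sinclair_model(low_glucose=False, pnc1_copies=4))  # extra copies, normal glucose: True
    ```

    Note that the model also accommodates Sinclair's point about other stressors: anything that drives PNC1 expression up would feed the same chain, not just low glucose.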

    Devil in the details.

    Guarente and Sinclair have different explanations for how slashing calories makes yeast live longer. Their colleagues say the jury's still out on whether one, both, or neither of them is right.


    Last May, Sinclair's lab published this work in Nature (Science, 9 May 2003, p. 881). And to hammer home his point, he tested Guarente's theory and published a separate paper on the subject in Science last December. Its title didn't bode well for Guarente: “Yeast Life-Span Extension by Calorie Restriction Is Independent of NAD Fluctuation” (19 December 2003, p. 2124). The battle lines were drawn.

    Guarente shot back. With his postdoc Su-Ju Lin, who recently moved to the University of California, Davis, he published a counterpoint in the January issue of Genes & Development. It addressed this question: Did the yeast strains Sinclair used, which lacked PNC1 altogether and hence accumulated substantial levels of nicotinamide, mask fluctuations in the NAD:NADH ratio that occur in normal yeast cells under calorie restriction? In other words, Guarente wondered whether what Sinclair saw in his cells, while accurate, failed to reflect what happens in genetically unaltered yeast exposed to low glucose.

    To test Sinclair's theory against their own, Guarente and Lin created yeast that lacked the PNC1 gene. Everyone agreed that the cells' extra nicotinamide would inhibit SIR2p regardless of other factors, so they depleted the excess nicotinamide. When calorically restricted, the yeast still lived extra-long. The absence of PNC1 didn't stop the cells from sensing calorie restriction and living longer, as Sinclair's theory supposed.

    Guarente did admit defeat in one arena: He agreed with Sinclair that NAD fluctuations weren't mediating SIR2. But he and Lin reported in their Genes and Development paper that the NAD:NADH ratio is crucial nonetheless. To their surprise, they say, calorie restriction appears to lower NADH levels rather than increase NAD. The drop in NADH, in turn, boosts the NAD:NADH ratio and extends life.

    The response from outsiders to this burst of studies has mostly been bafflement. To begin with, no one can agree on whether Sinclair's and Guarente's theories are mutually exclusive, or whether they can coexist. (Even Guarente, Sinclair, and Lin don't agree on this score.)

    “My gut view is that one can't be right,” says Steven McKnight, a biochemist at the University of Texas Southwestern Medical Center in Dallas. A longtime fan of Guarente, McKnight nonetheless tends to side with Sinclair—partly, he confesses, because the high ratio of NAD to NADH and the high NAD levels that Sinclair reports jibe with his own work.

    The molecular biology that Guarente and Sinclair are tackling is so complex that biochemists have spent decades squabbling over some important details. Among them is a normal cell's ratio of NAD to NADH. McKnight falls into the school that endorses a high ratio of at least 20, similar to what Sinclair reports; others favor a much lower ratio, between 1 and 3, which Guarente stands behind. For NADH fluctuations to significantly affect the ratio, as Guarente postulates, the ratio must be low.
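    A back-of-the-envelope calculation shows why the baseline ratio matters so much to this dispute. The numbers below are illustrative arithmetic, not data from either lab: for a fixed total NAD-plus-NADH pool, the assumed ratio dictates how large a slice of the pool NADH occupies.

    ```python
    def nadh_fraction(ratio: float) -> float:
        """Fraction of a fixed NAD + NADH pool held as NADH,
        given the NAD:NADH ratio."""
        return 1.0 / (1.0 + ratio)

    for ratio in (2, 20):
        print(f"NAD:NADH = {ratio:>2} -> NADH is {nadh_fraction(ratio):.0%} of the pool")
    # NAD:NADH =  2 -> NADH is 33% of the pool
    # NAD:NADH = 20 -> NADH is 5% of the pool
    ```

    At a ratio of 2 to 3, NADH makes up roughly a third of the pool, so a drop in NADH could plausibly register on SIR2p; at a ratio of 20, NADH is a minor constituent to begin with, which is why Guarente's NADH-centered mechanism presupposes the lower ratio.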

    Richard Veech, a metabolism researcher at the National Institutes of Health in Bethesda, Maryland, takes issue with parts of both studies. “It's been well known since 1958 to any biochemist that [a ratio] of two or three is nonsense,” he says of the Guarente paper. And as for Sinclair: “Our primary observations differ from his primary observations” when it comes to Sinclair's report that the ratio of free, unbound NAD to unbound NADH molecules has no impact on SIR2 activity. (Guarente's paper measured the total levels of NAD and NADH, which includes molecules bound to structures in the cell.) Ultimately, though, Veech and some others conclude that more than one mechanism must be regulating SIR2. “You're going to control life span with one enzyme for one effect?” he says. “Please!”

    In business

    Beyond advancing their own cases in the SIR2 clash, neither Guarente nor Sinclair is keen to discuss it. Both are now eyeing a world beyond yeast, pursuing mechanisms of aging in mammals. And both are chasing pet theories they hope will combat diseases of aging and potentially extend life.

    Four years ago, Guarente and his colleague Cynthia Kenyon, a worm researcher at the University of California, San Francisco, helped found Elixir Pharmaceuticals, which is a short walk from Guarente's lab in Cambridge. Roughly half the company's research is focused on SIR2 and molecules that modulate its effects. (The other half revolves around a separate pathway identified by Kenyon in worms.) One of the toughest challenges in targeting the protein made by SIR2, however—or SIRT1, as it's known in mammals—is that “this protein is all over the body,” says Peter DiStefano, the chief scientific officer of Elixir. The company is currently experimenting with various animal models and has raised more than $40 million from investors.

    Elixir has also operated with almost no direct competition; pharmaceutical companies have hesitated to enter this market, and very few other biotechnology firms are devoted to aging research. If Sinclair has his way, that won't last long. Last fall, he asked Andrew Perlman, a 28-year-old millionaire who made his money selling two technology companies he founded, to help him build a new company called Sirtris Pharmaceuticals. Sirtris, which hasn't yet raised funds, will focus largely on Sinclair's most recent obsession—a compound called resveratrol, an antioxidant in red wine and other foods.

    In a paper published last August in Nature, Sinclair and his colleagues reported that in yeast, resveratrol appeared to stimulate SIR2, hence mimicking calorie restriction and slowing aging (Science, 29 August 2003, p. 1165). Various studies in animals also suggest that resveratrol protects against cancer. It's “as close to a miraculous molecule as you can find,” says Sinclair. “One hundred years from now, people will maybe be taking these molecules on a daily basis to prevent heart disease, stroke, and cancer.” A Montreal company, Royalmount, is beginning human trials of resveratrol in herpes and colon cancer prevention; Sinclair hopes Sirtris will partner with it. He's also experimenting with modified versions of the compound.

    Mother-daughter division.

    Days-long vigils in Guarente's lab identified yeast that bud more than usual, a measure of longevity.


    Because resveratrol occurs naturally, it's already widely advertised in health food stores and over the Internet. Sinclair purchased a dozen samples peddled as resveratrol and tested them in his lab. Only one passed the test—the compound is quite unstable at room temperature—and Sinclair briefly became a paid consultant to the company that makes it, Longevinex. In late December, he announced that he had severed ties with Longevinex after the company broadcast comments from him on its Web site that Sinclair claimed were inaccurate.

    Guarente, who tried to recruit Sinclair to Elixir before their falling-out, even bringing him to some of the company's board meetings, wasn't expecting to hear that his former postdoc was starting a company of his own. Sinclair's choices, however, mirror Guarente's years ago: As a young scientist, Guarente rejected an offer from his Harvard adviser, Mark Ptashne, to join Ptashne's new company. Instead, says Guarente, “I and a bunch of young turks at Harvard started a competitor company. … That's what young people do.” The company eventually folded. Ptashne, he says, was “like my father, the establishment.”

    Certainly, Sinclair still views his mentor as something of a father figure. Asked if he's read Guarente's 2003 memoir, Ageless Quest: One Scientist's Search for Genes That Prolong Youth, he looks less than enthusiastic. “I don't want to see into his mind,” says Sinclair. “It's a bit like learning about your parents' sex life.”

    Although many scientists agree that both the resveratrol molecule and the SIRT1 drug target seem promising, they don't foresee smooth sailing. “I can't believe that it's going to be the magic bullet that cures aging,” says James Joseph, a neuroscientist at Tufts University, of resveratrol; he has extensively studied the family of compounds to which it belongs. Resveratrol is also known to target a broad swath of molecules in the body, something that could be problematic in a drug, says Elixir's DiStefano. Sinclair hopes to avoid this by tinkering with the compound's chemical makeup.

    The big question facing SIRT1, meanwhile, is what it does in mammals. The skepticism Guarente confronted in the early 1990s, when he backed yeast as a model for human aging, has abated, but fundamental mysteries remain. Perhaps most importantly, is calorie restriction in mammals mediated by the SIRT1 gene? “It would surprise me only somewhat” if it's not, says Marciniak, a former Guarente lab member now at the University of Texas, San Antonio.

    Both Guarente and Sinclair are wrestling with this question, and, as in other areas, they're racing along parallel paths. Earlier this month, a paper in Cell by Guarente's team and a paper in Science on which Sinclair was an author both explored how SIRT1 helps mammalian cells withstand environmental stress.

    Guarente also talks animatedly about a project that's captured his attention: fat and its links to some of the seven SIRT genes in mice. He and his lab members are feverishly working to link these genes with fat accumulation and sensitivity to insulin—which could lead to new therapies for obesity and diabetes. “That's what I think the 5-year plan is,” says Guarente. He never imagined, he adds, that his work on life span might translate into diabetes drugs.

    Sinclair is more coy, but he admits to exploring connections between fat and SIR2 in worms and mice. His workaholic streak hasn't abated, even with a 1-year-old daughter at home. He keeps a microscope, an incubator, and a refrigerator for yeast plates in his house. Reluctant to take time off, he's returned to Australia just once in the last 3 years. And Sinclair guards his work with extraordinary care: After a notebook filled with data went mysteriously missing, he installed a safe in his Harvard office. There the lab's notebooks sit, locked inside.

    Guarente's lab is calmer these days than it was when Sinclair and others toiled there. “Now we have birthday cakes,” says Marcia Haigis, a postdoc. “Lenny says the lab is too touchy-feely.”

    A more relaxed atmosphere hasn't stopped Guarente, like Sinclair, from sticking resolutely to his theory of aging. The truth, if and when it surfaces, may well embrace a synthesis of what the two—and others—propose. Or, of course, it may show that only one of them is right.


    Shifting Tactics in the Battle Against Influenza

    1. Alicia Ault*
    1. Alicia Ault is a writer in Kensington, Maryland.

    When FDA experts met to pick viral targets for the next flu season, they also discussed a promising new way to create vaccines

    U.S. influenza vaccine production relies on a delicate chain of events. It starts each year with global surveillance and isolation of dominant virus strains, runs through an expert panel that selects vaccine candidates in February, and ends around September when manufacturers ship the finished product for administration to 90 million Americans. Last year, a crucial link broke, resulting in a vaccine that lacked a key component and may have left recipients poorly protected.

    Smarting from that breakdown, and edgy about a rising pandemic threat, U.S. Food and Drug Administration (FDA) advisers met 18 to 19 February to pick virus types for inclusion in next season's vaccines. They chose two A strain candidates: Fujian virus, the potent newcomer that was left out of last year's vaccine, and a more familiar virus that was included last year, called New Caledonia. On 17 March officials will meet to recommend one more, a B strain virus. Production deadlines limit the total number to just three, and the decision must be made by April to enable distribution by fall.

    In its meeting last week, the FDA panel also considered changes to 50-year-old production methods that promise more speed and flexibility. However, the group made no formal recommendations. And although some new methods are being tested, most are a few years away from approved use.

    Both production and regulatory issues caused trouble during preparations for the 2003–04 flu season. By February 2003, surveillance by the World Health Organization (WHO) had identified A/Fujian virus as an important new threat, and FDA's advisers wanted to include it in the annual flu shot. But WHO labs were unable to isolate it in hen's eggs, the approved culture medium. They isolated the virus in canine kidney cells, but these had not been certified pathogen-free and could not be used as the starting point for a vaccine given to healthy individuals. So FDA's panel nominated a virus used the previous year that belonged to the same subtype as Fujian, called H3N2. (The code refers to surface proteins, hemagglutinin 3 and neuraminidase 2.) The results are still being debated, but that vaccine may have been only 14% to 60% effective. Usually, the vaccine protects 60% to 80% of recipients.

    Gently, gently.

    The basic method of producing flu vaccines by growing virus in hen's eggs has not changed in half a century.


    Health authorities would like to be able to respond more rapidly to changing flu threats, and some are pinning their hopes on a new vaccine production method called “reverse genetics.” MedImmune of Gaithersburg, Maryland, controls the technology. In a special agreement, however, the company has given permission to WHO to use the process to test an emergency vaccine that might be needed to combat a flu pandemic (Science, 30 January, p. 609). Meanwhile, MedImmune will soon seek FDA approval to use reverse genetics to make its own flu vaccine, a nasal spray called FluMist, using the FDA-approved strain selection.

    Reverse genetics was developed by researchers in many labs, including those at St. Jude Children's Research Hospital in Memphis, Tennessee, who licensed it to MedImmune. The concept is straightforward: The viral genome is converted from RNA to DNA, manipulated to remove the genes that are thought to cause pathogenicity, and converted back to RNA for vaccine production. This technique might make it possible to isolate viruses more rapidly without eggs, as well as engineer vaccines more efficiently and, proponents say, more safely. “You're obviating the risk of carrying forward any contaminant,” says James Young, MedImmune's president of research and development. Agreed Zhiping Ye of FDA's Division of Viral Products at the 19 February meeting: “You use harsh ways to purify the viruses, so that ends the contamination concern.”

    The U.S. Centers for Disease Control and Prevention in Atlanta is pursuing a reverse genetics flu vaccine as well, according to Nancy Cox, chief of the agency's influenza branch. The National Institute of Allergy and Infectious Diseases hopes to conduct pilot trials in July to test for safety and immunogenicity in animals, she added. Phil Minor, head of virology at the U.K.'s National Institute for Biological Standards and Controls, said his lab is testing the technique's reliability in creating high-growth seed virus. Kathy Coelingh, senior director of scientific and regulatory affairs at MedImmune, said it's reliable in their hands but agreed that others need to perfect the process.

    But governments are not ready to sign off on the technique yet. European regulatory authorities regard viruses produced through the method as suspect because they're genetically modified organisms, said Coelingh.

    FDA panel members wanted to see more data. “I would agree the reverse genetics approach probably has the best assurances ultimately for proceeding, but we're not there yet,” says Gary Overturf, a pediatrician at the University of New Mexico School of Medicine in Albuquerque and the panel's chair. Nor is there a guarantee that seed viruses from reverse genetics will grow in eggs, which will continue to be used in the final stage of vaccine production. Coelingh called that “a legitimate question.”

    Could all the questions be answered in a hurry if there were a pandemic? The FDA advisers and manufacturers weren't so sure. Michael Decker, an FDA panelist and vice president for scientific and regulatory affairs at Aventis Pasteur, said the manufacturers' ability to respond depends on timing: “Do we stop all current production and devote everything to making a pandemic strain? If it's a serious enough threat, that's what we'll do.”


    Lab Network Eyes Closer Ties For Tackling World Hunger

    1. Dennis Normile

    A group of 16 research centers is considering centralized labs, pooled resources, and other arrangements to help feed the developing world

    TOKYO—A casual conversation at a coffee break is leading to a major shakeup of agricultural research throughout the developing world.

    Three years ago, Ronald Cantrell and Alexander McCalla shared concerns about the future of agricultural research for and by developing countries during a break in a meeting in Durban, South Africa. Both were acutely aware that new high-yield crop varieties are desperately needed to alleviate hunger among the poor. They also recognized that the institutes they represent—the International Rice Research Institute (IRRI) in Los Baños, the Philippines, and the International Maize and Wheat Improvement Center (CIMMYT) near Mexico City—were lagging behind private companies and academia in exploiting genetic techniques. So they agreed to explore closer collaboration, perhaps even a merger.

    IRRI and CIMMYT are the crown jewels of the Consultative Group on International Agricultural Research (CGIAR), an association of 16 research centers affiliated with the World Bank. And what is good for these two institutes, it turns out, may be good for the entire system. Other CGIAR centers that work on cereals have asked to join the IRRI-CIMMYT talks. The four CGIAR centers that focus on legumes are exploring their own collaboration. And a task force is studying the possible consolidation of four centers in Africa.

    The goal of all these deliberations, says William Dar, director of the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) in Hyderabad, India, is to have “all the centers working in common on the big issues for the small farmers of the world.” And meeting that goal, predicts CGIAR Director Francisco Reifschneider, will require a historic realignment of the entire $370-million-a-year consortium.

    Cantrell, a plant breeder who became director of IRRI in 1998, and McCalla, a professor emeritus of agricultural economics at the University of California, Davis, and chair of the CIMMYT board, can remember when these two institutions ignited a Green Revolution that led to a quantum jump in agricultural productivity for the developing world. CGIAR was formed in 1971 to build on that progress by fostering greater collaboration, including fundraising, among agricultural research centers around the world. But Michael Lipton, an economist at the University of Sussex, U.K., says that the 1980s witnessed “increasing pressure [from donors] to divert money from basic germ-plasm research to a whole range of other goals, from improving the participation of women [in economic activities] to natural resource management.”

    The result was a shift in research priorities. A recent evaluation by the World Bank of some 700 previous reports and studies* notes that CGIAR spending on improving crop productivity declined by 6.5% annually in real terms through the 1990s and that training programs for the developing world decreased by nearly 1% a year (see graphic). At the same time, research into environmental protection and biodiversity was receiving larger shares of a shrinking pie.

    Multigrain mix.

    Reduced funding for basic research on new varieties, such as these rice seedlings developed at IRRI, may stimulate cooperation among CGIAR's 16 centers.


    The resulting fierce competition among centers for scarce funding isolated research programs at a time when germ-plasm research efforts could have benefited from greater collaboration, especially in biotechnology. While private companies and universities in advanced countries invested $8 billion to $10 billion in agricultural biotechnology in the 1990s, says Uma Lele, an agricultural economist who led the World Bank review, the CGIAR system spent just $25 million. “For a billion poor people in the world, that is just minuscule,” she says.

    CGIAR already has adopted many of the reforms recommended by the various reports. The former Technical Advisory Committee has become a Scientific Advisory Council with broader powers to set systemwide objectives. A new executive council meets quarterly to expedite decision-making between the annual meetings. And donors are gradually recognizing the need to give officials greater leeway in spending their money.

    The thorniest issue for CGIAR has been determining the appropriate number of centers and their mandates. “There have been all kinds of proposals, from fewer centers to regional centers and commodity centers,” says Cantrell. This year the International Service for National Agricultural Research, based in The Hague, the Netherlands, will be folded into the Washington, D.C.-based International Food Policy Research Institute, consolidating CGIAR's policy-oriented centers. But CGIAR's Reifschneider wants to avoid top-down restructuring. “The most efficient way to move forward,” he says, “is for the centers to explore how better to do business together.”

    That's exactly what IRRI and CIMMYT hope to do, in part by building upon the latest research. “We know now that the major cereals have a majority of their genes in common,” says Cantrell, “and relatively few genes make these phenotypic differences that we see.” Taking advantage of the similarities among the cereals might lead to a shared genomics laboratory, jointly appointed researchers, and possibly even a common board.

    After Cantrell and McCalla left Durban, they turned to the Rockefeller Foundation, which had helped establish both institutions in the 1960s. The foundation has asked four consultants to report to a committee chaired by Gordon Conway. Conway hopes to complete the process before he retires as president of the foundation at the end of the year.

    IRRI and CIMMYT are the oldest, largest, and most highly regarded of the CGIAR centers, so it's no surprise that other centers would follow their lead. ICRISAT's Dar was particularly interested in cereal genomics, because ICRISAT works on sorghum and millet. But ICRISAT also works on three legumes—peanut, chickpea, and pigeon pea—leading Dar to reach out to the three other CGIAR centers involved in legumes.

    Although it is too early to predict the shape of a new CGIAR system, most scientists expect a heightened and more centralized effort to use genomics to enhance germ plasm. Ren Wang, IRRI deputy director for research, would personally like to see IRRI and CIMMYT merge with “a new headquarters and a new laboratory for upstream genomics research located in India or China.” Such labs could identify genetic markers used in more traditional breeding programs and develop gene chips to be distributed to regional and national labs. Similar efficiencies could come from centralizing bioinformatics efforts, intellectual-property management, and training programs. McCalla says that recent funding cuts have brought IRRI and CIMMYT “close to being below critical mass in such areas. So joining forces would make sense.”

    But restructuring also poses a host of challenges. Genomics might be universal, but downstream work has to be local. “Wheat doesn't do well in Los Baños,” says Cantrell, “and rice doesn't do well in valleys in Mexico.”

    Restructuring also raises fundamental questions about priorities and relations with national agricultural research efforts. The World Bank meta-evaluation notes that most successful introductions of new crop varieties in Latin America and Asia relied on local research capabilities. Now some of the strongest national agricultural research efforts are ahead of CGIAR in selected areas, says the World Bank's Lele. She points to the development of no-till planting techniques for soil conservation in Brazil, watershed management in India, and hybrid rice breeding in China.

    At the same time, the Green Revolution never took hold in Africa, and the continent's agricultural research capabilities are generally weaker than they were a generation ago due to continuing political and funding instability. Yujiro Hayami, a development specialist at the Foundation for Advanced Studies on International Development in Tokyo, thinks that it might be best for IRRI and CIMMYT to turn responsibility for Asia and the Americas over to local institutions and shift research staff and resources “to a new food-staple research institute in Africa.” But ICRISAT's Dar disagrees. “There are still more hungry people in Asia than Africa,” he counters.

    Perhaps the biggest question is how donors will react. Rockefeller's Conway, an agricultural ecologist, says that coupling a new functional genomics program to the existing germ-plasm banks and field-testing expertise of the two centers will create “a really powerful basis for producing new crop traits” that might appeal to donors. So, too, might a report from the Rockefeller Foundation offering a blueprint for a new, improved CGIAR.


    New Leaders, Fellowships Kindle Science on the Mall

    1. Elizabeth Pennisi

    After 3 years in the doldrums, scientists say an environment that had turned chilly for basic research is beginning to warm up

    Three years ago, life was pretty grim for scientists at the Smithsonian Institution, home to high-profile astrophysical, tropical, environmental, and systematics research and head office for 16 museums and a zoo. The federal budget was tight, and the new secretary, Lawrence Small, was most concerned with finding funds to complete an annex to the National Air and Space Museum, build the National Museum of the American Indian, and finish long-overdue renovations at an art museum. To help pay for them and refocus science efforts, Small called for the closure of a conservation center and a materials research lab. Scientists were demoralized (Science, 13 July 2001, p. 194).

    But the scene has changed. Today there's new blood in key science leadership positions in the Washington, D.C.-based institution, a push to recruit new researchers, and new money for a bankrupt fellowship program. The mood is upbeat, according to staff. And although the research budget may not increase, biologists, paleontologists, anthropologists, earth scientists, astronomers, and others are working with Small to make the most of the available expertise and resources. “There's been a complete turnaround,” says Brian Huber, a paleontologist at the National Museum of Natural History (NMNH).

    The revitalization can be traced in part to an independent review board that intervened at the height of the controversial belt-tightening in 2001 to defuse the growing tension between the secretary and the scientific staff. The crisis came when Small proposed closing the Conservation and Research Center (CRC) in Front Royal, Virginia, and the Smithsonian Center for Materials Research and Education (SCMRE) in Suitland, Maryland. He didn't get very far.

    More than a dozen conservation groups rose up to defend the 13,000-hectare CRC, a division of the National Zoo that hosts research on threatened and endangered species. Representative Frank Wolf (R-VA), whose district houses CRC, joined the protest. Similar lobbying drew attention to SCMRE. Responding to the fuss, the Smithsonian's Board of Regents created a special commission.

    The resulting report, released in January 2003, applauded much of the Smithsonian's research, including work at the endangered CRC and SCMRE. It confirmed that money problems predating Small's arrival were at the root of the breakdown, leading to a particularly bad loss of morale at NMNH. The commission, chaired by Jeremy Sabloff, an anthropologist at the University of Pennsylvania Museum of Archaeology and Anthropology, called for new hires in the scientific divisions, an emphasis on leaders with scientific experience, and better communications between research administrators and the Smithsonian top brass.

    Back on track.

    Renewed fellowship funding supports young scholars such as Shen-Horn Yen (right), reviewing specimens with curator and former fellow Scott Miller.


    These steps were needed, according to the report, to make up for years of attrition in research. The staff of NMNH fell from 670 in 1992 to 499 by the end of 2003. Zoo staff dropped from 363 in 1992 to 315 in 2003. A postdoc fellowship program had all but petered out, and few young faculty members were hired. The staff had become quite “lopsided on the senior scientists side,” says NMNH volcanologist William Melson.

    It was obvious that money woes were taking a toll on staff scientists. A decade ago, they could rely on Smithsonian support for some of their research and related travel. By 2003, most were foraging for outside funding. As a result, “a lot of [NMNH] researchers thought they could only do small projects,” says Hans-Dieter Sues, NMNH's new associate director of science (see p. 1289). That put them at a disadvantage. These days, the most successful grantees are those who are thinking big, tackling large-scale problems such as biodiversity that involve researchers from multiple disciplines. He thinks that more Smithsonian scientists need to do the same.

    Vital signs

    Several promising changes are giving staffers hopes of tackling more ambitious science goals. For one, “there's more attention at the top toward science,” says Sabloff, whose panel made more than 70 recommendations. “All of the immediate steps are at least under way or being discussed.” David Evans, Smithsonian undersecretary for science, is just as positive: “We are moving along quite smartly.”

    Consider leadership. A year ago, Christián Samper took the reins of NMNH, and now, several scientists there say, he is winning over a jaded staff that had grown used to disappointment. “He has a direct way of dealing with people,” says Melson. Sues joined Samper as science chief in January; they replaced two nonscientist administrators. Other fresh blood is on the way. Three center directors—at CRC, SCMRE, and the Center for Astrophysics (CfA) in Cambridge, Massachusetts—have left or are preparing to step down, and searches are under way for successors. They will have greater say in guiding the scientists under their care. Huber notes, “There's recognition that [we] shouldn't have central management dictating the research directions.”

    Another bright spot on the Smithsonian horizon is Congress's interest in fellowships. “This program was identified by the scientists as most important,” says Evans. The 2004 and the proposed 2005 federal budgets each set aside $800,000 for the Smithsonian-wide fellowship program. This is half as much as a decade ago but an improvement over last year, when the account was zeroed out. “Postdoctoral fellows fill a key need for a research organization, through intellectual cross-fertilization and their new ideas,” says CfA Director Irwin Shapiro.


    The number of young faculty members is on the upswing as well. Last year, the Smithsonian offered early retirement packages to senior staff to free up salary money. Two senior researchers who took the offer are Melson and fellow volcanologist Thomas Simkin. “We negotiated with the director that … we would be replaced by young people,” Melson explains. Other departments' retirees have paved the way for a few more new staff.

    Congress seems ready to support Smithsonian science in other ways. For example, Evans says NMNH is getting $16 million in 2004 through the National Oceanic and Atmospheric Administration to start work on a 2322-square-meter ocean hall. The Smithsonian appropriation itself includes $8 million to build “a state-of-the-art” collections storage building in Suitland, adds Evans. And there's an extra $700,000 this year to expand digital tracking of the 124 million objects in the NMNH collections, which will make them more accessible to other researchers.


    There are still some dark spots, however. The commission recommended that CRC get only a conditional reprieve: It must become self-sustaining by 2008. “They said we can stay if we earn our keep,” says Janine Brown, a reproductive physiologist at CRC. Some hope the National Science Foundation (NSF) will designate CRC a core site for an environmental conservation program, but that has not happened. And a private foundation created to find funds for CRC has so far failed to obtain them. “It's depressing,” Brown adds: “We're fighting for our lives.”

    CfA, with about $70 million from NASA, and the Smithsonian Environmental Research Center in Edgewater, Maryland, with $16 million from federal agencies such as the Department of Energy, have not suffered as much as others. But NMNH and the zoo are still scrambling. New hires have been meager, and research funds hard to come by. Smithsonian-wide support for research continues to be in short supply. “The recommended reversal of the erosion [in faculty and funds] has not yet taken place,” Shapiro complains.

    That's made life hard for postdoctoral fellow Jennifer Leonard, for example. Her molecular evolution fellowship, funded by NMNH, covers living expenses, “but I don't have funding to cover my research, and that's a problem,” she complains. She is ineligible to look for outside support, so she's had to get by with what her lab chief can carve off his grants. Congress last year ordered an end to a policy that classified NMNH staff as government employees and thus barred them from receiving NSF grants. But Evans confirms that the new policy is still being worked out.

    Sabloff is guardedly optimistic, however. And the scientists themselves now seem upbeat. As Huber says: “We feel like we're going somewhere.”


    A New Form of Mad Cow?

    1. Dennis Normile

    Several studies suggest that, contrary to evidence to date, more than one form of prion may be involved in BSE

    TOKYO—Ever since bovine spongiform encephalopathy (BSE) was first identified in the United Kingdom in the mid-1980s, scientists have blamed one unique strain of the infectious agent for the disease. What was later shown to be a misfolded protein called a prion seemed to maintain its molecular characteristics and produce the same brain-wasting symptoms even when it infected humans and mice. This consistency helped track the worldwide spread of the disease among cattle back to feed made from slaughterhouse waste. It also helped show that the human version, variant Creutzfeldt-Jakob disease, was spread by eating contaminated beef.

    But this consistency in what is also known as mad cow disease was somewhat puzzling. Other brain-wasting prion diseases, such as sporadic Creutzfeldt-Jakob disease (CJD) in humans and scrapie in sheep, have been linked to a variety of slightly different prions, each causing slightly different disease manifestations.

    Now a spate of new findings suggests that BSE may have its own suite of related prions. Groups in France, Japan, and Italy are all claiming to have identified atypical BSE cases. And the Italian prion shows molecular similarity to the one implicated in a form of human sporadic CJD, although the researchers are not claiming a cause-and-effect relation.

    The findings may indicate either that prions sometimes change after infecting a new host or that more than one strain of BSE prion is circulating. Either way, “I think they've found something new,” says George Carlson, a mouse geneticist who studies prion diseases at the McLaughlin Research Institute in Great Falls, Montana.

    At this stage, researchers don't know whether this new agent is transmissible. And because existing tests pick it up, the findings do not alter the already volatile debate on what type of cattle surveillance is sufficient to protect the food supply. But the new findings do add another piece to the puzzle of this rare disease.

    What scientists consider the most convincing claim for a new form of BSE was reported online last week in the Proceedings of the National Academy of Sciences by a group led by Maria Caramelli at the National Reference Center for Animal Encephalopathies in Turin, Italy, working with colleagues there and at three other Italian institutes. Two aged cows, one 15 and one 11, tested positive on rapid screening tests. The brains of these animals were preserved, and further examinations turned up some significant differences from typical BSE cases. Instead of the usual granular, stringy deposits of prions, these atypical cases had amyloid plaques: globular blobs of tangled protein. This led the group to propose calling this new form of the disease bovine amyloidotic spongiform encephalopathy (BASE).

    Different distribution.

    In cows with a newly identified form of BSE (bottom panel), prion proteins accumulate in unexpected parts of the brain.


    The molecular weight of the protein is different, and so is its distribution in the brain. Usually, BSE primarily affects the brainstem, hypothalamus, and thalamus. In the new strain, there were few prions in the brainstem but more in the olfactory bulb and cortex and the hippocampus. Both these differences are similar to the conditions seen in certain human sporadic CJD cases.

    “There are many types of prion diseases in humans, in sheep, in mink, and so on, so our results could have been predictable,” Caramelli says. “But actually, previous studies have only detected a single [prion] for BSE.”

    “The evidence looks convincing,” says Danny Matthews, head of prion disease studies at the U.K.'s Veterinary Laboratories Agency in Weybridge. But whether this prion has recently diversified from the more common strain, has long gone undetected, or arose sporadically is not clear. One complication is that the diagnostic tests now being used to distinguish the different prion strains were not previously available, says Matthews, adding that researchers might have to try applying the new techniques to old samples. Other answers might come from determining whether this new strain is transmissible—studies that are under way in mice and cows, says Caramelli.

    The McLaughlin Institute's Carlson cautions against reading too much into the group's findings of similarities between BASE and some forms of sporadic CJD. “It would be a big leap” to conclude that all sporadic CJD can be traced to variants of BSE or other animal prion diseases, he says.

    More indications of yet other BSE prion strains came from a group at the Virology Unit of the French Agency for Food Safety in Lyon. In the European Molecular Biology Organization's EMBO Reports, the group reported online 19 December 2003 finding three atypical BSE cases. And in a paper published in December in the Japanese Journal of Infectious Diseases, researchers reported evidence for a new form of the BSE prion in an unusually young animal, just 23 months old. Neither group has yet done the follow-up studies done by Caramelli and her colleagues, and so their results are considered preliminary.

    Meanwhile, the new findings don't seem likely to affect testing procedures. “We don't find any difference in terms of epidemiology or in terms of behavior within the host that would require different kinds of recommendations for either animal or human health,” says Alex Thiermann, an official with the Paris-based World Organization for Animal Health (OIE). OIE says that safely removing the brain and spinal column at the slaughterhouse will protect public health, and each country should do enough testing to monitor the epidemiology of the disease in its national herd.

    The Origin of Speech

    1. Constance Holden

    How did the remarkable ability to communicate in words first evolve? Researchers probing the neurological basis of language are focusing on seemingly unrelated abilities such as mimicry and movement

    In the 1860s, both the British Academy and the Société de Linguistique de Paris warned their members not to discuss the origins of language, because the topic was so seductive—and so speculative—that it spawned endless, futile theorizing. More than a century later, Noam Chomsky, the most influential linguist of the last 50 years, wrote that language evolution and the brain mechanisms underlying it “appear to be beyond serious inquiry at the moment.”

    But the time now appears ripe for this endeavor. In the past decade, an unprecedented number of researchers from many disciplines have begun to tackle the origin of speech, spurred by new techniques as well as new ways of thinking. Among linguists, the question of language origins was long obscured by the dominance of Chomsky, whose theory of an innate “universal grammar” ignored the problem of how this language ability arose. In 1990, however, the wave of evolutionary thinking that had previously swept through biology finally struck linguistics too: That year, Harvard cognitive scientist Steven Pinker and Yale psychologist Paul Bloom published a long article in Behavioral and Brain Sciences arguing that language must have evolved by natural selection. The Pinker-Bloom paper was “a kind of watershed,” says linguist James Hurford of the University of Edinburgh, U.K. “Suddenly it was OK to talk about evolution of language in Chomskyian circles.”

    Meanwhile, advances in brain imaging, neuroscience, and genetics have enabled a new contingent of researchers to go ever deeper into our brains and our biological past. For a long time, researchers treated language ability as some sort of “miracle,” says neuroscientist Michael Arbib of the University of Southern California (USC) in Los Angeles. Now, he says, researchers are breaking that miracle down into a series of smaller, more manageable “miracles,” involving disparate capacities such as the ability to imitate facial expressions or to string movements together. They're not fantasizing that the human brain at some point suddenly found that it could speak with the tongues of angels, he says; rather, it achieved a more modest state some researchers call “language readiness,” which opened the door to further advances in linguistic ability.

    Language origins are “certainly worth talking about now,” says Hurford, who in 1996 launched the first of a series of biennial conferences on language evolution* that have grown steadily. Hurford's Edinburgh colleague Simon Kirby has documented the leap in interest with a citation search: The number of papers dealing with both “language” and “evolution” more than doubled from the 1980s to the 1990s. (See also Book Review, p. 1299.)

    Yet despite all the activity, the new lines of evidence remain indirect, leaving plenty of room for interpretation—and conflict. “If you want a consensus, you won't get it,” says cognitive scientist Philip Lieberman of Brown University. With no fossils of speech, the origin of language remains “a mystery with all the fingerprints wiped off,” says brain scientist Terrence Deacon of the University of California, Berkeley.

    The long view

    Archaeologists have identified various milestones in human behavior in the 5-million-year evolutionary void between animal communication and human speech, but there is no consensus on which achievements imply the capacity for language. For example, the first stone tools date to 2.4 million years ago; some researchers think this may indicate linguistic facility, but others argue that toolmaking is far removed from speech. Another possible starting point is 2 million years ago, when the hominid brain began a period of rapid expansion, including in the primary brain areas associated with producing or processing language—namely Broca's area in the left frontal cortex and Wernicke's in the left temporal lobe (see brain model, p. 1318).

    As for actually producing the sounds of words, or phonemes, skeletal studies reveal that by about 300,000 years ago, our ancestors had become more or less “modern” anatomically, and they possessed a larynx located at the top of the trachea, lower than in other primates (see diagram). This position increases the range of sounds humans can make, although it also makes it easier for food going down the esophagus to be misdirected into the windpipe, leaving us more vulnerable than other mammals to choking. Such anatomy could have developed for no other purpose than speech, says Deacon.

    Dangerous talk.

    Side view of human vocal tract shows that because of our lowered larynx, food and drink must pass over the trachea, risking a fall into the lungs if the epiglottis is open.


    Other possible milestones come from genetic studies. For example, researchers at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, reported last year that the FOXP2 “speech gene,” which affects both language and the ability to articulate (Science, 16 August 2002, p. 1105), was apparently a target of natural selection. This gene may have undergone its final mutation fewer than 100,000 years ago—and no more than 200,000 years ago—perhaps laying the groundwork for a new level of linguistic fluency.

    Most researchers are inclined to the view that language gradually emerged over perhaps a couple of hundred thousand years (Science, 20 November 1998, p. 1455). But all we know for certain, says Pinker, is that fully developed language was in place by at least 50,000 years ago, when humans in Europe were creating art and burying their dead, symbolic behaviors that point unequivocally to fluent language.

    From ape to human.

    Magnetic resonance images of a bonobo brain are warped onto the shape of a human cortex, viewed from (left to right) the side, top, and front. Red and yellow areas in the temporal region (linked to language) and in the prefrontal and occipital regions had to be stretched the most to reach the human configuration, whereas blue areas are similar in apes and humans.


    The motor route

    Understanding when language emerged will probably have to await better understanding of how it emerged. In recent years many researchers have become increasingly attracted to the notion that changes in the brain's motor areas were crucial for language capability.

    Although we tend to associate language first with sound rather than movement, speech may be better understood as a motor activity, says Deacon. Like other fine motor activities such as threading a needle or playing the violin, speech demands extraordinarily fine and rapid motor control. Elaborate movements of the larynx, mouth, face, tongue, and breath must be synchronized with cognitive activity.

    Thus researchers are probing the links between language and areas in the brain that control gestures, either hand movements or the articulatory gestures of mouth and tongue. Linguist Robert Kluender of the University of California, San Diego, says explorations of gestures, including sign language, offer glimpses into what might have been the “intermediate behavioral manifestations” between animal communication and speech.

    Many researchers think hand and face gestures offer behavior that is more analogous to speech than are animal vocalizations. In all other mammals, both breathing and articulation are directed by brain areas quite separate from those associated with human speech, notes Pinker. Lieberman argues that nonhuman primates engage in “a limited number of stereotyped calls” such as alarm calls and that they don't have the interactive or combinatorial quality of language. Apes' anatomy is such that they “could produce a [phonetically] reduced form of human speech,” adds Lieberman. “But they don't.” They're much better at signing, because apes' motor behaviors have more flexibility and are more involved in social interaction—through gaze, mouth and facial movements, and limb gestures—than their calls, Lieberman says.

    Lieberman argues that the crucial changes that laid the groundwork for language ability occurred in brain circuits connected with the basal ganglia, subcortical structures involved in movement. In his view the basal ganglia are the “sequencing engine” that makes combinations—both verbal and gestural—possible. As evidence he points to the fact that patients with Parkinson's disease, which disrupts the basal ganglia, suffer erosion of syntactical abilities, as well as problems with balance and movement.

    Hand and mouth.

    Chimps gesture with both face and hands to help express themselves.


    Pinker's research, with cognitive scientist Michael Ullman of Georgetown University in Washington, D.C., lends weight to this view. They have shown that Parkinson's patients with basal ganglia damage have more trouble with regular verbs than with irregular ones. Conjugating a regular verb such as “walk,” Pinker explains, is a combinatorial, sequential task that calls for adding the “ed” for past tense. But retrieving the past tense for an irregular verb such as “come” simply calls on long-term memory. Such tasks require other brain areas as well, but Lieberman argues that the basal ganglia are a common element in both movement and language disorders.

    Indeed, although many other brain areas, including those responsible for articulation, hearing, planning, and memory, had to develop to support language, there is abundant behavioral evidence for an intimate connection between language and motor abilities, says Pinker. For example, psychologist David McNeill of the University of Chicago cites the case of a man who lost all sense of touch below the neck due to a strange virus. Although the man had to relearn the simplest movements, using cognitive and visual feedback to substitute for lost senses, he continued to gesture automatically when he spoke, even when researchers hid his hands from his own and listeners' view. “The hands are really precisely linked to speech articulation,” says McNeill. “Gesture is not a behavioral fossil that was superseded by language but an indispensable part of language.”

    But not everyone is ready to dismiss the meaningfulness of animal calls, with differing views often dependent on a scientist's specialty. Primatologist Marc Hauser of Harvard, for example, believes that primate calls are better candidates for speech precursors than any gestures are. With chimp gestures, “nothing gives a suggestion of anything referential”—that is, having an explicit association with a concept or thing—he says. Primate alarm calls, in his view, “kind of look like words.” For example, he cites work by psychologist Klaus Zuberbühler of the University of St. Andrews, U.K., who has reported that African Diana monkeys can modulate their alarm calls to indicate what type of animal (leopard or eagle) is threatening. Such sounds, says Hauser, “have a far greater … connection to language than any discovery on nonvocal signals.”

    Wired for imitation?

    Classic language areas—Broca's and Wernicke's (yellow)—overlap (orange) with areas critical for imitation (red).


    Many linguists, too, are unmoved by motor arguments, which they do not believe can explain how the brain developed syntax. “Motor organs are for muscular movements,” says Derek Bickerton of the University of Hawaii, Manoa, and that puts them at the “end of the pipeline” of language production. “Whatever organizes motor movements is on a par with what organizes throwing movements,” says Bickerton. “The purpose is to put things in a regular invariant sequence.” That, he says, is very different from making sentences, which requires “putting things into an extremely plastic order determined by your conceptual structure.”

    Mirror, mirror

    Despite such caveats, the motor-language connection continues to draw attention, in part because of a 1996 discovery that many see as the first hard data in years to bolster the theory. This is the so-called mirror neuron system found in monkeys' brains.

    Mirror neurons' link to language depends on imitation, a skill all but unique to humans and considered vital to language. Although parrots and dolphins can do vocal mimicry, imitation is not as a rule a mammalian attribute: Even nonhuman primates do it poorly (contrary to the implication of the term “to ape”). But imitation is the way babies learn their first words. And it's the only way a common meaning can emerge for an abstract symbol, a phenomenon that linguists call “parity.” “Imitation is the common thread for people writing about language origins,” says neuroscientist Marco Iacoboni of the University of California, Los Angeles.

    So researchers were excited when a team led by Giacomo Rizzolatti of the University of Parma, Italy, found what they considered a plausible antecedent for the human ability to imitate in the brains of monkeys. The team recorded electrical activity in macaques from 532 neurons in an area called F5, which is homologous to Broca's area in humans. Neurons in F5 are known to fire during monkeys' “goal-directed” hand and mouth movements—for example, when reaching for food.

    What intrigued the researchers is that a subset of these neurons, which they dubbed mirror neurons, also became active when a monkey merely watched another monkey (or a human) perform the action. This finding “opened a whole new approach to the language evolution story,” says Arbib of USC. “What would a mirror system for grasping be doing in the speech area of the brain?” The researchers concluded that these mirror cells form a system for matching the observation and execution of mouth and hand actions—the first steps toward imitation.

    So far, mirror neurons have been found in only two brain areas in macaques, and the single-cell brain recording technique that revealed the macaque neurons isn't done on humans. But Iacoboni believes he has identified a similar circuit—“a core neural architecture for imitation”—in people. He combined the results of single-cell brain recordings in monkeys with functional magnetic resonance imaging in humans while they watched or imitated finger movements or facial expressions. Iacoboni says that in addition to Broca's, the circuit comprises an area in the superior temporal cortex (which overlaps with Wernicke's and has neurons that respond to face and body movements) and one in the parietal cortex, the homolog to the macaque area called PF, which combines visual and bodily information. “The neural architecture for imitation … overlaps very well with well-known language areas in the human brain,” says Iacoboni, who concludes that the dual-use nature of Broca's area in particular “suggests an evolutionary continuity between action recognition, imitation, and language.”

    Mirror neurons provide the “neural missing link” between movement and speech control, says Arbib. They also fit well with an old theory, the “motor theory of speech perception,” developed in the 1950s by the late Alvin Liberman of Yale University's Haskins Laboratories. Psychologist Michael Studdert-Kennedy of Haskins Labs explains that when children imitate their first words, experiments have shown that they (unlike another imitator, the parrot) are guided by the “gestural” features of the sound—that is, by the actions of the mouth rather than by a sound's acoustic features. A well-known trick to demonstrate this is known as the McGurk effect: If you watch someone pronounce the syllable “ga” while listening to a recording of someone saying “ba,” you will likely hear “da,” a sound anatomically between the other two.

    This means “you perceive speech by referring the sounds you hear to your own production mechanism,” says Studdert-Kennedy. Humans, unlike other animals, are equipped with an intuitive sense of how their body parts correspond with those of others. Thus a small child knows how to raise its hand in response to a parental wave. “There's obviously a direct representation of your body in its body,” says Studdert-Kennedy.

    The theory developed new life when Studdert-Kennedy brought it to bear on questions of language evolution. Mirror neurons, he says, “for the first time provide an example of a direct physiological hookup between input and output”: the observation of an action and its imitation. Indeed, Rizzolatti's group recently reported that the macaque has “audiovisual” mirror neurons: Some of the cells in F5 fire not only when a macaque watches a meaningful grasping action, but when it hears the sound of one, such as the sound of breaking peanuts (Science, 2 August 2002, p. 846). Arbib believes that mirror systems probably exist in other parts of the brain for many other behaviors.

    He and others feel that mirror neurons offer the first concrete neurological evidence of abilities crucial to language, but it's a long way from a few firing neurons to speech. Some scientists think the potential significance of mirror neurons may be exaggerated. Macaques, after all, can't speak and they can't imitate either, notes Pinker. In his view, mirror neurons' “relevance to language is still pretty fuzzy.”

    The first syntax: words or waves?

    Despite such doubts, mirror neuron theory is being invoked by both sides in the schism over whether the earliest language—that is, symbolic sounds or gestures connected by some sort of rules of syntax—used the voice or the hands.

    Those who favor gestural origins, such as psychologist Michael Corballis of the University of Auckland, New Zealand, point out that mirror neurons are found in brain areas responsible for grasping. “I think it's extremely likely that language evolved in our early ancestors as a manual system, not as a vocal one,” as far back as a million years ago, says Corballis. He notes that when robbed of speech, people quickly develop sign language, as has been shown by the case of a community of deaf Nicaraguans who created their own language.

    Given the strong role of manual and facial gesture in speech and the relatively recent final mutation of the FOXP2 gene, Corballis argues that “autonomous” speech may not have become fully developed until the cultural explosion beginning 50,000 years ago. The mirror system, he believes, reinforces his theory, because it apparently evolved first for manual control. It “probably picked up vocal and facial control quite late in hominid evolution,” he says, as speech became the preferred modality for communication for various reasons, such as the need to free the hands for work or to talk in the dark.

    But others believe equally strongly that even if movement and language are inseparable, language is primarily an oral, not manual, behavior. Psychologist Peter MacNeilage of the University of Texas, Austin, has developed a theory that monkey oral behaviors (not vocalizations) are precursors of human syllables, and he argues that the mirror neuron system—especially the recent discovery of neurons that respond to lip smacking and nut cracking—bolsters his ideas.

    MacNeilage suggests that the brain's supplementary motor area (an area adjacent to the primary motor cortex that is important for motor memory and sequential movements) controls the physical constraints on vocal expression. The actions of chewing, sucking, and licking took on communicative content—a job for Broca's predecessor—in the form of lip smacks, tongue smacks, and teeth chatters. The next stage, says MacNeilage, was to give voice to these behaviors by bringing the larynx into play. This theory fits well with the fact that the unique sounds of click languages, which some speculate may have been the original mother tongue (see next story), do not use the larynx. Once the larynx was involved, a phonology—a set of sounds that could be combined in endless ways to form a large vocabulary—developed, and this in turn paved the way for the emergence of syntax.

    “I don't believe manual gestural communication got to the point of the combinatorial phonology that I'm talking about, because if it did we'd still have it,” says MacNeilage. In his view, if sign language had become that complex, there would have been no reason strong enough—the desire to talk in the dark notwithstanding—to cause a transition to vocal speech. “Nobody who argues that we went from sign to speech has given us an adequate translation theory,” he says.

    Others say the “which came first” debate is beside the point. “Evolution selected the ability to combine speech and gesture under a meaning,” says McNeill. “The combination was the essential property”; neither gesture nor speech could have evolved without the other, he says. It doesn't matter which came first, agrees Zuberbühler: “Once an individual reaches a certain threshold in its cognitive sophistication, it will inevitably express itself in a sophisticated way,” through any means at its command, he says.

    The deepest questions—such as how humans became symbolic thinkers and developed “theory of mind,” or awareness of others' thought processes—remain far from resolved. Researchers say one way to tackle them will be through ever-finer brain imaging technology so they can, as Bickerton puts it, “find out the flow chart for a sentence in the brain.” Harvard's Hauser and colleagues believe that research in animals may identify behavioral analogs for “recursion”: the ability to string words together in infinite hierarchical combinations. Arbib predicts that the discovery of other types of mirror systems, in both humans and animals, will help yield a better “taxonomy” of the language conundrum, especially if bolstered by computational modeling. But answers won't come all at once. “I see this as a process of gradual convergence. The problem space is shrinking” at long last, says Bickerton. “It will be solved when that space goes to zero, not when someone comes up with the killer solution.”

  15. The First Language?

    1. Elizabeth Pennisi

    Genetic and linguistic data indicate—but can't quite prove—that our ancient ancestors spoke with strange clicking noises

    In the 1980 movie The Gods Must Be Crazy, a soda bottle falls out of the sky and lands among some strange-sounding Africans. Their excited chatter, punctuated by rapid-fire sucking and clicking noises, sounded intriguing but alien to audiences around the world. But a handful of studies of this seemingly esoteric language suggest that our early ancestors depended on such clicks to communicate. The latest linguistic work points to clicks as having deep roots, originating sometime earlier than 10,000 years ago, at the very limit of what linguistic analysis can resolve, and genetic data suggest that click-speaking populations go back to a common ancestor perhaps 50,000 or more years ago.

    Although the idea is far from proven, “it seems plausible that the population that was ancestral to all living humans lived in the savanna and used clicks,” says vertebrate systematist Alec Knight of Stanford University. Knight estimates that today only about 120,000 people rely on these odd sounds. Even so, they are providing new insights into how humans evolved the gift of gab, particularly when researchers add up the results of different kinds of data. “There's a lot of mileage to be gained by cross-referencing linguistic, genetic, and archaeological data and theories,” says Nigel Crawhall, a graduate student studying click languages at the University of Cape Town, South Africa.

    All alone.

    Researchers ponder why the Hadzabe live so far from other click speakers.


    Clicks in context

    Today clicks are part of typical conversation for about 30 groups of people, most from Botswana, Namibia, South Africa, and nearby. The only recognized non-African click language is Damin, an extinct Australian aboriginal language used only during manhood initiation ceremonies. Among African click speakers, daily conversations can be dominated by clicks, and sometimes verbal sounds drop out completely.

    Adept tongue and inward air movements distinguish clicks from other nonverbal utterances. They are really just very strongly pronounced consonants, says Amanda Miller-Ockhuizen, a linguist at Cornell University in Ithaca, New York. Click speakers have click sounds in common, but they have different words and therefore very different languages. Some researchers argue that click languages are far more different from each other than English is from Japanese.

    But that diversity is only now being appreciated. In the 1960s, the influential Stanford linguist Joseph Greenberg put all click languages under one umbrella, which he named the Khoisan language family after the two biggest groups included: herders known as Khoe and hunter-gatherers called San. Now, however, historical linguists are challenging Greenberg's classification, examining Khoisan with more stringent analytical methods and splitting it into several language groups. “It's been easy to say they are all in one family,” says Bonny Sands, a linguist at Northern Arizona University in Flagstaff, “because nobody has gone and looked.”

    The latest work divides the Khoisan family into at least three geographically and linguistically distinct ones. And a few of these languages don't fit in any known families, Crawhall notes. For example, in 1995 Sands reexamined the grammar, meanings, and sounds of Hadzane, spoken by about 1000 Hadzabe people in north-central Tanzania, 2000 kilometers away from the majority of click speakers. She “proved that Hadzane cannot be shown to be related to any of the other families,” says Crawhall. Rather, says Sands, linguistically Hadzane is unlike any other known language.

    That suggests either that Hadzane had a separate origin from other click tongues or that it and other existing click languages derive from a very ancient protoclick language. Sands thinks that there have always been multiple click languages, but “if there was originally only one click family, it must be many tens of thousands of years old,” she says. That's further back than linguistic studies can establish.

    Silent stalkers.

    !Kung hunters may use clicks while sneaking up on prey in the savanna.


    Tracking ancient populations

    But genetic data on click speakers have also been streaming in, and these results can offer a glimpse into the more distant past. In 1991, one study hinted that Hadzabe were an ancient people based on the great diversity in their DNA; mutations accumulate over time, so diverse sequences imply an ancient population. Most recently, at a physical anthropology meeting last year, human geneticist Sarah Tishkoff of the University of Maryland, College Park, reported great diversity in the DNA of the Hadzabe and another click-speaking group in eastern Africa, the Sandawe.

    The puzzling origins of these groups and their clicks intrigued Knight and Stanford anthropological geneticist Joanna Mountain. Last year, they decided to use genetic data to decipher the relationship between the isolated Hadzabe and the San in southern Africa. They thought that perhaps the Hadzabe had recently moved into Tanzania from the south, bringing clicks with them, or that the San had been part of a northern group that migrated south. “We expected a recent shared heritage, but the data indicated something opposite [from the recent origins] we expected,” Knight recalls.

    Knight, Mountain, and their colleagues examined mitochondrial DNA and parts of the Y chromosome from 49 Hadzabe and about 60 people from three other Tanzanian populations. They also gathered Y chromosome data from a San group, the Juǀ'hoansi (also known as the !Kung) from Namibia and Botswana, and two non-click-speaking groups in central Africa.

    Similar patterns in certain DNA segments indicate relatedness—and the Hadzabe and San turned out not to be closely related at all. The genetic sequences suggest that the two went separate ways very early on in their histories; neither group had migrated recently either northward or southward to bring clicks to the other. “The research suggests that the Hadzabe are the descendants of one of the first groups to split off” from an ancient pool of click speakers, says Crawhall.

    Some researchers think the split between the Hadzabe and all other click speakers could have been as early as 100,000 years ago, but Knight puts it between 70,000 and 50,000 years ago. That's roughly the time frame proposed for the exodus of modern humans out of Africa, which some say might have been spurred by the development of language itself. But Knight warns that the dating is the most tentative part of their study.

    Such an early origin for clicks appeals to linguist Michael Corballis of the University of Auckland, New Zealand, who has argued for years that 100,000 years ago, our only “words” were gestures: a flick of a finger, a twirl of the wrist, and so on (see p. 1316). “It may be that clicks themselves date back to a time when language was not autonomously vocal; they were a kind of [preverbal] way of adding sound,” or a steppingstone to human speech, he says.

    Knight thinks that only groups that retained ancestral hunting lifestyles continued to need clicks, and other click languages died out when early humans moved into new environments. That fits with evidence from living Hadzabe, who told Knight that when they hunt, they use clicks—and verbal talk disappears. Filmmaker John Marshall of Documentary Educational Resources in Watertown, Massachusetts, who has made dozens of films of click speakers, has noted this too. “I know from experience that using only clicks to communicate works well when stalking game,” he explains. He and Knight suggest that whereas voices can spook animals, clicks mimic rustling grass, a typical sound on the dry savanna and one less likely to send game running.

    Plausible as it all sounds, the theory of clicks as the first language is by no means proven. Even though Knight's work expands on Sands's ideas about the history of clicks, she's worried that Knight may be pushing his data too far. Genetic and language evolution don't necessarily go hand in hand. “The most he can say is that [the two] are correlative,” she says. Thus there's no way to prove whether clicks made up the mother tongue, she argues.

    Meanwhile, some researchers, such as linguistic historian Christopher Ehret of the University of California, Los Angeles, still stand by Greenberg's all-inclusive family for the click languages and downplay the genetic data. Furthermore, whereas most researchers insist that all clicks stem ultimately from the same ancestral tongue, Sands wonders whether clicks might have evolved several times, with Damin in Australia and Hadzane as examples. “Clicks are part of the normal language mechanism that people have,” Sands notes, and children make clicks as they are learning to speak.

    All agree that nothing can be settled without more work. Knight and Mountain are seeking DNA from more groups, and Sands and Crawhall are scrambling to bag more click languages for the linguists' portfolio. Sands worries that they can't work fast enough; one group has just 10 speakers left. But as the data stream in, Knight remains optimistic. “In the year 2000, we didn't know anything compared to what we know now,” he says.

  16. Speaking in Tongues

    1. Elizabeth Pennisi

    Researchers struggle to find a cohesive classification scheme to organize the world's thousands of languages

    In the 1980s, after puzzling over the pronouns in a bewildering array of languages spoken in the Americas, linguist Joseph Greenberg of Stanford University thought he had made a breakthrough. By focusing on similarities in the meanings of pronouns and other words beginning with the same letters in Eskimo, Aleut, Apache, and other native American tongues, he was able to organize mountains of data and classify all 2000 or so languages into just three groups. The achievement was particularly appealing because those groups roughly corresponded to genetic and archaeological evidence thought to indicate three large migrations into the Americas.

    Conceptual leap.

    August Schleicher (right) pioneered the idea that languages could be arranged in evolutionary trees.


    It was just the kind of work researchers long for. It helped make sense of the world's babel, classifying languages from the Arctic to Tierra del Fuego and from Greenland to Siberia. And it offered insight into human history with a simple scheme. Many researchers studying the peopling of the Americas eagerly embraced the work.

    There was just one problem: Greenberg's work was wrong, says Lyle Campbell, a linguist at the University of Canterbury in Christchurch, New Zealand. “Antiscientific” and “crap,” growls historical linguist Donald Ringe of the University of Pennsylvania in Philadelphia. The pair are among a cadre of historical linguists who argue that Greenberg's way of collecting and analyzing data led to spurious results and that the linguistic similarities he noticed could be due to coincidence rather than shared origin. Campbell and other linguists specializing in native American languages insist that there are upward of 150, rather than three, language families in the Americas.

    Greenberg died in 2001, but deep schisms over how the world's languages are related live on. Greenberg's academic descendants continue to insist that they are able to trace languages further and further back in time, and in doing so piece together extended families and protolanguages ancestral to those families. But they are in a tug of war with others wary of the limits of linguistic clues, who content themselves with maps full of complexity and refuse to peer back further than 10,000 years. At that point, argue researchers such as Ringe and Campbell, languages had already diverged so much that establishing earlier kinships is difficult if not impossible. “We're seen as the wet blankets,” says Robin Barr, linguist-in-residence at American University in Washington, D.C.

    There are a few points of agreement, such as Greenberg's classification of Arabic, Egyptian, and Hebrew into the Afro-Asiatic language family (see map). But still, controversy prevails. Tracing the common ancestors of languages and grouping their descendants into ever larger collections is not for the faint-hearted. Most linguists rely solely on studies of words and grammar to date the split of one language into two, and the results are often anything but clear. And the addition of other types of data, such as genetic clues to migrations, tends to add more complexity than clarity. Genetic clues can reveal events older than those traced by linguistic studies, but it's not clear that genes accurately reflect what has happened to a language, warns Bonny Sands, a linguist at Northern Arizona University in Flagstaff.

    Pater to father

    The notion that even seemingly distant languages are related dates back to the late 1700s, when British jurist Sir William Jones and others noticed similarities among Sanskrit, Latin, and Greek. Jones suggested that all these languages might stem from a common source. A century later, even before biologists were toying with the idea that organisms could be arranged in family trees, German linguist August Schleicher developed trees as a way to portray connections between languages. About the same time, others worked out a systematic way to establish kinship among languages—the so-called comparative method. These scholars and their successors laid the framework for modern historical linguistics.

    To assess the degree of relatedness using the comparative method, investigators look for the “same” word across several languages, seeking systematic patterns of change. To take an example from one mother language, many Latin words that start with p begin with f in English. Thus “father” is parallel to pater and “fish” to pisces. This time-consuming approach requires an in-depth understanding of the languages involved, because researchers must identify and exclude words borrowed from another language and words that evolved in parallel; such parallel evolution often occurs in words that represent sounds, such as cuckoo, and words that come from baby talk, such as papa.
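
    The correspondence-hunting step described above can be sketched in a few lines of code. This is purely illustrative: the word pairs are a toy sample with simplified stems, and real comparative work involves far more than tallying initial sounds (culling borrowings, matching whole sound systems, and so on).

```python
from collections import Counter

# (Latin, English) candidate cognate pairs; stems simplified for illustration
pairs = [
    ("pater", "father"),
    ("pisces", "fish"),
    ("ped", "foot"),
]

# Tally how often a given Latin initial sound lines up with a given
# English initial sound across the candidate pairs.
correspondences = Counter((latin[0], english[0]) for latin, english in pairs)

# A correspondence that recurs systematically (here p -> f) is taken as
# evidence of shared descent rather than coincidence.
print(correspondences.most_common(1))  # [(('p', 'f'), 3)]
```

    The point of the comparative method is exactly this systematicity: one chance resemblance proves nothing, but the same sound mapping recurring across many unrelated words is hard to explain except by common ancestry.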

    Nonetheless, says April McMahon, a linguist at the University of Sheffield, U.K., the comparative method “seems to work.” For example, the method has revealed close connections among the nearly 150 languages of the Indo-European language family, some of them seemingly very distant, such as Hindi, Russian, English, and Iranian (see p. 1323).

    But such work is slow, so Greenberg adopted a more broad-brush approach. He usually depended on a “core” vocabulary of a few hundred words, including numbers and pronouns, which he thought were likely to be basic words preserved as new languages arose; changes in those words should reflect the basic alterations involved in any given language. He compared those same words across many languages, an approach that didn't require culling words or even a very deep understanding of the languages under study. In the 1950s, he sorted 2000 African languages into four groups: Khoisan, which includes almost all click speakers; Bantu or Niger-Congo, a collection of 1436 languages; Afro-Asiatic, which includes Egyptian, Hebrew, Arabic, and others; and Nilo-Saharan, languages in Sudan and Central Africa. Many linguists applauded his effort at the time. “What he did was seen as really exciting because nobody had been able to do that,” says Barr.

    Greenberg eventually concluded that most of the world's 7000 languages can be grouped into about 17 “families.” According to Stanford's Merritt Ruhlen, a longtime colleague of Greenberg, his work has made possible “the definitive classification of the world's languages.”

    But although the simplicity of Greenberg's interpretations is appealing, particularly for nonlinguists, his views in several regions have now come under closer scrutiny, and historical linguists have brought out the heavy guns. Even his acclaimed African study is being questioned, as researchers analyzing the vocabularies and grammar of click languages have concluded that not all belong together (see p. 1319).

    The talking world.

    This map shows one view of the complex distribution of language families, although several boundaries remain in dispute.


    Tracking linguistic ghosts

    Undeterred by his critics, Greenberg moved into even more controversial territory. His ultimate goal was to group languages into ever bigger entities in search of a true mother tongue. Others had already promoted the existence of the so-called Nostratic superfamily, first proposed at the turn of the last century by Danish linguist Holger Pedersen and later by Russian linguists. In their view, Nostratic included the language families of Indo-European, Uralic (spoken in northeastern Europe), North African and Semitic, Dravidian (from Southern India), and Altaic (from Central Asia).

    The ghosts of superfamilies subtly haunt modern vocabularies, say Ruhlen and others. Consider the word five, which originated in words representing the hand. The word for fist was penkwe and later pnkwstis in ancient Indo-European and became peyngo in the Uralic root tongue and p'aynga in Altaic families such as Turkish and Mongolian. The Proto-Indo-European penkweros, finger, became pente in Greek, quinque in Latin, and panca in Sanskrit. From these variants, linguist Alexis Manaster Ramer of Wayne State University in Detroit has come up with the ancient Nostratic equivalent, perhaps spoken 12,000 years ago: payngo for fist.

    Instead of Nostratic, Ruhlen and others support another, very similar superfamily, Eurasiatic, which was proposed by Greenberg 4 years ago and includes languages as disparate as English, Mongolian, Siberian, and Japanese. It encompasses a different subset of language families from Nostratic, although both include Indo-European and Altaic. In this analysis, Greenberg noted quite subtle similarities in the words across families. For example, the words for canines looked alike and often began with a similar sound: the ancestral version of Proto-Indo-European “dog” is thought to be kwon; “wolf” in Proto-Uralic was küjnä, and in the Russian tongue Gilyak, “dog” was qan. Greenberg, along with Ruhlen, also pushed for the recognition of other superfamilies, such as Austric, which would encompass Southeast Asian families.

    But all these analyses continue to draw fire from researchers who say the data simply can't support peering so far back in time. “Languages have been evolving for so long that too much has been lost,” says Ringe. Many of the similarities Greenberg noted, such as similar first letters, are so subtle that they may be circumstantial, says Ringe.

    Going back to the distant past is also difficult because languages change at different rates. Icelandic children can easily read texts written many centuries earlier, but that ability doesn't hold for English students studying Chaucer. For these reasons, many historical linguists generally refuse to set dates for language changes before 5000 B.C.E.; for unwritten languages, they aren't even confident they can go back that far.

    Written in the genes

    Over the past few decades, genetic analysis has increasingly been brought to bear on this debate. Some studies have offered support for large and ancient family ties. Luigi Luca Cavalli-Sforza of Stanford University, for example, has shown that people with the same genetic makeup also tend to share their language. For example, the Bantu speakers whom Greenberg grouped together do indeed form their own distinct genetic group. In the Americas, Cavalli-Sforza identified four genetic groups; the one that branched off earliest from this line (and is therefore the one that is most distinct genetically) contains speakers of Na-Dene languages, whereas most of the rest use what Greenberg calls Amerind languages.

    But as researchers probe the genetics-language connection further, “we've been finding much more noncorrespondence between linguistic trees and biological trees, suggesting a much more important role for language shift in the history of human populations,” says Bernard Comrie, a linguist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Even Cavalli-Sforza himself had found exceptions: Genetically, Ethiopians fit in well with other Africans, but their language is more akin to that of Middle Easterners, for example.

    One reason for these differences is that genes and words don't follow the same timetable. It takes many generations for a new genetic variant introduced by invaders to take hold in a population, but a native language can be quickly replaced by that of the conquerors. “You cannot switch your genes, but you can easily switch your language,” says Nigel Crawhall, a linguist at the University of Cape Town, South Africa. Nor does changing language require an extreme event such as war; trading and intermarrying can have a similar effect, as may have happened in Ethiopia.

    Such a cautious attitude toward piecing together the linguistic past has its own critics, however. It's “like 17th century astronomers who said that the telescope wouldn't get any better,” counters Christopher Ehret, a linguistic historian at the University of California, Los Angeles. “There's a group of people who throw up their hands without trying.” He thinks Greenberg and DNA experts are prescient in piecing together scenarios that reach much deeper into the past.

    Although complete answers remain elusive, the quest to understand linguistic history continues to generate excitement. Geneticists are constantly boosting the diversity of their databases, while linguists race to document disappearing languages. A few researchers are also finding new methods to analyze their data. Ringe, for example, is working with a computer scientist on linguistic models to build plausible scenarios for the evolution of language. McMahon and her colleagues are adapting sophisticated methods used by evolutionary biologists to trace organismal family trees to determine relationships among languages. Ringe expects that these new techniques will reverse a trend of diminishing numbers of historical linguists. After all, says Crawhall, “everyone is curious about how the languages relate to one another.”

  17. Search for the Indo-Europeans

    1. Michael Balter

    Were Kurgan horsemen or Anatolian farmers responsible for creating and spreading the world's most far-flung language family?

    Around 6500 years ago, a group of seminomadic warriors arose on the treeless steppes north of the Black Sea. They herded sheep and goats, and they tamed the wild horse. Their language was rich with words reflecting their pastoral way of life. When one of their warrior-chiefs died, he was buried with great ceremony under a large earth mound called a kurgan. After about 1000 years of restless existence on the barren steppes, the story goes, these nomads went in search of new grazing land, riding out of their homeland between the Dnieper and Volga rivers armed with bows and arrows, spears, and bronze daggers. Over the next 2 millennia, the horsemen swept into eastern and central Europe, Anatolia, and much of western Asia, bringing their culture and colorful language with them. Before long, the hills of Europe and Asia echoed with the gallop of horses' hooves and the strongly enunciated vowels and consonants of a new language, which linguists today call Proto-Indo-European (PIE).


    Kurgan advocate Marija Gimbutas.


    The “Kurgan hypothesis,” as this dramatic account of the spread of the Indo-European language family during the Early Bronze Age is known, was the dominant paradigm among linguists and archaeologists during much of the 20th century. It is most closely associated with the late Marija Gimbutas, an archaeologist at the University of California, Los Angeles, whose visions of prehistory were often filled with romantic pageantry. She argued that the Kurgans overrode existing matriarchal, Mother Goddess-worshipping societies, imposing their warrior religion as well as their patriarchal culture throughout Europe and western Asia. But the theory caught on for much more pragmatic reasons: It seemed to solve the long-standing mystery of the origins of Indo-European, a closely related group of 144 tongues that today are spoken on every continent. The family includes English as well as all of the Germanic, Romance, Slavic, Indian, and Iranian languages.

    In 1973, however, Cambridge University archaeologist Colin Renfrew proposed that the driving force behind the propagation of the Indo-European languages was not the fast gallop of horses' hooves but the slow adoption of farming. Renfrew argued that the gradual expansion of the agricultural way of life, which originated in the Near East some 10,000 years ago, carried the language family into new territories together with the seeds of wheat and barley. Because archaeologists widely agreed that farming had spread from Turkey to Greece and southeast Europe, Renfrew's “farming-dispersal hypothesis” pointed to the Anatolian plateau, which makes up most of modern Turkey, as a better candidate for the original Indo-European homeland (see sidebar p.1324 and Book Review, p. 1298).

    At first, most linguists and many archaeologists reacted with hostility to Renfrew's hypothesis, in part because they thought that it put the initial dispersal of Indo-European languages far too early. But in recent years, an accumulation of new evidence has considerably weakened support for the Kurgan hypothesis. Some archaeologists have challenged the notion that the Kurgans rode horses at all, and others have questioned the original linguistic analyses that put the Indo-European homeland north of the Black Sea. “Confidence in the Kurgan theory is waning,” comments historian Robert Drews of Vanderbilt University in Nashville, Tennessee. “But,” he adds, “the alternatives are not yet very attractive.”

    Indeed, Renfrew's analysis has certainly not swept the field. Although new and highly controversial dating of PIE, based on the techniques of evolutionary biology, supports a very ancient origin for the first appearance of the language family—8000 or more years ago—many linguists continue to insist that such early dates cannot be right. Wherever the first Indo-Europeans came from, they argue, reconstructions of the PIE vocabulary indicate that they could not have been the early farmers of Anatolia. “PIE was the language of a society which was very familiar with wheeled vehicles” and copper metallurgy, says Lawrence Trask, a linguist at the University of Sussex, U.K. “This obliges us to date the split of PIE no earlier than about 6000 years ago”—long after Anatolian farmers had dispersed.

    Spreading the word.

    Colin Renfrew argues that Anatolian farmers were the first Indo-European speakers.


    Horses, wheels, and wool

    Although archaeologists such as Gimbutas and Renfrew have played key roles in promoting the idea of an Indo-European homeland, linguists have often had the last word on the subject. In some cases, the similarities between words in diverse Indo-European languages are so striking that it doesn't take a trained linguist to spot them. The English word brother, for example, translates as bhrater in Sanskrit, brathir in Old Irish, frater in Latin, and phrater in Greek. But the field's most notable accomplishments consist of truly heroic efforts over many decades to recreate PIE, a technique sometimes called linguistic paleontology.
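    The kind of untrained-eye resemblance described above can be put in crude numerical terms with string edit distance. The sketch below (an illustration only, not a method historical linguists actually rely on, since it ignores the systematic sound correspondences that make the comparative method rigorous) scores the “brother” cognates against one another:

    ```python
    def levenshtein(a: str, b: str) -> int:
        # Classic dynamic-programming edit distance: insertions,
        # deletions, and substitutions each cost 1.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    def similarity(a: str, b: str) -> float:
        # 1.0 for identical strings, 0.0 for maximally different ones.
        return 1.0 - levenshtein(a, b) / max(len(a), len(b))

    # The "brother" cognates cited in the text.
    brother = {"English": "brother", "Sanskrit": "bhrater",
               "Old Irish": "brathir", "Latin": "frater", "Greek": "phrater"}

    forms = list(brother.values())
    for x in forms:
        for y in forms:
            if x < y:
                print(f"{x:8s} ~ {y:8s}: {similarity(x, y):.2f}")
    ```

    Every pair scores well above what chance resemblance between short words would produce, which is exactly why these forms struck scholars even before the comparative method was formalized.
    
    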

    The results have caused many linguists to reject the farming-dispersal hypothesis. Take the English word wheel, which has its roots in the PIE word *kʷekʷlos. (The asterisk indicates that this word is a reconstruction and has not actually been found in written inscriptions; the superscript ʷ is a guide to the way that the word was pronounced.) Its equivalent is cakras in Sanskrit, kuklos in Greek, and kukäl in Tocharian, an extinct Indo-European language once spoken in western China.

    “There is no evidence for wheels earlier than about 3500 B.C.,” or 5500 years ago, says Bill Darden, a linguist at the University of Chicago. This means, he contends, that the Indo-European languages, all of which apparently share a common root for this word, could have diverged only after wheels were invented. Darden and other linguists have made similar arguments for the words for yoke, horse, and wool. For example, Elizabeth Barber, an archaeologist and linguist at Occidental College in Los Angeles, California, has traced the history of the word wool, which derives from the Indo-European root word *HwlHn-. Yet when sheep were first domesticated in the Near East, around 9000 years ago, they were hairy rather than woolly. Only after about 6000 years ago did their coats take on the soft, curly pelts that today keep people warm during winter nights. “The farmers who moved from Anatolia to Greece [9000 years ago] did not know about wool, wheels, yokes, or horses,” Darden insists. H. Craig Melchert, a linguist at the University of North Carolina, Chapel Hill, agrees: “I personally find the arguments of Darden and Barber quite irrefutable.”

    But Renfrew finds such literal readings of PIE too much to swallow. For example, he questions the assumption that the earliest roots of words such as wool and wheel necessarily had the same meanings as they do today. The PIE term for wool, Renfrew says, might originally have referred to the “fleece of the sheep” and only later came to refer to a luxurious coat. Likewise, the root for wheel may derive from earlier words meaning “to turn.” And when it comes to the horse, Renfrew and his supporters believe that new evidence pulls the legs right out from under the Kurgan hypothesis.

    Gimbutas and other archaeologists had claimed that the Kurgans domesticated the horse more than 5000 years ago, then quickly trotted off to conquer distant lands. Meanwhile, linguists had long been impressed with how rich PIE seemed to be with words for animals such as sheep, cattle, pigs, dogs, and horses, which were abundant on the steppes north of the Black Sea 5000 years ago. (The reconstructed PIE word for horse is *Heḱwos, which led to aśvas in Sanskrit, equus in Latin, and eoh in Old English.)

    Beginning in the 1960s, excavations at a Kurgan site called Dereivka, located on a tributary of the Dnieper River in modern-day Ukraine, turned up supposed “bit wear” on premolar teeth and horse remains buried with other domestic animals such as sheep and cattle. These were taken as evidence of the earliest known domestication of the horse and dated to about 5500 years ago, right around the time the Kurgans supposedly began to ride forth from their homeland. More recently, archaeologist David Anthony of Hartwick College in Oneonta, New York, concluded from experiments conducted on modern horse teeth that what he interprets as wear marks on the Dereivka horses would have required at least 300 hours of riding with a hard bit. Similar claims have been made for the site of Botai in Kazakhstan, occupied more than 5000 years ago and under excavation since the early 1990s.

    But archaeologists disagree about the strength of much of this evidence. Also beginning in the 1990s, Cambridge University archaeologist Marsha Levine carried out a series of studies of horse bones from Dereivka, Botai, and other sites. Levine concludes that there is no credible evidence for horse domestication, and especially for horse riding, at these sites—nor, indeed, at any archaeological site before about 4000 years ago. When Levine examined the horse bones from Botai, for example, she found that the thoracic vertebrae completely lacked the kinds of bone pathology that modern horses suffer when they are ridden. As for bit wear, Levine counters that only a minority of the premolars from the Dereivka and Botai horses show the marks that Anthony interprets as bit wear, and she argues that these marks could have other causes.

    And even if horses were mounted as long ago as Anthony and others have claimed, Renfrew argues that there is no direct archaeological evidence that “horses were … ridden for military purposes before about 3500 years ago. This is simply too late to coincide with Indo-European origins.” Renfrew also points out that one key assumption long made by linguists—that the horse was not known at all in the Near East—has been overridden by recent evidence of butchered wild horses at the 9000-year-old farming village of Çatalhöyük in south-central Anatolia. This means, he says, that a hypothetical Kurgan homeland “is not a very good alternative” to his Anatolian farming-dispersal hypothesis, although he concedes that this alone “does not mean that the farming hypothesis has to be right.”

    Seeking a homeland.

    The first Indo-Europeans may have been horse lovers from Dereivka or farmers from Çatalhöyük.


    Genes, words, and trees

    Many experts had hoped that the debate between the Kurgan and Anatolian hypotheses might be resolved by studies tracing the genetics of modern Europeans. Indeed, research during the 1970s and 1980s, led by geneticist Luigi Luca Cavalli-Sforza of Stanford University in California and others, gave strong initial support to the notion that a large population migrated out of Anatolia during the early days of farming about 8000 years ago (Science, 7 July 2000, p. 62). But in more recent years, geneticists have found that this picture is much more nuanced than originally thought and that the original hunter-gatherer populations in Europe may also have contributed significantly to the modern gene pool. As a result of these complications, most partisans in the debate, including Renfrew, believe it is premature to try to directly correlate genetic evidence for population movements with the spread of Indo-European languages (Science, 25 April 2003, p. 597).

    Although the contribution of genetics to the debate has so far been disappointing, that has not stopped evolutionary biologists from jumping into the fray. In the 22 July 2003 issue of the Proceedings of the National Academy of Sciences, geneticist Peter Forster of Cambridge University and linguist Alfred Toth of the University of Zürich in Switzerland reported on an attempt to apply the mathematical and computer techniques of molecular phylogeny—which biologists use to reconstruct evolutionary trees of living organisms—to figure out when the Celtic languages of Europe first went their separate ways. In the process, they also came up with a date for the first divergence of the Indo-European family that is too early for the Kurgan theory but consistent with the Renfrew hypothesis.

    Forster and Toth used bilingual Gaulish-Latin inscriptions to date the first split of Celtic from the rest of the Indo-European languages. (Gaulish was the version of Celtic spoken in France.) Their best guess came out at around 5200 years ago, whereas the Kurgan hypothesis puts this split—which occurred on the Atlantic coast, thousands of kilometers to the west of the Kurgan homeland—far later. And when Forster and Toth extrapolated their results backward to the common root of all Indo-European languages, they came up with a date of about 10,100 years ago, plus or minus 1900 years, for the first spread of Indo-European into Europe.

    Those dates roughly match a more systematic attempt to use the methods of molecular phylogeny to test the Kurgan and Anatolian hypotheses, reported in the 27 November 2003 issue of Nature by evolutionary biologists Russell Gray and Quentin Atkinson of the University of Auckland, New Zealand. Their best estimate for the initial split came out at about 8700 years ago, which also coincides very closely with the first spread of farming from Anatolia into Greece.
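    The backward extrapolation these studies perform can be caricatured as a rate calculation: calibrate how much lexical distance accumulates over a known interval, then scale up to the distance at the root. The numbers below are hypothetical placeholders chosen only to make the arithmetic land on the dates in the text; they are not Forster and Toth's actual data, and the real analyses use far more sophisticated phylogenetic machinery:

    ```python
    def calibrated_age(dist_target: float, dist_cal: float, age_cal: float) -> float:
        # Assume lexical distance grows roughly linearly with time, so a
        # split twice as distant as the calibration split is twice as old.
        return age_cal * dist_target / dist_cal

    # Calibration: the Gaulish/Latin split, dated to about 5200 years ago
    # from bilingual inscriptions. The distances are illustrative only.
    root_age = calibrated_age(dist_target=0.505, dist_cal=0.26, age_cal=5200)
    print(round(root_age))  # → 10100, i.e., roughly 10,100 years ago
    ```

    The wide error bars quoted in the text (plus or minus 1900 years) reflect how sensitive such extrapolations are to the assumption that divergence rates stay constant.
    
    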

    However, many linguists remain unconvinced by such analyses, questioning the relevance of evolutionary biology techniques to linguistic problems (Science, 28 November 2003, p. 1490). “There is no reason whatsoever to assume that vocabulary would behave the same way that organisms do,” says Alexander Lehrman, a linguist at the University of Delaware in Newark.

    At the very least, the new evidence is making partisans on both sides of the debate think twice about their assumptions. And although a meeting of the minds is hard to envision in this contentious field, there may be grounds for compromise. Gray and Atkinson identified a rapid divergence of languages around 6500 years ago that gave rise to the Romance, Celtic, and Balto-Slavic language families. Because this date matches the first evidence for Kurgan occupation of the Black Sea steppes, Gray and Atkinson say, both camps could be partly right: The farmers spread PIE initially, but the Kurgans spurred the later burst of languages. “There is no need to set up the Kurgan and farming hypotheses at variance with one another,” says April McMahon, a linguist at the University of Sheffield, U.K. “But sadly, this is something that [people] have a tendency to do.”

  18. Why Anatolia?

    1. Michael Balter

    ÇATALHÖYÜK, TURKEY—While some archaeologists probe Kurgan earth mounds for the origins of Indo-European languages (see main text), others are focused on the even more ancient cultures of Anatolia, in modern-day Turkey. Here, some see a likely candidate for both the seeds of the Indo-European language and the gateway for the spread of agriculture into Europe some 8000 years ago. Recent research at a number of Anatolian sites has shown that this huge plateau was teeming with human populations, rich with art and culture, and home to the grains, cereals, and legumes key to the expansion of farming. “Everybody agrees that farming came to Europe from Anatolia,” says Cambridge University archaeologist Colin Renfrew, chief partisan of the farming-dispersal model of Indo-European origins. “So Anatolia must be the point of departure [for languages too].”

    Anatolia's importance in prehistory was firmly established in the 1960s, when excavations here at Çatalhöyük, near Konya in south-central Turkey, unearthed the largest early farming community ever discovered. This vast settlement of up to 10,000 people is also famous for its spectacular wall paintings (Science, 20 November 1998, p. 1442). Renfrew and other archaeologists note close cultural continuities in architecture, pottery, and figurines between Çatalhöyük and early Greek sites, which they see as an indication that Anatolia was the springboard for the introduction of agriculture into Greece and the rest of Europe.

    Indeed, radiocarbon dating of archaeological sites in western Asia and Europe shows that farming first appeared about 11,000 years ago in the Near East and 10,000 years ago in central Anatolia, and that it had spread to Greece by 8000 years ago. Five hundred years later, farming villages began cropping up in the Balkans and central Europe. The wild ancestors of the seven “founder crops” harvested by the world's first farmers have all been traced to the region of southeastern Turkey and northern Syria (Science, 2 June 2000, p. 1602). The origins of einkorn wheat, for example, were recently located in the Karacadağ Mountains of southeastern Turkey, very close to Neolithic sites—dated as far back as 9600 years ago—where archaeologists have found seeds of both wild and domesticated einkorn.

    Renfrew believes that Çatalhöyük, as the largest Neolithic settlement in central Anatolia, may have been at least one major source of Indo-European-speaking populations. Surveys in the area around the site suggest that the huge village may have been in close communication with smaller settlements nearby, says Ian Hodder, an archaeologist at Stanford University in California and current director of the Çatalhöyük excavations. “So you have all these people having to create a way of speaking with each other,” Hodder says. Nevertheless, he cautions, until the linguistic debates over the origins of Indo-European are resolved, any attempts to correlate Neolithic cultures with the spread of languages may be little more than “clutching at straws.”

  19. From Heofonum to Heavens

    1. Yudhijit Bhattacharjee

    Ancient texts and computer simulations help linguists explore how words and grammar evolve over the time scale of centuries

    If a modern-day priest were to chance upon an 11th century manuscript of The Lord's Prayer in English, he would need the Lord's help to decipher its meaning. Much of the text would be gobbledygook to him, apart from a few words that might have a recognizable ring, such as heofonum (heavens) and yfele (evil). And even after a word-for-word translation, the priest would be left with the puzzling grammatical structure of sentences like “Our daily bread give us today.”

    Although researchers generally think of languages as having evolved slowly over many millennia, language change occurring over time spans of a few centuries has confounded scholars since medieval times. After trying to read a 600-year-old document, the first known printer of English works, William Caxton, lamented in 1490, “And certainly it was written in such a way that it was more like German than English. I could not recover it or make it understandable” (translated from Caxton's Middle English).

    The comparative analysis of such texts is the closest that researchers can get to tracing the evolutionary path of a language. By studying the evolution of words and grammar over the past 1200 years of recorded history, linguists hope to understand the general principles underlying the development of languages. “Since we can assume that language and language change have operated in the same way for the past 50,000 years, modern language change can offer insights into earlier changes that led to the diversification of languages,” says Anthony Kroch, a linguist at the University of Pennsylvania in Philadelphia.

    Since the mid-19th century, that hope has driven scholars to document a variety of grammatical, morphological, and phonological changes in French, English, and other languages. In the past 3 decades, more and more theoretical and historical linguists have turned their attention to analyzing these changes, and sociolinguists have explored the social and historical forces at work. Now researchers in the growing field of computational linguistics are using computer models of speech communities to explore how such changes spread through a population and how language changes emerge in multilingual populations.

    The simulations are infusing precision into the study of a phenomenon once thought to be the exclusive domain of humanistic inquiry. “Computational modeling of language change is in its infancy, but it is already helping us to reason more clearly about the factors underlying the process,” says Ian Roberts, a linguist at the University of Cambridge, U.K.

    Voice of the Vikings

    Linguists view language change as something of a paradox. Because children learn the language of their parents faithfully enough to be able to communicate with them, there seems no reason for language to change at all. But historical texts show that change is common, although the trajectory and rate of change may be unique for any given language. In the 10th century, to consider a classic example, English had an object-verb grammar like that used today in Modern German, requiring sentence constructions such as “Hans must the horse tame.” By 1400 C.E., the English were using the familiar verb-object grammar of “Hans must tame the horse.” French underwent a similar change before the 16th century, whereas German retained its basic grammar.

    To find out why such changes happen, researchers explore the historical circumstances surrounding them. In the past few years, based on a comparative analysis of religious texts from northern and southern England, Kroch and his colleagues at the University of Pennsylvania have suggested that northern English was transformed during the 11th and 12th centuries as Viking conquerors married native Anglo-Saxon women, who spoke Old English. The resulting bilingual households became crucibles for linguistic change. For example, whereas Old English had distinct verb endings to mark differences in person, number, and tense, the speakers of what is now called Early Middle English began using simpler verbs—perhaps because the Scandinavians had difficulties keeping track of all the verb forms—and settled on a simplified system closer to what we use today.

    In the absence of invasions and other external influences, languages can remain stable for long periods. Japanese and Icelandic, for instance, have hardly changed since 800 C.E. But researchers point out that isolation does not guarantee the status quo; grammatical shifts can also be triggered by internal forces such as minor changes in the way a language is spoken.

    French is a case in point. In the 16th century, the language changed from a system in which the verb always had to be in second place (known as a verb-second structure) to one in which the verb (V) could be in any position as long as it came after the subject (S) and before the object (O); Modern French and Modern English both have this SVO structure. For example, “Lors oirent ils venir un escoiz de tonnere” (Then heard they come a clap of thunder) became “Lors ils oirent venir un escoiz de tonnere” (Then they heard come a clap of thunder). Roberts, who documented the transition by comparing a representative text from each century between the 13th and the 17th, believes that the change arose because speakers of Middle French reduced the emphasis on subject pronouns—“they” in this example—to the point where children learning the language barely heard the pronouns. Roberts inferred this decline in phonetic stress from usage changes in the written language. For example, subject pronouns were earlier used with modifiers, such as “I only,” but later they did not carry such modifiers. The result of this reduced emphasis, says Roberts, “was that for sentences beginning with a subject pronoun, the verb sounded like the first word of the sentence to the listener.” That ambiguity dealt a fatal blow to the verb-second rule, paving the way for the emergence of an SVO grammar.

    John the book buys

    But a new grammatical feature cannot emerge overnight. For a variant such as an innovative construction by a single speaker or a novel form of syntax produced by a new adult learner to become part of the language, it must get picked up by other speakers and be transmitted to the next generation. Historical texts show that it can take centuries for a change to sweep through the entire community.

    David Lightfoot, a linguist at Georgetown University in Washington, D.C., says the key to understanding large-scale linguistic transformation lies in the link between the diffusion of novel forms through one generation and large grammatical shifts occurring across generations—changes he calls “catastrophic.” And this link, according to him and many others, is language acquisition. Children may simply carry forward a variant that arose in the preceding generation. But more significantly, says Lightfoot, children may themselves serve as agents of change “by reinterpreting a grammatical rule because of exposure to a variant during their learning experience.” As adults, they may end up using a somewhat different grammatical system from that of their parents. Repeated over generations, this can lead to a dramatic makeover in the language.

    Computational linguists such as Partha Niyogi of the University of Chicago have built computer models to understand the dynamics of such evolution. Their goal is to map out the relationship between learning by the individual and language change in the population, which Niyogi calls the “main plot in the story of language change.”

    In one of the first attempts to unravel that plot's outline, Niyogi and Robert Berwick, a computer scientist at the Massachusetts Institute of Technology, came up with a class of models simulating the transmission of language across generations. They started out by considering a virtual population with two types of adult speakers. The first type uses one set of grammatical rules—say, one that, like English, mandates a verb-object order for all constructions, generating sentences such as “John buys the book,” and “I know that John buys the book.” The rest of the speakers use a different grammar, for example one similar to German, in which the first verb goes in the second position but the second verb goes after the object. The speakers of the second grammar produce some sentences exactly like speakers of the first—“John buys the book”—but they also produce other kinds of sentences such as “I know that John the book buys.” The researchers spelled out a learning algorithm for children in this population, providing each learner with logical steps for acquiring grammatical rules from linguistic encounters with adults.

    Following the linguistic behavior of this virtual community over generations led Niyogi and Berwick to a startling conclusion. They found that contrary to expectation, the population does not inevitably converge on the grammar spoken by the majority, nor on the simpler of the two grammars. Instead, the winning grammar is the one with fewer grammatically ambiguous sentences like “John buys the book,” which, although simple, might be analyzed as belonging to either grammatical type. In other words, if minority speakers consistently produce a smaller proportion of grammatically ambiguous sentences as compared to the majority, the population will over time shift completely to the minority grammar.
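    The dynamic Niyogi and Berwick uncovered can be caricatured in a few lines of Python. This is a minimal sketch under stated assumptions, not their actual model: the ambiguity probabilities and the rule that learners adopt a grammar in proportion to the unambiguous cues they hear are illustrative simplifications.

    ```python
    def next_generation(p: float, amb_a: float, amb_b: float) -> float:
        # p     : fraction of adults speaking grammar A
        # amb_a : probability an A-speaker's sentence is ambiguous,
        #         i.e., parsable under either grammar ("John buys the
        #         book"); amb_b likewise for grammar B.
        # Learners adopt each grammar in proportion to the unambiguous
        # cues for it that they hear from the adult population.
        cue_a = p * (1.0 - amb_a)
        cue_b = (1.0 - p) * (1.0 - amb_b)
        return cue_a / (cue_a + cue_b)

    # Grammar A starts as a 10% minority but produces fewer
    # ambiguous sentences than the majority grammar B.
    p = 0.10
    for _ in range(60):
        p = next_generation(p, amb_a=0.2, amb_b=0.4)
    print(round(p, 3))  # → 1.0: the minority grammar has taken over
    ```

    In this toy dynamic the majority's head start never matters: as long as grammar A's speakers produce a smaller share of ambiguous sentences, its odds grow by a fixed factor each generation, and it eventually displaces grammar B entirely.
    
    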

    Niyogi, who first presented the work at the International Conference on the Evolution of Language at Harvard in April 2002 and is publishing it in a forthcoming book, says the finding makes it possible to imagine how a grammatical variant spoken by a handful of individuals might replace an entrenched grammar. It's conceivable for the variant to pose no threat to the established grammar for many generations, he says, until the proportion of grammatically ambiguous sentences produced by speakers of the variant drops below the corresponding proportion for the dominant grammar. “For instance, sociocultural factors might change the content of conversations among minority, English-type speakers in a way that they stop using single-clause sentences like ‘John buys the book,’” says Niyogi. That would make their speech more complex—but less grammatically ambiguous. Then learners would hear a higher proportion of the multiple-clause, uniquely English constructions in English speech than they would hear uniquely German constructions in German speech. This would make them more likely to infer the English grammatical system from what they heard, even though their overall exposure to German and even uniquely German constructions would be greater. Suddenly, the mainstream German grammar would become unstable and the English grammar would begin to take over.

    “That a little good info should be able to trump a lot of bad [ambiguous] info makes sense,” says Norbert Hornstein, a linguist at the University of Maryland, College Park, who sees the mechanism of change suggested by Niyogi as “a good fit with our understanding of language acquisition.” He says it provides a possible explanation for how small local changes—for instance, a simplification of the verb system by mixed households in 13th century northern England—may have spread through the entire population. Confirming this account of change would require testing computational models with real-world data such as the proportion of specific syntactical forms in historical texts, assuming written language to be a faithful impression of speech. Niyogi admits that the task could take years.

    In a broader sense, however, researchers have already validated the computational approach by matching the outlines of models to real-world situations. For example, University of Cambridge linguist Ted Briscoe modeled the birth of a creole, a linguistic patois that arises from prolonged contact between two or more groups. He specifically considered Hawaiian English, which developed between 1860 and 1930 through contact between Europeans, native Hawaiians, and laborers shipped in from China, Portugal, and other countries. Briscoe's simulation started out with a small but diverse group of speakers and factored in the periodic influx of adult immigrants. He found that a population with the right mix of children and new adult learners converged on an SVO grammar after two generations. That matches empirical studies showing that many features of Hawaiian Creole, including an SVO word order, did not stabilize until the second generation of learners.

    Lasting legacy.

    Viking marauders, like those depicted in this English sculpture, left their mark on the English language


    Salikoko Mufwene, a sociolinguist at the University of Chicago, says that a detailed picture of mechanisms of language change could emerge if computational researchers succeed in modeling very specific contexts. For instance, he says, modeling spoken exchanges on a homestead of eight Europeans and two African slaves could help illuminate the linguistic evolution of the larger community. “The two Africans in this example are likely to be so well immersed that after a few months they would be speaking a second language variety of the European language. Say one of the Africans is a woman and bears a child with one of the white colonists. The child is likely to speak like the father because the father's language happens to be dominant at the homestead. Growing up, this child will serve as a model for children of new slaves,” explains Mufwene. “Nonnative speakers will exert only a marginal influence on the emergent language of the community,” in this case the native European variety.

    But if the population increases significantly through a large influx of new slaves, he says, the dynamics of interaction change, and more adult nonnative speakers of the European language serve as models. Children now have a greater likelihood of acquiring some of the features spoken by adult nonnatives and transmitting them to future learners; over time, a new variety of the European language will emerge.

    Detailed modeling along these lines, Mufwene says, could unveil the significance of factors that researchers may have missed, such as the pattern of population growth and the pace of demographic shifts. “Even without real-world number crunching,” he says, “the exercise would suggest what questions we should be asking and what kinds of evidence we should be looking for.”
