News this Week

Science  26 Mar 1999:
Vol. 283, Issue 5410, pp. 1986


    DOE Lab Exchanges Targeted in Wake of Espionage Claims

    1. David Malakoff*
    1. With reporting by Richard Stone.

    The political sparks from allegations that China has stolen U.S. nuclear secrets now threaten to singe international scientific exchanges. Last week Republican lawmakers called for a moratorium on exchange programs that bring thousands of foreign researchers from “sensitive” nations to Department of Energy (DOE) laboratories. DOE officials say such a policy would disrupt efforts to improve nuclear safety in Russia and do little for security. But some researchers fear the flap could smother international cooperation and undermine the labs' efforts to recruit the best foreign-born scientists.

    The jousting over DOE's exchange program is part of a wide-ranging and often highly partisan congressional inquiry into China's efforts to acquire U.S. technology. Last June, a special nine-member House of Representatives panel led by Christopher Cox (R-CA) began investigating charges that the Clinton Administration allowed U.S. companies to sell advanced computer and satellite technologies to Chinese firms in exchange for illegal campaign contributions from Chinese sources. In a still-secret report, the panel also focused on allegations that, in the late 1980s, a Taiwanese-American computer scientist at the Los Alamos National Laboratory in New Mexico helped China obtain information on a new, small nuclear warhead.

    On 8 March, days after The New York Times highlighted the allegations, Los Alamos officials fired Wen Ho Lee, a longtime employee, for failing to cooperate with investigators. Lee, whose attendance at two scientific meetings in China in the late 1980s has apparently raised questions about his possible involvement in espionage, has not been charged with a crime. But his ouster drew renewed attention to security at DOE's 20 labs, especially the three heavily involved in weapons research: Los Alamos, Sandia National Laboratory in Albuquerque, New Mexico, and Lawrence Livermore National Laboratory in California.

    On 15 March, charging that “our labs are not as secure as they should be,” Senator Richard Shelby (R-AL), chair of the Senate Select Committee on Intelligence, asked DOE to suspend parts of an exchange program involving more than 20,000 foreign scientists annually. In particular, he called for a time-out on visits by researchers from 25 nations that DOE classifies as “sensitive,” including China, India, Pakistan, and all 12 former Soviet states. More than 4000 scientists from the sensitive nations now visit DOE labs each year—more than twice the annual influx during the 1980s—of whom almost half spend time at the three weapons labs. Although most of the visitors are barred from entering classified areas, some participate in projects with military applications, including efforts to build and monitor safer nuclear weapons. Although Shelby presented no evidence that any visitor has stolen classified information, he suggested shutting down the program for up to 2 years.

    Shelby is not the first lawmaker to fault the visitors' program. Over the last decade, both Democrats and Republicans have taken the agency to task for security lapses detailed in a series of reports. In 1988 the General Accounting Office (GAO), Congress's investigative arm, concluded that DOE was ignoring many of its own security rules, including requirements for background checks of some foreign visitors. Last year GAO found that “essentially the same problems” still exist, although it could not confirm that the lapses had led to the loss of any sensitive information. Still, Shelby said the reports justified a moratorium.

    Growing interest. Foreign scientists from “sensitive” countries make up a small but rising share of visitors to U.S. weapons labs.


    DOE officials defend the exchanges, however, and say critics are barking up the wrong tree. “The visitors program was not the problem here,” Energy Secretary Bill Richardson protested after a 16 March hearing. “The cases we're examining involve American citizens.” The next day, after vowing to protect the program, Richardson also warned congressional critics not to “get hysterical and overreach” in demanding new security regulations.

    Amid protestations, however, DOE has also responded to its critics as part of a feverish week of activities. First, Richardson announced a suite of new security measures, including tighter controls on lab e-mail and greater scrutiny of visitors. Then, on 18 March, President Bill Clinton appointed former Republican Senator Warren Rudman to lead a review of lab security and recommend improvements. The next day, Richardson quietly shelved a planned review of the exchange program by former CIA head John Deutch because, according to one source, “there were getting to be too many reviews.” Sidney Drell, deputy director of the Stanford Linear Accelerator Center (SLAC) in Palo Alto, California, and a member of the Rudman panel, says that “we need a good analysis of what helps security that recognizes the importance of the exchange of information.”

    Laboratory officials hope the moves will deflate Shelby's moratorium proposal. “It's a nutty idea,” says John Shaner, head of Los Alamos's Center for International Security Affairs. Since 1994 the United States, Japan, and Europe have spent more than $350 million to keep former Soviet weapons scientists working on civilian projects, and an even greater sum on decommissioning Russian weapons and upgrading stockpile monitoring systems. An integral part of these programs has been scientific exchanges between the Russian and U.S. weapons labs.

    Although Shaner says the exchanges—already swaddled in security regulations that require extensive advance notice and background checks—could “live with” new restrictions, an outright moratorium would weaken the trust between scientists. “Stopping the progress in collaboration between our countries will be a great mistake,” warns theoretical physicist Boris Vodolaga, deputy director for international collaboration and conversion at the All-Russia Scientific Research Institute for Theoretical Physics, an elite nuclear weapons design center in Snezhinsk, Russia.

    Chinese diplomats are also concerned about any moratorium. New restrictions shouldn't “go too far so that normal scientific exchanges are affected,” warned He Yafei, a minister-counselor at the Chinese Embassy in Washington, at an 18 March press conference. He said some Chinese officials have questioned continued involvement in a nascent U.S.-China nonproliferation program modeled on the Russian exchange.

    Such pullouts could cause the scientific community at DOE labs to become isolated and “wither,” Los Alamos director John Browne told a congressional committee last October. “It is vital that the lab interacts with the best scientists in the world,” he says. Whether it remains able to do so, however, is now up to Congress.


    Data in Key Papers Cannot Be Reproduced

    1. Michael Balter

    New findings, published last week, appear to confirm suspicions that several key papers in a hot area of plant development were fatally compromised by scientific fraud. The results, published in the March issue of Plant Journal, stem from an investigation at the Max Planck Institute for Plant Breeding Research in Cologne, Germany, which concluded last year that a laboratory technician falsified experiments forming the basis of 10 publications going back to 1992. The technician, Inge Czaja, and the leader of the group in which she worked, Richard Walden, resigned in early 1998 in the wake of the scandal, although Walden has never been accused of participating in the fraud.

    In the Plant Journal article, a team of researchers at the Cologne institute, along with colleagues from other European labs, report on their attempts to repeat key experiments in eight papers published in Science, EMBO Journal, the Proceedings of the National Academy of Sciences (PNAS), Trends in Plant Science, and Plant Journal. The authors could not reproduce the most central findings. Two other papers from the institute, which had originally appeared in Nature and PNAS in 1997, were retracted last year by most of their authors after their findings also could not be reproduced.

    “I can no longer believe any parts of the data in any parts of the papers,” says plant biologist Alan Jones of the University of North Carolina, Chapel Hill, who adds that the new findings will have “a negative effect on the field,” because “major conclusions were drawn” from the papers. The lead author of the Plant Journal report, plant researcher Jeff Schell—who is head of the department in which the Walden group worked and a co-author on the disputed papers—agrees that all the major findings were “subject to falsification.” Nevertheless, Jones, Schell, and other researchers stress that the basic techniques used in the research—which were pioneered by Walden and other colleagues—remain valid and are being enthusiastically used by other researchers. “This technology has been very influential,” says plant molecular geneticist George Coupland of the John Innes Centre in Norwich, United Kingdom.

    The affair dates from the early 1990s, when Walden and his co-workers pioneered a new way to study the actions of plant genes. The technique, called activation T-DNA tagging, creates mutations by inserting DNA from the soil bacterium Agrobacterium tumefaciens, which induces plant tumors, into the genome of the plant under study. They found that genes flanking this inserted foreign DNA were “overexpressed”; that is, they produced much higher levels of proteins than normal, allowing those genes and their protein products to be studied much more easily.

    With this method, Walden and his co-workers began trying to decipher the poorly understood mechanisms of action of two plant hormones—auxin and cytokinin—that control plant cell division and growth. To do this, the team produced numerous mutants of tobacco plants that they believed were capable of growing independently of these two hormones. Using these mutants, the team isolated a number of genes, proteins, and other factors that appeared to stimulate plant growth “downstream” of the hormones—and thus were implicated in the hormones' mechanism of action.

    It now appears, however, that these mutants were not capable of independent growth after all. The investigation carried out at the institute concluded that Czaja added plant growth factors to culture media used in the experiments and manipulated the experiments to make it appear that cultured plant cells were capable of auxin- and cytokinin-independent cell division. (Czaja, who was also a co-author on the papers, declined to comment when contacted by Science.) Serious suspicions had been raised by early 1998, when researchers at the institute were unable to repeat results stemming from the technician's work. Walden and his co-workers began investigating and soon concluded that at least some of the results had been faked.

    In March 1998, Walden informally let other plant researchers know that there were potential problems with the work, and the following month he, Schell, and another co-worker published an initial warning about the data in Trends in Plant Science. Nevertheless, under strict new rules on scientific misconduct adopted by the Max Planck Society in November 1997, institute officials sought, and received, Walden's resignation. “There were ample signs that [Walden] did not exercise proper responsibility for his group,” says Heinz Saedler, a co-director of the Cologne institute. (Walden, who now works at a research institute in the United Kingdom, told Science he preferred not to comment on the affair.)

    Despite the dramatic findings in this month's Plant Journal report, Schell says the group has no immediate plans to publish retractions of the eight papers in the journals in which they originally appeared. “This article is about the only thing we were planning to do. The main thing is to get our science going again.” On the other hand, Schell adds, if the journals themselves asked for retractions, “I would consider it very seriously.” But some editors of the journals involved say they believe the co-authors should submit letters stating that the results could not be reproduced. John Tooze, co-executive editor of EMBO Journal, says that although the journal has no hard-and-fast policy about retractions, it would be “common sense” for the authors to contact the journals involved. “A statement in each of the journals from the authors would be an appropriate thing to do,” he says. And Floyd Bloom, editor-in-chief of Science—where three of the eight papers appeared—says that “we would have expected Dr. Schell or his institution to contact us when the results that had been published in Science were conclusively identified as suspect. We will be discussing the possible need for retractions of the papers that Dr. Schell and his collaborators published in Science with him, and will act accordingly.”

    Jones says that, in retrospect, flaws in some of these papers might have been spotted with closer review. For example, in the Plant Journal study the researchers used a second assay technique—incorporation of the DNA building block thymidine into plant cells—in addition to a cell-counting method used in the original work to determine whether cell division had occurred. “In hindsight, why wasn't the thymidine incorporation done originally; why didn't the reviewers call for that?” Jones asks. On the other hand, he says, “hindsight isn't fair. … When the papers came out I was extremely enthusiastic.”


    Fossil Offers a Glimpse Into Mammals' Past

    1. Carl Zimmer*
    1. Carl Zimmer is the author of At the Water's Edge.

    Last year Ji Qiang made paleontological history when he reported that he had found fossils of feathered dinosaurs in the Liaoning Formation, about 400 kilometers northeast of Beijing. Now Ji, a paleontologist from the National Geological Museum of China, has done it again: He has unearthed the world's oldest complete mammal fossil, dating back at least 120 million years. And he found it in the same fossil-laden hills that surrendered the feathered dinosaurs (Science, 26 June 1998, p. 2051).

    Most mammal fossils older than 65 million years are nothing but teeth and scattered bones, but this one is an exception. “When I saw it, I freaked out—it's an incredibly complete fossil,” says mammalogist John Wible of the Carnegie Museum of Natural History in Pittsburgh. In this week's issue of Nature, Ji and his colleagues conclude that the fossil is a close relative of the common ancestor of all mammals alive today, from humans to opossums to the platypus. “This thing gives us the closest look at what the last common ancestor of modern mammals was like,” says Tim Rowe, a paleontologist at the University of Texas, Austin. If Rowe is right, that ancient creature was truly bizarre: a rat-sized chimera that walked on mammalian front legs and splayed reptilian hindlegs.

    Paleontologists had already found the fossils of many forerunners of today's mammals, allowing them to trace our ancestors' first important steps toward the modern mammalian body plan. By about 200 million years ago, mammals had already evolved from bulky, cold-blooded creatures only a step or two removed from reptiles into small animals just beginning to acquire the three-boned mammalian middle ear, and probably fur and milk as well. Yet they still retained the old reptilian style of walking, with legs spread out to the sides. Paleontologists suspect that not long after this time, one lineage branched off and became the monotremes (represented today by the platypus and echidna), which lay eggs and walk with a sprawling gait. Millions of years later a lineage called therians, the ancestors of all other living mammals, both marsupials and placentals, emerged. Therians give birth to live young, and, thanks to a series of changes to their legs, shoulders, and hips, they walk with their limbs under their bodies rather than sprawling.

    Beyond these basic outlines, though, the history of Mesozoic mammals is sunk in obscurity. Researchers aren't sure when the living branches got their start, or just where on the mammalian family tree many of the early species belong. Much of the trouble stems from the fact that Mesozoic mammal fossils are so scrappy that the animals are mainly known only from their teeth. And teeth alone can be deceptive: Mammals on distant branches sometimes evolved teeth that ended up looking very similar.

    One particularly enigmatic group is the triconodonts, known from little more than teeth ranging from 150 million to 80 million years old. With so little material to go by, some paleontologists argued that triconodonts were very primitive protomammals, while others thought that their closest relatives were therians. Now it seems that they are right in the middle.

    The teeth of the new find, which Ji and his colleagues named Jeholodens jenkinsi (Jeholodens, or “tooth of Jehol,” refers to an ancient name for the region, and jenkinsi honors Harvard mammal paleontologist Farish Jenkins), identify it as a triconodont. The rest of the fossil offers a surreal mix of anatomy. Its rear legs are designed for the old reptilian stance, yet its shoulders and front legs are designed to be as mobile as any therian's. “You have the elbows pointing back, whereas you have the knees pointing to the side,” says Zhexi Luo, a paleontologist at the Carnegie Museum of Natural History and one of Ji's co-authors. “Were it not for the fact that the whole thing was articulated, we wouldn't have dared come out with such an apparent contradiction.”

    Using this anatomy, the paleontologists fit Jeholodens onto the mammal family tree, finding that it branched away from an ancestral mammal just before the lineages of living mammals originated. In their report, the researchers point out that this relationship offers two tantalizing choices for how the mammalian body evolved. One possibility is that the modern mammalian shoulders and front legs evolved twice: They appeared first in the triconodonts, after that group branched away from the lineage leading to modern mammals, including monotremes, which retained the more reptilian stance. Much later, mobile forelimbs then evolved independently in the first therians.

    But given the evidence, says Luo, it's equally possible that Jeholodens represents the first step to modern mammals. In this scenario, modern shoulders and arms evolved only once, in the common ancestor of Jeholodens, monotremes, and therians. But when monotremes evolved, they reverted back to the more primitive anatomy as they adapted to their own peculiar ecological niches. Meanwhile, the therian lineage held onto the flexible front legs and then added on advanced hind ones.

    It might seem peculiar for one pair of limbs to change so much earlier than the other. But Rowe points out that mammal embryos develop their front limbs first, and the back ones catch up later. That precedence is also reflected in the evolution of other vertebrates—fish evolved their front pair of fins before their rear ones. Because evolution changes body shape by building on an existing developmental program, evolutionary patterns often echo those in ontogeny. “It arises first in development, it arose first in phylogeny. So this case could just be carrying on the trend,” says Rowe.

    Paleontologists such as Rowe aren't ready to choose between the two scenarios, though. The only way to decide between the two versions of mammalian history will be to find good fossils of primitive monotremes and therians. And if the recent past is any indication, the best place to look is back in the Liaoning Formation. “Virtually everything that's turned up there has brought some Earth-shattering insight,” says Luo. “My forecast is that this site will rival Olduvai Gorge.”


    Chinese Center Sues Over Study Coverage

    1. Dan Zhang,
    2. Lei Xiong*
    1. Zhang Dan and Xiong Lei write for China Features in Beijing.

    BEIJING—The workshop was meant to train volunteers to prick the fingers of thousands of elderly Chinese as part of an international study of human longevity. But as the first drops of blood appeared, Tong Zeng saw red. Initially worried about the welfare of the elderly subjects, he soon wondered whether the participants would be properly informed and if the genetic component of the study might be used for commercial purposes that would not benefit China. Working at the China Research Center on Aging (CRCA), the organization conducting the project, Tong helped launch a media campaign that led the government to temporarily halt the project last spring. Although the furor has ebbed, the genetic fruits of the research—more than 4000 blood samples already collected—have yet to be harvested. Instead, they sit locked inside a well-guarded safe, with domestic scientists waiting for the necessary resources to analyze them.

    The campaign was not the first time Chinese newspapers and magazines had questioned the reasons behind genetic research involving foreign scientists. And it came as the government was preparing rules to restrict the export of genetic material (Science, 18 September 1998, p. 1779). But this time the targets of the media assaults did not remain silent. Last fall, the Beijing-based CRCA sued two local newspapers and one weekly magazine for libel, claiming that its reputation had been damaged by what it said were false and misleading articles. The case, which is pending before local courts in Guangzhou, Shenzhen, and Nanchang, asks for $360,000 to cover the center's legal expenses and the cost of any delays in the research. Xiao Zhenyu, deputy director of CRCA, says he hopes that the media also “will apologize openly” for their conduct.

    “The accusations are ridiculous and fabricated,” says one of the principal investigators, demographer Zeng Yi, a professor at Beijing University who also is affiliated with the Center for Demographic Studies at Duke University and the Max Planck Institute for Demographic Research in Rostock, Germany. Although Duke has a $335,000 grant from the U.S. National Institute on Aging to support the project, and Max Planck has contributed close to $50,000, Zeng describes the study “as a Chinese program with modest international financial aid.” The results, he says, “will greatly help China's efforts to improve the life quality of its elderly citizens.” But the publications maintain their reports were accurate and that project scientists initially misled subjects about the genetic component of the research project to avoid controversy.

    Zeng proposed the project in late 1997 to colleagues at Beijing University's Institute of Population Research (IPR), which he used to chair. He says the aim is to survey the lifestyles and environmental conditions of 10,000 senior citizens, aged 80 and older, and learn why some people survive to an advanced age in good health. Both the surveys and the genetic analysis are similar to those used in a study in Denmark led by James Vaupel, who directs the Rostock institute and is a senior scientist at Duke.

    However, IPR is too small to handle such an extensive outreach project alone. So it turned to CRCA, which has ties to grassroots aging organizations across the country. In March 1998 it convened a training workshop, including a demonstration of how to extract finger-tip blood samples. That's when Tong decided that something was wrong.

    “My first response was that the process of collecting samples might be horrifying to old and often frail persons,” says Tong, who observed that it took more than a minute—and about 11 drops of blood—to fill all five spots on the filter paper as required. “But later I became suspicious of the real motive behind the blood sampling. I wondered if the elderly subjects would be informed about the real use of the samples.”

    Tong says his doubts were fueled by the fact that the survey was drawn up outside China and then translated into Chinese. He adds that the permission letter given to subjects explains that their blood is being collected as part of a “health checkup” and does not mention any genetic analysis.

    Zeng and others dispute Tong's contentions, saying that subjects were told from the start about the genetic component of the study and that the survey was modified to fit Chinese culture. Zeng also emphasizes that hereditary traits are only one of many factors being examined. “For healthy longevity, hereditary genes are thought to make up about 25% of the outcome, while family, social, and environmental factors make up the rest,” he says.

    While surveyors were sent out to some 880 counties, Tong began to express his views publicly. In early April, an alarmed Ministry of Civil Affairs halted the project to make sure it had been reviewed by the proper authorities. Two months later, the ministry gave the project a green light. However, it stipulated that none of the samples could be shipped out of the country, that subjects must be fully informed, and that Chinese and foreign participants should share credit for any published research and commercial products, including patents and licenses, stemming from the study.

    That decision did not stop the critics, who featured Tong's views in two articles disseminated widely last summer. One article, written by free-lancer Guan Mingqiang, characterized project scientists as “traitors” who were “selling the interests of the country.” With pressure building, the CRCA sued three of the publications that carried the articles—the Information Daily in Nanchang, Jiangxi Province, and the Guangzhou Evening Newspaper and the Panorama Weekly in Shenzhen, both in Guangdong Province. Last fall Tong also lost his job. He says he was fired for expressing his doubts about the study, but Xiao says it was for “not attending to his duties.”

    In late October China Daily, the national English-language newspaper in Beijing, published a letter from Vaupel explaining that “no blood or DNA derived from the blood will be exported to any other country” and adding that “there never was any agreement to do so.” Both sides agree, however, that Beijing University doesn't have the resources to do the genetic analyses of the blood samples, some 4200 of which are locked in a well-guarded safe at Beijing University. University officials are seeking additional support from China's Natural Science Foundation, but scientists have not sought state permission for international help in analyzing the blood samples. “Currently it's too sensitive a topic,” says Zeng. As for the suit, none of the cases has reached a judge, although two courts held hearings last fall to gather evidence.

    Outside scientists believe that the suit raises a larger issue, namely, China's right to equal status in any international collaboration. “We don't have to find out the original motives of the project organizers,” says Yang Huanming, director of the Human Genome Center of the Institute of Genetics at the Chinese Academy of Sciences, who helped draft the recent regulations on exportation of human genetic materials. What's critical, he says, is that “the project must be carried out on the basis of mutual benefit and equality.”


    MIT Issues Mea Culpa on Sex Bias

    1. Constance Holden

    The Massachusetts Institute of Technology (MIT) is winning widespread praise for publicly admitting that it has sinned—if only inadvertently—against women scientists. A report from an MIT faculty committee posted on the university's Web site this week concludes that MIT's School of Science has provided a better work environment for male faculty members than for women. Officials say they have taken steps to rectify inequalities among the School of Science faculty, and the university administration is considering how to generalize its new insights campuswide.

    In the summer of 1994, molecular biologist Nancy Hopkins and two other tenured women science faculty members polled their colleagues (the faculty had 15 tenured women and 194 tenured men) and found what they suspected was true: Compared to their male peers, the women were getting less money, office space, and access to research resources and positions carrying greater responsibility. They took their grievances to science dean Robert Birgeneau, who promptly set up a nine-faculty-member committee to explore the issues further.

    The committee went on to document numerous instances of gender bias in a series of internal reports withheld from the public. A summary of its final report, completed 2 years ago, was put online this week as an “educational” process for the whole university, says Birgeneau. Cleansed of telling detail, the report offers only vague observations and conclusions. For example, it states that while junior women faculty feel “well supported” in their departments, “exclusion and invisibility proved to be the common experience of most tenured women faculty.” Discrimination in this “post-Civil Rights era” doesn't take obvious forms, the report notes, but “consists of a pattern of powerful but unrecognized assumptions and attitudes” that have concrete penalties such as lower salaries for women as well as “subtle differences in … treatment.” According to Hopkins, “it took a lot of work to put together a case that you couldn't deny.”

    University officials have swiftly endorsed the report. In an accompanying statement, MIT President Charles M. Vest said, “I have always believed that contemporary gender discrimination within universities is part reality and part perception … but I now understand that reality is by far the greater part of the balance.” Birgeneau, whom the committee praised for his support, told Science that all the inequities related to matters such as salaries and lab space have been rectified in the past few years. In addition, he says, school officials are putting more energy into recruiting women science faculty, who have edged up from 22 of 274 positions in 1994 to 31 of 265 this year.

    Birgeneau says he hopes other schools will learn from the MIT experience. Hopkins is dubious. “This problem is the same at all schools that are elite,” she contends. But “these other universities … are just in denial.”

    MIT still has plenty of work to do, Birgeneau says. For example, he says, there are still no women heading departments or labs in the School of Science. In addition, he says, MIT needs to “figure out how to generalize this from women to underrepresented minorities, where we have made no progress whatsoever.”


    UN to End Children's Vaccine Initiative

    1. Helen Gavaghan*
    1. Helen Gavaghan is a writer in Hebden Bridge, U.K.

    The Children's Vaccine Initiative (CVI)—an alliance of United Nations agencies, private foundations, and industry set up in 1990 to improve vaccination programs for the poorest children in the world—is being disbanded after eight troubled years. No announcement about its future has yet been made, but Science has learned that it will be replaced later this year with a new structure for promoting cooperation between public and private sector groups in the international vaccine community. The details have not yet been worked out. Roy Widdus, who heads the CVI secretariat in Geneva, told Science: “I can confirm that the CVI is to be dismantled.”

    The vaccine industry will be sad to see the demise of the CVI, because it gave companies a strong voice with the UN agencies in policy and planning. But others seem to have few regrets. The alliance, observers say, was often hamstrung by turf battles between agencies such as the World Health Organization (WHO) and the UN Children's Fund (UNICEF). Epidemiologist D. A. Henderson of Johns Hopkins University in Baltimore, who headed efforts to eradicate smallpox, says, “I have been very disappointed to see infighting between WHO and UNICEF.”

    CVI is supported by a grant of $2.5 million per year, principally from WHO, UNICEF, and the World Bank. It was established in 1990 with the aim of reducing the number of children dying from preventable infectious diseases. Its remit was to set priorities for global vaccine development and delivery, promote collaboration between agencies, and find new sources of money.

    Despite the high hopes for the initiative, it failed to raise significant amounts of new money or to coordinate the vaccine community fully, says Barry Bloom, dean of Harvard School of Public Health in Boston. Nevertheless, the CVI has had some successes, says Robert Breiman, head of the National Vaccine Program Office of the U.S. Centers for Disease Control and Prevention in Atlanta. “The areas where CVI has been most effective, for example, bringing industry to the table and taking a strategic view on the introduction of new vaccines, are not [easily] quantifiable.”

    For the past year, the global vaccine community has been discussing how to improve its record of immunizing the world's poorest children. Finally, at a meeting last week in Bellagio, Italy, senior officials from industry and the UN agencies recommended that each agency strengthen its own internal efforts to collaborate and that the CVI should become a scaled-down operation with a coordinating role but no responsibility for policy, fund-raising, or setting priorities.

    Industry is hoping CVI will be replaced by an independent body in which it would have equal status with the agencies. But WHO is not keen on this idea, says Jacques-François Martin, who headed the biologics committee of the International Federation of Pharmaceutical Manufacturers' Associations for 4 years: “The CVI brought industry back to the table. [Now] we feel very frustrated and excluded from the global process at a critical time.” Bjorn Melgaard, director of the department for vaccines and other biologicals at WHO, says WHO—under its new head, Gro Harlem Brundtland, appointed last year—has every intention of establishing an equal partnership with the private sector. An announcement is expected in September or October.


    Gamma Beams From a Collapsing Star

    1. Robert Irion*
    1. Robert Irion is a science writer in Santa Cruz, CA.

    ATLANTA—Astrophysicists see a spark of consensus emerging on the origins of mysterious gamma ray bursts, the most powerful explosions in the cosmos today. The longest lived blasts, lasting 10 seconds or more, may arise when new black holes consume doomed stars far more massive than the sun and spit out intense beams of energy, according to work presented here this week at a meeting of the American Physical Society. But other bursts, lasting less than a second, remain unexplained.

    If we floated above Earth's atmosphere with eyes that could spot gamma rays, we would see flares as bright as Venus pop off at least once per day across distances of billions of light-years. The most recent detection, on 23 January, pointed to a burst so distant that its brilliance as seen from Earth implied an explosive release equivalent to converting a mass greater than that of our sun into pure energy.

    However, studies in this week's Science and next week's Nature suggest that the burst's energy could have been much lower (see News story, p. 2003). It may have appeared deceptively bright because the object targeted us with a narrow searchlight blast of gamma rays. That jibes perfectly with a scenario championed by astrophysicist Stan Woosley of the University of California, Santa Cruz. His “collapsar” model, devised with graduate student Andrew MacFadyen, proposes an exotic chain of events that may churn out gamma ray beams while generating an outsized supernova explosion.

    A massive star explodes as a supernova when it exhausts its nuclear fuel and collapses, and astrophysicists agree that the collapse of the most massive stars spawns black holes. The hole swallows gas from the slowly moving poles of the star. But if the rest of the star is spinning quickly enough, it careens in a disk around the black hole at close to the speed of light. Then, according to Woosley and MacFadyen's calculations, the hole gulps the disk within 10 to 20 ferocious seconds. The inner part of the disk heats to 20 billion degrees and shoots stupendously energetic jets of particles out of narrow channels at the star's poles. Twisted magnetic field lines may help the jets drill into space.

    The jets probably collide with clumps of gas billions of kilometers from the star to create gamma rays. Particles within the jets may clash violently against one another to unleash gamma rays as well, Woosley notes. Astronomers would see only about one of every 100 such events in the universe—the ones that happen to point their bright beams at Earth.

    This scheme builds on the “hypernova” hypothesis advanced a few years ago by theorist Bohdan Paczyński of Princeton University. “We think the collapsar is the engine that drives the hypernova,” Woosley says, because a shock wave from the collapsar would obliterate the rest of the star in a titanic supernova. That concussion would stoke the visible “afterglow” that telescopes see at the burst site. A bizarre supernova last year, called 1998bw, coincided with a relatively nearby gamma ray burst, supporting the idea, Woosley notes (Science, 19 June 1998, p. 1836).

    Another favored model for gamma ray bursts, merging neutron stars, may explain the blasts that shut off in less than a second. However, such collisions probably aren't energetic enough to account for events like that of 23 January, Woosley believes.

    Astrophysicist Gerald Fishman of NASA's Marshall Space Flight Center in Huntsville, Alabama, says Woosley's model is the most credible yet: “There are no showstoppers. People haven't found any fatal flaws.”


    Agencies Launch Effort to Improve U.S. Schools

    1. Jeffrey Mervis

    The U.S. government has launched a multiyear research initiative to improve early math, science, and reading instruction and to expand the use of technology in schools. In addition to improving student achievement, the new Interagency Education Research Initiative (IERI) hopes to link researchers who use different approaches to understanding how children learn and to shorten the delay in implementing the latest research findings.

    Announced last month, IERI is a unique collaboration of the National Science Foundation (NSF), the National Institutes of Health (NIH), and the Department of Education (ED). Although each agency already funds research on improving kindergarten to grade 12 education, they typically draw on separate pools of scientists from different disciplines. “This initiative brings together educators and scientists who, for many years, have been on parallel tracks,” says Yale University pediatrician Sally Shaywitz, who co-directs an NIH-funded Learning Disability Research Center that combines classroom work and brain imaging to study and treat learning disabilities in young children. “The idea is to show that rigorous scientific principles can be applied to education research just as they are applied to cancer research.”

    Those lofty goals are backed by an impressive budget: $30 million this year, $50 million requested in 2000, and $75 million annually in 2001 and beyond. (The current request is divided between NSF and ED, while NIH made a belated bid for $25 million that didn't survive the White House budget process.) “This is a huge chunk of money for education research,” says Alan Kraut, executive director of the American Psychological Society. “If they can put in $50 million or more every year, then the field is really going to take off.”

    IERI is the latest attempt to bolster U.S. education and reverse the poor performance of U.S. students in international comparisons. It's the direct result of a 1997 report by the President's Committee of Advisors on Science and Technology (PCAST), which recommended spending $1.5 billion over 5 years on technology to bolster learning. Although Congress shot down the Administration's initial request for $75 million, NSF cobbled together $22 million and ED put up $8 million from current-year funding. A workshop last fall helped the agencies come up with a research agenda that emphasizes three areas: preparing preschoolers for math and reading; math, science, and reading instruction in the primary grades; and training teachers in all grades. The initiative is on a fast track: Last month's announcement (NSF 99–84) sets a 15 May deadline for proposals, with the first round of winners to be picked by September.

    The federal partners admit that melding their different priorities poses a challenge. NSF, for example, usually focuses on math and science but not reading; “this program has a reading component because of ED,” says NSF's John Cherniavsky, who is temporarily overseeing the new initiative. Education department officials were concerned that the PCAST report “was too focused on technology,” says Dick Venezky, on leave from the University of Delaware to help get IERI off the ground. “So we decided to focus on chronic education issues such as school readiness and teacher preparation, with technology as an aid.” And Shaywitz's center is part of a network funded by NIH's National Institute of Child Health and Human Development (NICHD) that uses a medical model in working with children at risk for reading disabilities. “We run a $21 million a year program with 41 sites,” says NICHD's Reid Lyon, “but we haven't worked with NSF and ED on early math and reading skills before this initiative.” (Although his institute has not earmarked money for either 1999 or 2000, Lyon says NICHD will help review IERI proposals and may fund some that fit in with its overall mission.)

    Officials are also wary of promising too much, too soon. “At one point we thought we'd be funding solutions,” says Venezky. “But that's at least 5 years off. We probably know the most about readiness, thanks to Head Start and other programs. The quality of research on initial reading skills isn't as good. In early math, we have a good core of research, but not much in the way of large-scale interventions. And in staff development we're closer to ground zero, with almost no good theory about either preservice or midcareer training.”

    Even when existing programs generate scholarly knowledge, say researchers, too little finds its way into schools of education or the classroom. “It takes 10 years for [the latest research findings] to make it into teacher training programs,” says Elena Bodrova of the Mid-Continental Regional Education Laboratory in Denver, who has developed a computerized aid to help teachers assess student literacy. Cherniavsky offers an even simpler bottom line. “If we really understood learning, why aren't we doing it better?” he asks. By linking academic researchers with classroom teachers, he says, “we want to see what changes can buy us in terms of improved student achievement, using technology as a tool.”

    Douglas Clements, an education professor at the State University of New York (SUNY), Buffalo, is already doing exactly that with an NSF-funded software project that fits the description of what IERI hopes to accomplish. “We want to mimic kids' activities in a way that reinforces the underlying mathematics in such skills as number counting and shape recognition,” says Clements about Building Blocks, which teaches math concepts to children aged 4 to 7. With a lot of educational software, he says, “it's hard to find the math.”

    In reading, progress has been hampered by the ferocious debate between proponents of the whole word/reading in context approach and proponents of phonics, which emphasizes the sounds of individual letters and their alphabetic representation. The fireworks have left a residue of mistrust that could hinder progress. “It's interesting that NICHD hasn't put in any money,” says Cathy Roller, director of research for the International Reading Association, which believes that Lyon has denigrated the whole-word approach and made exaggerated claims on behalf of phonics. “Still, it's better to have them at the table than sitting on the sidelines,” adds Roller, whose organization is likely to apply for funding.

    Even before the initiative makes its first awards, NSF's social and behavioral sciences directorate is priming the pump with a $1 million competition to support a series of meetings this summer aimed at stirring the intellectual pot. “There's an old saw that everybody thinks children's learning is important—except educators and developmental psychologists,” quips Chuck Brainerd of the University of Arizona, Tucson, who hopes for NSF support to bring together 25 senior scientists in neuroscience, cognitive and developmental psychology, and educational disabilities for a 4-day meeting in August. “There's a real need to inject excitement and energy into the field of children's learning, and that's what NSF is trying to do,” he says.


    Call for 'Sustainability' in Forests Sparks a Fire

    1. Charles C. Mann,
    2. Mark L. Plummer*
    1. Mann and Plummer are the authors of Noah's Choice.

    By proposing that “ecological integrity” be the lodestar for managing the national forests, a committee of scientists may have inflamed the conflicts over these lands

    For more than 90 years, the national forests and grasslands that cover more than 8% of the United States have effectively been all things to all people. Loggers regarded them as reserves of low-cost timber, easily reached on government-built roads. Vacationers treated them as giant playgrounds, studded with picnic areas and campsites. Environmentalists wanted them to be nature reserves, minimally touched by human hands. Inevitably, the different visions collided, and the national forests and grasslands have become snarled in protest and seemingly endless litigation.

    On 15 March, an independent scientific committee proposed what it hopes will become a more coherent vision for these public lands. “Ecological sustainability,” the committee said, should become the principal goal in managing the national forests and grasslands—a suggestion that U.S. Agriculture Secretary Dan Glickman immediately endorsed as “a new planning framework for the management of our forests for the 21st century.” According to Chris Wood, an assistant to chief U.S. Forester Mike Dombeck, the committee's proposals will help guide the Forest Service through a more immediate challenge: Within 5 years, the agency is legally required to produce updated management plans for more than three-quarters of its land.

    In selecting ecological sustainability, the committee staked out new terrain in long-standing debates over the mission of the Forest Service—and in ecology itself. Since 1960, the agency has been guided by an explicit congressional mandate to manage the forests for “multiple use,” serving industry, recreation, and conservation all at once. But it has been unable to satisfy its different constituencies. By recommending that ecological sustainability be given first priority, the scientific committee hopes to end the conflicts—and, its members say, keep the forests able to satisfy demands for timber, grazing, and recreation as well.

    Instead, they may have opened a new round of controversy—one that at times has engulfed the committee itself. Its chair briefly resigned, feeling that it had overstepped its mandate of giving scientific and technical advice, and its final report is accompanied by another member's expression of similar concerns. And on 15 March a second blue-ribbon panel, this one from the Society of American Foresters (SAF), the leading professional association of silviculturists, issued a summary of its own report, arguing that selecting any one criterion—sustainability or anything else—as a single management goal will inevitably preclude some forest uses, and calling on Congress to make the crucial choices about how the lands should be managed.

    If that weren't enough, the concept of “sustainability” itself is at the center of a simmering debate in ecology. Along with sister concepts like “integrity” and “health,” sustainability has long been indicted by some ecologists for being vague and impossible to quantify. Other critics—many, but not all, outside the field—argue that these terms are little more than attempts to cloak a conservation agenda in scientific garb. The committee's report tries to offer more quantitative methods for measuring sustainability. But with even supporters of these concepts conceding that their use inevitably involves a host of value judgments, the committee's report and the reaction to it illustrate the complex, sometimes uncomfortable roles played by scientists in land-use and conservation decisions.

    Congress had hoped to settle the debates over the national forests more than 20 years ago with the National Forest Management Act (NFMA) of 1976, which required the Forest Service to develop detailed plans, using public participation, for managing the 155 national forests and 20 national grasslands. The plans, to be revised every 10 to 15 years, were supposed to set out how the agency would “coordinate” logging, recreation, and conservation. Unfortunately, the legislation was less than clear about how to make trade-offs when the various uses conflicted.

    “You could view [NFMA] as a Rorschach,” says Errol Meidinger, an environmental law professor at the State University of New York (SUNY), Buffalo. “Some people say it's about economic efficiency because there's language in it about the efficient use of the nation's resources, whereas others see it as a promise to support timber-dependent communities, and there's language to support that, and still others see it as a mandate for protecting ecological integrity, and there are parts of NFMA that seem to me to very clearly say that as well.”

    Not only that, other environmental laws, such as the Endangered Species Act and the Clean Water Act, add further constraints and duties, many of them at odds with each other and with NFMA. With the Forest Service's mission increasingly confused, the door was opened for litigation—the agency estimates that 1000 appeals and 20 to 30 new lawsuits are filed every year, from both environmental and timber interests.

    Whipsawed between the combatants, the Forest Service became widely regarded as paralyzed. Its troubles caught the attention of Congress. In dozens of workshops and hearings in 1997, according to Mark Rey, a staffer on the Senate Environment and Public Works committee, “what we found was in one sense extraordinary: Nobody, in all the testimony and statements, told us they were satisfied with the status quo.” That year, Senator Larry Craig (R-ID) introduced legislation that would have boosted the role of logging in forest planning and restricted the opportunities for appeals and lawsuits. Meanwhile, the Sierra Club campaigned to end logging altogether in national forests.

    Worried about the Craig bill, the Clinton Administration set out to revise NFMA regulations to give greater weight to ecological protection. To help it formulate its position, the Administration convoked a scientific advisory panel in December 1997. Consisting of 13 forest, ecological, and social scientists, led by Norm K. Johnson, a forester at Oregon State University in Corvallis, its mission was to provide scientific and technical advice on new regulations. Beyond that, the committee was asked by Agriculture Undersecretary Jim Lyons “to develop a conceptual framework for land and resource planning that could last at least a generation” and “to dream a little.”

    From the start, the Committee of Scientists, as it became known, was racked by disputes over how much weight to give ecological goals. The overriding reality, according to committee member Barry Noon, an ecologist at Colorado State University in Fort Collins, is that “we are losing biological diversity and changing landscapes at an unprecedented rate, and there may be severe consequences to human welfare as a result.” He proposed that the Forest Service choose as a lodestar the concept of sustainability—ecological, economic, and social. (Social sustainability, the committee report explains, involves “the capacity for future generations to maintain cultural patterns of life and adapt to evolving societal and ecological conditions.”) All three are important, Noon says, but “ecological sustainability is primary—it takes precedence over the other two, and it basically sets the bounds for the other two.”

    Using a forest socially and economically depends on understanding its ecological limits, explains committee member Charles Wilkinson, an environmental law professor at the University of Colorado, Boulder. “You have to be sure you've got that environmental baseline before you can assure that the other two uses are sustainable. When the forest crashes, you lose some of the economic benefits.”

    But committee member Roger Sedjo of Resources for the Future, a Washington-based public-policy institute, argues that NFMA regulations are supposed to be a framework for negotiations among different interests. “If you make ecological sustainability preeminent, then there are no trade-offs,” says Sedjo, who outlines his concerns in an appendix to the report. “Anytime there's a conflict, we know which side wins.” Such a broad change should only come from Congress, he contends, not a panel of scientists.

    “The hardest thing for this committee,” Johnson says, “has been to decide where scientific and technical advice ends and policy-making begins.” A dispute over where to draw this line in fact led to Johnson's resignation from the committee last December. “We were saying much more strongly that the Forest Service had to do certain things,” Johnson says now. “I didn't want to write things in stone.” He quickly rejoined, however, and by February the committee had hammered out a compromise. He now says that the report “provides a useful approach” to ecological sustainability—a two-pronged attack on the issue.

    First, the committee suggests, the Forest Service should assess the “ecological integrity” of a planning area, looking at broad factors such as the proportion of old-growth forests, stream flows, wildfire frequency, and the amount and distribution of large dead trees. Because these factors vary over time, the committee argues that the benchmark for assessing integrity must be their “historic range of variability,” with that range, in effect, being the conditions before European settlement. The more current conditions fall outside the historic range, the report argues, the lower the ecological integrity; the lower the ecological integrity, the greater the risk to ecological sustainability. Armed with this information, the Forest Service would then create plans to safeguard ecological integrity; economic and social activity could take place within this constraint.

    Second, the Forest Service should identify a set of “focal species”: native species whose abundance and well-being would be indicators of the functioning of the larger ecological system. Forest planners would then seek “to provide ecological conditions needed to protect and, as necessary, restore the viability of focal species.” Although “protect” and “restore” are strong standards, they would be a departure from current regulation, which requires the agency to “insure” the viability of all native populations.

    Environmentalists are watching these recommendations closely. Early committee drafts raised “concerns,” according to Mary Munson of Defenders of Wildlife, because they appeared to relax the standard of viability and let “politics play a little with the risk of extinction.” But the final report somewhat allayed these fears. The change in standards, says Mike Francis of the Wilderness Society, “might not be as dramatic a difference as it appears,” although he emphasizes that the group's lawyers are going through the report “with a fine-toothed comb.”

    But some critics charge that the choices made by the committee are rife with value judgments. Forest sociologist Robert Lee of the University of Washington, Seattle, argues that making ecological sustainability paramount amounts to making the “arrogant” claim that “the needs of the ecology determine the needs of the people—the needs of the people can be satisfied in many different ways.” Others are troubled more specifically by the use of “ecological integrity,” a term that has proven notoriously hard to define. “Ecological integrity—that term bothers me,” says silviculturist Chad Oliver of the University of Washington, Seattle, a member of the SAF task force. “It doesn't have a specific enough meaning, so that everyone could agree that a certain piece of ground has it.”

    Noon concedes that “there's no single indicator that one can use to capture or assess the degree of integrity of an ecosystem.” What's more, because ecosystems can have more than one state in which they seem to function stably, the choice of benchmarks depends on who is choosing them. For this reason, even supporters of the concept are often worried by it. “What needs to be measured and how is it best done? … For what values of the measures will [ecological] integrity be deemed to have been lost? Who will make this decision and who will act on it?” James J. Kay, a systems ecologist at the University of Waterloo in Ontario, has asked.

    In its report, the SAF task force provides its own answers to these questions, contradicting the vision laid out by the committee of scientists. Convened in December 1996, the 10-member SAF task force was chosen from academia, government, and—unlike the Forest Service committee—industry. According to Don Floyd, the natural-resource policy specialist at SUNY Syracuse who heads the task force, individual parcels of land can be managed either as long-lasting tree farms for industry or as long-lasting wilderness preserves, but not both at once. “You can't both clear-cut an area and keep it as wilderness,” he says. “It's common sense.” Society, he explains, should decide which areas to devote to logging, and manage them as timber farms, and which to devote to nature preserves, and manage them to restore desired environmental qualities.

    Although many SAF task force members favor giving greater overall weight to ecological factors, they argue that it's not up to scientists to make that choice. Congress, the task force's draft report concludes, should “act decisively,” revamp or scrap NFMA, and “establish clear priorities … through new legislation.”

    Members of the Forest Service committee say they are not the ones setting the priorities. The idea of parceling the land into separate timber and wilderness areas has “consistently and roundly been rejected by the American people,” says committee member Margaret Shannon, an environmental-policy analyst at Syracuse University's Maxwell School of Citizenship and Public Affairs. And the Forest Service's Wood rejects the notion that Congress needs to settle the debate over values. “Most folks have so much disposable income,” he says, “that they are looking at forests in terms of the positive outcomes of good stewardship, like biodiversity, like tourism, like existence values, like knowing that there's a wilderness out there and I can go there if I want to even if I'm sitting in this cubicle in Washington, D.C.” Worrying about the role of value judgments in science is “interesting but academic,” because society has already made the relevant decisions on values—and chosen sustainability.

    As long as Congress remains interested in forest management, this conclusion may be premature. Craig's bill, which reaffirms the importance of logging, will likely resurface in the next few weeks, says Senate staffer Rey. As for the committee report, he says, “we're interested in seeing the work, because the system needs to be modernized.” But Rey says that his interest may be tempered if the report ventures from “scientific and technical advice” into policy-making. “If scientists want to offer me a policy recommendation, they may have experience that's useful,” he says. “But I hope they don't expect me to genuflect to them just because they're a scientist.”

    On 16 March, both committees testified to their contrasting views in the House. Providing Congress does not quickly pass Craig's bill, the Forest Service will incorporate the committees' suggestions into a new set of draft regulations. It hopes to issue final regulations early in 2000. Whether it can meet that ambitious schedule depends, in part, on whether the two reports help to settle, rather than further ignite, the controversy over the forests.


    The March of Paradigms

    1. Jon Cohen

    The number of grants and papers invoking the term “new paradigm” has been growing by leaps and bounds, yet most seem to have little impact

    Forget about those dour predictions of the end of science or those lamentations about the passing of a golden age of discovery. New findings are apparently overthrowing entire bodies of evidence at an unprecedented rate, replacing them with novel frameworks for understanding everything from particles to organisms to the universe itself. The evidence is right there in the scientific literature: Last year alone, 124 papers in leading journals invoked the term “new paradigm” in their titles or abstracts. And use of the expression has been growing steadily throughout the 1990s.

    Many of these claims, however, may not be quite the kinds of developments science philosopher Thomas Kuhn had in mind when he made the term “new paradigm” famous with his paradigm-shifting 1962 book, The Structure of Scientific Revolutions. Kuhn described the process—which he called a paradigm shift—by which a prevailing set of theories and supporting evidence gives way to a new set: the replacement of natural order by natural selection, for example, or Newtonian mechanics by quantum theory. The recent spate of new paradigms has a different ring: integrating genomic function and nuclear architecture, osteopathy to manage back pain, EBNA1 and E2 as origin-binding proteins, and links between spiritual care and the environment or between epidemiology and the liberal arts. New paradigms are now so commonplace that one author felt obliged to note that “problem-based learning” was not a new paradigm.

    To get a quantitative sense of the remarkable proliferation of new paradigms, Science asked the Institute for Scientific Information (ISI) in Philadelphia, Pennsylvania, to analyze the frequency with which the phrase crops up in papers published across a broad range of scientific disciplines. Use of the term in abstracts and titles in the ISI database of leading journals increased steadily from 30 papers in 1991 to 124 in 1998. A search of MEDLINE—a database of biomedical publications maintained by the National Institutes of Health (NIH)—for the same period reveals a similar trend: “New paradigm” usage increased at a rate of 26% a year, from 21 papers to 73. And probes of the NIH and National Science Foundation databases of new grants turned up evidence of the same sharp increases (see graphs)—which should keep new paradigms flowing into the literature for years to come.

    If these papers point to new scientific vistas, they should be highly visible in the scientific literature. To find out, ISI's David Pendlebury analyzed how many times other publications cited each of the 292 papers published between 1981 and 1999 that used new paradigm in their titles. Surprisingly, only 32 received 10 or more cites—including citations in separate publications by the same authors. “These data show that 90% of new paradigm papers affected the research world very little indeed,” Pendlebury says. Indeed, they were cited less often, on average, than papers that avoided the term. Only 22 of the most cited papers, notes Pendlebury, exceeded the average number of citations for papers published in the same journal during the same year. “So, the new paradigm fell flat, it would seem, for 31% of these 32 most cited papers,” Pendlebury concludes.
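    The percentages Pendlebury quotes follow directly from the raw counts in this paragraph. A quick back-of-the-envelope check (illustrative only, using the figures as reported above):

    ```python
    # Back-of-the-envelope check of the citation figures reported above.
    total_papers = 292       # "new paradigm" title papers, 1981-1999
    well_cited = 32          # papers receiving 10 or more citations
    above_journal_avg = 22   # of those 32, papers beating their journal's average

    # Share of papers with little impact (fewer than 10 citations)
    low_impact_share = 1 - well_cited / total_papers
    print(f"{low_impact_share:.0%}")  # prints 89% -- roughly the "90%" quoted

    # Share of the 32 most-cited papers that still fell below their journal's average
    fell_flat = (well_cited - above_journal_avg) / well_cited
    print(f"{fell_flat:.0%}")         # prints 31% -- matching Pendlebury's figure
    ```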

    Perhaps the problem lies in citation analysis itself: The new paradigms may be so radical that the rest of the scientific world, stuck in the old ways of looking at things, hasn't yet shifted to them, depressing citation counts. So Science turned to a time-honored, although less rigorous, evaluation: We randomly selected a few current papers and contacted independent experts to ascertain whether the papers indeed had revolutionized their views.


    Asked to comment on a Journal of Biological Chemistry paper entitled “Regulated co-translational ubiquitination of apolipoprotein B100: A new paradigm for proteasomal degradation of a secretory protein,” Daniel Steinberg, an apolipoprotein B100 authority who works at the University of California, San Diego, says it “is stretching the words very thin” to call this a new paradigm. The paper, says Steinberg, offers “an alternative hypothesis.” Steinberg—who notes that he has much respect for the paper's last author, a former postdoc in his lab—may be an especially tough critic, however. He happened to have been at Harvard with Thomas Kuhn and had many discussions with him. “I thought we should reserve ‘new paradigm’ for Darwin, Freud, and Newton,” says Steinberg. “Maybe we use it five times in a century.”

    Josef Penninger of the University of Toronto has a similar view of a paper published last August in the European Journal of Endocrinology, “Osteoprotegerin and its cognate ligand: A new paradigm of osteoclastogenesis.” Penninger, who admits to similar paradigmatic offenses himself, says this paradigm once was new. But that was in 1972, when a paper in Science described the basic finding that a factor made by white blood cells could trigger osteoclastogenesis, the mechanism of bone resorption.

    Even the new paradigm paper that ISI found had the most citations may involve a questionable use of the term. Published in EMBO Journal in May 1989, “Human atrial natriuretic peptide receptor defines a new paradigm for 2nd messenger signal transduction” had a big impact on its field, garnering 237 citations. But the paper, says Lincoln Potter of the Salk Institute for Biological Studies in La Jolla, California, essentially validates a controversial hypothesis put forward decades before by Earl Sutherland, who won the Nobel Prize in 1971 for his discovery of second messengers.

    What, then, might account for the proliferation of new paradigms in the scientific literature? Nobel laureate Steven Weinberg, a physicist at the University of Texas, Austin, has one possible explanation. Weinberg—who attacked Kuhn's proposition that new paradigms displace old ones in a critique that ran last year in the New York Review of Books—suggests that the rise is linked to the increasing specialization of science. “It's harder and harder for scientists to make a splash that goes beyond their fellow specialists,” Weinberg says. The term is an attention-getter, says Penninger. “I use it, too, sometimes, but really for political reasons—to make reviewers happy and for funding,” he says.


    One especially puzzling result of Science's investigation is the nursing paradigm paradox: 67 of the 459 uses of “new paradigm” in the MEDLINE database from 1968 to 1999 involved nursing research. Patricia Grady, director of NIH's National Institute of Nursing Research, offers a simple explanation: “Nursing research is relatively new on the horizon of scientific research.” The newer the field, the more new paradigms there are to discover. Grady says she personally eschews the phrase, however. “People often ask, ‘What does that mean?’” says Grady. “I try to avoid speaking in ways that are mysterious.”

    Grady is not the only person who finds the term difficult. Kuhn himself had trouble precisely pinning down the meaning of paradigm. “Turn now to paradigms and ask what they can possibly be,” wrote Kuhn in a 1969 postscript to the second edition of the book. “My original text leaves no more obscure or important question. One sympathetic reader … prepared a partial analytic index [of the book] and concluded that the term is used in at least twenty-two different ways.” To help solve this problem, Kuhn introduced yet another phrase with which to discuss a paradigm: “disciplinary matrix.” A MEDLINE search on that term yielded only one hit: “Philosophic analysis of a theory of clinical nursing.”


    NIH Invites Activists Into the Inner Sanctum

    1. Bruce Agnew*
    1. Bruce Agnew is a writer in Bethesda, Maryland.

    Under pressure from advocacy groups to open up the grant-review process, the NIH is adding lay members to some study sections—to mixed reviews

    For more than half a century, the holiest of holies at the National Institutes of Health (NIH) has been the peer-review “study sections”—the small panels of 15 to 20 researchers that weigh the scientific merit of more than 24,000 grant applications each year. Scientists whose ideas are turned down often criticize the study sections bitterly, but at least they know they have been judged by fellow scientists. “The important thing about peer review,” says molecular biologist Keith Yamamoto of the University of California, San Francisco, “is that it's peers.”

    Now that's changing, fast. Under political pressure to listen more closely to specific-disease advocates and ordinary people, top NIH officials are pressing individual institutes to place patient representatives on some study sections—particularly those dealing with potential therapies. “No directives have been issued, but we're encouraging it,” NIH director Harold Varmus said in a recent interview. “Our assessment is that under appropriate circumstances, having informed patients on study sections can be extremely useful.” But some scientists worry that NIH is “diluting” expert advice.

    The use of nonpeer reviewers isn't totally untried. Following a recommendation of the Institute of Medicine, the U.S. Army since 1995 has been including two “consumers”—that is, patients—on each review panel in its $210 million research program on breast, prostate, and ovarian cancer and neurofibromatosis. Scientists who have served on these panels say the process works surprisingly well.

    Pressured by advocacy groups to become more open, the National Institute of Allergy and Infectious Diseases (NIAID) and the National Cancer Institute (NCI) have been seating patients on selected study sections for some time. Consumer panelists offer expertise on such issues as clinical trial consent forms, recruitment, retention, outreach, and follow-up, says John McGowan, director of NIAID's Division of Extramural Activities. Over the past 10 years, consumer participation “has improved the science and the quality of what we fund,” he says. Marvin Kalt, director of NCI's Division of Extramural Activities, which started using consumer panelists 2 years ago, says he's unaware of any researcher complaints.

    NIAID and NCI have been the exceptions, but other NIH institutes are hurrying to catch up. The National Institute of Mental Health (NIMH) is already recruiting consumer representatives—patients, family members, health-care providers, or others—to serve on study sections that will review treatment-oriented grant applications in June. The National Institute on Drug Abuse (NIDA) may follow suit in May. The National Institute of Child Health and Human Development (NICHD) and others are weighing the idea.

    “The train is definitely rolling,” says Yamamoto. “I'm very uneasy about it. I don't like it.” Yamamoto chairs the advisory committee of NIH's Center for Scientific Review (CSR), which operates the mostly basic science study sections that review about 70% of NIH grant applications. CSR doesn't plan to invite patient advocates onto these panels anytime soon, but that still leaves about 30% of applications—those reviewed by institute-run study sections—that might be subject to mixed-company reviews.

    Yamamoto and other skeptical researchers argue that adding nonscientists to study sections is both unwise and unfair. “Every vote is very meaningful to the applicant,” Yamamoto says. “Diluting that with people who can do no more than vote their impressions of the discussion is an injustice to those applications.”

    No one expects placard-waving activists to break up meetings with shouts and demands. Where the process has been tried, program managers carefully select people who are knowledgeable about a disease, able to function in the calm give-and-take of a meeting, and willing to check their advocacy at the door. In the Army program, consumer and scientist reviewers are extensively briefed in advance on what to expect and what's expected of them, says Colonel Irene Rich, who directed the Army program. On scientific issues, proponents say, the patients usually vote with the scientists.

    So what? counters Richard McCarty, executive director for the scientific directorate of the American Psychological Association. At a meeting of the NIMH Advisory Council on 5 February, McCarty dismissed proponents' contention that lay reviewers offer valuable insights and don't much change study section outcomes. “I don't find those issues especially compelling as a rationale for altering the scientific review system that has served NIH and the nation so incredibly well since the mid-1940s,” McCarty said. Even if lay panelists tend to go along with the scientific majority, their votes would “flatten” the results, Yamamoto warns, making it more difficult for creative but unorthodox projects to win funding.

    Psychologist Joseph Campos of the University of California, Berkeley, a member of the NICHD Advisory Council, also has qualms. Although Campos says informed and interested laypeople “unquestionably” have a right to be involved in the grant mechanism, study sections aren't the place for them. On such small review panels, he says, the vote of one person who disagrees with the rest “may reduce the score or increase the score unfairly.” He adds, “I have seen the problem occur with scientists who are against a certain type of research and have held up that type of research getting funded.” As an alternative, Campos suggests inviting laypeople onto study sections as nonvoting observers. Or, he says, tabulate scores as a median rather than a mean, so that an outlier won't distort the outcome so badly.
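Campos's median suggestion is easy to illustrate. In the sketch below, the scores are hypothetical (in the style of NIH priority scores, where lower is better) and are not drawn from any real study section:

```python
# One outlying vote on a small review panel shifts the mean noticeably
# but leaves the median essentially untouched. Scores are hypothetical.
from statistics import mean, median

panel = [150, 155, 160, 160, 165, 170, 170, 175, 180, 185]
with_outlier = panel + [400]   # a single reviewer scores far off the pack

print(mean(panel), median(panel))                          # 167.0 167.5
print(round(mean(with_outlier), 1), median(with_outlier))  # 188.2 170
```

With ten reviewers in rough agreement, one extreme score moves the mean by more than 20 points while the median barely budges, which is exactly the robustness Campos is after.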

    Varmus acknowledges that “there are some concerns” about laypeople on peer-review panels. But he says the experience of the Army program and others “has been positive.” “We're not talking about the average consumer,” Varmus emphasizes. “We're talking about people who are experts just as our scientists are experts. Many patients who are not scientifically trained become very expert in many issues that are involved, particularly in clinical research, and can be very useful in helping to evaluate grants.”

    Exactly right, say scientists who have served with consumers on panels. “I probably was somewhat skeptical” before serving on several Army panels, says medicinal chemist Donald Bergstrom of Purdue University in West Lafayette, Indiana. “I felt they really can't contribute anything. That opinion changed pretty fast once we got involved in conversations.” The Army experience has been an eye-opener for NIH leaders as well. NIDA director Alan Leshner changed his mind after he sat in on Army review panels. “I had been ambivalent about the issue,” says Leshner, “but candidly, I was blown away” by the quality of the comments.

    “They're a reality check,” says cell biologist Howard Hosick of Washington State University in Pullman. “They bring up things that the scientists wouldn't have thought about.” For example, Gary Pasternack, director of the division of molecular pathology at Johns Hopkins University School of Medicine in Baltimore, recalls a proposed breast cancer project that intrigued the scientists in the room but left the consumer panelists cold. “They said that because the perceived benefit was marginal, no one in their right mind would undergo it,” he says.

    Involving consumers in grant review puts a human face on the disease—and on the scientists, too. Richard DiAugustine of the National Institute of Environmental Health Sciences in Research Triangle Park, North Carolina, says the presence of two prostate cancer survivors on his review panel made him think hard about the real motives of applicants for prostate cancer fellowships. “I thought, ‘I have two guys I have to answer to here. Are these [applicants] just going after money, or do they really want to have a career in prostate cancer research?’”

    For researchers, the hours spent in meetings with lay reviewers can have unexpected positive side effects: Consumer panelists go back to their own constituencies as allies of scientists rather than critics. “When I've spoken to breast cancer groups, when I hear women who are angry, I try very hard to explain to them what this [scientific] process involves—that it cannot happen overnight,” says Connie Gee of Brentwood, Tennessee, a kindergarten teacher and first vice president of the Tennessee Breast Cancer Coalition. Jill Wagner of Lima, Ohio, a former General Dynamics Corp. supervisor, adds, “The most heartwarming thing for me about serving on the panel with all these esteemed scientists was to find out that they really, really wanted to be reminded that this disease is about people.”

    Virgil Simons of Secaucus, New Jersey, a textile industry executive and founder of The Prostate Net, says his view of researchers “absolutely” changed when he saw the constraints under which they operate. “You've got people who are going to ultimately save lives working for money that's far less than we pay garbagemen,” he says. “We've seen some investigators whose salaries are around $35,000 a year. We've seen some senior people who are working for $50,000 or $60,000 a year. It's almost criminal.”

    Despite the obvious goodwill it fosters, the Army way of peer review isn't directly transferable to all that NIH does. Cell biologist Daniel Medina of Baylor College of Medicine in Houston notes that the Army panels concentrate on “very focused review areas, which is different from many NIH review panels, which cover a broad area of topics.” But would the approach work on NIH study sections that are focused, such as those weighing responses to Requests for Proposals? “I don't know,” says Medina. “I think you just have to try it.”

    That's what NIH is about to do.


    EU Facilities Program Keeps Researchers on the Move

    1. Sabine Steghaus-Kovac*
    1. Sabine Steghaus-Kovac is a science writer in Frankfurt, Germany.

    A European program to open up local facilities to scientists across the continent is winning plaudits from both young researchers and lab managers

    DARMSTADT AND BAYREUTH, GERMANY— When Dolores Cortina-Gil was a physics postgrad at the University of Valencia in 1993, she faced a serious logistical problem: There were essentially no facilities in her native Spain for the kinds of nuclear physics experiments she hoped to conduct. So she packed her bags and moved to the GANIL heavy-ion research center at Caen in northern France to do her doctorate. After that, she moved on to a postdoc position at GSI, Germany's heavy-ion lab in Darmstadt, where she is now conducting her own nuclear structure studies with unstable nuclei.

    Such scientific country-hopping is becoming more common in Europe, thanks to a European Union (EU) program called Access to Large-Scale Facilities (LSF). And it is about to get even easier: The EU's Framework 5 program, launched last month at a meeting in Essen, Germany, will spend $200 million on the LSF program over the next 4 years—a 50% increase over previous spending levels.

    The program gives Europe's top researchers and young scientists an opportunity to work at the facility best equipped for their research, irrespective of who owns the facility or where it is located within the EU. The more than 100 facilities that are now part of the scheme get block grants to pay for visiting researchers' travel, accommodation, and technical assistance, as well as for wear and tear on the equipment. But much of the emphasis is on training and enabling young researchers to use top-notch facilities. “This is the easiest way to meet people and to make new collaborations,” says Cortina-Gil, who is funded by the LSF program in part to provide technical help to visiting scientists using GSI's fragment separator. “To change from one European country to another would be very difficult without the financial support of the European Union.”

    Facility managers like the program too. Says Giorgio Margaritondo of Italy's ELETTRA synchrotron in Trieste: “The LSF program has been extremely effective and its impact very positive. … The travel support of users has effectively removed the most serious barrier preventing scientists from using top-level facilities.” Wouter Los from the Zoological Museum at the University of Amsterdam agrees. “One of the strengths of the program is that it identifies and ‘recognizes’ large-scale facilities in Europe.”

    LSF started out in 1989, during Framework 2, as a small program with a budget of $31 million. It was an immediate hit: 1600 researchers took the opportunity to visit the 17 participating physics facilities during the first 4 years. By the end of Framework 4 last year, it had mushroomed to encompass 116 facilities in a wide variety of fields, such as chemistry, engineering, and life and earth sciences, which were visited by more than 6000 researchers. The types of facilities have also evolved over the years: no longer just large, expensive pieces of equipment, they now include collections of biological data, medical research facilities, and field study centers in ecosystems ranging from arctic to tropical.

    The LSF program typically gives such a facility about $1 million for a period of 3 to 4 years to select and support visiting researchers. Often, facilities use the money to buy scientific equipment, computers, and materials, or to employ researchers to help the visiting scientists. Researchers submit applications directly to the facility, and from there they are passed to an independent international review committee. “The program is managed primarily at the facility level, eliminating needless and expensive duplications,” says Margaritondo.

    Although most facility managers who spoke with Science are enthusiastic about the LSF program, they have some gripes. From talking with other facility managers, Egil Sakshaug of Trondheim Marine Systems in Norway says “the most frequently mentioned complaints are financial, that the funds compensating for ‘wear and tear’ at the host institutes are not enough.” Ross Angel of the Bavarian Geosciences Institute in Bayreuth, Germany, agrees: “We gain in the things we cannot quantify: new ideas and collaborations or teaching practice for our students. Purely financially we obviously lose.”

    Indeed, the opportunity to exchange ideas and techniques is the biggest draw for most facilities to participate in LSF. “Visitors bring their expertise here,” says Angel. “Catherine Dupas, a postdoc from Lille [in France], came here with an LSF grant. She improved our technology in using transmission electron microscopy.” According to Klaus-Dieter Gross, project manager at GSI, “the EU-funded researchers make a major impact on the research at our institute. It is hard to imagine the situation without them.”

    Many of the researchers who visit the facilities gain a lot more than just new ideas. For those like Cortina-Gil, who come from regions of the EU where major research facilities are rare, it is the only way they can conduct their research. And for many, it provides valuable early career experience. An EU study carried out last spring found that more than half of LSF-supported researchers are aged 35 or younger, and two-thirds are first-time users of the facility concerned.

    Lorella Franzoni, an Italian biophysicist from the University of Parma, is a typical beneficiary. She investigated the structure of proteins at a powerful nuclear magnetic resonance (NMR) spectrometer at Frankfurt University in Germany. “When I arrived here, I was new to the field of multidimensional heteronuclear NMR of proteins,” Franzoni says. “I consider myself very lucky to have benefited from the EU funding. It is also fortunate for the group in Parma, because I will transfer that knowledge.” Cortina-Gil also hopes to spread her newfound skills back in her homeland. When her position at GSI expires at the end of this year, she hopes to secure an academic position in Spain. But, she adds, “to continue my research I will have to keep contact with my former collaborators at GANIL and GSI, because I have no facilities to do my experimental work in Spain.”


    Watching the Universe's Second Biggest Bang

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    On 23 January, astronomers witnessed a cataclysmic event in a distant galaxy. The observations hold many lessons about the mysterious nature of gamma ray bursts

    It happened long, long ago, in a galaxy far, far away: the most violent event ever observed in the universe. For billions of years, the aftershock of that event—a titanic flash of high-energy radiation—has hurtled through space at the speed of light. It swept past Earth 2 months ago, in the early morning of Saturday, 23 January. Astronomers specializing in gamma ray bursts (GRBs)—high-energy flashes that have been a puzzle for decades—were watching and waiting. And thanks to a sophisticated alert system of orbiting telescopes and Earth-bound observatories across the globe, they were able to catch the whole light show: from the initial gamma flash through to radio waves crackling days later from the site of the burst. They also caught, for the first time, an optical flash simultaneous with the initial gamma ray burst.

    Its power and visible-light display have now made the modestly named GRB 990123 an astrophysical celebrity, the subject of no fewer than six papers in today's Science and next week's Nature. “It's a great discovery,” says Martin Rees of Cambridge University, a leading GRB theorist. “This is the very first time that optical emission has been observed during the burst itself,” adds Titus Galama of the University of Amsterdam. In the case of GRB 990123, despite the enormous distance of the source, the optical flash was bright enough to be easily visible with an amateur telescope or even binoculars.

    The observations combine to give a complete portrait of GRB 990123, which is yielding new clues to the nature of the cataclysmic events that give rise to these flashes. In the Science and Nature papers, along with others posted on the Internet, researchers describe what they have learned so far, including the tantalizing suggestion that GRBs may appear so powerful because their energy is focused into narrow beams that we only see when they are pointing more or less in our direction.

    Gamma bursts themselves cannot be detected from the ground because of atmospheric absorption. They were first spotted in the 1970s by U.S. military satellites, which were looking for gamma rays from Soviet nuclear explosions. A systematic hunt for them began in 1991, when NASA's Compton Gamma Ray Observatory (CGRO) was launched and began detecting GRBs at a rate of about one per day. Even then their origin remained mysterious because gamma ray detectors have very low positional accuracy and the bursts fade very fast. That changed in 1996 with the launch of the Italian-Dutch satellite BeppoSAX, equipped with wide-field x-ray cameras that can pinpoint the position of certain bursts. Astronomers set up an alert system so that ground-based observers could quickly point their instruments at each burst. Soon they were picking up x-ray, optical, and radio afterglows of the bursts—data that eventually made it clear that GRBs originate in very distant galaxies.

    By this time, theorists had built up a picture in which GRBs result from the collision of two high-density neutron stars or from a “hypernova”—the total collapse of a very massive star. Both kinds of events would form a central black hole and eject matter at close to the speed of light. This matter would collide with the surrounding interstellar gas, creating shock waves that travel both inward through the ejecta and outward into the interstellar medium, heating it into a fireball that would expand and cool to create a lingering afterglow.

    GRB 990123 hit the astronomical headlines because for the first time the alert system acted fast enough to capture an optical flash as well as a gamma ray one. Once the gamma ray burst triggered detectors on BeppoSAX and CGRO, they dispatched a message to the Robotic Optical Transient Search Experiment (ROTSE), an automated camera at Los Alamos National Laboratory in New Mexico, which within 10 seconds started snapping pictures of the constellation Boötes. Just 22 seconds after the initial flash, it captured an image of the optical burst.

    In an article in next week's Nature, Galama and his colleagues combine observations of the flash with data from other wavelengths—gamma, x-ray, infrared, submillimeter, and millimeter—to conclude that they are seeing the effects of three kinds of shock waves within the fireball. “The initial gamma ray burst is believed to be caused by internal shocks in the ejecta,” says Galama. “The optical flash recorded during the burst is probably due to the short-lived reverse shock, while the afterglow arises from the forward shock.”

    But although the popular fireball model for the afterglows of GRBs is supported by the new observations, the tremendous energy release of GRB 990123 is a real puzzle. On page 2075, Michael Andersen of the Nordic Optical Telescope on La Palma in the Canary Islands describes his team's analysis of the spectrum of the optical component of the afterglow. Their results, which a team at the Keck II telescope on Mauna Kea, Hawaii, has confirmed, indicate that the source had a redshift (a cosmological measure of distance) of at least 1.6—equivalent to a distance of several billion light-years. Hubble Space Telescope observations made on 8 and 9 February picked out the actual explosion site: the outskirts of a very distant irregular star-forming galaxy.

    Such a distant source makes GRB 990123 the most luminous gamma ray burst seen so far, putting the energy of the explosion that created it second only to the big bang itself. Assuming that the explosion radiated with equal intensity in all directions, it must have generated a colossal 3.4 × 10⁵⁴ ergs—the energy you would get if you took two stars the size of the sun and converted all of their mass instantaneously into energy. In visible light alone, the burst shone as bright as a million normal galaxies.
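The two-suns comparison can be checked with E = mc². A minimal sketch, using standard CGS constants; only the 3.4 × 10⁵⁴ erg figure comes from the article:

```python
# Back-of-envelope check: converting two solar masses entirely to energy.
M_SUN_G = 1.989e33   # solar mass, grams
C_CM_S = 2.998e10    # speed of light, cm/s

energy_erg = 2 * M_SUN_G * C_CM_S**2   # E = m c^2 for two suns
print(f"{energy_erg:.1e} erg")          # ~3.6e54 erg, the order quoted for GRB 990123
```

The result, about 3.6 × 10⁵⁴ ergs, matches the isotropic estimate for the burst to within rounding.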

    Theorists are at a loss to explain this prodigious output. Originally, some suggested that a concentration of mass somewhere between Earth and the source might have acted as a gravitational lens, brightening the burst (Science, 29 January, p. 616). Now, astronomers invoke beaming: If the blast preferentially emitted gamma rays in two opposite directions, and we happen to look down one of the two jets, less energy could account for the observed luminosity.
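The beaming argument reduces to solid-angle geometry: a two-sided jet of half-opening angle θ covers a fraction (1 − cos θ) of the sky, so the true energy is the isotropic estimate scaled by that factor. In the sketch below, the 5-degree jet angle is purely illustrative; only the 3.4 × 10⁵⁴ erg isotropic figure comes from the article:

```python
import math

E_ISO_ERG = 3.4e54   # isotropic-equivalent energy quoted for GRB 990123

def beamed_energy(e_iso, theta_deg):
    """True energy if emission fills two opposite cones of half-angle theta_deg."""
    return e_iso * (1 - math.cos(math.radians(theta_deg)))

# An illustrative 5-degree jet cuts the energy requirement by a factor of ~260.
print(f"{beamed_energy(E_ISO_ERG, 5):.1e} erg")
```

Even a modestly narrow jet brings the required energy down from two solar masses' worth to a small fraction of one, which is why beaming is so attractive to theorists.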

    In another article in next week's Nature, Shrinivas Kulkarni of the California Institute of Technology in Pasadena and his colleagues claim that they see evidence for beaming in their multiwavelength studies of the afterglow of GRB 990123: About 2 days after the burst, the afterglow started to fade faster than before. This “break” in the light curve, which is also seen by Alberto Castro-Tirado of the National Institute of Aerospace Technology in Madrid and collaborators (p. 2069), is what you would expect when a relativistic jet points more or less in your direction and, once it has cooled a certain amount, suddenly starts to expand sideways, increasing the cooling rate.

    Although theorists say this doesn't yet amount to a smoking gun, other hints of beaming have turned up. A group led by Jens Hjorth of the University of Copenhagen studied the polarization of the afterglow—a signature of magnetic fields at the light's source—with the Nordic Optical Telescope (p. 2073) and, to their surprise, didn't find any polarization at all. “This could mean that the field is tangled,” he says, or it could mean that the field is coherent but the burst is strongly beamed, pointing exactly toward us.

    Some theorists are now coming up with explosion mechanisms that would naturally produce beams of radiation—emerging, for example, from the poles of a spinning black hole (see story, p. 1993). But others are withholding judgment. “The theoretical evidence for beaming is quite compelling,” says Rees, “but the observational evidence isn't very strong yet.” Another titanic burst, and another haul of data, may change that.


    Did Cooked Tubers Spur the Evolution of Big Brains?

    1. Elizabeth Pennisi

    A controversial new theory suggests that cooking—in particular, cooking tubers—sparked a crucial turning point in human evolution

    Potatoes, turnips, cassava, yams, rutabagas, kumara, manioc—these are just a few of dozens of underground tubers that sustain modern humans, who boil, bake, and fry them for lunch, dinner, and sometimes breakfast. Now, a small but enthusiastic band of anthropologists argues that these homely roots were also pivotal in human evolution. In work in press in Current Anthropology, Harvard anthropologist Richard Wrangham and his colleagues announce that tubers—and the ability to cook them—prompted the evolution of large brains, smaller teeth, modern limb proportions, and even male-female bonding.

    Already this work, which Wrangham has presented at meetings, has provoked skepticism, for it challenges the current dogma that meat-eating spurred the evolution of Homo erectus, the 1.8- million-year-old species whom some anthropologists say was the first to possess many humanlike traits. But the idea dovetails with another challenge to the primacy of meat-eating as an evolutionary force: the notion that gathering by females was crucial, which another team of anthropologists will present in the May issue of the Journal of Human Evolution (JHE). And some researchers find the new perspective, based on a potpourri of data from both archaeology and modern human societies, quite refreshing. “Cooking as making such a difference is not something that I had previously considered,” says Andrew Hill, a paleoanthropologist at Yale University. “It's nice to have this put forward.”

    But skeptics say there is a very good reason why this idea may be half-baked. If early humans did cook tubers, then they must have controlled fire about 1.8 million years ago—but the first clear evidence for hearths dates to only about 250,000 years ago. “The application of heat for food was a late thing,” says C. Loring Brace, an anthropologist at the University of Michigan, Ann Arbor. “I think [Wrangham] is on the wrong track.”

    Invoking diet to explain the differences between H. erectus and earlier forms such as H. habilis, a species known only from fragmentary fossils, and our more apelike ancestors, the australopithecines, is nothing new. The size difference between males and females in H. erectus is narrower than it is in the australopithecines of half a million years earlier. And the brains of both sexes grew larger while their guts and teeth shrank; the most dramatic changes occurred between specimens assigned to early Homo species and those classed in H. erectus. “There's no other point [in time] when you get such large changes,” says Wrangham.

    The traditional dietary explanation, however, is a shift from nuts and berries to meat. Cut marks on animal bones suggest that humans had mastered meat-eating, perhaps by scavenging carcasses, by 1.8 million years ago. Many researchers have assumed that this high-quality food fueled the rise of H. erectus, enabling it to process food with smaller teeth and guts and nourishing larger brains and bodies. And with more food to go around, females began to catch up with males in size.

    But Wrangham and his Harvard team think a range of evidence, from archaeology to studies of primates and modern human societies, argues against that scenario. They question whether scavenged carcasses could have been a major staple. And they point to hints that even the more apelike australopithecines may have consumed meat more than a million years earlier (Science, 15 January, pp. 303, 368), without evolving big brains or changing their overall size; indeed, other modern omnivores eat meat without large increases in body size.

    Nor do modern tropical hunter-gatherers rely heavily on meat. Among modern tropical African tribes, “there is no case of [people] eating more meat than plant food,” Wrangham points out. For example, anthropologists James O'Connell and Kristen Hawkes of the University of Utah, Salt Lake City, found that although a Hadza hunter in Tanzania might catch one large animal per month on average, weeks often went by with no kills. The Hadza hunt with bows and arrows, technology far more advanced than that of any early humans, yet even for these modern hunters, “this is no way to feed the kids,” says Hawkes.

    But if meat wasn't responsible for the increase in brain size 1.8 million years ago, what was? Cooked tubers, says Wrangham, arguing that these starchy roots would have been quite abundant on the plains of Africa 2 million years ago, even when drier climates made fruits, nuts, and perhaps animal prey scarce. Today, there are 40,000 kilograms of tubers per square kilometer in Tanzania's savanna woodlands, for example. Other tuber-eating animals, such as wild pigs, thrived in Africa during this time, and Wrangham notes that fossil mole rats, which subsist almost entirely on tubers, have been found among hominid remains from 2 million years ago.

    Observations of living apes also offer some precedent for primates digging up roots. For example, chimps in a dry region of the Congo dig down an arm's length to reach the root of a particular vine, then chew the moist root and carry it as a canteen on long trips. Some apes pull up lakeside herbs and eat the subterranean parts, says Wrangham.

    Thus even Australopithecus may have munched tubers. But the real revolution came once human ancestors tasted a tuber baked in a lightning-sparked grass fire and realized the value of cooking, Wrangham asserts. Heat turns hard-to-digest carbohydrates into sweet, easy-to-absorb calories. Using the protein, fat, and carbohydrate makeup of modern fruits, seeds, meats, and tubers, Wrangham's team calculated the caloric value of diets containing various proportions of these foods, assuming a constant total amount of food dry matter. A diet of 60% cooked tubers, about the proportion used in modern native African diets, and no meat boosts caloric intake by about 43% over that of humans who ate nuts, berries, and raw tubers, says Wrangham. A 60% meat diet offers just a 20% advantage.
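    The comparison above is essentially a weighted-average calculation over food types. A minimal sketch of that arithmetic follows; the per-kilogram energy densities below are hypothetical placeholders, not the figures used by Wrangham's team (the article does not report them), so only the method, not the specific percentages, is illustrated here.

```python
# Sketch of the diet-energy comparison described above. The energy
# densities are ASSUMED placeholder values (kcal per kg of food dry
# matter), not Wrangham's actual data.
def diet_energy(proportions, energy_density):
    """Mean digestible energy per kg of dry matter for a diet whose
    food proportions (by dry weight) sum to 1."""
    assert abs(sum(proportions.values()) - 1.0) < 1e-9
    return sum(p * energy_density[food] for food, p in proportions.items())

# Hypothetical energy densities; cooking is assumed to roughly double
# the digestible energy of a tuber.
energy = {"raw_tuber": 1500, "cooked_tuber": 3000, "nuts_berries": 2000, "meat": 2500}

# Baseline: nuts, berries, and raw tubers only.
baseline = diet_energy({"nuts_berries": 0.7, "raw_tuber": 0.3}, energy)

# 60% cooked tubers, no meat -- the proportion cited for modern
# native African diets.
tuber_diet = diet_energy({"cooked_tuber": 0.6, "nuts_berries": 0.4}, energy)

advantage = (tuber_diet / baseline - 1) * 100  # percent caloric gain
```

    With these placeholder numbers the cooked-tuber diet comes out roughly 40% ahead of the baseline, in the neighborhood of the 43% advantage the team reports; the real calculation would use measured protein, fat, and carbohydrate values for each food.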

    “There seems to be a genuine energetic advantage in cooking food,” agrees Yale's Hill. “This could lead to a shift in human behavior” as well as physical changes such as smaller teeth. “Tubers have a lower fiber content [than other plant foods], and that would fit very nicely with this [idea],” adds Leslie Aiello, an anthropologist at University College London. “And cooking would just accentuate this.”

    Wrangham takes his tubers even further, arguing that they set off another whole chain of evolutionary events. As a valuable resource, cooked tubers needed to be safeguarded from theft. Because cooking requires food to be gathered and held in one place rather than eaten during foraging, males could simply wait until dinner was done, so to speak, and steal it from females.

    According to Wrangham, females attempting to thwart theft would use sexual attractiveness to recruit the best male defenders. This tended to offer plenty of mating opportunities for males and less rivalry among them, hence less selection for large males. Thus, while females evolved a larger body size—either to better produce and nourish babies or to fend off stealing—males stayed about the same size, and the size gap between the sexes narrowed. At the same time, the rudiments of the modern human social system—pair-bonding in family groups—took shape.

    To some, that scenario doesn't add up. “I can't imagine there was such a dependency on females cooking tubers that males did nothing,” says Anna K. Behrensmeyer, a paleoecologist at the Smithsonian National Museum of Natural History in Washington, D.C. But another group of anthropologists agrees that gathering and cooking tubers could have altered human behavior. In their upcoming paper in JHE, O'Connell, Hawkes, and Utah colleague Nicholas Blurton-Jones assume that modern gender roles have their roots deep in the past, so that while men were out hunting or scavenging, females, including grandmothers whose own children were grown, brought home the daily bread. Earlier humans foraged for fruits and nuts, which children as well as adults can gather, says Hawkes. But tubers, with their high caloric value, offered a food source rich enough to feed the group without the children's contribution. This “means [the group] is no longer tethered to resources that children can get,” explains Hawkes, and led to longer-lived, better-nourished populations of H. erectus. She and O'Connell also argue that these humans were then able to handle a wider range of environments, spreading into grasslands and cooler climates as the fossil record indicates.

    But Henry Bunn, a paleoanthropologist at the University of Wisconsin, Madison, has a more typical—and skeptical—reaction to the tuber theory. He says Wrangham's team “downplay[s] lots of sound evidence that we have [for meat-eating and fire use] and [accepts] at face value problematic evidence.” A major problem for the theory, notes Hill, is that where there's cooking smoke, there must be fire. Yet he, Michigan's Brace, and most other anthropologists contend that cooking fires began in earnest barely 250,000 years ago, when ancient hearths, earth ovens, burnt animal bones, and flint appear across Europe and the Middle East. From 2 million years ago, the only sign of fire is burnt earth found with human remains, which most anthropologists consider coincidence rather than evidence of intentional fire.

    O'Connell counters that fires for cooking tubers rather than meat “might have been very ephemeral” and left few traces, but most of his colleagues remain unconvinced. “I think there would be evidence if it were [behind] as important an evolutionary leap as [Wrangham's team] suggests,” says Behrensmeyer.

    Even Wrangham agrees that more evidence is needed. “There hasn't been enough satisfactory archaeology for people to get their teeth into,” he says. But he also contends that the more he looks into the question, the more convinced he is of cooking's great importance, even 1.8 million years ago. “What could be more human,” he asks, “than the use of fire?”
