News this Week

Science  13 Apr 2001:
Vol. 292, Issue 5515, pp. 182

    For All But NIH, the Devil Is Indeed in the Details

    David Malakoff

    The Bush Administration this week tried to prove Albert Einstein's maxim that mathematics isn't necessarily reality. While officials talked up the president's 2002 budget request—the details of which were released on 9 April—as a real boon to science, the numbers said otherwise for nearly every agency except the National Institutes of Health (NIH).

    The pronounced tilt toward bioscience is generated by a proposed 13.5% increase for NIH, while the National Science Foundation (NSF), the Department of Energy's (DOE's) Office of Science, and NASA would get essentially flat funding and the U.S. Geological Survey (USGS) would be cut by 8%. “This [budget] is creating an awful lot of discomfort,” says physicist Michael Lubell of the American Physical Society, one of many science lobbyists who hope Congress will come to the rescue. The Senate has already passed a nonbinding resolution that calls for increased funding for NSF and DOE science for the fiscal year that begins on 1 October.

    The bulk of the good news came from Health and Human Services (HHS) Secretary Tommy Thompson, who confirmed that NIH is slated for its fourth major boost in a row. Most of NIH's two dozen institutes and centers would get raises of about 12% under a record $2.8 billion increase for the agency, and the new National Institute of Biomedical Imaging and Bioengineering would start life with a $40 million budget. The number of “new and competing” awards would remain level at 9158, however, as NIH plans to increase the size of individual grants by 4.3%. That would lift the average cost of a “new research start” to $348,000—36% more than in 1998. Asked to justify NIH's good fortune, Thompson said: “Yes, it's a lot of money, and, yes, there's some grumbling going on by sister divisions [at HHS]. But it is an investment for all of us.”

    At NSF, the news was grimmer—but you couldn't tell by listening to director Rita Colwell. She pointed to a $40 million increase in three highly touted research programs—nanoscale science, information technology research, and biocomplexity—while glossing over the fact that the agency's $3.3 billion research account would drop by $15 million. NSF's overall budget would grow by 1.3%, a far cry from last year's 13.5% jump and a big departure from the 5-year doubling plan science advocacy groups are pushing.

    NSF spokesperson Curt Suplee assured reporters that “nothing will be eliminated” under the president's plan. But another senior official was less sanguine. “Clearly, you can't support all these increases without cutting back in other areas,” says Robert Eisenstein, head of the $850 million mathematical and physical sciences directorate. As an example, two disciplines essential to nanoscience—materials science and physics—would have their core research programs trimmed by $9 million and $7 million, respectively. The disconnect is even greater in NSF's $786 million education account. It would grow by $86 million—not enough to pay for a $200-million-a-year presidential initiative to link local school districts with universities to improve precollege math and science instruction. As a result, a decade-long systemic reform initiative would take a 60% hit, to $45 million, and teacher training would shrink by 28%, to $80 million.

    Colwell acknowledged a setback in trying to fund major new facilities. The White House nixed plans to start building the Atacama Large Millimeter/Submillimeter Array, a $550 million project with Europe and Japan (see p. 185), along with networks of ecological observatories and seismic stations. It also froze support for a high-altitude research plane already on order. In a bit of good news, the budget contains a $2500 boost in NSF's annual graduate student stipends, to $20,500, although officials had lobbied for a jump to $25,000.


    Highlights from other agency budgets

    • NASA: The space agency won a 2% boost to $14.5 billion, but “faces some very difficult decisions” given a host of overruns, says Administrator Dan Goldin. For instance, Goldin warns that growing cost estimates for some important missions, such as the Space Infrared Telescope Facility, will limit the number of new starts. Overruns already have forced NASA to scale back the international space station and abandon missions to the sun and Pluto. But Goldin vowed to “find a way” to launch an Earth observer called Triana, a brainchild of former Vice President Al Gore that is now on hold.

    • Funding for Mars exploration fares better, rising from $431 million next year to $659 million by 2005. The future of station research, meanwhile, depends on new European and Japanese contributions to the overall program. But Goldin pledged to have at least 10 experiment racks operating by 2004.

    • DOE: The Office of Science's $3.16 billion budget would remain flat, although one of its four major programs—Biological and Environmental Science—would drop by 8%, to $443 million, largely by eliminating 24 congressionally ordered projects. DOE's microbial genome research would grow by $10 million to $19.5 million. In high-energy and nuclear physics, U.S. funding for Europe's Large Hadron Collider and the Spallation Neutron Source being built in Tennessee would stay on track, but some user facilities would operate on a reduced schedule. The Relativistic Heavy Ion Collider in New York, for example, would operate for only 20 weeks, down from 27 this year. In addition, notes Lubell, the money to universities “declines slightly across many programs.”

    • Outside the Office of Science, renewable energy research takes a 36% hit, to $237 million, while the Nuclear Cities Program, designed to keep Russian nuclear scientists working on peaceful projects, would be cut 80% to $6.6 million.

    • Environmental Protection Agency: The Office of Research and Development (ORD) would drop 7%, to $535 million, but EPA officials say that's a $3 million increase after $42 million in earmarks added by Congress last year is removed. The STAR extramural grants program would remain unchanged at $97 million. There are “no wild swings” in most research programs, says budget analyst Amy Battaglia. But agency scientists are bracing for the loss of 36 of ORD's 1971 full-time positions, mainly through attrition.

    • USGS: The 8% cut will strike hardest at the agency's Toxic Substances Hydrology and National Water-Quality Assessment (NAWQA) programs. The toxics effort drops by 71%, to $4 million, while NAWQA falls over a waterfall, dropping by 30% to $45 million. “This means people out of work,” says USGS director Chip Groat. Up to 300 positions are in danger if USGS can't find partners to foot the bill for 18 long-term studies of river basins and watersheds. Funding for biological research would decline 7%, to $149 million, including a 52% cut in programs that distribute biological information.

    • National Institute of Standards and Technology: NIST's standards measurement laboratories will jump 12%, to $337 million. But the White House wants to suspend new research projects with industry in the $145 million Advanced Technology Program, requesting just $13 million to finish existing projects and conduct yet another study of the controversial initiative.

    • National Oceanic and Atmospheric Administration: The budget catch for key science programs was off 4% to $1.3 billion, with most of the cuts coming from the National Marine Fisheries Service, which helps enforce the embattled Endangered Species Act.

    • The president's budget must now work its way through Congress. Decisions about the ultimate size of a proposed tax cut and defense spending will largely determine how much money is left for domestic discretionary programs like research, whose projected $95 billion budget represents less than 5% of a $1.96 trillion pie.


    Plan to Close Zoo Lab Draws Fire

    Elizabeth Pennisi

    The Smithsonian Institution plans to close two research centers as part of a move to consolidate and reshuffle its scientific activities. The plan, released last week, has raised an outcry among many researchers, who worry that other valuable programs may be cut as well. An influential member of Congress has already asked officials to rescind the plan, which is part of the agency's newly unveiled 2002 budget request (see previous story).

    Considered by many the U.S. equivalent of a ministry of culture and science, the Smithsonian consists of 16 museums, the National Zoo, and a half-dozen research centers. Slated for the ax are the Center for Materials Research and Education, which seeks to improve preservation and curation techniques for museum artifacts, and the Conservation and Research Center (CRC), a 1290-hectare rural breeding and study facility for threatened or endangered species, operated by the zoo. The proposed closures are “a redirection of spending,” as the Smithsonian focuses on a few key disciplines, says Lawrence Small, a former investment banker who last year took over as secretary of the 150-year-old institution.

    The proposed cuts must still pass muster with Congress, which provides about 60% of the institution's $750 million budget, and with the Smithsonian's governing board, which is expected to review them next month. Representative Frank Wolf (R-VA), a member of the House appropriations panel whose district includes the center, immediately chastised the Smithsonian for the plan and called for a reversal of the decision. “I have let Smithsonian officials know of my extreme displeasure,” Wolf said on 6 April, 1 day after word of the cuts was leaked to the media.

    Smithsonian researchers say that they are shocked by the news, which was announced by the zoo's director, Lucy Spelman, during a surprise 4 April visit to the conservation center some 100 kilometers west of Washington, D.C. “It's kind of amazing,” says Devra Kleiman, a former zoo researcher and now a conservation biologist at Conservation International in Washington, D.C. “At a time when most zoos are making major efforts to hire people to do research and conservation that link the work they do in zoos with the work they do in the field, the Smithsonian National Zoo, which was a model for that 25 years ago, [is] eliminating those functions.”

    In a staff memo, Spelman explained that “the resources are simply not available to maintain the CRC as a world class facility,” even though Small insists that conservation research remains important to the Smithsonian. The CRC gets about $5 million per year in federal funds. The materials center, which was set up in 1963 to service all the museums, has a federal budget of about $3.3 million per year, but Small says it is “not an area of high priority” because most individual museums now have their own preservation programs. Small's plan would also create discipline-based institutes in astrophysics, geology and earth sciences, and the life sciences. A fourth, perhaps in conservation biology, would encompass the Smithsonian's remote field sites in Panama, Florida, and Maryland. Currently, each administrative unit, be it the CRC or a museum department, operates its own research program.

    With personnel details yet to be announced, the proposed closures have had a devastating effect on staff members. “Morale is just right down in the basement,” says one biologist who requested anonymity. Beyond the impact on their job status, some museum researchers worry that creating disciplinary institutes could conflict with the Smithsonian's stated mission of increasing the diffusion of scientific knowledge by putting too much distance between its scientific and educational activities.


    Location Neurons Do Advanced Math

    Laura Helmuth

    A mouse's minutes are numbered if it rustles through a field within hearing distance of a barn owl. The owl knows where to aim its talons even in the dark, thanks in part to a precise map of auditory space engraved in its brain's inferior colliculus, located in the brainstem. Now researchers have discovered that space-specific neurons in this map can perform more sophisticated computations than are commonly credited to neurons: Most neurons simply add incoming signals to come up with an answer, but neurons in the owl's auditory map multiply.

    If a mouse squeaks to an owl's right, the owl's right ear registers a slightly louder signal, and slightly sooner, than the left ear. Earlier research by Masakazu Konishi and colleagues showed that a set of auditory neurons calculates this interaural level difference (ILD) and interaural time difference (ITD) and sends the results to neurons in the inferior colliculus that are precisely tuned to particular locations. Humans use ILD and ITD cues as well, but the human auditory map isn't as well understood as the barn owl's.

    To discover how the owl's space-specific neurons process the incoming timing and level information, neuroscientists José Luis Peña and Konishi of the California Institute of Technology (Caltech) in Pasadena outfitted 14 barn owls with headphones and monitored the auditory map's responses to pairs of sounds. As they report on page 249, multiplication best describes how the neurons behave; a multiplicative model predicts how the neurons respond to different kinds of stimuli with about 98% accuracy.

    “This is the cleanest evidence of multiplication in the brain,” says Christof Koch, also of Caltech but not involved in the project. He points out that many neural tricks in humans as well as owls seem to require multiplication, such as keeping neurons in the visual cortex trained on one spot even when the eyes or head move. But most earlier reports of multiplicative neurons relied on “dirty multiplication,” he says—some combination of addition and multiplication that wasn't as satisfying as that in this new report.

    Two properties illustrate what it means for a neuron to multiply. First, when very faint ITD and ILD signals correspond to the same region of space (giving mutually affirming indications that a mouse is underneath a log off to the right, say), neurons in the inferior colliculus fire robustly. If the two subthreshold signals were added, in contrast, their combined stimulation wouldn't rile up the space-specific neurons enough to fire. Second, the lack of either an ITD or an ILD signal can veto a space-specific neuron's firing. In multiplication, 2 × 0 = 0; in a barn owl's inferior colliculus, a strong ITD signal × no ILD signal = no response. As Peña explains, the neuron acts like an “and” gate, requiring both signals, rather than an “or” gate, which could respond to just one.

    Archetypal neurons don't compute this way. Normally, a neuron receives a host of excitatory and inhibitory signals of various volumes along its dendrites. When the signals add up to surpass some threshold, the neuron fires. Such a neuron acts like a transistor in an electronic circuit, says Koch. But a neuron with the power to multiply, he says, “is more like a little processor; computationally it's much more powerful.”
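    The contrast between the two kinds of neuron can be sketched in a few lines of code. The drive values and the 7.0 threshold below are arbitrary illustrative numbers, not measurements from the study; the sketch only reproduces the qualitative behavior described above.

    ```python
    THRESHOLD = 7.0  # arbitrary firing threshold, in made-up drive units

    def additive_response(itd_drive, ild_drive):
        """Archetypal neuron: dendritic inputs simply sum, and the
        neuron fires only if the sum crosses the threshold."""
        return (itd_drive + ild_drive) > THRESHOLD

    def multiplicative_response(itd_drive, ild_drive):
        """Space-specific neuron as described: inputs multiply, so the
        cell acts like an 'and' gate on the two cues."""
        return (itd_drive * ild_drive) > THRESHOLD

    # Property 1: two faint, mutually affirming cues fire the
    # multiplicative neuron, while their mere sum falls short.
    faint = 3.0
    print(additive_response(faint, faint))        # 6.0 < 7.0 -> False
    print(multiplicative_response(faint, faint))  # 9.0 > 7.0 -> True

    # Property 2: a missing cue vetoes the multiplicative neuron
    # (anything times zero is zero), whereas one strong cue alone
    # could still drive an additive neuron past threshold.
    strong = 10.0
    print(additive_response(strong, 0.0))         # 10.0 > 7.0 -> True
    print(multiplicative_response(strong, 0.0))   # 0.0 < 7.0 -> False
    ```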

    Mathematically, the behavior of these space-specific neurons is easy to explain. Neurophysiologically, it's another matter. “We don't know anything about how [multiplication] is computed” in these neurons, says Koch. Peña says his and Konishi's “next step is to figure out the biophysical mechanisms that underlie” the multiplication.


    Philodendrons Like It Hot and Heavy

    Leigh Dayton*
    *Leigh Dayton writes from Sydney, Australia.

    SYDNEY, AUSTRALIA—Philodendrons may look like leafy dullards, but their image conceals a racy secret: hot sex. The reproductive antics of the tree philodendron, Philodendron selloum, generate as much heat as would a 3-kilogram cat. The behavior, says biologist Roger Seymour, apparently serves to attract and reward pollinating beetles, who bask in its warmth. And it's made possible by a finely honed system for rapidly getting oxygen to heat-producing cells.

    Seymour, of the University of Adelaide in Australia, had shown previously that P. selloum and a distant relative, the sacred lotus, can regulate the temperature of their blossoms during flowering. Their ability to keep their blooms warmed to about 35°C, regardless of the ambient air temperature, was a stunning finding, says plant physiologist John Beardall of Melbourne's Monash University. “It was a major conceptual leap for many people,” he says.

    Seymour's new work offers the first explanation of how P. selloum manages to get oxygen into the flower without the lungs and circulation system available to hot-blooded mammals. In an upcoming issue of the Journal of Experimental Botany, Seymour reports that it all begins when the plant “breathes” in large amounts of oxygen through stomata, or pores, in its flowers. The oxygen moves through the little spaces between florets, the tiny flowers dotting the plant's spiky blooms, in the sterile male band and then diffuses through the stomata and into the interstitial gas spaces. The oxygen eventually makes its way to mitochondria, power packs located inside the cells of florets, where it is used in the oxidation of lipids to release heat energy.

    A key factor in a plant's “breathing” is the density of the stomata, its “airway openings.” Seymour found that each floret has an average of 168 stomata, 1/20 the number in a leaf. “The airways are matched precisely to supply oxygen to the center of the floret and no more,” Seymour explains. More stomata would cause excessive heat loss through evaporation, he speculates, while fewer would not let in enough oxygen for heat production.

    The warm flowers release scent that attracts the large scarab beetle, Erioscelis emarginata, the only species that pollinates P. selloum. When the beetles arrive, the outer leaf envelops them, creating what Seymour calls a “nightclub for beetles.” The insects “enjoy” themselves overnight and depart the following evening after being dusted with new pollen.

    “I think it's amazing that a plant can sustain the sort of metabolic [activity] that we expect of some mammals and birds,” says Philip Withers, an expert in animal heat production at the University of Western Australia in Perth. Adds Seymour: “They do everything that animals do, except get up and walk … for now.”


    Canada Seeks Economic Payoff Across Species

    Wayne Kondro*
    *Wayne Kondro writes from Ottawa.

    OTTAWA—Canada has committed $175 million toward a $400 million initiative in functional genomics that aims to promote economic prosperity as well as a better understanding of disease. The first round of winners, announced last week by Genome Canada, emphasizes regional development as well as top-notch research on humans, plants, and animals.

    “I've been dreaming about this for 20 years,” says Willie Davidson, dean of science at Simon Fraser University in Burnaby and co-leader of a project to map the roughly 60 chromosomes of the Atlantic salmon. “What Genome Canada has allowed us to do is to dream and to dream big.”

    Last year, the government supported the creation of a nonprofit corporation to develop a national genomics initiative that would span agriculture, health, forestry, fisheries, and the environment (Science, 10 March 2000, p. 1732). Genome Canada so far has raised $400 million from a combination of federal, provincial, and industry sources, with the hope of additional funding. Officials have now awarded grants for the first 17 projects, which were selected from among 31 proposals by a scientific panel chaired by C. Thomas Caskey, president of Houston's Cogene Biotech Ventures Ltd. “It was science first,” says Genome Canada president Martin Godbout. “But we [also] wanted to become world leaders in very specific niches.”

    Nine of the 17 projects are health-related and will receive a combined $67 million over 3.5 years. Three others, totaling $8 million, will examine ethical, legal, and social issues. Two environmental grants total $5.3 million, and there are individual grants in agriculture ($13.4 million), fisheries ($3.4 million), and forestry ($6.7 million). “It allows us to have a very coordinated, integrated approach” to genomics research, says Graham Scoles, head of the department of plant sciences at the University of Saskatchewan in Saskatoon and team leader for a project on how abiotic stresses such as cold, drought, heat, and salinity affect wheat and canola.


    The potential economic payoff from those sectors is a primary reason for the initiative, according to Minister of Industry Brian Tobin. “The government of Canada is convinced that there are enormous dividends to be had by strategic investment to promote innovation in certain key sectors,” he says. The initiative is unique in being “integrated across species,” adds Bartha Knoppers, a professor of law at the University of Montreal, whose team will compare international policies in such areas as DNA sampling and banking. “That is something that no other program in the world has done,” she notes.

    In addition to the project grants, Genome Canada will spend $47 million to establish five geographically distributed “science and technology” sites. Equipped with the latest technology, they will serve all scientists in their regions. It has also allocated $1.4 million a year for 3.5 years to five regional genome centers to manage the projects and develop new ones, commercialize findings, and solicit additional funding. A second competition is scheduled for this fall, with awards to be announced in the spring of 2002.


    Robots Enter the Race to Analyze Proteins

    Robert F. Service

    BERKELEY, CALIFORNIA—Structural biologists are about to get a helping hand in their effort to map the three-dimensional arrangement of atoms in proteins. Earlier this week, researchers at the Advanced Light Source (ALS) here were scheduled to begin using a new robot to automate the laborious process of mounting protein crystals in a synchrotron's x-ray beamline and then collecting and analyzing the data. Once fully operational, the robot could boost the number of protein structures that can be solved at a beamline nearly 10-fold to about 1000 per year. A similar setup is also gearing up at the European Synchrotron Radiation Facility in Grenoble, France. With these and other robotic systems now on the drawing boards, the throughput of protein structures is poised to make a revolutionary jump.

    “We would like it to be so automated, you could walk away from it for days or weeks at a time,” says Thomas Earnest, who is overseeing the installation of the new robotic beamline at the ALS. The automation is needed, says Earnest, because the current speed of x-ray crystallography is a bottleneck for genomics programs trying to reveal the genetics, structure, and function behind each of the body's more than 30,000 proteins. If researchers are to have a hope of doing that anytime soon, they need to industrialize the process of solving protein structures, Earnest says.

    Initially, however, the robotic beamline's primary mission will be to speed drug discovery. The beamline's $2.4 million price tag has been split between the San Diego biotech company Syrrx and the Genomics Institute of the Novartis Research Foundation (GNF) in La Jolla, California. Research teams at GNF led by chemists Ray Stevens and Peter Schultz have already automated earlier steps in the crystal-making process of purifying proteins from bacteria and coaxing them into crystals. With the robotic beamline now set to go on line, “we're finally seeing everything come together,” Stevens says. Once complete, Syrrx and GNF plan to use the automated systems to look at the way hundreds of different small drug candidates bind inside the active sites of different proteins, information that they can then use to design better binding compounds, Stevens says.

    High-throughput studies of a wider variety of proteins should be close behind. Like all ALS beamlines, the new facility will dole out 25% of its beam time to general users (Syrrx and GNF will get the other 75%), and Earnest says teams at several other synchrotrons have already started looking into duplicating the robotic beamline at their facilities. Some groups are working to automate not only sample handling and data collection, as at ALS, but also the work of creating the proteins, purifying them, growing crystals, and processing and analyzing the x-ray data. “To be successful in high-throughput crystallography, all the pieces have to work together,” says Keith Hodgson, director of the Stanford Synchrotron Radiation Laboratory in Palo Alto, California.

    Although the ALS robot can't do everything, it's an important piece, Hodgson says. It houses a liquid nitrogen-cooled Dewar loaded with 64 separate protein crystals ready for analysis. Researchers sitting at a terminal outside the experimental hutch simply type in the order of crystals they want to study. The robot selects the first crystal, removes it from the Dewar, mounts it, centers it in the beam, and fires brief test pulses to find the optimal alignment for collecting data. Then, depending on the quality of the crystal, either the software instructs the beamline instruments to collect a full set of x-ray data, by tracking how the beam of x-rays ricochets off repeating planes in the crystal, or it simply moves the machine on to the next crystal.

    Eventually, Earnest says, the ALS team plans to incorporate additional software packages that automatically process and analyze the data, spitting out the final structure at the end. “The ultimate goal is crystals in, structures out,” he says. He expects that it will be another 2 years before the full process is complete.


    A Plan to Release Data Within Six Months

    Eliot Marshall

    AIRLIE HOUSE, VIRGINIA—In the 1990s, gene sequencers were under the gun to make their raw data public as rapidly as possible. Now, it's the turn of the gem-cutters of biology—the people who decipher the shape of protein molecules—and some are not too comfortable with the notion. Last week, international leaders of the field rejected a proposal that labs now gearing up to roboticize the study of proteins give away structural coordinates immediately, or within 3 weeks of completion. Instead, at a meeting at an estate here in Virginia's horse country, they agreed to speed up data release, but on a timetable that will allow for the filing of patent applications.

    The plan for immediate data release was drafted at a meeting a year ago in Hinxton, U.K., home of the Wellcome Trust's genome center.* Many said it reflected the ideals of British scientists, who were among the leaders in pushing for rapid release of genome data. But several members of the Airlie group said the proposed short deadlines wouldn't allow enough time to refine and validate structural information. Others, noting that structural data may be valuable for drug design, argued frankly that too-rapid data release would impede patenting. In the end, the group endorsed the release of “most” protein structures from high-throughput labs “as rapidly as possible,” with a maximum delay of 6 months for proteins of “special interest.” Today, the rule is that investigators release coordinates when they publish a structure.

    The strongest opposition to the Hinxton plan came from Japanese delegates, who said it can take many months to process proteins and prepare U.S. patent filings. Toichi Sakata, representing the agency that funds Japanese structural biology—the Ministry of Education, Culture, Sports, Science, and Technology—indicated that Japanese taxpayers want a return on investments in protein analysis in the form of intellectual property. The Japanese group proposed the 6-month limit.

    Once the Japanese had spoken, senior European and U.S. scientists said they liked the 6-month delay, too. Udo Heinemann of the Max Delbrück Center for Molecular Medicine in Berlin saw a “fundamental difference” in the way structural genomics is carried out in Europe and the United States. He said his funding agency views his work as being closer to drug development than basic biology. Joel Janin of the Laboratory of Structural Enzymology and Biochemistry in Gif-sur-Yvette, France, felt that “the average European group's view is probably closer to the Japanese position” than the Hinxton model. And biophysicist Stephen Burley of the Rockefeller University in New York City said, “I'm not sure that there's agreement within U.S. groups” that protein structure data are commercially “precompetitive.”

    A minority objected to the 6-month rule but didn't dissent. “This is a complete reversal” of earlier goals, said Cyrus Chothia, a theoretician of structural biology at the Medical Research Council Laboratory of Molecular Biology in Cambridge, U.K. He chided his colleagues for what he saw as a retreat from data sharing. One meeting organizer detected signs of gambler's fever in the patent discussion: “It reminds me of the lottery,” he said. “Very few people will win, but everyone dreams they will.”

    Participants did agree, however, to increase data sharing and avoid duplication by exchanging lists of “target” proteins in advance. And they outlined a new system of fast peer review and electronic publication, bypassing paper journals to get results out quickly. But they declined to adopt a plan advanced by the U.S. National Institute of General Medical Sciences (NIGMS), part of the National Institutes of Health (NIH), to create a central, public Web site at NIH listing targets claimed by each lab. NIH will do this for its own grantees. One group that specifically opposed listing its own targets is a private consortium led by the Wellcome Trust, which is recruiting about 10 company sponsors for a program to solve and publish 200 protein structures per year (Science, 30 March, p. 2531). A trust attorney explained that the companies do not want to tip competitors to potential research plans, but are willing to give away structures once they've been completed.

    NIGMS director Marvin Cassman ascribes the difference between the Hinxton and Airlie meetings to the fact that “last year, structural genomics was pie in the sky; this year, the pie is on the plate,” and everyone is looking for a slice. NIGMS is leading the effort to speed up protein analysis, having awarded $150 million in structural genomics grants to seven centers last year (Science, 29 September 2000, p. 2254). The centers funded under this program are likely to be held to more rigorous data-release standards than the rules adopted at Airlie, Cassman said. NIGMS official John Norvell explained that relatively few families of proteins are represented in the public databases at present, and NIGMS is pushing its centers to identify new proteins at the rate of about 200 per year by 2006.

    * The First International Structural Genomics Meeting took place in Hinxton, U.K., in April 2000; the second, at Airlie House near Warrenton, Virginia, 4 to 6 April 2001. The third will be in Berlin.


    Galaxy Mappers Detect Wiggly Cosmic Order

    Charles Seife

    BALTIMORE, MARYLAND—A wiggly pattern in the way galaxies are arrayed has yielded a new recipe for the early universe. Last week, at meetings on both sides of the Atlantic Ocean,* astronomers working on the ambitious Two Degree Field (2dF) galaxy survey announced that they had seen subtle variations in the distribution of matter at different scales. The discovery provides a new method of calculating the amounts of different types of matter in the cosmos shortly after the big bang.

    “That would be extremely exciting if they've seen it,” says Max Tegmark, a physicist at the University of Pennsylvania in Philadelphia. Knowing the ratio of ordinary “baryonic” matter to unseen dark matter in the universe is key to deciding among competing cosmological models, Tegmark says.

    The 2dF survey, which uses the 4-meter Anglo-Australian Telescope in Coonabarabran, Australia, is halfway through its attempt to map 250,000 galaxies. The distribution of those galaxies depends crucially on events in the very early universe. For a few hundred thousand years after the big bang, matter and energy were a seething plasma, ringing with the concussion of the cataclysmic explosion. Pressure waves rattled through the plasma, compressing matter, letting it expand, and compressing it again. As the universe grew and cooled, the compressed regions gave rise to massive clusters of galaxies, while the rarefied ones stayed relatively free of matter. So by looking at the distribution of galaxies and voids, astronomers can figure out what the acoustic waves in the early universe were like, just as they can with the distribution of peaks and dips in the strength of the cosmic background radiation (Science, 28 April 2000, p. 595; 19 January 2001, p. 414).

    Different types of matter, however, transmitted those pressure waves in different ways. Ordinary baryonic matter would have compressed and expanded very strongly, as it was constantly being slammed about by the radiation pressure. “Exotic” dark matter—the kind believed to consist of so-far-undetected particles—would have oscillated more weakly, as it barely interacts with radiation. (That's what makes it invisible to telescopes.) Thus, wiggles in the distribution of galaxies—bumpy features in the graph that describes clumps of matter and voids on various scales—can reveal how much of the primordial matter was baryonic.

    “When you make plots, you do see bumps, hills, and valleys,” says Cambridge University astrophysicist Ofer Lahav, a member of the 2dF team. “When we all first saw the pictures, we were very excited.” Although Lahav advises caution in interpreting such preliminary results, he says the measurements indicate that baryonic matter in the early universe weighed in at 5% of the mass needed to give space the shape that cosmologists prefer. That's smack-dab between the 4% based on theories of how atomic nuclei were generated in the early universe and the 6% implied by measurements of cosmic background radiation.

    The 2dF data collected so far aren't strong enough to settle any cosmological arguments. University of Chicago astrophysicist Michael Turner thinks the results are intriguing but reserves judgment for the moment. “What's great about cosmology now is that the results are just going to get better,” he says. “We don't have to debate each other until we're blue in the face. There's more data coming.” Further observations by the survey and the rival Sloan Digital Sky Survey could provide a more definitive answer within the year.

    • * “The Dark Universe,” Baltimore, Maryland, 2–5 April; Royal Astronomical Society National Astronomy Meeting, Cambridge, United Kingdom, 2–6 April.


    Big Bang's New Rival Debuts With a Splash

    1. Charles Seife

    BALTIMORE, MARYLAND—Will the big bang make way for a big splat? It might, if a new idea by four cosmologists catches on. Their theory, posted online* and unveiled here last week in a surprise talk at the Space Telescope Science Institute, provides what other scientists are calling the first credible alternative to the reigning big bang model and its long-standing add-on, inflation.

    “It's a neat idea. In a way, we didn't have any competitors to inflation until now,” says David Spergel, an astrophysicist at Princeton University. “I'm more comfortable at the moment with inflation, but that's probably because I've lived with it longer.”

    Inflation theory has been cock of the walk since the early 1980s, when physicist Alan Guth concocted it to solve several problems that had been plaguing big bang theorists. For instance, the universe appears to be “flat” (a technical term describing the large-scale curvature of space) and has roughly the same properties everywhere—features that a simple big bang model can't easily explain. Inflationary theory proposes that the very early universe went through an amazingly violent and rapid expansion for less than 10^−32 second, which gave the universe precisely the flatness and isotropy that we see today. Almost every cosmologist accepts inflation, and for 20 years no one has come up with a new scenario that so well matches scientists' observations.

    “This is a fairly ambitious project,” admits Princeton physicist Paul Steinhardt. Along with his student Justin Khoury and physicists Neil Turok of Cambridge University and Burt Ovrut of the University of Pennsylvania in Philadelphia, Steinhardt has created what they call the “ekpyrotic model”: a version of the early universe that explains flatness and isotropy without invoking inflation.

    At first glance, the new model—based on an extension of string theory known as M-theory—seems surreal. It takes place in 11 dimensions, six of which are rolled up and can safely be ignored. In that effectively five-dimensional space float two perfectly flat four-dimensional membranes, like sheets drying on parallel clotheslines. One of the sheets is our universe; the other, a “hidden” parallel universe. Provoked by random fluctuations, our unseen companion spontaneously sheds a membrane that slowly floats toward our universe. As it moves, it flattens out—although quantum fluctuations wrinkle its surface somewhat—and gently accelerates toward our membrane. The floater speeds up and splats into our universe, whereupon some of the energy of the collision becomes the energy and matter that make up our cosmos. Because both the moving membrane and our own membrane start out roughly flat, our postcollision universe remains flat as well. “Flat plus flat equals flat,” says Steinhardt.

    Because the membrane floats so slowly, it has a chance to equilibrate, giving it more or less the same properties over its entire surface, although the quantum fluctuations cause some irregularities. That explains our universe's roughly (but not exactly) isotropic nature. But whereas inflation solves the problems of flatness and isotropy by a quick, violent process, Steinhardt points out, “this model works in the opposite sense: slowly, but over a long period of time.” Another attractive feature is that it gets rid of the singularity at the beginning of the universe: Instead of a pointlike big bang, the universe is formed in a platelike splash.

    “It's the first really intriguing connection between M-theory and cosmology,” says Spergel. “This is sort of an Ur-big bang.” And although ekpyrotic theory might seem like an import from cloud-cuckoo-land, future real-world experiments should be able to tell whether it or inflation is correct. The two models send different sorts of gravitational waves rattling around the universe—waves that might one day be detectable by successors to current gravitational-wave experiments.

    Experimental verification might take a less welcome form. The model's name, Steinhardt explains, comes from the Stoic term for a universe periodically consumed in fire. That is because at any moment another membrane could peel off, float toward us, and destroy our universe. Indeed, Steinhardt says, we might have already seen the signs of impending doom. “Maybe the acceleration of the expansion of the universe is a precursor of such a collision,” he says. “It is not a pleasant thought.”


    An Orbital Confluence Leaves Its Mark

    1. Richard A. Kerr

    Whenever they could, the 19th century geologists who split time into epochs chose as a boundary a grand transformation of the period's dominant animal group. But some intervals dragged on for so long without an appropriate transformation that geologists chose another marker to break them up. For the division between the Oligocene and the Miocene, they picked an episode of sediment erosion. At the time, nobody knew its ultimate cause. Now, paleoceanographers have finally put their finger on it—and in the process, they've identified a new way to make an ice age.

    On page 274 of this issue, paleoceanographer James Zachos of the University of California, Santa Cruz, and colleagues report that 23 million years ago a rare combination of the shape of Earth's orbit and the tilt of its rotation axis led to a brief climatic cooling and buildup of ice on Antarctica. This, in turn, lowered sea level and exposed the shallow sea floor to erosion, creating the Oligocene-Miocene boundary. The convergence of orbital variations would be “a very reasonable explanation” for the glacial boundary event, says paleoclimate modeler Thomas Crowley of Texas A&M University in College Station, as well as further support for the power of orbital variations to influence climate.

    New insights into climatic events of the Oligocene-Miocene transition come from sediment laid down between 20 million and 25.5 million years ago in the western equatorial Atlantic and retrieved in two Ocean Drilling Program cores. From these two cores, Zachos and colleagues extracted complete, finely detailed records of two sets of stable isotopes preserved in the carbonate skeletons of microscopic bottom-dwelling animals called foraminifera. The changing ratio of oxygen-18 to oxygen-16 along a core reflects varying bottom-water temperature as well as changes in the volume of glacial ice in the world. The ratio of carbon-13 to carbon-12 reflects changes in the geochemical cycling of carbon, in this instance most likely due to changes in the productivity of ocean plants. Zachos and colleagues developed a time scale by matching cyclic variations in core sediment properties, such as color, to changes in the shape or eccentricity of Earth's orbit and in Earth's axial tilt, each of which keeps a steady orbital beat through the ages. Then they applied this scale to their isotope records.

    The time scale allowed them to determine the pace of climate and carbon-cycle changes. Now they could compare the timing of events on Earth with that of orbital cycles. They found that ocean climate throughout the 5-million-year record varied in step with orbital variations, from the 400,000-year and 100,000-year variations in orbital eccentricity to Earth's 41,000-year nodding as it changes tilt. And the strength of the climate response varied in proportion to the strength of the orbital variation, especially in the case of the 100,000-year variation. Such strict correlation, with climate lagging slightly behind orbital variations, convinced them that the orbital variations were altering climate back then. When eccentricity was low, climate cooled. It also cooled when tilt steadied to only modest variations without excursions to high tilt.

    Those are the same climate-orbit relations invoked to explain climate change during the past million years, including the coming and going of the ice ages tied to changing eccentricity. In the case of tilt, the connection between orbital variation and climate is straightforward—when the planet tilts far over, high-latitude land gets extra sunlight and warmth in the summer, discouraging the year-to-year accumulation of snow that would otherwise form ice sheets. When tilt is low, there's less summer sunlight in the high latitudes and glaciation picks up.

    Inspecting the entire 5-million-year record, Zachos and colleagues found one geologic moment when the orbital forces for climatic cooling and ice sheet building came together. At 23.0 million years ago, eccentricity dropped to low levels and variations in tilt nearly disappeared. Simultaneously with these ideal orbital conditions for ice sheet formation, bottom waters cooled and ice volume increased. An ice age had arrived, at least by the standards of a time when ice was typically limited to modest amounts on Antarctica. When the orbital confluence disappeared, 200,000 years later, so did the extra ice. The excursion into deeper glaciation may have been helped along, they say, by a preceding million-year-long decline in the greenhouse gas carbon dioxide that is suggested by the trend in carbon isotope composition.

    Researchers are pleased that there's more to the Oligocene-Miocene boundary than a conveniently timed gap in sedimentation. “It's a fabulous data set,” says paleoceanographer Kenneth Miller of Rutgers University in Piscataway, New Jersey. The coincidence of an “orbital anomaly”—simultaneous extremes of two different orbital variations—and a major climate event suggests a new way for orbital variations to trigger climate change, says Crowley. “The idea that ‘cold’ summer orbits drive you into glaciation and ‘hot’ summer orbits drive you out is one of the linchpins” of the orbital theory of climate, he says. At the Oligocene-Miocene boundary, climate seems to shift under an orbital double whammy.


    Rising Global Temperature, Rising Uncertainty

    1. Richard A. Kerr

    Climate researchers are grappling with a growing appreciation of climate prediction's large and perhaps unresolvable uncertainties, while remaining steadfast that the threat justifies action

    The headlines in January were dramatic. “Scientists Issue Dire Prediction on Warming; Faster Climate Shift Portends Global Calamity This Century,” said The Washington Post. “Warming of Earth Raises New Alarm,” cried the International Herald Tribune. The source of all this media excitement was a dramatic increase in the worst-case projections of climate change over the next century. The latest report from the United Nations-sponsored Intergovernmental Panel on Climate Change (IPCC)—the closest thing to a global scientific consensus in the contentious business of climate forecasting—said the world could be as much as 5.8°C warmer in 2100 than it is today. Five years ago, the panel set the upper end of the range at 3.5°C. Climatologists, however, were more impressed by something that drew little public notice: The range of the IPCC's projections has actually widened over the past 5 years.

    To many climate modelers, this is not surprising. Climate forecasting, after all, is still in its infancy, and the models rely on a sparse database: a mere 100 years of global temperatures. Most agree that this database now shows that the world has warmed over the past century and that greenhouse gases are the prime suspects. But while new knowledge gathered since the IPCC's last report in 1995 has increased many researchers' confidence in the models, in some vital areas, uncertainties have actually grown. “It's extremely hard to tell whether the models have improved” in the past 5 years, says climatologist Gerald North of Texas A&M University in College Station; “the uncertainties are large.” Climate modeler Peter Stone of the Massachusetts Institute of Technology says, “The major [climate prediction] uncertainties have not been reduced at all.” And cloud physicist Robert Charlson, professor emeritus at the University of Washington, Seattle, adds: “To make it sound like we understand climate is not right.”

    In the politically charged atmosphere of climate forecasting, uncertainties are often seized upon as excuses for inaction. That worries many of the researchers who believe the stubborn uncertainties in climate forecasting are being downplayed. Most of them see a need to begin controlling greenhouse gases now. “We can't fully evaluate the risks we face,” says Stone. “A lot of people won't want to do anything. I think that's unfortunate.” Greenhouse warming is a threat that should be taken seriously, say Stone and others toward the skeptical side. Possible harm could be addressed with flexible steps that “evolve as knowledge evolves,” says Stone. By all accounts, knowledge will be evolving for decades to come.

    The challenge for climate researchers—and the accompanying uncertainty—come in three arenas: detecting a warming of the globe, attributing that warming to rising levels of greenhouse gases, and projecting warming into the future. As it happens, new knowledge reported by IPCC clearly narrows the uncertainties inherent in the detection problem and strengthens the link to greenhouse gases, but it leaves projection of future warming more uncertain.

    “The detection problem seems to me to be almost solved,” says observational climatologist David Gutzler of the University of New Mexico in Albuquerque. The IPCC puts global warming over the 20th century at 0.6° ± 0.2°C, as measured by instruments near Earth's surface. That's a broader range than IPCC reported in 1995, which might suggest increasing uncertainty, but back then, less effort was put into quantifying uncertainty. Now the range is pegged at the 95% confidence level, making it “very likely” the world has warmed, according to the parlance adopted for the first time by IPCC. “The most dramatic difference since '95 is the decrease in the uncertainty” associated with recent warming, says statistical climatologist Michael Mann of the University of Virginia in Charlottesville, who contributed to the report. He credits the increased confidence to more sophisticated and effective statistical techniques for analyzing sparse observations.

    The globe very likely did warm, but “attribution is much harder,” notes Gutzler. To pin the warming on increasing levels of greenhouse gases requires distinguishing greenhouse warming from the natural ups and downs of global temperature. In 1995, IPCC found that, despite remaining uncertainties, “the balance of evidence suggests that there is a discernible human influence on global climate.” A rather wimpy statement, but it was the first positive attribution made by IPCC. This time around, the attribution statement is dramatically beefed up: “… most of the observed warming over the last 50 years is likely [66% to 90% chance] to have been due to the increase in greenhouse gas concentrations.”

    That's stronger than the draft statement leaked last spring (Science, 28 April 2000, p. 589), which is fine with modeler Jerry D. Mahlman, who recently retired as director of the National Oceanic and Atmospheric Administration's Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. “I'm quite comfortable with the confidence being expressed,” says Mahlman, who was not involved in writing any part of the report. Mahlman cites three developments that increase his confidence. First, it's warmer than it was, even warmer now than in 1995. Second, the current warmth looks extreme, even unique, in the past 1000 years. And third on Mahlman's list is the performance of the climate models.

    The report states that confidence in the models has increased. Some of the model climate processes, such as ocean heat transport, are more realistic; some of the models no longer have the fudge factors that artificially steadied background climate (Science, 16 May 1997, p. 1041); and some aspects of model simulations, such as El Niño, are more realistically rendered. The improved models are also being driven by more realistic climate forces. A sun subtly varying in brightness and volcanoes spewing sun-shielding debris into the stratosphere are now included whenever models simulate the climate of the past century.

    With all the new improvements, the most sophisticated models can now simulate the bumpy rise in global temperature seen in the past 100 years—including the once mysterious rise and temporary plateau at midcentury, now attributed to the cooling effects of aerosols. The models are “getting quite a remarkable agreement” with reality, says modeler John Mitchell of the Hadley Centre for Climate Prediction and Research in Bracknell, United Kingdom, who headed the report's detection and attribution chapter. All of this gives Mahlman and many others confidence that most of the warming is likely due to increasing greenhouse gases.

    “That's stretching it a bit,” says satellite climatologist John Christy of the University of Alabama, Huntsville, who was an author of the chapter on observed climate change. Stone says a confident attribution to humans “may be right,” but “I just know of no objective scientific basis for that.” They and others agree that the dramatic 20th century warming, following millennium-long records of a cooler world, has a certain visceral appeal. But they remain cautious about the ability of the models to attribute the warming to greenhouse gases. “I don't know that they reproduce climate any better” than they did 5 years ago, says climate modeler Tim P. Barnett of the Scripps Institution of Oceanography in La Jolla, California. Climate modeler Jeffrey Kiehl of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, agrees that “we have made progress, but sometimes progress means you learn you need to know more.”

    For Kiehl, a striking example of increasing uncertainty is the pollutant hazes of aerosol particles from fires of all sorts, from fossil fuel burning to cooking fires. Any model must be told how much aerosol there is and how it will behave—whether it's bright enough to reflect solar energy back to space and cool the planet or dark enough with soot to absorb solar energy and warm Earth. It could also cool the atmosphere indirectly by forming new cloud droplets that would reflect solar energy even better than the aerosol particles.

    “The more we learn [about aerosols], the less we know,” says Kiehl. That's evident in the body of the IPCC report. It says that the uncertainties are so large that a best estimate with error bars of the indirect cloud effect of aerosols is still impossible. In fact, the report increases the range of possible aerosol cloud effects over 1995 estimates. Now they span from no effect to a cooling large enough to almost compensate for the total warming from all current greenhouse gases.

    In addition to uncertainties about what to put into models, many researchers see looming—and frustratingly recalcitrant—uncertainties in the way models respond to inputs. “The uncertainties are large—as large as 20 years ago,” says Texas A&M's North. The traditional measure of model uncertainty is the range of climate sensitivity, defined as the amount the atmosphere would warm if atmospheric carbon dioxide doubled. The first official look at the greenhouse problem, a 1979 U.S. National Research Council study headed by the late Jule Charney, concluded that a carbon dioxide doubling—which is expected by the end of the century—might warm the world as little as a modest 1.5°C or as much as a disastrous 4.5°C. The 1.5° to 4.5°C range of climate sensitivity has been repeated unchanged in four IPCC reports now—it's like Planck's constant, quips one modeler, unchanging with time.
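The 1.5° to 4.5°C range can be restated as a spread in how strongly the climate system responds to a fixed push. A minimal sketch, assuming the standard simplified formula for CO2 radiative forcing, ΔF = 5.35 ln(C/C0) W/m² — a common approximation that is not given in the article:

```python
import math

# Simplified CO2 radiative forcing (an assumed approximation, not a
# figure from the article): dF = 5.35 * ln(C / C0), in W/m^2.
def co2_forcing(concentration_ratio):
    return 5.35 * math.log(concentration_ratio)

# Forcing from a doubling of atmospheric CO2:
forcing_2x = co2_forcing(2.0)  # roughly 3.7 W/m^2

# The IPCC's 1.5-4.5 C warming range for a doubling then corresponds
# to a warming per unit forcing of roughly:
lam_low = 1.5 / forcing_2x
lam_high = 4.5 / forcing_2x

print(f"doubling forcing ~ {forcing_2x:.1f} W/m^2")
print(f"warming per unit forcing ~ {lam_low:.2f} to {lam_high:.2f} K/(W/m^2)")
```

On this reading, the stubborn factor-of-three spread in climate sensitivity is a factor-of-three spread in how much the models amplify the same, comparatively well-understood forcing.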

    An unchanging climate sensitivity and its implied lack of progress bother most researchers. Mahlman puts the best face on it by arguing that although the range hasn't changed, the chance that the real sensitivity falls somewhere in that range has increased over the years, from 2 in 3 in 1979 to perhaps 9 in 10 today. North, although able to go along with the IPCC's statement attributing 20th century warming to greenhouse gases, sees the “huge range of climate uncertainty among the models” as a sign of fundamental problems. “There are so many adjustables in the models,” he says, “and there is a limited amount of observational data, so we can always bring the models into agreement with the data.” Models with sensitivities to CO2 inputs at either extreme of the range can still simulate the warming of the 20th century, he notes, suggesting that adjustables like aerosols and clouds are compensating for the sensitivity differences.

    The uncertainties give some researchers pause when IPCC so confidently attributes past warming to the greenhouse, but projecting warming into the future gives almost everybody the willies. When the IPCC report came out in January and the headlines trumpeted the prospects for a scorching fin de siècle, climate researchers were instead struck by the growing recognition of uncertainties. There was not only the unchanging climate sensitivity, but also a growing realization that where humans are involved, prediction gets even harder. The near doubling of the range of possible warming is due largely to expectations that, rather than fouling the air more and more, countries will likely clean up their acts, reducing aerosol emissions and the compensating cooling they would have produced. This is all well and good, but “social uncertainty is hard to discuss,” says Mahlman, “because we don't have a clue how people are going to react 30 years from now. The scientific problem you evaluate, the social problem you just hand-wave.” Witness President George W. Bush's derailing of U.S. participation in the Kyoto Protocol for controlling greenhouse gas emissions.

    What policy changes would researchers who struggle daily to understand the climate system recommend in the face of this cascade of uncertainties? Most see cause for concern about warming, despite all the doubts. “A number of uncertainties are still with us,” says Kiehl, “but no matter what model you look at, all are producing significant warming beyond anything we've seen for 1000 years. It's a projection that needs to be taken seriously.” Modeler Linda Mearns of NCAR would emphasize a goal of identifying all the uncertainties rather than quickly narrowing the known ones, but “there's no evidence the problem will go away. It's clear there's still great concern about the future.” Even Washington's Charlson, who chides IPCC for not addressing “big scientific uncertainties,” concludes that because “the evidence for chemical change of the atmosphere is so overwhelming, we should do something about it.”


    Greenhouse Warming Passes One More Test

    1. Richard A. Kerr

    Are humans indeed warming the world? If so, will future warming be big enough to matter? Confident answers depend in large part on the credibility of climate models. Greenhouse critics claim modelers can get any answer they like about warming simply by adjusting any of the numerous inputs whose values in the real world remain uncertain. Climate model running on the warm side? Crank in a bit more pollutant haze to shade the planet and cool it down, they say, and everything will look fine. Modelers have long argued that constraints such as the need to simulate current climate and the history of atmospheric warming keep their models more honest than that. Now a new, independent reality check from the ocean has strengthened their case.

    On pages 267 and 270 of this issue, two groups of climate researchers report that two climate models have passed a new test: simulating the warming of the deep oceans during the past half-century. Their success “provides stronger evidence climate is changing,” says climate modeler Simon Tett of the Hadley Centre for Climate Prediction and Research in Bracknell, United Kingdom, “and it's likely due to human influence.” However, a conflict between the two studies underscores the difficulties in gauging how bad greenhouse warming could be.

    Why worry about the ocean, when greenhouse warming of the atmosphere is what life on the surface will have to deal with? “The ocean is the flywheel of the global climate system,” explains climate modeler Tim P. Barnett of the Scripps Institution of Oceanography in La Jolla, California. The ocean holds so much heat that it tends to steady the rest of the climate system. “If there's one place you want to get it right, it's there,” he says. Last year, oceanographer Sydney Levitus of the National Oceanic and Atmospheric Administration (NOAA) in Silver Spring, Maryland, and his colleagues reported that the top 3000 meters of oceans worldwide had gained 18.2 × 10^22 joules of heat between 1955 and 1996 (Science, 24 March 2000, p. 2126). In their new paper, they calculate that less than a tenth as much heat as that went into warming the global atmosphere and melting sea ice and glaciers. Their conclusion: If you're keeping track of the heat trapped by the strengthening greenhouse, the ocean is almost all that matters.
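As a back-of-envelope check (not a calculation from the paper), the reported ocean heat gain can be converted into a mean heating rate per square meter of Earth's surface; the 41-year span follows from the dates quoted, while the seconds-per-year and surface-area values are our assumed round numbers:

```python
# Convert the reported ocean heat gain into a mean heating rate.
HEAT_GAIN_J = 18.2e22          # Levitus et al., top 3000 m, 1955-1996
YEARS = 1996 - 1955            # 41-year interval
SECONDS_PER_YEAR = 3.156e7     # approximate
EARTH_SURFACE_M2 = 5.1e14      # approximate total surface area of Earth

rate = HEAT_GAIN_J / (YEARS * SECONDS_PER_YEAR * EARTH_SURFACE_M2)
print(f"mean heating rate ~ {rate:.2f} W/m^2")
```

The answer comes out to a few tenths of a watt per square meter — tiny next to the ~240 W/m² of sunlight the planet absorbs, which is why so sustained an imbalance shows up most clearly in the ocean's slow-to-change heat content.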

    With that pivotal role in mind, both Barnett and Levitus tested greenhouse warming in a climate model against how the ocean has actually warmed. Barnett used the Parallel Climate Model developed at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, and Levitus used the model from NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey. Both groups drove the warming with the increasing greenhouse gases of the past century, and both found that the models' world oceans warmed by just about as much as observed. And it appears that the ocean warming was likely due to increasing greenhouse gases, not the random oscillations of the climate system that modelers call internal variability. “The rising heat content of the past 50 years is way out of the bounds of internal variability” produced in a model, says modeler Thomas Delworth of GFDL, a co-author of the Levitus paper. The warming in the model ocean so closely matched the strength and geographical distribution of the actual warming that Barnett calculated with confidence exceeding 95% that human-produced greenhouse gases are behind real-world warming.

    While the results of the two models further support the emerging consensus that humans are warming the world (see main text), they also drive home problems with making predictions from models. Just how bad warming will get by the end of the century, say, will depend on how much greenhouse gas—principally carbon dioxide—enters the atmosphere and how strongly the climate system reacts to it, a property called climate sensitivity. For more than 20 years, researchers have been estimating that climate sensitivity to a doubling of carbon dioxide is between a modest 1.5°C warming and a searing 4.5°C. Indeed, the NCAR and GFDL models reflect that recalcitrant uncertainty in their climate sensitivities of 2.1°C and 3.4°C, respectively.

    How, then, can the two models agree about the past century of ocean warming? The explanation may lie in one of the remaining knobs on the climate machine: aerosols, the microscopic particles of sulfate, soot, and organic crud produced by fossil fuel burning, biomass burning, and volcanoes. Researchers are still figuring out how much aerosol of each sort is up there, how effectively each absorbs solar energy or reflects it back to space, and how each affects the number and size of cloud particles, another potent player in the climate system. The two models assumed different fossil-fuel aerosol histories, and the NCAR model ignored volcanic aerosols—discrepancies that may have compensated for differences in the models' sensitivities. As a result, says climate modeler Myles Allen of the University of Oxford, “both models could be right for the wrong reason.” But whichever is more realistic, Allen says, the finding that real-world warming is not likely due to internal variability stands—although clearly, some better informed knob twiddling is still in order.


    Long-Term Data Show Lingering Effects From Acid Rain

    1. Kevin Krajick*
    1. Kevin Krajick is a writer in New York City.

    Some U.S. ecosystems are recovering slowly from the effects of acid rain, sparking a call for more stringent pollution controls

    Acid rain is like the proverbial bad penny: Every time you think you've passed it off, it shows up in your change again. Studies over the past few years have turned up more and more evidence of its lingering effects. Now a major research synthesis provides the most comprehensive view to date—it is not encouraging—and prescribes a drastic cure. This call for more stringent controls comes just in time to fuel what promises to be a ferocious debate over stricter federal regulation of not only acid emissions but also the main greenhouse gas, carbon dioxide.

    Progressively tougher pollution rules over the past 3 decades have reduced U.S. emissions of the main acid rain ingredient, sulfur dioxide (SO2), by about 40% from its 1973 peak of 28.8 million metric tons a year. By 2010, SO2 emissions should be less than half of 1973 levels. But in the March issue of BioScience, 10 leading acid rain researchers say victory toasts are premature. They say power plants, the main contributor, must cut SO2 emissions another 80% beyond the current mandate to undo past insults to sensitive soils and waters in the northeastern United States and, by implication, elsewhere. They also assert that these reductions, which would amount to an overall 44% cut in sulfur emissions, may bring only partial recovery to fish and trees by 2050. At the same time, acidifying emissions of nitrogen oxides (NOx)—still relatively less regulated—are level or even mounting, causing collateral damage.
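The percentages above can be checked with simple arithmetic. A sketch, in which the implied power-plant share of emissions is our inference from the 80% and 44% figures (assuming both are measured against the same baseline), not a number stated in the article:

```python
# Arithmetic check on the emission figures quoted in the text.
PEAK_MT = 28.8                       # 1973 peak, million metric tons/yr
current_mt = PEAK_MT * (1 - 0.40)    # about 40% below peak today
mandate_2010_mt = PEAK_MT * 0.5      # "less than half of 1973 levels"

# One reading of the 80%-vs-44% figures: if an 80% cut applied only to
# power plants yields a 44% cut overall, the power-plant share of
# emissions works out to 0.44 / 0.80. This share is our inference,
# not a figure from the article.
implied_share = 0.44 / 0.80

print(f"current emissions ~ {current_mt:.1f} million metric tons/yr")
print(f"2010 mandate ceiling ~ {mandate_2010_mt:.1f} million metric tons/yr")
print(f"implied power-plant share ~ {implied_share:.0%}")
```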

    “Coming from such a consensus, this study solidifies many things,” says Rona Birnbaum, chief of the Environmental Protection Agency's (EPA's) acid rain assessment program. “There was uncertainty especially over long-term soil impacts. Now it's undeniable.” James Galloway, chair of environmental sciences at the University of Virginia in Charlottesville, adds that “the old controls were clearly not enough. Acid rain is a lot more complex than we at first thought.”

    The problem is that acid-causing substances have built up in the ground and are still causing cascading chemical effects that could continue for decades. The first good evidence of acid rain's long-term effects came in 1996 from New Hampshire's Hubbard Brook Experimental Forest, where data have been collected since the early 1960s. That study showed that half the base cations of the nutrients calcium and magnesium, which neutralize acids, had leached from soils in the past few decades; as a result, vegetative growth was near a standstill (Science, 12 April 1996, p. 244). The researchers blamed excess acids in the soil for dissolving the cations into drainage waters much faster than weathering bedrock below could replenish them. Adding to the insult, they said, was the fact that smokestack scrubbers installed to reduce particulates were also removing soot rich in calcium that had previously replaced some nutrients the acids were leaching.

    The new study, a collation of further Hubbard Brook data and recently published studies from many other sites, adds considerable breadth and detail. Gregory Lawrence, a research hydrologist with the U.S. Geological Survey (USGS) and a co-author, says many northeastern soils “have a long memory” for acid deposition. Residual sulfur and, to a lesser degree, nitrogen are being slowly released by microbes and plants, he says, bumping the cations off the soil particles they normally cling to. This makes the cations highly soluble and easily washed away during rains and snowmelts. As a result, nutrient levels at many sites are showing little improvement, and some are actually worsening. Some of the hardest hit areas overlie calcium-poor sandstones in the Catskill mountains in New York state, where nearly all nutrients have disappeared in places, right down to glacial till and bedrock. Work by Lawrence and others also shows that once nutrients are depleted, excess acids mobilize the soils' abundant aluminum; usually held in harmless organic form, aluminum is poisonous when it dissolves.

    Effects on aquatic life have been known since the early 1990s. Some 15% of lakes in New England and 41% in New York's Adirondack Mountains are chronically or episodically acid, says the report, and many such lakes have few or no fish.

    Good evidence of effects on trees has been slower in coming, but that too has mounted lately. Although the exact role of acid deposition is unclear, extensive diebacks and loss of vigor among red spruces and, more recently, sugar maples have been chronicled for at least 10 years. Co-author Christopher Cronan, a biologist at the University of Maine, Orono, says maples, especially in Pennsylvania, appear to be malnourished; at some sites, aluminum is glomming onto rootlets, blocking uptake of whatever meager nutrients are left. In spruces the problem is compounded. In 1999 researchers at the University of Vermont in Burlington traced much of their decline to baths of acid fog and rain that leach calcium directly from needles—a reaction that leaves needle membranes unable to cope with winter freezing. It's death by a thousand cuts, says Cronan: Instead of killing directly, acid rain usually leaves trees susceptible to drought or insects, which finish the job. Weakened conifers in southern Appalachia, for instance, are now being defoliated en masse by invading woolly adelgids, exotic parasitic insects. The chain reaction continues. The endangered spruce-fir moss spider, a small tarantula that needs shade from the trees, is now found at just a few known sites, according to entomologists.

    This is just one signal that the problem is not limited to the Northeast. Arthur Bulger, a fish ecologist at the University of Virginia in Charlottesville and also a co-author, says that acid rain effects are now becoming more apparent in the Southeast, some 20 years after they appeared in the Northeast. He and colleagues have chronicled these effects in a paper in the Canadian Journal of Fisheries and Aquatic Sciences last year and another in press at Hydrology and Earth System Sciences. The reason for the time delay, says Bulger, is that southern soils are generally thicker than northern ones and thus able to sponge up far more acid before leaking it to the surrounding environment. But now that soils are saturated, acid levels in nearby waters are skyrocketing. Bulger's colleagues have studied 50,000 kilometers of streams; in a third, he says, fish are declining or are already gone. He predicts another 8000 kilometers will be affected in coming decades.

    Until recently, acid rain has been largely off the radar screen in the western states, in part because the population is smaller and the coal that fuels power plants there is much lower in sulfur. But now the region's cattle feedlots are booming, as is the human population. The former churn out lots of manure, and the latter insist on driving more and larger motor vehicles. Both produce acid-causing NOx. Jill Baron, a USGS ecologist in Fort Collins, Colorado, and colleagues are studying 300- to 700-year-old spruces in the Rockies. In the journal Ecosystems last fall, they said trees downwind of populous areas show high levels of nitrogen and low ratios of magnesium in their needles. Nearby streams are also showing dramatic changes in their populations of diatoms, shifting from species that do well in the region's usually nutrient-poor waters to those common in overfertilized waters. “Too bad most people get a lot less excited about diatoms than about fish,” says Baron. “We're not nearly as bad as the East, but we're beginning a trajectory that will take us there.”

    The Northeast research team has assembled its data into a model to project the possible impact of various emissions cuts. Lead author Charles Driscoll, director of Syracuse University's Center for Environmental Systems Engineering, says that at current regulatory levels, the most sensitive environments such as Hubbard Brook will probably cleanse themselves, but very slowly. Chemical balances might return by about 2060, he says; after that, lake zooplankton might come back within 10 years; some fish populations, 5 or 10 years after that. If Congress were to call for an 80% SO2 reduction from power plants below the current target for 2010, streams would probably bounce back by 2025, and some biological recovery in them might come by 2050.

    “The question of soil and trees is a lot harder to answer,” says Driscoll. In a communication to Nature last October, John Stoddard, an EPA scientist in Corvallis, Oregon, suggested that some soils with high sulfur-adsorbing capacities such as those in the Southeast might take centuries to recover.

    The BioScience study received major attention on Capitol Hill, where momentum for stricter air controls has been building. Just before the study came out, President George W. Bush announced that he was breaking his campaign promise to limit CO2 emissions—produced largely by the same power plants implicated in acid rain—raising the political stakes in the debate. With the 1990 Clean Air Act amendments up for reauthorization this spring, a half-dozen bills have been knocking around, proposing 40% to 65% reductions in both SO2 and NOx. On 15 March, New York Representative Sherwood Boehlert and Vermont Senator James Jeffords, both Republicans, introduced companion proposals calling for a 75% cut in SO2 below what is currently mandated and a 75% cut in NOx from recent levels—and a rollback of CO2 to 1990 levels. Boehlert, head of the House Committee on Science, said the BioScience paper “is a wake-up call, and it should lead anyone who truly believes in science-based policy to support acid rain control.”

    Bush has said he will support stricter acid controls, although he hasn't mentioned any numbers. Moderate Republicans, who are furious that he reneged on CO2, vow to keep all the pollutants tied together—a strategy that could greatly complicate a solution. Dan Riedinger, a spokesperson for the power industry's Edison Electric Institute, says the call for huge reductions now “is a little premature.” He points out that ozone-curbing regulations scheduled to start in 2004 will also cut acid emissions and that some controls mandated in 1990 kicked in only last year. “The current program needs to be given more time to work,” he says. “We always knew it would take decades.” That last part may be the only thing on which everyone agrees.


    Where the Brain Tells a Face From a Place

    1. Laura Helmuth

    Cognitive neuroscientists are beginning to figure out how the brain recognizes a chair as a chair

    NEW YORK CITY—It's a complicated world out there, visually, full of things that look a lot alike. Yet people rarely identify a TV remote control as a cell phone or confuse a pencil with a swizzle stick. The brain makes sense of this jumble of incoming visual stimulation with the help of an approximately 20-square-centimeter patch of tissue called the ventral temporal cortex. It lies just internal to the ears, on the bottom surface of the brain. In the past few years, brain imaging studies have identified one region of this tissue that specializes in recognizing faces and another that processes places. More recently, researchers have found that even mundane objects such as shoes, chairs, and plastic bottles also light up distinct areas in this part of the brain. But as Nancy Kanwisher of the Massachusetts Institute of Technology (MIT) points out, “it's ridiculous to think that there's an area of the brain for every conceivable category of objects.”

    So just how does the brain know that a chair is a chair? Researchers aren't sure how object recognition works, but several presentations on 27 March at the Cognitive Neuroscience Society meeting here brought them closer to an answer. Some speakers reviewed a classic debate about whether face areas are specialized for faces or for making detailed discriminations within a category. Another study—using a new technique for analyzing the entire landscape of activity in the ventral temporal cortex, rather than just the areas that light up the most—suggests that different parts of the area may respond to certain features of an object. And one researcher identified some of the features that may allow the ventral temporal cortex to recognize objects.

    Psychologist Bruce McCandliss of Cornell University compares figuring out how the ventral temporal cortex works to a huge mapping project. “The first explorers came in and found mountain peaks [of activation] and put their names on them. Then someone else came in and noticed the valleys. Now other people are starting to look at the whole landscape and say, ‘Hey, here are some basic geothermal principles'” that explain why certain peaks and troughs of activation correspond to different types of visual objects. “We're just starting to get to that level,” McCandliss says.

    Navigating, social primates

    People are experts at recognizing faces. They discriminate faces quickly and accurately, yet if you could see through the eye stalks of some alien, human faces would look almost identical. Early evidence that this ability is localized in the brain came from certain stroke patients; strokes that hit the ventral temporal cortex can cause a syndrome called prosopagnosia—the inability to recognize faces.

    It's tricky to figure out which brain region within a large lesion once processed faces, however, so researchers have turned to functional magnetic resonance imaging (fMRI) for a more precise determination. The technique reveals, voxel by voxel (like a computer screen's pixel-by-pixel display), which virtual boxes of brain are most active when someone is looking at a particular type of object. Kanwisher and her colleagues, for example, have used functional brain imaging to pinpoint an area in the ventral temporal cortex that lights up most when people view normal faces as opposed to scrambled faces, hands, houses, or other objects.

    A few centimeters away from the face area, a region of cortex lights up when people view all kinds of buildings and landscapes. This place area doesn't give a hoot or an action potential about faces, and the face area likewise doesn't zing when someone's looking at a place—further supporting the notion that these areas are highly specialized.

    So where did face-specific and place-specific areas in the brain come from? Kanwisher says “there are two very compelling, very different stories”: experience and evolution. Evolutionarily, faces and places are “the most important categories that navigating, social primates have to recognize.” But a lifetime of experience could explain the brain's specialization as well. After all, people recognize their friends and find their way to the bus stop every day.

    Both probably play a role. Newborn babies prefer to look at pictures of faces compared to other objects, supporting the idea that people are born with some predisposition to favor faces. But children who had cataracts for the first few months of life have subtle difficulties in discriminating similar faces, suggesting that experience trains face processing as well.

    Faces, birds, and greebles

    Not everyone agrees that the brain necessarily treats faces differently from other objects. The face area does respond robustly to faces, says Isabel Gauthier of Vanderbilt University in Nashville, Tennessee, but “it looks like it's not coding for faceness per se.” Rather, she suggests that faces are special because of how people look at them. Every snowflake may have a unique shape, but people still see each one as snow. In contrast, people identify faces individually.

    Suspecting that the face area specializes in making fine-grained distinctions among individuals in a category, Gauthier and her colleagues tested experts in nonface categories: birdwatchers and car enthusiasts. When expert birders looked at pictures of cars, their face areas just shrugged. But when birders saw pictures of songbirds, the level of activity in their face areas jumped. The opposite was true for carwatchers, showing that the face area isn't tuned only to faces, the team reported last year.

    The subjects in this experiment had about 20 years of experience with birds or cars. The researchers wanted to know how quickly the brain can authorize the face area to respond to new stimuli, so they designed a category of similar objects called greebles (see photos). Greebles have a few different features that can be configured in slightly different patterns. At first, they all look pretty much alike. But with about 8 hours of practice, subjects can identify individual greebles and say what family they're in and what gender they are. (Greebles are social creatures.)

    Gauthier and her colleagues tested whether greeble experts used the face area to distinguish among the little critters—and they do. At the conference, she reviewed these studies and suggested that the critical factor linking greebles, birds, cars, and faces is the strategy experts use to differentiate among objects in the category. Experts don't focus on individual features but recognize the bird or car as a whole. She speculates that someone who took up antique dealing or radiology would likewise, with practice, use the same recognition strategies and thus activate the face area when distinguishing Louis XIV chairs or spotting tumors on a computed tomography scan. “Maybe there is an innate disposition for faces,” she says, “but that can't be the entire story.”

    Mountain peaks or landscapes

    The ventral temporal cortex doesn't fire its neurons exclusively to faces and other expertly recognized objects. Several groups have found distinct patterns of activation for many types of objects, although the activation isn't as strong or as localized as that for faces and places. These studies—like those that have mapped the face-specific areas—traditionally record voxels that surpass some threshold of activation when a subject looks at a particular object compared to some other object. In contrast, Jim Haxby's group at the National Institute of Mental Health has developed a technique that surveys the entire topography of activation in the ventral temporal cortex—the hills and valleys as well as the peaks of activity. Haxby and his colleagues looked at this broader pattern of activity when they showed subjects pictures of shoes, plastic bottles, faces, places, and other objects, and they came up with some intriguing results.

    Haxby reported that he could predict from the pattern of activity which category of object a person was looking at. Furthermore, if he excluded the voxels corresponding to the face area, he could still use the activation in the rest of the ventral temporal cortex to predict whether someone was looking at a face, suggesting that the entire area participates in face recognition. And if he excluded all voxels except those in the face area, he could use the face area's activation to predict whether someone was looking at a shoe or a cat, say.

    Analyzing fMRI data in these new ways demonstrates that the brain can maintain “unique representations of an essentially unlimited variety of faces and other objects,” Haxby says. He suggests that the ventral temporal cortex may be organized according to yet-unknown features, just as other parts of the visual system are tuned to edges or orientations or colors.

    “This is an important set of findings,” says MIT's Kanwisher. “People have been making lots of loose talk [that representations of objects can be distributed widely in the brain], but this is the first serious demonstration I've seen that the information is present in many areas of the ventral pathway.”

    What sorts of features might the ventral temporal cortex be tuned to? Rafi Malach of the Weizmann Institute of Science in Rehovot, Israel, presented some of the first clues. Face and place areas aren't in the exact same spots in everyone's brains, but the face area is almost always slightly lateral to the place area. Malach suspected that this might have something to do with the fact that people usually see faces in the center of their visual fields, but when they look at a landscape, they scan a much larger area. He presented people with the same pictures either in the center of their visual field or at the edges. Central images activated parts of the ventral temporal cortex close to the face area, while images at the edge of the visual field activated place areas.

    Haxby suggests that the ventral temporal cortex isn't organized as a patchwork of specialized face, place, and shoe areas, but as a distributed map of different features, such as where something usually falls in the visual field, as Malach's work suggests. In this way, this part of the brain could represent a panoply of objects and categories according to how much they activate different feature-sensitive spots.

    With studies such as Haxby's and Malach's, “people are moving to a framework level” of understanding object representation, says Cornell's McCandliss. But it will be some time before they know for sure how the brain tells a warbler from a greeble.


    Paleontological Rift in the Rift Valley

    1. Michael Balter

    A bitter dispute over rights to hunt for fossils in the Tugen Hills indicates that the old way of regulating paleontology in Kenya is in flux

    TUGEN HILLS AND NAIROBI, KENYA—Martin Pickford stands in the middle of a long, dry gully lined with tamarind and acacia trees. A light breeze ushers a handful of cottony clouds across the blue African sky toward Lake Baringo, some 20 kilometers to the east. Pickford points with a sunburned arm to a spot on the gully's bank. “That is where we found the humerus,” he says proudly. “And just over here, one of the femurs and an upper canine tooth, and further down there, parts of the mandible and the molars.” Last October, a team led by Pickford, a geologist at the Collège de France in Paris, and Brigitte Senut, a paleontologist at France's National Museum of Natural History, found 13 fossil fragments of what they believe is the earliest known ancestor of modern humans. Although scientists are still debating whether the fossils belong to the human family (Science, 23 February, p. 1460), their undisputed age of about 6 million years—roughly the time when genetic evidence suggests that the human line split from that of the chimpanzees—means that these remains could help researchers untangle the increasingly twisted roots of the human evolutionary tree. And just last month, during a 2-week season here at the foot of Kenya's rugged Tugen Hills, Pickford and Senut found several more fossils—including the middle portion of a lower jaw—that they believe also belong to this claimed early hominid, which they have named Orrorin tugenensis.

    Such a dramatic find would normally be cause for rejoicing among human origins researchers. Instead, Orrorin's discovery has set off a bitter internecine battle. Pickford and Senut's very right to excavate here has been challenged by some other scientists, most notably anthropologist Andrew Hill of Yale University, who claims that the pair is encroaching on turf his team has been studying since the 1980s. Hill and other researchers argue that Pickford and Senut have flouted long-established rules governing paleontology research in Kenya. Pickford and Senut deny these charges, countering that they have acted legally and followed all required procedures. They maintain that a campaign against them has been orchestrated primarily by paleontologist Richard Leakey, a claim Leakey vehemently denies.

    Turf battles among paleoanthropologists are nothing new. For decades, scientists working up and down the great Rift Valley of Africa and Asia—where many of the world's most important hominid fossils have been found—have fought over the right to unearth these precious keys to humanity's evolutionary past. The fossil wars have left wounds that have taken years to heal (Science, 14 January 1983, p. 147; 11 December 1987, p. 1502; and 14 April 1995, p. 196). But the fight over the Tugen Hills seems to run deeper than most of these other disputes, and it has implications for the way paleontology will be managed and conducted in a country that holds vital importance for the field.

    Pickford and Senut see the battle as a struggle over the power of the National Museums of Kenya (NMK) and the Leakey family, which pioneered paleontology in Kenya and has long dominated the NMK. The NMK has traditionally held virtual veto power over permit applications, and it is the official repository for fossils unearthed in the country. Pickford and Senut's work is being supported by a rival museum system—called the Community Museums of Kenya (CMK)—established in the late 1990s with the help of native Kenyans who argue that their heritage has been monopolized for too many years by too few researchers. They are getting important support from government officials, apparently including Kenyan President Daniel arap Moi, a member of the Tugen tribe who is originally from the Baringo area.

    Although most researchers agree that wider access to fossil sites is a desirable aim, many observers are fearful that the rivalry between the two museums could ultimately create chaos by leading to overlapping permits and competing claims and, perhaps, limit access to the fossils themselves.

    Deep roots

    There are four principal actors in the drama now being played out in Kenya: Martin Pickford, 57, who was born in the United Kingdom but moved to Kenya when he was 3 years old; Richard Leakey, 56, born in Kenya to the famous fossil-hunting pair, Louis and Mary Leakey; Hill, 54, a British subject now living in the United States; and Eustace Gitonga, 50, a Kenyan who once headed the NMK's exhibits department and who is now director of the CMK.

    The paths of some of the players first crossed many years ago. Pickford and Leakey, for example, attended high school together in Nairobi, and Pickford says the two were “great buddies” during those days, visiting each other's homes and spending weekends and holidays together. Leakey agrees that he and Pickford were once good friends. Pickford and Hill were anything but buddies. The two were graduate students together during the 1970s at the University of London under the legendary geologist William Bishop, who died in 1977. “Andrew [Hill] despised Martin [Pickford], and vice versa,” says a colleague who knows them both well. “He [Pickford] wasn't the sort of person I wanted to be chummy with,” Hill says.

    Although they disliked each other, Hill and Pickford followed similar trajectories. Both took an interest in the Baringo area: In 1970, Hill and other colleagues found some australopithecine remains near the lake, and in 1974 Pickford discovered a molar in 6-million-year-old sediments that he now believes may belong to Orrorin. And after they received their doctorates in 1975, both ended up working at the NMK, Hill as a research fellow and later administrator with an institute linked to the museum, and Pickford as head of the museum's antiquities and monuments department. By that time, Leakey had become director of the NMK, a post he held from 1968 to 1989.

    Both Hill and Pickford left Nairobi by the early 1980s. But they coincidentally encountered each other at the museum again on 2 July 1985. Pickford was passing through Nairobi, together with Senut, on his way to a research site in Uganda, and Hill was doing research of his own in the museum. What happened on that day is a matter of intense dispute, but it set the stage for the subsequent turf wars.

    On 3 July, Leakey wrote to Pickford charging him with attempting to steal documents from the museum's archives and banning him from the NMK's research facilities. The documents in question were some of Bishop's notebooks, which his widow had donated to the museum archives.

    Pickford insists he was only borrowing the notebooks to photocopy them and had signed them out in the archives' logbook—a version of events that is supported by Senut. He is convinced that Hill had something to do with the accusation against him, because, he says, Hill was keeping an eye on him while he searched through the archives. Leakey says Pickford's borrowing of the notebooks was “exceedingly irregular” and that there was “circumstantial evidence” that he intended to steal them.

    Over the next 9 years, Pickford wrote repeatedly to Leakey, the museum's board of directors, and finally the government ministry in charge of the museum, protesting his innocence and trying to clear his name. But he remained banned from the museum. Because the NMK's endorsement of a scientist's application to the government for a research permit was traditionally required to do paleontology work in Kenya, his banishment from the museum amounted to a blanket ban on doing research in the country.

    Declaration of war

    In 1995, Pickford teamed up with Gitonga to declare public war on Leakey. Gitonga, an artist who constructed many of the museum's exhibits, had his own beef with the NMK's former director. He says Leakey forced him to leave the NMK in 1987 after accusations—which Gitonga denies—that he had mishandled funds allotted to erect a large sculpture of a dinosaur in front of the museum. Pickford and Gitonga launched a broadside against their common enemy in the form of a book entitled Richard E. Leakey: Master of Deceit. It recounts not only their own battles with Leakey but also those of other researchers, both Kenyans and foreigners, whose correspondence with Leakey they had managed to acquire. The book paints a highly unflattering portrait of Leakey, accusing him of various schemes and manipulations designed to increase his power and personal wealth at the expense of the museum and other scientists.

    To many researchers, as Pickford himself puts it, the book was evidence “that I had finally gone off my rocker.” Pickford adds that he is “not proud” of the book but insists that he had “no alternative” but to publicly attack Leakey. “For 10 years I had tried to get my scientific rights reinstated, but all my efforts [had] failed,” he says.

    Pickford and Gitonga's public attack on Leakey was just the first step in their campaign to end what they saw as the NMK's monopoly on paleontology research in Kenya. Since 1983, when Kenya's current antiquities laws were passed, NMK officials have interpreted these laws to mean that the museum had to approve all paleontology research permits before they could be issued by a government ministry and that any fossils discovered were the property of the government. “These fossils are very rare,” says the NMK's current director, archaeologist George Abungu. “There must be a government institution responsible for them.”

    But Pickford and Gitonga reject this interpretation. Gitonga, who is well connected in Kenyan politics, began quietly working with a group of anti-Leakey politicians and government officials to establish the CMK as a rival institution. “The primary issue was recapturing the national heritage of Kenya, which had been exploited in a lopsided manner by a small group of people,” Gitonga says.

    The CMK, whose headquarters occupy a suite of offices in downtown Nairobi, was established in 1997. According to Gitonga, its annual operating budget is about $65,000, raised primarily from its own board of directors as well as other private donors. Its eight-member board of directors includes Gitonga, Senut, and Collège de France prehistorian Yves Coppens, as well as a ranking official in the Ministry of Agriculture and Rural Development and a local representative from the Baringo area.

    Gitonga and other CMK board members say that they have received considerable support from President Moi. The organization has also found an important ally in Andrew Kiptoon, a former research minister and the current parliamentary deputy from the Baringo North district, which includes the site where Orrorin was discovered. Kiptoon's district also covers the town of Kipsaramon, where in 1993 Hill and his co-workers found the remains of a 15-million-year-old fossil ape called Equatorius (Science, 27 August 1999, pp. 1335 and 1382) and where the CMK is building a regional museum—the first of 23 such museums it is planning to construct. In June 1998, Kiptoon invited Pickford to begin working again in the Baringo area. Shortly afterward, the CMK began applying for research permits for Pickford and his co-workers—and that is when the battle began in earnest.

    The turf war erupts

    On 30 October 1998, the head of the department responsible for issuing research permits, which at that time was housed in the Office of the President, approved Pickford's application to conduct paleontology research in three Kenyan provinces, including the Baringo area. And on 30 November, the same official signed the permit itself, a blue card with Pickford's photo attached. But when NMK officials, including Abungu, heard about the permit, they challenged its legality and demanded that it be rescinded.

    Pickford's file at the research department—the department has since been transferred to the Ministry of Education, Science, and Technology—includes a carbon copy of a letter dated 2 November 1998 revoking the permit, pending approval by the NMK. Pickford has long contended that this letter is a forgery, citing among other things the fact that it is dated 28 days before the permit was issued and that he did not receive a copy until a year later (see sidebar). This letter has been at the center of allegations by Abungu, Hill, Leakey, and others that Pickford's activities in Kenya have been illegal.

    Events came to a head just over a year ago. On 17 March 2000, Pickford was arrested by an officer of the Criminal Investigation Division (CID)—Kenya's FBI—while collecting fossils in the Lake Turkana area, where the Leakey family has discovered numerous hominid remains over the past decades (Science, 23 March, p. 2289). The arrest came 3 days after Leakey, who was then head of the Kenya civil service, wrote to Abungu alerting him that Pickford was at Turkana and urging him to contact CID to “intercept” him. Abungu says he complied with this request, including sending NMK personnel to help the CID officer find Pickford. But Kenya's public prosecutor declined to pursue the case, and Pickford and the CMK are now suing Leakey and the NMK for damages.

    The current head of the research permit department, Addy Kaaria, insists that Pickford's permit is not valid because it has been revoked. Kaaria is supported in this assertion by the education ministry's permanent secretary, Japhet Kiptoon (brother of Andrew), who told Science that Kaaria “knows the files” and that his word on who does and does not have a permit is “official.” But Kaaria also says that Senut—along with two other scientists who work with the team, from Japan and Spain—does have a valid permit to conduct research in the Baringo area. So, too, does Hill.

    Pickford and Senut have repeatedly claimed that Hill has not worked in the area since the 1993 discovery of Equatorius, and Andrew Kiptoon told Science that before inviting the pair to work in Baringo he checked with local people who also said Hill had not been around for years. But Hill's field notebooks and the testimony of other members of his team—as well as researchers who have visited the team in the field—indicate that he has conducted lengthy field studies in the vicinity during most of the past 8 years.

    In February 1999, Leakey sent a fax to Collège de France's Coppens, who sponsors Pickford and Senut's work in Kenya, asking him to “use [his] influence” to stop Pickford from “moving onto another colleague's site.” But Coppens told Science that although he “regrets” the conflict between the two teams, he has “full confidence” that Pickford and Senut are conducting their research in an “honest manner.”

    Yet the presence of two teams working in the same area is alarming to many researchers. “It is bad for the science,” comments John Yellen, the U.S. National Science Foundation's archaeology program director. “The fossils lose an enormous amount of their value unless put in the proper stratigraphic and chronological context.” Yellen adds that this context can be obscured if “one group screws up what the other group is doing.” Richard Potts, director of the human origins program at the Smithsonian Institution in Washington, D.C., agrees. He argues that modern paleontological methods—which attempt to place fossil finds “into large-scale time and space relationships,” including analysis of ancient environments and climates—“require access to large expanses of terrain without conflicting or competing activities by different research groups.”

    Potts, who has worked in Kenya for many years himself, adds that the ultimate solution may be for the NMK and CMK to work out “a well-coordinated strategy” to pursue common research goals. Indeed, some researchers believe that the CMK is here to stay, as either a rival or a partner to the NMK's traditional authority over paleontological research. “The bottom line is that we have to go along with what the Kenyan government says, whether we like it or not,” says Yellen.

    Even Abungu agrees that the NMK's monopoly should be ended. “Pickford and Gitonga may be going about this the wrong way, but what they are saying is right,” he says. Indeed, Abungu told Science, he is now working on a proposed revision of Kenyan law that would allow universities, as well as nongovernmental organizations like the CMK, to sponsor scientists for research permits—a right the CMK insists it already has. But the proposed law would still require fossils to be housed in the NMK's collections, a provision Gitonga and other CMK officials say they will fight. (The remains of Orrorin are currently kept in a bank vault in Nairobi, although CMK leaders say they plan to build a museum in the city that would include a safe storage area for fossils.)

    CMK's challenge to the NMK also challenges the Leakey family's influence over paleontology in Kenya. The Leakeys have always been powerful forces in the NMK. (Although Richard is no longer in charge, his wife, Meave, is now head of the museum's paleontology division.) “Perhaps it is a historical accident, but control of the museum has long been in the hands of a family dynasty,” Abungu acknowledges. “We have to open up the playing field.”

    But in Hill's view, something very different is at stake in his dispute with Pickford, Senut, and the CMK: “They are not undermining a Leakey hegemony; they are undermining really good Kenyan laws on antiquities and monuments.”


    The Case of the 'Forged' Letter

    1. Michael Balter

    NAIROBI—On 17 March 2000, geologist Martin Pickford of the Collège de France in Paris was arrested near Kenya's Lake Turkana on a charge of fossil hunting without a permit. According to Pickford, the arresting officer, from the Criminal Investigation Division (CID)—the nation's FBI—presented him with a letter, dated 2 November 1998, stating that his research permit had been revoked. This letter is at the center of a bitter dispute over excavation rights in the Tugen Hills, where Pickford and his co-workers are accused of encroaching on a site long under study by a rival research team (see main text).

    Pickford has long maintained that the letter is a forgery. His opponents, however, have cited it as proof that his recent work in Kenya is illegal. “If the letter is a forgery, Pickford's permit is valid,” says George Abungu, director of the National Museums of Kenya (NMK). “If the letter is real, his permit is not valid.” Ultimately, the Kenyan courts may have to decide which it is.

    Three key documents in this tangled saga all bear the signature of Josephat Ekirapa, then head of the government department that issues research permits, which at that time was housed in the Office of the President: A letter, dated 30 October 1998, notifying Pickford his permit application had been approved; the 2 November letter revoking the permit; and the permit itself, a blue card with Pickford's photo on it, issued 28 days later, on 30 November. Pickford claims he did not receive the revocation notice until more than a year later, when it was handed to him in the NMK parking lot by a researcher now studying in the United States.

    That researcher was Stephen Gitau, now an anthropology graduate student at Southern Illinois University in Carbondale. Gitau, who was doing research at the NMK at the time, says he launched his own investigation into rumors that Pickford was encroaching on the sites of other scientists. His motivation, he says, was his concern for Kenya's “national heritage” and the possibility that “laws were being broken.” He says he went to see Ekirapa twice, first to check on the validity of Pickford's permit, then to get a copy of the letter revoking the permit.

    Gitau says Ekirapa had his secretary type up a duplicate copy of the letter, and he claims that Ekirapa added some additional lines to it, providing more detailed reasons why the original permit had been revoked. In late November 1999, Gitau handed the letter to Pickford, who was passing through Nairobi.

    Pickford says he took one look at the letter and was convinced it was a forgery. In his diary that evening, he noted such irregularities as the letter's date—almost a month before Ekirapa signed his permit—and the fact that it was not typed on government stationery, but rather on a photocopy of old government stationery containing a typographical error.

    Gitau insists that the letter is genuine and was given to him by Ekirapa personally. Ekirapa also insists that the letter is genuine and that he signed it. But he says he does not remember Gitau or Gitau's visit to his office, and he denies adding extra lines to the letter. The file on Pickford's application, now housed in the Ministry of Education, Science, and Technology, contains a carbon copy of the 2 November letter that does not contain the lines at issue.

    Ekirapa says he revoked Pickford's permit because an official in the Ministry of Rural Development had misrepresented Pickford as an employee of the ministry in urging Ekirapa to grant him a permit. The official—whose name was misspelled in the letter Gitau gave Pickford—is Elisha Chesiyna, chair of the board of the Community Museums of Kenya (CMK), a nongovernmental organization that had sponsored Pickford's permit application. Chesiyna, who is now director of the land reclamation department in the Ministry of Agriculture and Rural Development, denies that he misrepresented Pickford as an employee of the ministry; a copy of Pickford's permit application states Pickford's affiliation with the CMK without referring to the ministry. Asked why he signed Pickford's research permit 28 days after he had supposedly revoked it, Ekirapa says the “most likely” scenario is that he revoked the permit after signing it and predated the revocation notice.

    Ekirapa insists that Pickford has “no excuses” for continuing to do research in Kenya after he received the letter. “If he thought the letter was a forgery, he should have come to see me about it,” he says. Pickford says that when he received the letter, he was about to leave Nairobi and turned it over to CMK director Eustace Gitonga for further action. Kenya's public prosecutor has declined to pursue a case against Pickford. Gitonga says the letter will be used as evidence in a lawsuit Pickford and the CMK have filed for false arrest.


    Will Black Carp Be the Next Zebra Mussel?

    1. Dan Ferber

    Fisheries experts worry that, if it escapes, this imported fish could decimate mollusks in the Mississippi River Basin

    For the freshwater mollusks of the Mississippi River Basin, life in recent decades has been no riverboat ride. Still one of the most diverse groups of fauna in North America, with more than 900 species, they have been battered by dredging and channeling of streams, suffocated by silt, and plagued by the invading zebra mussel. Now a war is raging up and down the Mississippi Basin over a threat that biologists say could drive dozens of species of threatened and endangered mussels and snails over the edge.

    At issue is an imported mollusk-eating species of Asian carp, the black carp. Fish farmers say the voracious meter-long fish, which was imported from China in the 1970s by Arkansas aquaculturists, has become an indispensable tool to control fish parasites that threaten their livelihood. But fishery biologists and state conservation officials all over the basin don't want the carp around.

    The carp uses its powerful teeth to eat any mollusk it can get its mouth around. So fisheries and mollusk experts worry that it could decimate populations of native mussels, clams, and snails if it escapes. “It can crush a mollusk up to the size of a golf ball,” says malacologist Kevin Cummings of the Illinois Natural History Survey in Champaign, who is president of the Freshwater Mollusk Conservation Society.

    For years, the black carp was confined to a handful of fish farms and research facilities, where it was used to control snails that carry a tiny trematode that makes hybrid striped bass wormy and unmarketable. To prevent escapees from reproducing in rivers, most fish farmers had used sterile black carp. When carp eggs are treated with heat or hydrostatic pressure soon after fertilization, they develop with three sets of chromosomes rather than two, which makes them unable to produce viable sperm and eggs. Individual fish can be checked for triploidy using a simple blood test.

    Things changed in 1999, when an outbreak of a new trematode parasite on Mississippi catfish farms threatened the catfish industry, leading farmers there to clamor for a cure. Until then, Mississippi had allowed only triploid black carp. But aquaculture researchers and fish farmers insisted that there were not enough triploids to supply the state's catfish farmers, so Mississippi began allowing farmers to transport and stock fertile fish from Arkansas.

    Mississippi's decision sparked a national campaign among biologists to fight the spread of black carp. In February 2000, the Mississippi Interstate Cooperative Resource Association (MICRA), a consortium of 28 state fisheries chiefs in the Mississippi Basin, petitioned the U.S. Fish and Wildlife Service (FWS) to list black carp under a federal law called the Lacey Act. This act bars the import and interstate transport of exotic species likely to cause harm. Fisheries biologists, state officials, and several scientific societies all lobbied to blacklist the fish.

    There's good reason to worry about black carp, says ichthyologist Jim Williams of the U.S. Geological Survey (USGS) Caribbean Research Center in Gainesville, Florida. In a detailed 1996 risk assessment of the fish, Williams and USGS colleague Leo Nico concluded that black carp would survive and reproduce in U.S. rivers, consuming native mollusks and competing with native mollusk-eating fish such as the redear sunfish and freshwater drum. The pair also recommended that fertile fish be allowed only under strictly controlled conditions in hatcheries that produce triploids.

    Although black carp have yet to escape the hatcheries and fish farms, their relatives have. Four other species of Asian carp—three of them imported by the aquaculture industry in the last 40 years—are now firmly established in the Mississippi Basin, biologists say. Populations of two of them, bighead carp and silver carp, have exploded in the Upper Mississippi Basin in the last 2 years, according to long-term monitoring data. Populations of bighead carp have more than doubled in the past 2 years on a stretch of the Mississippi near St. Louis, Missouri, for example, and bighead carp populations shot up more than 100-fold on an 80-kilometer stretch of the Illinois River near Peoria, says fish ecologist John Chick of the Illinois Natural History Survey's Great Rivers Field Station in Brighton. No one has yet documented an ecological effect of these escapees. But huge numbers of bighead and silver carp are likely to compete for food with native fish that also eat zooplankton, including paddlefish and several species of buffalo fish. If natives decline, that could make it tougher for struggling mussel species to disperse their larvae, which hitch a ride aboard particular species of native fish. All that could happen within the next 5 years, Chick says.

    Despite the opposition, “black carp are our only [proven] option for treating the trematode problem,” insists aquaculturist Anita Kelly of Mississippi State University in Starkville. Other options, including draining the ponds and treating them with a chemical to get rid of snails or using native mollusk-eating species such as the blue catfish and redear sunfish, have not been proven to work on farms. But the blue catfish looks promising, says Kelly. Fish farmers say they'd be happy to have a good alternative. “There's nothing sacred about black carp,” says Mike Freeze of Keo Fish Farms in Keo, Arkansas.

    Even if black carp is listed, fish farmers in states that allow black carp—including the fertile diploids—can still use them, says Norm Stucky, Missouri's fisheries division administrator, increasing the likelihood of escape. He and MICRA are pushing a plan already in place in Missouri to have the government supply farmers with certified triploid black carp and then phase out all black carp over 5 years. FWS officials say they will decide whether to list the black carp as injurious in the next 2 months. After that, in the absence of other federal laws to control harmful fish species, it's up to the states.
