News this Week

Science  05 Feb 1999:
Vol. 283, Issue 5403, pp. 766



    Research Council Says U.S. Climate Models Can't Keep Up

    1. Richard A. Kerr

    When the U.S. Global Change Research Program (USGCRP) recently began to look at what greenhouse warming might mean for different regions of the United States, it quickly ran up against a problem: No U.S. modelers had simulated changing regional climate in enough detail. So USGCRP policy analysts had to rely on climate models from Canada and Britain until one U.S. model finally made the grade. Even then, that model had to be partly run on computers in Japan and Australia because the United States doesn't have enough computer power at its climate centers. The obvious, but surprising, conclusion is that even as the United States tries to shape the world's greenhouse warming policy, it lags in the race to simulate climate. A new report by a committee of the National Research Council (NRC) makes that judgment official.

    According to the report, published last month, the problem is twofold: The United States lacks a coordinated strategy for building models, and it has limited supercomputer power available to run them. “There was a problem, and there is a problem still,” says modeler David Randall of Colorado State University in Fort Collins, whose letter with three colleagues jump-started the NRC review 3 years ago. Modelers clearly need more computing power, Randall says, but also “we need newer, better models, newer and better ideas. The U.S. models are competent, but … the U.S. was the best in the business.” There are already signs of progress, says NRC Climate Research Committee chair Thomas Karl of the National Climatic Data Center in Asheville, North Carolina, “but we can't lose momentum now; other nations are making significantly faster progress. We have to be sure we do things more wisely.”

    The shortage of computing power is the most easily quantified problem. As of a year ago, the most powerful climate modeling computer in the United States was a Cray T90, according to information provided to the NRC's Climate Research Committee by Bill Buzbee, recently retired computing director at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. The Cray runs at a speed of 15 gigaflops, or 15 billion floating point operations per second, but a year ago atmospheric modelers in Australia, France, the United Kingdom, and Canada were running models at 20 to 45 gigaflops, largely on Japanese-brand supercomputers. Today, NCAR has a new and more powerful climate computer, the ORIGIN 2000 system—but at 10 gigaflops, it still ranks only 102nd among world supercomputers as rated by a standard speed test, and it falls well behind the meteorological and climate modeling computers of the United Kingdom, Canada, Australia, and France (see table).

    Solving the computer power problem will be far harder than measuring it. The computers most sought after by modelers are sold by Japanese companies, but the U.S. Department of Commerce has set huge financial penalties for buying them, because their below-cost price is seen as unfair competition for U.S. computer makers. Commerce's “antidumping” order prevented NCAR from buying its Japanese computer of choice (Science, 30 August 1996, p. 1177). That is “a quite serious problem that increasingly affects the international competitiveness of the U.S. climate modeling community,” says the NRC report. Even if the trade problem is somehow resolved, warns Buzbee, “federally funded organizations won't be able to go near Japanese computers” because of the political sensitivity of using foreign supercomputers.

    [Table comparing the speeds of climate modeling computers in several nations is not reproduced here.]

    Help is on the way, though. Two weeks ago, Vice President Al Gore announced the Clinton Administration's $366 million plan to beef up information technology in the fiscal year 2000 budget (Science, 29 January, p. 613). If approved, most of the money would help develop computers that could run at 40 trillion floating point operations per second by 2003. Among the tasks of such “teraflops” machines is boosting regional climate forecasts. Forty teraflops would outpace even the 32 teraflops that Japan's Science and Technology Agency is aiming for by 2002.

    Solutions for climate modeling's other problem—the lack of a national strategy—are not so clear. About a decade ago, the climate modeling community rejected a focused, Manhattan Project-style approach to building a single national climate model (Science, 22 November 1990, p. 1082). Instead, modelers favored a multiagency strategy that was nevertheless supposed to involve close coordination among the Department of Energy, NASA, the National Oceanic and Atmospheric Administration (NOAA), and the National Science Foundation. But the NRC committee found that each agency independently establishes priorities and that research important to policy-makers, such as regional modeling, falls through the cracks. “We could get better organized in the U.S.” says Karl.

    “We abandoned a top-down” approach, agrees Jerry D. Mahlman, a modeler and director of NOAA's Geophysical Fluid Dynamics Laboratory, “but we've also been in anarchy, where everybody does their own thing.” Such chaos is common, and often productive, in other research settings. But in this case it has helped to delay the integration of model components such as atmosphere, oceans, and chemistry. Also, there was little cross-fertilization among models, so that researchers effectively worked on “their own pet models,” says modeler Tim P. Barnett of the Scripps Institution of Oceanography in La Jolla, California. What American researchers lack, he says, is “a mechanism for easily integrating new research results” from outside groups into their models.

    The NRC report urges that USGCRP agencies agree on a national set of goals, coordinate funding, and reorder the priorities of individual modeling groups to achieve them. It notes some encouraging developments along those lines. One is NCAR's “new-generation” model, which was developed and is being improved with outside assistance. After some rocky times, “we are working interactively” to improve model components, says Mahlman. “We're not there yet,” adds Barnett, “but I'm very optimistic about the future.”


    Fruit Fly Researchers Sign Pact With Celera

    1. Elizabeth Pennisi

    It may not be as dramatic as a Capulet-Montague marriage, but the announcement last week that experts from the private and public sectors are joining together to decode the fruit fly's genome could signal a new period of cooperation in genome research.

    For years, private entrepreneurs and academics have clashed over how quickly genome data should be released. Now, aided by a huge investment from equipment manufacturer Perkin-Elmer of Norwalk, Connecticut, the two camps have agreed on a joint plan to sequence the DNA of the fruit fly Drosophila melanogaster and a timetable for making the sequence public. The fly could soon become the most complex organism to have its genome completely sequenced; that genome, about 160 million bases, is two-thirds larger than that of the worm Caenorhabditis elegans, completed in 1998 (Science, 11 December 1998, p. 1972). Yet the partnership aims to finish in record time—by December 1999. The project could also pave the way for a similar collaboration to sequence the human genome.

    The details of the fly project were released on 27 January by the National Human Genome Research Institute (NHGRI). It is bringing together a group of NHGRI-funded sequencers led by Gerald Rubin of the University of California (UC), Berkeley, with the staff of Celera Genomics of Rockville, Maryland, a company started last year by Perkin-Elmer and DNA sequencer J. Craig Venter. A memorandum of understanding signed by Venter, Rubin, and UC officials says that by April, Rubin's group will supply 12,500 bacterial clones holding fragments of Drosophila DNA to Celera, which aims to do the sequencing by July. Finished data—in groups of 2000 or more bases of contiguous DNA (contigs)—will be released to the public by 1 January 2000 “at the latest.”

    Venter said he is “delighted” to join forces with the academics. And NHGRI director Francis Collins said last week that the agreement “marks the beginning of a productive collaboration … that should give the research community the fruit fly sequence more rapidly than previously predicted.” Collins also predicted a $10 million savings.

    Most of the difficult preparatory and finishing tasks will continue to be done by academics, Collins said. Rubin and his colleagues at the Lawrence Berkeley National Laboratory in California, the Baylor College of Medicine in Houston, and the Carnegie Institution of Washington have been identifying landmarks along the 160 million bases of the fly genome and tagging them for mapping purposes. They had hoped to sequence all the gene-rich regions—about 125 million bases long—by the end of 2001. Already, they've completed 20% of this work.

    Instead of the conventional approach of serial sequencing and assembly that most academic groups have been using, Celera will use a battery of new Perkin-Elmer automated machines to sequence all the clones in parallel and fit the results together afterward. By July, the company plans to begin releasing contigs simultaneously to the public and to Rubin's consortium. Rubin's group will fit the puzzle together, relying in part on low-precision sequencing of the genome by a team headed by Richard Gibbs at Baylor. All this will require a “nimble” touch, says Rubin: “Our work will be much more like a scientific experiment” than a normal production job.
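    Celera's "sequence everything in parallel, then fit the results together" strategy is, in spirit, shotgun assembly: short reads are stitched into contigs wherever their ends overlap. The toy Python sketch below illustrates the idea with greedy overlap merging; the function names and the tiny example reads are our own, and a real assembler must also cope with sequencing errors and repetitive DNA, which this sketch ignores.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of read a that is a prefix of read b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads, min_len=3):
    """Repeatedly merge the pair of reads with the largest overlap.

    Returns a list of contigs; reads that never overlap anything
    remain as separate contigs.
    """
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b, min_len)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # no overlaps left: remaining reads stay separate
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Three overlapping toy reads assemble into one contig:
# greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"])
# -> ["ATTAGACCTGCCGGAA"]
```

    The all-against-all overlap search is why assembling a genome's worth of reads in one pass demanded so much computing power, and why fitting the resulting contigs into a finished sequence was left to Rubin's group.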

    If successful, this effort could set the stage for collaboration on sequencing the 3 billion bases of the human genome. Venter last year announced that Celera hoped to do this on its own in 3 years. Federal research officials then offered to collaborate.

    Last week, Collins said he was moderately optimistic about the prospects for a broader agreement that might include Celera. In an interview with Science, he said that “active discussions are going on right now on how to put together a memorandum of understanding on the human genome” to include U.S. publicly funded scientists, those supported by the Wellcome Trust in Britain, and “any private entity that's interested.” There is still “some tension,” he said, over how data will be annotated and released, as nonprofit and private sponsors have different aims. Working out the details will be “complicated,” because Celera clearly wants to stake a claim to human genomic data.


    Groups Sue to Tighten Oversight of Rodents

    1. David Malakoff

    A coalition of animal rights advocates—including three academic scientists and a biomedical supply company—announced this week that it will go to federal court to force the U.S. government to redefine mice, rats, and some birds as animals under a major animal protection law. The change is needed, they argue, to bring the country's most widely used experimental animals under federal regulations that require researchers to consider alternatives.

    U.S. Department of Agriculture (USDA) officials say that the redefinition would stretch its cash-strapped enforcement program to the breaking point and could also mean higher expenses for some research facilities. And some scientists see the pending legal battle as simply the latest tactic in a long campaign by activists to eliminate the use of all animals in research. But observers on both sides of the dispute agree that the lawsuit—announced Tuesday at a Washington, D.C. forum on the issue organized by the National Academy of Sciences*—represents the most serious legal challenge ever to the 27-year-old rules.

    The controversy centers on a clause in the Animal Welfare Act (AWA), the nation's flagship animal protection law, which directs the Secretary of Agriculture to define as animals any “warm-blooded animal as the Secretary may determine is being used, or is intended for use, for research.” Although the law specifically includes dogs, monkeys, hamsters, and other animals in the definition—and specifically excludes farm animals—it is silent on the status of rats, mice, and birds bred for research. In 1972, USDA declared the three creatures nonanimals. That definition exempted researchers from a host of AWA regulations, including annual facility inspections and the need to consider alternatives when designing experiments.

    Over the last decade, however, animal rights activists have stepped up efforts to change the department's interpretation of the act. In 1994, the Washington-based Humane Society of the United States and the Petaluma, California-based Animal Legal Defense Fund (ALDF) won a federal court ruling that USDA's exclusion was legally “strained and unlikely.” But that ruling was thrown out by an appeals court because the groups couldn't demonstrate that their members had been directly harmed by the regulations. This time, however, even some USDA officials concede that the plaintiffs are likely to vault over that hurdle because several have a financial stake in USDA's definition. The list includes a Pennsylvania group that funds the development of nonanimal research methods and the president of InVitro International, an Irvine, California, company that sells nonanimal lab tests.

    Last April, the same parties had petitioned the USDA to change its animal definition. On 28 January, the agency signaled it was taking the request seriously by publishing the petition, along with its own comments, in the Federal Register and requesting public comments by 29 March. But the next day the petitioners decided to sue. The move, says attorney Andrew Kimbrell of the Washington-based International Center for Technology Assessment, who is preparing the suit with the ALDF, was based on USDA's comments, which Kimbrell believes suggested that the agency was preparing to reject the petition. “They first need to acknowledge that they have this obligation—then we can discuss solving some of their funding problems,” he says.

    News of the suit surprised USDA officials. “It is unfair to suggest that we have already decided what we are going to do,” says W. Ron DeHaven, deputy administrator for animal care with USDA's Animal and Plant Health Inspection Service. However, a 1990 study concluded that USDA would have to carve out an estimated $3.5 million from its $9 million enforcement budget to handle the additional oversight. DeHaven says the cost to researchers is unknown at this time.

    Some scientists are worried that any change in the definition could doom animal use in smaller labs, particularly those involved in undergraduate education, by requiring costly new facilities. And they note that publicly funded biomedical scientists must already consider substitutes under guidelines issued by the Public Health Service. The coalition's “objective is to eliminate the use of animals in research; [the alternatives argument] is a pigtail,” says L. Gabriel Navar, a physiologist at Tulane University in New Orleans, Louisiana, and president of the American Physiological Society.

    Kimbrell disagrees and says researchers would be better off joining animal activists to seek the necessary resources for broader regulation. The sooner USDA “starts obeying the law,” he argues, the sooner animal rights lobbyists can fight for the money USDA will need to regulate its newfound wards.

    * Regulation of the Care and Use of Rats, Mice, and Birds (2 February 1999).


    First Light for a Gamma Ray Flashbulb

    1. Ivan Amato*
    * Ivan Amato is a science reporter for National Public Radio and the author of Stuff.

    The first laser had hardly beamed its world-changing needle of red light in 1960 when theorists began realizing just how far this new technology could conceivably go. One way was upward through the spectrum, from visible light to the higher energy ultraviolet and x-ray ranges and even into the territory of gamma radiation—the ultimate “light,” energetic enough to blow missiles out of the sky or simulate conditions near stars.

    More easily dreamed than done. But for nearly 40 years, a small research community has set its course toward that goal. And in the 25 January Physical Review Letters, a team of a dozen researchers from five different countries has moved a step closer by showing that a form of hafnium-178 extracted from accelerator waste can release energy stored in its nuclei as a blast of gamma photons, at energies more than 1.3 million times those of the red photons of the world's first laser.
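    The “1.3 million times” figure squares with simple photon-energy arithmetic, assuming the 694 nm red line of the first ruby laser (the wavelength is our assumption, not stated in the article):

```latex
E_{\text{red}} = \frac{hc}{\lambda}
  \approx \frac{1240\ \text{eV nm}}{694\ \text{nm}}
  \approx 1.79\ \text{eV},
\qquad
1.3 \times 10^{6} \times 1.79\ \text{eV} \approx 2.3\ \text{MeV},
```

    an energy scale typical of nuclear gamma transitions rather than of electronic ones, which is the whole point of going from atoms to nuclei.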

    The gamma rays that emerged from the thin, crusty, plastic-encased film of hafnium were not coherent—their waves were not synchronized, which is a hallmark of a true laser. “This is more like a gamma ray flashbulb,” remarks laser expert Paul Kepple of the Naval Research Laboratory in Washington, D.C. But he and physicist Carl Collins of the Center for Quantum Electronics at the University of Texas, Dallas, the group's leader, think that if researchers can unravel and control the complicated movement of energy within the nuclei, they could be on their way to the ultimate laser. Says Kepple: “Who knows where it is going to go?”

    Even if it falls short of a laser, the phenomenon the researchers have observed, called induced gamma emission, could find plenty of uses. A tabletop gamma machine, with its supershort wavelengths, could push photolithography—the process that traces microcircuit patterns—to atomic dimensions, serve as an energy source for an x-ray laser, or sterilize areas contaminated by microorganisms released, for example, by terrorists. Says Collins: “You could set off something the size of a match head” to do the job. But team member Paul McDaniel of the Air Force Research Laboratory in Albuquerque, New Mexico, says the science is what has him going. “It's the newest physics that I have heard of.”

    Unknotting a nucleus.

    In the outer shell of an excited hafnium-178 nucleus, two protons and two neutrons have identical spins and orbits. After absorbing an x-ray, the nucleus relaxes into a lower energy state, with opposite spins and orbits in each pair, and emits a gamma photon.

    A gamma ray laser would work differently from existing laser types, which all pump electrons in some gaseous, liquid, or solid lasing medium to an excited state and then stimulate them to emit radiation coherently as they relax to their ground state en masse. The only way to get atoms to emit gamma-caliber photons is to achieve the same trick with their nuclei, pumping a large population of them into deformed, excited states called isomers and getting them to relax to their normal shapes all at once.

    With most nuclei, the gamma-emitting isomers give off their energy too quickly for a large population to develop, no matter how fast more energy is pumped in. But isomers of some isotopes are much longer lived. Collins and colleagues had first managed to coax gamma emission from tantalum-180, a naturally occurring isotope with a proportion of its nuclei fixed in an excited isomeric state. A blast of x-rays triggered the tantalum-180 nuclei to relax into their ground-state arrangement and emit their stored energy.

    The photons that emerged from the tantalum had about the same energy as the triggering x-ray photons, but an accelerator-produced isomer of hafnium-178 can emit gamma photons many times more energetic. The isomer normally leaks its energy over a half-life of 31 years. But calculations by Collins's team had shown that ordinary x-rays could discharge that pent-up energy, by triggering the complex nuclear rearrangements needed for the nuclei to relax to their ground states.

    Two years ago French researchers reported that they had succeeded in triggering the hafnium-178 emission. They have not provided further details, however, and some researchers are skeptical. Now Collins and his colleagues have tested their own sample of hafnium-178, a waste product of the production of radioisotopes for medicine. Working in Dallas, the researchers aimed a dentist's x-ray machine at the sample and detected an answering pulse of gamma rays, at energies 60 times those of the triggering x-rays.

    Now they are preparing follow-on experiments to map out how x-ray energy goes into the hafnium-178 isomers and then triggers the excited nuclei to relax. Collins says it could be just the beginning of a new research area some are calling quantum nucleonics, marked by precise manipulation of nuclear structure. Not to mention a step toward the ultimate laser beam.


    Trigger for Centrosome Replication Found

    1. Elizabeth Pennisi

    At the turn of the century, a European biologist named Theodor Boveri suggested that a small body near the cell nucleus might be a key to cancer. Called the centrosome, it replicates before cell division and then, via the protein cables that radiate from it, helps pull the duplicated chromosomes apart into the daughter cells. Boveri proposed that errors in this process could derange cells. Over the following decades, however, his idea got lost, as researchers concentrated on understanding the specific gene malfunctions that lead to cancer. New findings, some of which are described in this issue, may now help revive interest in Boveri's notion.

    On page 851, Greenfield Sluder and his colleagues at the University of Massachusetts Medical School in Worcester report that they have identified a trigger that helps tell the dividing cell to copy its centrosome. It's a new role for a familiar character: an enzyme called Cdk2, already known to help drive cells through the division cycle when activated by another protein, known as cyclin E. Later this month, another team, led by cell biologist Tim Stearns of Stanford University, will publish similar findings in the Proceedings of the National Academy of Sciences. “It's a key discovery,” says William Brinkley, a cell biologist at Baylor College of Medicine in Houston. “The biochemistry of how centrosomes replicate is beginning to come into view.”

    By helping explain how the cell cycle and centrosome replication are linked so that the centrosome is copied just once and at just the right time, the finding may help researchers figure out how the replication might go awry, with potentially disastrous consequences. Because “the balance in the genome is maintained through the [centrosomal] machinery,” as Robert Palazzo, a cell biologist at the University of Kansas, Lawrence, puts it, a centrosome that replicates too often, or fails to replicate at all, can leave the daughter cells with abnormal chromosomal compositions. They might lose a tumor suppressor gene, for example, or acquire extra copies of growth-promoting oncogenes—conditions that predispose cells to cancer. Indeed, researchers have recently found that cancer cells, even in their earliest stages, have abnormal numbers of centrosomes.

    The two teams succeeded in identifying a trigger for centrosome division because they came up with assays for studying centrosome replication outside living cells. “The ability to assay that is a significant achievement that should not be underestimated,” comments Palazzo.

    Both assays use extracts of frog eggs. In a test tube, the DNA and centrosomes in these extracts can still replicate just as they do in intact eggs. Normally, both activities are coordinated so that they occur at roughly the same time in the cell cycle. But Sluder's group found that a chemical inhibitor blocks DNA synthesis in the egg extracts yet somehow does not affect centrosome replication. With the cell cycle thus stalled out, the researchers could monitor how various substances affect centrosome duplication without worrying about the normal blocks to the copying that kick in as the cell cycle progresses.

    The first substance the Sluder team tested was the Cdk2-cyclin E (Cdk2-E) complex, reasoning that because the complex is involved in prodding cells to begin making new DNA, it might also regulate the centrosome duplication that seems to happen at about the same time. To test that idea, Sluder's postdoctoral fellow, Edward Hinchcliffe, obtained a specific inhibitor of Cdk2-E activity, a modified version of a frog protein called Xic1, from James Maller at the University of Colorado School of Medicine in Denver.

    The team monitored the inhibitor's effects by using time-lapse photography to follow the increase in the numbers of centrosomes over time in their microscope's field of view. They had already learned that, without the inhibitor, three-quarters of the aster-shaped centrosomes replicate three times in a 6-hour period, and most of the rest replicate twice. The inhibitor greatly reduced this centrosome copying; 79% doubled just once and none doubled three times. Conversely, adding extra Cdk2-E overcame this effect, allowing the centrosomes to replicate multiple times. “This is clean evidence that we have one very important set of [proteins] that are essential for [centrosome] replication,” says Brinkley.

    Meanwhile, at Stanford, Stearns's group had taken a slightly different approach. The researchers had decided to look closely at the Cdk2-E complex after first finding that two naturally occurring Cdk2 inhibitors, proteins called p21 and p27, block centrosome replication in developing frog embryos. But rather than observing the effect of these inhibitors on the overall increase in the numbers of centrosomes, the Stanford team used deconvolution microscopy to watch what happens to the centrioles, the two bundles of short microtubules that form the core of the centrosome. We could “see precisely what's going on inside the centrosome,” says Stearns.

    Normally, after 1 hour in frog-egg extract, the paired centrioles in each centrosome have separated, presumably taking the first step toward duplication. But in the presence of p21 or p27, Stearns and his Stanford colleagues, graduate student Kathleen Lacey and pathologist Peter Jackson, found that the centrioles stayed put. “We both showed that Cdk2-E is probably the thing that's driving centrosome duplication,” Stearns says.

    Many questions remain, including how Cdk2-E triggers the duplication and what its molecular partners are, as it apparently doesn't act alone. In the October 1998 Nature Genetics, Brinkley and Subrata Sen at the M. D. Anderson Cancer Center in Houston reported that they had cloned a gene that when overexpressed in mouse cells resulted in extra centrosomes. This gene is also overexpressed in cancer patients. It may act in conjunction with Cdk2-E and, when in excess, “lead to a lot of chaos and genetic instability” and eventually, cancer, Brinkley notes. And that, he adds, “was Boveri's original notion.”


    Insulator Gives Plastic Transistors a Boost

    1. Alexander Hellemans*
    * Alexander Hellemans is a writer in Naples, Italy.

    Anyone who has dropped a laptop computer or mobile phone knows, to their cost, that such devices are not tough. But the glass and brittle semiconductors that make their displays prone to shattering could one day give way to a material that is cheap, easy to manufacture, and tough—a material pretty much like plastic. Before an all-plastic display makes a commercial debut, however, researchers will have to overcome a major drawback of polymer electronics: Polymer transistors, which would be needed by the thousands in a display, require impractically high voltages to make them work. Now, by simply changing an insulating material in a polymer transistor, a team of IBM researchers reports on page 822 that they have cut the voltage it needs to a level comparable with that of the amorphous silicon used in today's displays.

    “This is excellent work,” says plastic transistor pioneer Francis Garnier of the CNRS Laboratory of Molecular Materials in Thiais, France. Says Cambridge University physicist Richard Friend, “[Such] molecular semiconductors have now been built up as very credible materials for technologists.”

    The team, led by Christos Dimitrakopoulos of IBM's T. J. Watson Research Center in Yorktown Heights, New York, skirted a long-standing problem in polymer electronics. Polymers suffer from low mobility—essentially the speed at which charges, either electrons or electron gaps called holes, move through the material when a voltage is applied. By tweaking polymers' chemical structure, researchers had managed to improve their mobility by five orders of magnitude—enough to make plastic semiconductors. That development, in turn, opened the way to polymer transistors.

    The basic design starts with a substrate carrying a metal electrode called a gate. Over the gate and substrate goes a layer of insulator followed by a layer of organic semiconductor such as pentacene, topped off by two more contacts, one on either side of the buried gate, known as the source and the drain. Normally, a voltage between the source and drain will produce only a trickle of current because charge carriers get caught in “traps,” current-impeding locations in the polymer. “We expect that these traps are related to structural defects, such as grain boundaries or dislocations, and to impurities,” says Dimitrakopoulos.

    Applying a voltage to the gate, however, attracts charge carriers—holes in pentacene's case—from elsewhere in the semiconductor layer into the region above the gate, where they fill up some of the traps, allowing a freer flow of charge carriers from the source to the drain. Although this gate voltage effectively “switches on” the transistor, it still requires voltages in the region of 100 volts at all three electrodes to achieve this result, because the mobility of charge carriers is so low in a polymer. “Such voltages are incompatible with real applications,” says Garnier.

    Charging up.

    Charge mobility of plastic conductors has climbed since the mid-1980s.


    So Dimitrakopoulos's team dodged the problem: Instead of trying to improve the mobility of the pentacene semiconductor directly, they sought to fill more traps by changing the insulating layer. “We replaced silicon dioxide, which was the gate insulator used, with an insulator with a much higher dielectric constant,” says Dimitrakopoulos. The team used barium zirconate titanate, which has a dielectric constant of 17.3 compared to the 3.9 of silicon dioxide. Dielectric constant is a measure of a material's ability to transmit an electric field. A higher constant will channel more of the gate's electric field to the semiconductor and so pull in many more holes—“enough to fill [all] the trapping states [and leave] extra carriers that are free to travel,” says Dimitrakopoulos.
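    The role of the dielectric constant can be sketched as parallel-plate capacitor arithmetic: the sheet of charge the gate pulls into the semiconductor scales as sigma = kappa * eps0 * V / d. A minimal sketch in Python follows; the film thickness and gate voltage are illustrative assumptions of ours, not values from the article, but the two dielectric constants are the ones quoted above.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def induced_sheet_charge(kappa, v_gate, thickness_m):
    """Charge per unit area (C/m^2) pulled into the channel by the gate,
    modeled as a parallel-plate capacitor: sigma = kappa * eps0 * V / d."""
    return kappa * EPS0 * v_gate / thickness_m

# Same (assumed) gate voltage and insulator thickness, two insulators:
sio2 = induced_sheet_charge(3.9, v_gate=5.0, thickness_m=120e-9)   # silicon dioxide
bzt = induced_sheet_charge(17.3, v_gate=5.0, thickness_m=120e-9)   # barium zirconate titanate

ratio = bzt / sio2  # about 4.4x more carriers at the same bias
```

    With everything else equal, the higher-constant insulator induces roughly 4.4 times as many carriers for the same gate voltage, which is why the same traps can be filled at a far lower bias.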

    With the new insulator, it took a change in gate voltage of just a few volts, rather than a few hundred volts, to alter the source-drain current by more than five orders of magnitude. The performance of these transistors now rivals that of the amorphous-silicon transistors, the type of low-cost transistor used in active-matrix displays, says Dimitrakopoulos, who adds that his group now hopes to integrate their transistors into similar displays. Friend says such work can only heighten industry's interest. “The level of interest is of an entirely different order than it was 2 years ago.”


    AIDS Virus Traced to Chimp Subspecies

    1. Jon Cohen

    CHICAGO, ILLINOIS—Most AIDS researchers have long believed that HIV-1, the main form of the AIDS virus, jumped from chimpanzees into humans. But there have been scant data to support this thesis, which has allowed theories to flourish ranging from the ridiculous (the government made the virus) to the scientifically implausible (a poliovirus vaccine introduced it) to the highly speculative (an unidentified species is the main host). Now Beatrice Hahn from the University of Alabama, Birmingham, and co-workers have pieced together what is being hailed as the best case yet for the chimpanzee connection.

    Chimp ranges.

    All three close relatives of HIV-1 were found in P. t. troglodytes.


    Hahn's genetic detective work—which she described in the keynote speech here at the opening of the largest annual AIDS conference held in the United States* and is published in this week's issue of Nature—indicates that different subspecies of chimps harbor different strains of HIV-like viruses, and that one particular chimp subspecies found in a region that includes Gabon, Cameroon, and Equatorial Guinea is the source of human HIV-1 infections.

    That region had been identified before as the likely epicenter of the human disease (Science, 15 May 1992, p. 966). But some researchers, including Hahn, doubted that chimps were the original reservoir of HIV-1 because a virus isolated from one chimp bore little resemblance to human strains, and some regions where chimps live do not have HIV-1 epidemics. The new analysis changed her mind: She now argues that some subspecies may not harbor the virus, and others may be infected with a strain that is not as likely to spread epidemically in humans. Furthering the case, Hahn noted in her talk that a French group led by the Pasteur Institute's Françoise Barré-Sinoussi will report later at the meeting that they have found three chimps from Cameroon infected with an HIV-like virus.

    Vanessa Hirsch, a primate researcher at the U.S. National Institute of Allergy and Infectious Diseases who, like Hahn, helped establish the link between HIV-2—a much rarer cause of human AIDS—and sooty mangabey monkeys, is impressed by the data. “Everyone has kind of been pussyfooting around the question of whether chimps were the origin,” says Hirsch. “As more isolates are studied, it becomes more believable.”

    Hahn began studying HIV-1's origins in 1995 when she received a call from Larry Arthur at the National Cancer Institute. A decade earlier, Arthur had tested 98 captive chimps in the United States to make sure they did not harbor HIV-1. He found that one animal, a pregnant 26-year-old named Marilyn, had antibodies against HIV-1, but she died in childbirth a few days later. Arthur put tissue samples from the chimp aside and forgot about them until he cleaned out his freezer. Did Hahn want to study Marilyn? he asked.

    Hahn jumped at the chance. Scientists until then had found HIV-like viruses in only three chimps, two of which came from Gabon and the third from what was then Zaire. (The viruses do not appear to cause disease in the animals.) An analysis of the genetic sequence of these isolates, named SIVcpz, had revealed that the two Gabonese strains were closely related to HIV-1 strains found in humans, but the Zairian strain was quite different.

    Hahn and her colleagues found that Marilyn harbored an SIVcpz virus similar to the Gabonese strains. They then analyzed the DNA in the mitochondria of cells from the four animals to determine which particular subspecies of chimp they came from. They found that the Gabonese animals and Marilyn all belonged to Pan troglodytes troglodytes, while the Zairian animal belonged to the subspecies P. t. schweinfurthii.

    Hahn believes that SIVcpz may have been in chimps for hundreds of thousands of years, and that viral strains have evolved to be specific to particular chimp subspecies, which are isolated geographically by rivers. P. t. troglodytes—whose natural range “coincides precisely” with the regions in Africa that have had HIV-1 infections in humans for the longest period—appears to be the source of the HIV-1 strains that now infect humans, Hahn concludes. She believes that three separate transmissions occurred, each of which gave rise to one of the three main groups of HIV-1. Although the exact route of transmission is unknown, Hahn speculates that butchering chimpanzees and other animals for so-called “bushmeat”—a practice she notes is common in parts of west equatorial Africa—may have provided the link.

    Hahn concedes that the chimp-human link would be stronger if researchers could show directly that SIVcpz is widely prevalent in at least some wild chimp populations. She and other researchers now hope to do these analyses, but they face a potential problem: The chimps are being driven to extinction. Hahn says she hopes her findings will ultimately discourage people from eating chimps, and focus more attention on the question of why the virus that decimates human immune systems rarely harms our closest primate relative.

    • * Sixth Conference on Retroviruses and Opportunistic Infections, 31 January to 4 February 1999, Chicago, Illinois.


    Dietary Data Straight From the Horse's Mouth

    1. Virginia Morell

    Like the horse and carriage, horses and grasses have a long history together, or so paleontologists have thought. When modern grasses appeared some 20 million years ago, the thinking went, the teeth of ancient equines evolved to crop this new food, developing the high crowns seen in modern horses, and their owners changed from deerlike browsers of shrubs and trees to pure grazers. But on page 824 of this issue, Bruce MacFadden, a paleontologist at the University of Florida, Gainesville, and his colleagues show that in horses, at least, the tooth can fool the eye.

    By analyzing tooth wear and chemical traces in the teeth, the researchers found that some of the closest ancient relatives of today's horses were primarily browsers—despite having teeth shaped like those of a grazer. “This is the best study to date on horse dietary behavior and change,” says John Rensberger, a paleontologist at the University of Washington, Seattle. “They've taken a novel approach that challenges the traditional interpretations” of equine tooth shape, providing a model for analyzing the diets of other extinct mammals.

    MacFadden's team teased apart the dietary preferences of six horse species that shared the grassy plains of Florida about 5 million years ago. The horses ranged from small, three-toed types to the heftier, one-toed Dinohippus mexicanus, one of the closest relatives of modern horses. All six species bore the dental hallmark of a grazer: high-crowned (or hypsodont) teeth with enameled ridges that can cut a stalk of grass as neatly as a lawnmower blade. But MacFadden puzzled over how all six could make a living cropping grass. “Ecological theory says that they'd have to partition the environment somehow; that some of them must have looked for another food,” he says.

    With his colleagues, MacFadden analyzed the carbon isotopes in the horses' teeth and their patterns of wear to show that that's what actually happened. Grazing horses typically eat grasses, which in many regions use the so-called C4 photosynthetic pathway to turn carbon dioxide into sugars and starches. Such plants incorporate different amounts of the isotopes carbon-12 and carbon-13 than do C3 plants, which are primarily trees and shrubs. Animals that eat different plants retain different amounts of isotopes in their teeth.
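The isotopic logic behind the team's inference can be sketched as a toy classifier. The δ13C values below (in per mil versus the PDB standard) and the roughly 14‰ enrichment between diet and tooth enamel are textbook approximations for large herbivores, not figures from the MacFadden study; the thresholds for "grazer" and "browser" are likewise illustrative assumptions.

```python
# Toy version of the carbon-isotope diet inference described above.
DIET_C4 = -12.0    # typical delta13C of C4 grasses, per mil
DIET_C3 = -27.0    # typical delta13C of C3 trees and shrubs, per mil
ENRICHMENT = 14.0  # approximate offset between diet and tooth enamel

def infer_diet(enamel_delta13c):
    """Estimate the C4 fraction of the diet from enamel delta13C by
    linear mixing between the C3 and C4 endmembers, then bin it."""
    diet = enamel_delta13c - ENRICHMENT
    c4_fraction = (diet - DIET_C3) / (DIET_C4 - DIET_C3)
    c4_fraction = min(1.0, max(0.0, c4_fraction))  # clamp to [0, 1]
    if c4_fraction > 0.75:
        return "grazer (mostly C4 grass)", c4_fraction
    if c4_fraction < 0.25:
        return "browser (mostly C3 plants)", c4_fraction
    return "mixed feeder", c4_fraction

print(infer_diet(1.0))    # enamel near +1 per mil suggests a C4 grazer
print(infer_diet(-12.0))  # enamel near -12 per mil suggests a C3 browser
```

A species like D. mexicanus, whose enamel carried a strongly C3 signature despite grazer-shaped teeth, would fall in the "browser" bin of a scheme like this.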

    The team's analyses showed that some of the six horse species ate solely C4 grasses, but others chewed a mix of C4 grasses and C3 shrubs and twigs, and a couple, including D. mexicanus, fed mostly on C3 plants. Similarly, a microscopic analysis of tooth enamel showed that some species, again including D. mexicanus, did not have the characteristic abrasive marks incised by a purely grass diet but were pitted and scratched like the teeth of a browser in spite of their high crowns. “We used to interpret those high crowns as a sure sign of a grass diet,” but that's not certain anymore, says Michael Woodburne, a paleontologist at the University of California, Riverside.

    MacFadden says that D. mexicanus apparently had high-crowned teeth simply because its grass-eating ancestors did. When the species switched back to eating browse, its teeth did not change. He suggests that the high-crowned teeth represent an “irreversible” evolutionary change, but others are less sure. “It may be that horses invented a tooth that's simply good for eating anything—trees, shrubs, grass,” says Paul Koch, a vertebrate paleontologist at the University of California, Santa Cruz.

    All six horse species eventually went extinct, for reasons that remain unclear. Ironically, the future belonged to the big, one-toed browser, D. mexicanus, which gave rise to the oldest known species of modern horse—which was a grass-eater—some 4.5 million years ago. “That's the biggest surprise of all—I'd never have guessed that horse was a browser,” says Koch. “It shows the power of these combined techniques.”


    Becoming a Symbiont Is in the Genes

    1. Evelyn Strauss

    Some couples enjoy a harmonious intimacy that many people can only dream of. Such is the alliance between certain bacteria and plants, in which each partner provides an essential gift, with the microbes nestled comfortably within the plant. At the meeting, molecular biologist Sharon Long of Stanford University described several new genes that may play key roles in a bacterium's transition from independence to symbiosis. Although the lessons offered by these cross-kingdom pairs are unlikely to enhance human relationships, they may someday open the door to improved crop production.

    After infecting legumes, Rhizobia fix nitrogen by grabbing the gas from pockets of air in the soil and converting it into ammonia, which the hosts use to make protein. In return, the plant provides the bacteria with nutrients and private quarters in root nodules. Researchers know how legumes and Rhizobia first catch each other's attention, but the pas de deux after the sparks fly and before the microbes have settled into their new home remains largely a mystery. Long's team is catching the first glimpses of this courtship's middle stages. “We're looking at a step that's largely unknown, and we may uncover unanticipated things that will help us understand what makes symbiosis succeed or fail,” she says.

    To figure out what molecules are involved, Valerie Oke, a postdoctoral fellow in Long's lab, set out to look for bacterial genes that turn on at some time in the interval after a bacterium latches onto the tip of a root hair and before it starts fixing nitrogen. Already, Oke has uncovered several surprises. One gene encodes a protein that resembles fasciclin I, which in insects glues nerve cells together and guides them during embryonic growth. The researchers don't know how the bacteria use this protein, but they have shown that plants infected with rhizobial strains lacking the gene fix nitrogen inefficiently. Further evidence that the gene plays a role in symbiosis comes from Daniel Gage, a microbiologist now at the University of Connecticut, Storrs, who independently isolated it when he was a postdoc in the Long lab during a search for proteins that Rhizobia secrete when infecting host plants.

    Another of the genes Oke identified appears to produce a protein similar to ones that disarm destructive oxygen molecules in a variety of organisms. This suggests that the bacteria encounter such harmful molecules after infecting the plant, says Long, which hints that the legumes might at first consider Rhizobia enemies. The presence of the gene, she says, raises questions about where to draw the line between pathogen and symbiont. On the other hand, she says it's equally plausible that the plants are not mounting a defensive response, and that the gene's presence may simply reveal “something basic about plant physiology.”

    “I'm very impressed and excited by this new work,” says R. James Cook, a plant pathologist at Washington State University in Pullman. Exposing the intimate details of the plant-bacteria relationship could help researchers such as Cook find ways to improve the efficiency of nitrogen-fixing bacteria and even transfer some of their abilities to other species. For instance, strains of Pseudomonas and Bacillus make antibiotics that help protect plants from root infections, but these bacteria live “casually” in the vicinity of the plants, says Cook, and thus are at the mercy of fluctuations in soil conditions and other environmental whims. Rhizobia, meanwhile, have a safe haven. “I don't know if these genes are what we're waiting for,” he says, “but it would be exciting to find out if they can enhance the ability of these antibiotic-producing bacteria to associate with roots.”

    In another fantasy, scientists would figure out how to introduce nitrogen-fixing bacteria into crops that aren't cozy with helpful microbes, such as wheat and corn. This arrangement could make it possible to grow those plants in nitrogen-poor soil without fertilizer—a boon to farmers. Such a project would require more detailed information about the genetic contributions to the symbiotic relationship from both plants and bacteria. Long says she's pessimistic that the endeavor will succeed anytime soon, but “we're gathering the tools to test the idea. I wouldn't have said that 2 years ago.” Even if scientists can't figure out how to spark romance between every crop and nitrogen-fixing bacterium, they seem to be well on their way to figuring out what makes certain plant-microbe pairs click.


    Grammar's Secret Skeleton

    1. Kathryn S. Brown*
    1. Kathryn S. Brown is a science writer in Columbia, Missouri.

    It's been more than 3 decades since scientist-dissident Noam Chomsky hit his colleagues with a controversial theory: that babies learn how to speak so easily because they're born with a sense of grammar that transcends individual languages. Chomsky, a linguist at the Massachusetts Institute of Technology (MIT), challenged researchers to find simple rules that govern all languages, from Arabic to Zulu. Now some linguists claim to have at last uncovered key elements of this universal grammar—and their findings, presented at the meeting, are rekindling a debate over whether grammar is innate.

    Guglielmo Cinque got the fireworks started at a packed symposium with his description of common grammar elements spanning dozens of cultures. Cinque, a linguist at the University of Venice in Italy, recalls being struck several years ago by how certain adverbs—“always” and “completely,” for example—appear in the same order in a sentence in languages as disparate as Italian, Bosnian, and Chinese. Looking more closely, Cinque realized the same was true for auxiliary verbs, particles, and other parts of speech.

    He and his students then set out on a linguistic odyssey, surveying word order and meaning in some 500 languages and dialects. After more than 4 years of sifting through grammatical analyses and querying native speakers, the researchers found that every language consists of sentences based on a verb phrase surrounded by modifiers in predictable patterns. Because this core structure does not vary, Cinque concludes that “our human species imposes these rules on language as part of our genetic endowment.” Cinque, who lays out his argument in a new book, Adverbs and Functional Heads: A Cross-Linguistic Perspective (Oxford University Press, 1999), adds: “It's an accident of birth, like having five fingers instead of seven.”

    More focused comparisons have yielded similarly provocative conclusions. In another talk, MIT linguist David Pesetsky, a former student of Chomsky's, examined the Question Rule, or the arrangement of parts of speech in a question. At first glance, questions appear strikingly different in many languages. In English, for example, we ask, “Whose book did Mary buy?” In Russian, the same question, “Chju Marija kupila knigu?” (translated word for word), comes out as “Whose Mary bought book?” Comparing these sentences and equivalents in Bulgarian and Okinawan, a Japanese dialect, Pesetsky and students Paul Hagstrom and Norvin Richards have discovered a recurring syntax theme: No matter what their native tongue, people consistently place variations on the word “whose” and accompanying words at one end of a sentence.

    Both sets of findings are compelling, says Victoria Fromkin, a linguist at the University of California, Los Angeles (UCLA). “The universal properties they've found—combined with the fact that children show an amazing ability to pick up language—make a very strong case that our species is biologically endowed with a set of rules for communication,” she says. But some researchers contend that Cinque, Pesetsky, and their colleagues are overreaching. “We all agree that humans have a language faculty,” says UCLA's Edward Keenan. “What's at issue is how specific it is.” But regarding precise rules for a universal grammar, he says, “the evidence just isn't in.” Pesetsky demurs: “I think we're tapping something basic here.”


    Magnetic Cells: Stuff of Legend?

    1. Melissa Mertl

    Many animals are attuned to a world hidden from our perception: Bats bounce sound waves off prey, snakes slither through grasslands awash in infrared light, and sharks hunt in the electrical trails of their next meal. Now scientists have taken a step closer to confirming the existence of another sense: the ability to use Earth's magnetic fields to navigate on starless nights or in turbid waters. Migratory and homing animals such as birds, bees, and fish seem to possess a built-in compass, and at the meeting Carol Diebel of the University of Auckland in New Zealand presented new findings on the iron-laden cells that may provide these creatures with a legend to Earth's magnetic road map.

    Two years ago, a team led by Diebel's colleague Michael Walker showed that captive trout could be conditioned to nudge a bar and receive food when the fish detect a magnetic field. In a 1997 report in Nature, the group traced this magnetic sense to nerves rooted in iron-crystal-rich tissue in the trout's nose. Now Diebel and Walker have found that these crystals—presumably composed of magnetite, a mineral used in the first human-made compasses—are polarized like a bar magnet, and that they appear to be strung together in chains inside so-called magnetoreceptor cells. Scientists who have been engaged in a decades-long hunt for these cells—and have endured derision for working in a field tarnished by dubious research—say they feel vindicated. “This is the last nail in the argument for these things being the magnetoreceptor cells,” says Joseph Kirschvink, a geobiologist at the California Institute of Technology in Pasadena who first proposed the magnetite-based magnetoreception theory 20 years ago.

    Over the past 3 decades, magnetite has been found in life-forms as diverse as bacteria, birds, whales, and humans. Although the 50-nanometer particles are just the right size to act as bar magnets in the body (crystals too large would set up interfering fields, while those too small would create unstable fields), locating the magnetite-bearing cells in higher organisms has been like trying to autopsy a living person: The tissue-dissolving methods used to identify magnetite turn the sample to mush.

    Diebel knew her team was looking for a needle in a haystack. Organisms contain very little magnetite, and the cells that harbor it could be anywhere, as magnetic fields pass through the body relatively unimpeded. “You could spend forever trying to find nanometers of crystal in millimeters of tissue,” she says. “We had to invent new methods every step of the way.” Her group turned to a magnetic force microscope, running it less than a hair's width above thin slices of trout nose tissue. Where the magnetized probe tip passes over a magnetic region in the sample, it is attracted by one pole and repelled by the other. The trout snout lit up the computer screen. “We were jumping around the room when we saw this blip” representing a magnetic dipole, Diebel says.

    The group next used a confocal microscope to map structures within the putative magnetoreceptor cells, arrayed in the shape of a three-leaf clover. They discerned what appeared to be chains of magnetite resembling those found in bacteria that respond to magnetic fields. How the chains may function in multicellular organisms is anybody's guess, but Kirschvink speculates that changes in Earth's magnetic field twist the chains, perhaps forcing open ion channels that send signals to the brain.

    Not everyone is convinced the scientists have uncovered the whole picture. “These cells appear to be involved in magnetoreception, but their role in behavior is still unclear,” says John Phillips, a neuroethologist at Indiana University, Bloomington, who studies how light helps newts orient themselves in their surroundings. To tie the mechanism to behavior, researchers still must try to disrupt the cells and show an effect on navigation. The work could open the door to exploring an enduring biological mystery: whether, unwittingly, we use the traces of magnetite in our own bodies to make more sense of this bewildering world.


    Fishing for Toxic Chemicals

    1. Jocelyn Kaiser

    Many toxicologists can remember being dogged at some point by people opposed to chemical tests on animals. But when Richard Winn, a toxicologist at the University of Georgia, Athens, asked one protester how she would feel if lab mice and rats were retired in favor of fish, she answered, he recalls, “that because fish don't have faces, she would be much more comfortable.” If other animal activists feel the same way, then Winn has moved a step closer to easing the disapproval that he and his colleagues often feel. At the meeting, he described a promising new line of fish for screening toxic chemicals.

    To enhance their ability to detect the mutations that chemicals might cause, in the mid-1980s toxicologists began equipping lab mice with bacterial genes that can be pulled out and screened for damage. This is much easier than, say, screening the entire mouse genome or waiting for tumors to develop. Now, Winn and his colleagues have introduced the same bacterial genes into fish, and they found in early tests that the transgenic fish are just as sensitive at picking up mutations as the modified rodents are.

    Experts caution that more work will be needed before the fish find widespread use as guinea pigs. But if the research does pan out, the fish should help make toxicology testing cheaper as well as less politically sensitive than it is in rats or mice. A standard 2-year testing regimen on rats, for example, can cost $1 million or more; nobody's done such an estimate for fish, but keeping a fish costs “pennies a year,” compared to about 20 cents a day per rodent, Winn says.

    Hoping to find an alternative, Winn turned to medaka, the small Japanese freshwater fish already used for toxicology tests. These fish had previously been modified so that they carry a foreign gene to detect mutagens, but this target is so small that it can pick up only certain small-scale chemical effects—those that alter a single A-T base pair.

    In the first phase of its experiment, Winn's group took two bacterial genes, called LacI and cII, which are used in mice to detect mutations caused by chemicals, and spliced them into a bacterial virus. The researchers then injected this bacteriophage into medaka eggs, where it carried the new genes into the nucleus. The fish that developed, Winn found, carried the genes in all their cells and had a low rate of spontaneous mutations in those genes.

    Winn's group next dumped a standard mutagen, N-ethyl-N-nitrosourea (ENU), into the fish tanks and, after waiting 1 to 16 hours, ground up the fish and retrieved the bacterial DNA for analysis. The researchers found that they could detect even slight genetic changes, charting a two- to threefold increase in mutations at low exposures to ENU.

    Winn also described a transgenic medaka with a third gene called LacZ, which he says works well for detecting radiation-induced damage. Radiation tends to knock out or rearrange big chunks of DNA, and this gene is big enough—and its carrier, a circular piece of DNA called a plasmid, is sturdy enough—that there is sufficient DNA left for analysis after a radiation hit. In collaboration with University of Georgia colleagues who work at the site of the Chernobyl reactor accident and at nuclear waste dumps, Winn has begun exposing these fish to radiation-tainted sediments and looking for effects on the gene.

    “I was really thrilled” to hear about Winn's progress, says toxicologist Barbara Shane of Louisiana State University in Baton Rouge, who studies cancer in mice. She and others are eager to begin tests on transgenic medaka. Winn cautions, however, that more work is needed to prove that the fish give predictable results with many more chemicals. “We're still at an early stage, but it's time to talk about it,” he says.


    More Questions About the Provider's Role

    1. Michael Hagmann

    Of all sexual arrangements, monogamy is the rarest of the rare—only a small percentage of animals do it. Why did our ancestors adopt such an unusual arrangement? One theory has been that early human fathers provided food for their mates in exchange for fidelity. But according to reports presented at the meeting, multiple partners and blurred lineages have developed in some modern cultures. The existence of such societies, one anthropologist suggests, raises questions about whether the nuclear family, with a faithful couple at its heart, arose as a result of the food-for-fidelity bargain. Other evolutionary models, such as mitigating males' fierce competition for access to females, may better explain monogamy's origin, argues anthropologist Kristen Hawkes of the University of Utah, Salt Lake City.

    For a long time, many anthropologists thought monogamy arose in our early ancestors as part of an unspoken bargain, in which males bought fidelity by filling the family cooking pot, seeking to avoid investing resources in another man's child. In this view, the gender bargain was a key adaptation that set off the evolution of our genus, Homo. But when Hawkes heard about isolated primitive societies in which paternity is often fuzzy, she started wondering whether other models could better account for such diverse family arrangements.

    While living among the Aché tribe of eastern Paraguay and later the Hadza tribe in northern Tanzania off and on for several years, Hawkes and her team found signs of a remarkably egalitarian society. After the men returned from a hunt, each family received equal portions of meat. This doesn't fit the bargain hypothesis, Hawkes says. What's more, the few Hadza men who scored a kill—not an easy task when hunting big game with a daily failure rate of 97%—had younger (and therefore presumably more fertile) wives and other sexual partners, and they fathered more children than other men did. The hunters' offspring also had a higher survival rate, perhaps because the fathers tended to mate with skilled, hard-working women who gathered most of the food by foraging for plants.

    Other research presented at the meeting also undermines the bargain hypothesis. Among the Aché as well as the Barí of Colombia and Venezuela, the belief that a child can have several fathers (a phenomenon called partible paternity) is quite common, says anthropologist Stephen Beckerman of Pennsylvania State University in University Park. About 24% of the Barí children and 63% of the Aché children had more than one cultural “father,” and all of the fathers offered the children food gifts and protection. Such children had a survival advantage, Beckerman reports, noting that “80% of the children with secondary fathers survived to age 15, compared to only 64% of the children with a single father.” Thus, “partible paternity is a poke in the eye for the bargain hypothesis.” Beckerman says this shows that being certain of paternity is not necessary in some human cultures—and therefore may not have been “a crucial element in the evolution of modern humans.”

    If not male provisioning, then what did spur monogamous arrangements? Hawkes has her own theory: Monogamy arose as “negotiations between males” about access to females, to cut the high risks of direct fighting.

    Other anthropologists welcome Hawkes's critique. “It was way too simple an idea that monogamy is exclusively based on male provisioning,” says Frank Marlowe, a biological anthropologist at Harvard. Still, not everyone is convinced. “Some anthropologists bitterly disagree with [Hawkes],” says Nicholas Blurton-Jones, a biological anthropologist and professor emeritus at the University of California, Los Angeles. “But she's gaining ground.”


    2000 Budget Plays Favorites

    1. David Malakoff*
    1. With reporting by Jocelyn Kaiser, Michael Hagmann, David Kestenbaum, Eliot Marshall, Jeffrey Mervis, Robert F. Service, and Gretchen Vogel.

    Computing initiative gets top billing and biomedical research gets short shrift in a very uncertain budget year

    When Clinton Administration science officials gathered on Monday to unveil the president's fiscal year 2000 R&D budget proposal, the lineup of speakers symbolized what's hot this season—and what's not. On hand to highlight their agencies' spending plans were National Science Foundation (NSF) chief Rita Colwell and Department of Energy (DOE) Secretary Bill Richardson. But conspicuously absent was National Institutes of Health (NIH) head Harold Varmus, one of the featured speakers last year as Vice President Al Gore announced plans for a record dollar increase for biomedical research spending and hefty boosts for other major science agencies. This year, Clinton is asking Congress for a meager 2.1% increase for NIH, to $15.9 billion—and Varmus decided to honor a previous commitment to speak at a scientific conference in Colorado.

    Charting a course.

    Energy Secretary Bill Richardson and senior Administration officials discuss FY 2000 budget that shows rise in spending on basic science.


    NIH officials insist that the size of their proposed budget is not related to Varmus's absence. (NIH deputy director Ruth Kirschstein sat at the podium but was the only one of seven senior science officials who did not offer prepared remarks.) But Varmus would have had a hard time putting a positive spin on the comparison between NIH and the other major R&D agencies. If approved by Congress, NSF would get a 7% hike in its research budget—the biggest percentage growth of any major science agency—as part of a 6% overall increase. And despite a relatively skimpy 3% raise proposed for DOE's $7.2 billion R&D budget, Richardson believes “science has done relatively well in the early skirmishes” of what promises to be an uncertain budget year for research funding. Administration officials also spoke repeatedly about their support for basic research, up 4% to $18.2 billion, most of it university based and awarded competitively after peer review. Even NASA Administrator Dan Goldin and Department of Defense (DOD) R&D chief Jacques Gansler, whose budgets are declining, managed to praise the Administration's support for science.

    Overall, the White House is asking Congress to trim federal spending on R&D by about $1 billion in the budget year that begins in October (see table). But civilian R&D is slated for a 3% increase, to $39.8 billion, while defense R&D—most of which is at the applied research and development end of the spectrum—would decline by $2.2 billion. In the unlikely event that those numbers hold up in Congress, the Administration would reach a long-held goal: For the first time since the late 1970s, civilian research would claim a majority—51% to 49%—over defense R&D.

    A more sensitive ratio is the growth in biomedical research compared with the rest of the scientific landscape. Colwell noted a recent NSF study that showed a spectacular growth in the share of federal funds going to biomedical research between 1970 and 1997, from 29% to 43%, at the same time the share going to the physical sciences and engineering dropped from 50% to 33%. (The slice for the social sciences has also shrunk slightly, while computer science has grown.) “Society cannot live by biomedical bread alone,” she intoned. Presidential science adviser Neal Lane said that the 2000 request is intended to reestablish “an optimum balance between health care research and other scientific disciplines.” But he added that an increase for NIH beyond the 2% requested “would be fine” as long as Congress meets the Administration's requests for the other agencies.

    The unquestioned star of the show was a previously announced computing initiative called Information Technology for the Twenty-First Century (IT2) that aims to wire the entire nation to a faster, more capable electronic highway (Science, 29 January, p. 613). But even Washington insiders remain confused about whether the $366 million request for the six-agency initiative, to be led by NSF, represents “new money,” as the Administration claims. “It looks like a grab bag of some new programs and some old, but it's hard to tell,” said one congressional aide. Brushed aside like an old toy is the Administration's previous favorite computing initiative, the Next Generation Internet, pegged for level funding at $110 million to continue hooking up leading research sites with faster connections.

    Another cross-agency initiative, this one an ecological research program, was crafted in response to a presidential advisory report last year calling for more biodiversity research (Science, 14 August 1998, p. 895). Called Integrated Science for Ecosystem Challenges, the $96 million initiative will target such areas as harmful algal blooms, invasive species, habitat conservation, and information management and monitoring. The Interior and Agriculture departments get the largest shares in the five-agency effort.

    How these and other initiatives will fare in Congress, however, remains to be seen. While Clinton has promised to use most of a $76 billion surplus to shore up the Social Security retirement system, Republican leaders are calling for tax cuts. If the White House cannot reach a deal with Congress, all bets are off. Some researchers take heart from a similar impasse last year, which ended in a raid on the surplus, resulting in major increases for NIH and other science programs.

    Here are highlights of the overall R&D request by individual agencies:

    · NIH: Presenting the Department of Health and Human Services budget, Secretary Donna Shalala admitted that she wasn't happy about what was happening to biomedical research. An overall 2.1% raise for NIH would fail to keep pace with the rate of inflation in the biomedical sector, she noted. And it would result in a sharp drop from last year's peak in the number of new and competing grants awarded next year to independent biomedical researchers—from 9171 to 7617. NIH's total portfolio would stay at a record level of about 30,000 grants, however.

    As Kirschstein explained, a flat budget would force NIH to trim the rate of new awards in order to make good on its commitments to earlier grants, which last an average of 4 years. As part of the year 2000 chill, NIH has proposed no compensation for inflation in grants this year.

    “This business of going up and down on NIH budgets and science budgets in general is something that every college president wrings their hands about,” Shalala conceded. “I don't think the roller coaster [approach] is an effective way to invest in this nation's scientific infrastructure.” However, she noted that NIH “did go up very high last year,” and that the Administration's request would still keep NIH on track for a 50% increase in its budget by 2003.

    Biomedical research activists—who earlier set a goal of increasing NIH's budget by 15%—are already mobilizing. William Brinkley, a cell biologist at Baylor College of Medicine in Houston and president of the 56,000-member Federation of American Societies for Experimental Biology (FASEB), said, “We are disappointed that the president has not chosen to maintain the momentum that we helped create last year in favor of doubling the NIH budget.” Brinkley was preparing to fly to Washington to join with other societies on a campaign for a 15% hike, the next step on the road to doubling. He said he was worried about the confusing message this budget proposal would send to students considering careers in science.

    · NSF: The agency is placing its heaviest bets in two areas, information technology and biocomplexity, that are favorites of its new director, Rita Colwell. Thanks to some creative bookkeeping, NSF has calculated that activities in these two categories alone would receive slightly more than the overall $217 million increase for the agency's entire budget, which would rise by 5.8% to $3.95 billion. (The discrepancy is explained in part by NSF's decision to absorb two programs highlighted last year—Knowledge and Distributed Intelligence and Life and Earth Environments—into the new initiatives.)

    Numbers aside, NSF hopes to sell the idea that information technology benefits every discipline, and that the whopping 41% increase—an extra $110 million—for software and networking research in its computer science directorate will also help biologists, geologists, and astronomers despite anemic increases ranging from 2.6% to 4.5% for NSF's six other directorates. “It will transform the way we do science,” says Colwell. In particular, NSF hopes for $36 million to buy or lease a 5-teraflops machine that would be available to all researchers on a competitive basis.

    The $50 million biocomplexity initiative is spread among several directorates, notably biology and geosciences. It will encompass such activities as robotic exploration of hostile environments, remote sensing, and genomics as well as long-term ecological studies and inventories. Colwell has also put her stamp on an education initiative that she inherited from her predecessor, Neal Lane, by trumpeting $7.5 million for fellowships to graduate students who want to teach in elementary and secondary schools. “It will have an impact well beyond the dollars spent,” she says, “by improving science content in the classroom and making these students better teachers.”

    NSF's major research facilities account includes $7.7 million to build a nationwide $75 million earthquake engineering simulation network, $16 million to continue funding detectors for CERN's Large Hadron Collider, $8 million for final design of the antennae on the proposed millimeter array telescope, and $17.4 million for the new South Pole station and added logistical support in Antarctica. A tight budget forced NSF officials to delay plans to begin building a $70 million plane for atmospheric research, and a $25 million Polar Cap Observatory is off the table after continued congressional opposition led by Senator Ted Stevens (R-AK).

    · DOE: On paper, DOE's request is down $14 million from last year. But after subtracting several one-time expenses (including 1999 money to buy radioactive material from Russia), the department reasons that its $17.84 billion request is a 4.1% boost. The nuclear weapons stockpile stewardship program will get $131 million of the new money, pushing its allotment to $4.5 billion. Another $208 million will fund projects to promote energy efficiency and renewable energy resources, in part to reduce CO2 emissions.

    The $1.3 billion Spallation Neutron Source at Oak Ridge National Laboratory in Tennessee, which would deliver streams of neutrons for materials, biological, and other research, is penciled in for $214 million this year, up from $130 million. DOE's share of the IT2 program, which it calls the Scientific Simulation Initiative, would put $23 million toward modeling complex systems, including climate and combustion engines, and $47 million to develop ultrafast computers and the tools to use them.

    · EPA: Officials play down a drop, from $562 million to $535 million, in the budget for the Environmental Protection Agency's Office of Research and Development (ORD) by noting that the 1999 figure included $50 million Congress added for 1-year projects that EPA did not request. “Take earmarks away, and it's really about a 4.4% increase,” says ORD Assistant Administrator Norine Noonan. About $61 million will fund studies of the health effects of fine soot to help the office implement a National Research Council-designed research program. The total also includes $110 million, a 15% increase, for the extramural grants program; about $6.5 million for a new coastal environmental monitoring program; and—as part of new Administration cross-agency initiatives—$2 million for studies of pollutants and asthma and $5 million for ecosystem studies.

    · DOD: A 2% rise in the overall defense budget masks an 8% decline in the amount for research, development, testing, and evaluation. Within this $34.4 billion R&D figure, basic research remains level at $1.1 billion. A few high-visibility programs such as bioweapons and computer technology defenses are targeted for raises. But they don't grow by enough to cheer big research institutions that rely on DOD support. Basic defense science is “essentially flat funded,” says George Leventhal of the Association of American Universities, who sees “not a lot of joy” in this budget. He will be visiting Congress to argue that military threats facing the nation are technology-related, requiring a larger investment in science.

    · NASA: For the sixth straight year, NASA Administrator Dan Goldin faces the task of doing more with less. The space agency's overall budget would decline by 1%, and science and technology programs by 4% to $5.4 billion, with several research programs facing major cuts to free up more than $1 billion toward completion of the international space station. The money will be used to build several major components—including a propulsion module—that the Russian government was scheduled to provide. But as Russia's economic woes have mounted, members of Congress and some Administration officials have pressed Goldin to make backup plans, and next year's budget drops a $600 million bailout that Goldin proposed last year.

    Scientists planning experiments aboard the station would be among those hardest hit by the cuts. The station research program will lose $200 million next year, or about one-third of its funding. Goldin put a positive spin on the reduction, saying that delaying the science would put it “more in phase with” station construction, aiming at completion in 2003. “We didn't want the research equipment to be ahead of the assembly,” he said, hinting that the research funds could be restored later.

    Several programs to develop new high-speed airplanes were also victims of the station's problems, and canceling them allowed the agency to save $162 million. Earth and space sciences were spared, registering small increases. Some of the funds will be used to launch the first three of a new generation of relatively inexpensive Earth-observing satellites, part of NASA's continuing efforts to stretch sparse budgets. The increases also reflect a busy year in 2000, with planned launches of a host of spacecraft to explore Mars and other celestial bodies.

    · NIST: As in recent years, the White House is again calling for a healthy budget increase for the National Institute of Standards and Technology, up $94 million this year to $735 million. A large chunk of that boost—$55 million—will provide more money to build a $218 million electronics measurement and standards laboratory at NIST's Gaithersburg, Maryland, facility. The agency's core lab program would get a $9 million boost, to $285 million.

    Continuing its phoenixlike rise from the dead, NIST's Advanced Technology Program (ATP) is again on the wing. The program was targeted for death by congressional Republicans 4 years ago but withstood the attack by trimming its plan for ambitious growth. Now, the Administration is asking for $239 million, an 18% increase over last year. Acknowledging that ATP “has not been a terribly popular program among some folks on the Hill,” NIST director Ray Kammer says the increase is “ambitious, but I believe it is strongly justified by the results” of individual projects, which join companies and university researchers working in a variety of fields.

    · NOAA: The forecast is less rosy for research at the National Oceanic and Atmospheric Administration. Research is slated to go up a meager $5 million from last year, to $292.6 million. Much of that increase is slated for GEOSTORM, a satellite program to monitor solar winds. Within the oceans and atmospheric research area, the agency plans to shift some money around, adding roughly $23 million to climate and air quality research—including a new climate modeling supercomputer—while docking oceans and Great Lakes research virtually the same amount. Elsewhere within the budget, the agency plans to spend $51.6 million to start building the first of four “acoustically quiet” research ships for fisheries research and stock assessment.

    · USDA: The Department of Agriculture's peer-reviewed research program would get a 68% boost to $200 million under the Administration request. The Administration is hoping that Congress has finally seen the light on this program: Last year, it approved a $22 million increase, to $119 million, the first significant rise in a program begun in 1991. Food safety research, focused on fruit and vegetable contamination, would get an additional $15 million, to $36 million.

    · Geological Survey: The Department of the Interior's natural science agency has requested a budget of $838.5 million, an increase of $40.6 million, or 5%, over last year. The plan provides $18.5 million more for research on the management of resources and ecosystems, adds $13.5 million to improve the nation's natural disaster warning and response system, and injects $10 million for mapping the landscape and its biological resources. The Administration is also requesting $5.6 million for research and monitoring of declining amphibian populations. These increases would be offset by major cuts in coastal and mineral studies, research on ecosystems and their habitats, and marine research in the North Pacific and Bering Sea.

    All these proposals were delivered on Monday to a bitterly divided Congress, from a president on trial in the Senate for perjury and obstruction of justice, in a fiscal climate characterized by tight budget limits and record surpluses. Given those variables, even seasoned Washington hands are reluctant to predict how they will fare.


    New Drills Augur a Great Leap Downward

    1. Kevin Krajick*

    Using lasers and heat, researchers are designing new ways to delve deep into Earth and other planets

    For more than 100 years, humans have dug deep holes with the same tools: dynamite and drills. Today's diamond-tipped machines routinely go thousands of meters down in search of fuel, minerals, geologic data, and deep microbes (Science, 2 May 1997, p. 703). But even they seem to have hit the wall at 10 or 12 kilometers—just 0.2% of the way to Earth's core—so there is no way to get at most of our planet. At shallower depths, new scientific and commercial enterprises are demanding cleaner, cheaper holes. So engineers are exploring what they call “revolutionary drilling,” totally new ways to dig deep into Earth—or any other planet.

    “We will need to design lightweight instruments and techniques that can drill at high speeds and extreme depth,” NASA Administrator Dan Goldin said last week at a workshop on revolutionary drilling in Washington, D.C.** The scientists and engineers at the meeting offered a range of candidates already being tested on a small scale, including lasers, melting devices, and machines that pulverize rock with heat. With NASA setting its sights below the surfaces of other worlds, researchers are hoping for a big boost in interest and funding.

    In theory, there is no limit to how far rotary drills can go. The world's deepest excavation—the 12.3-kilometer Kola Hole, in the former Soviet Union—was made with rotary drills. But after 20 years of work, by 1992 Kola was plagued by cave-ins, both physical and financial; it has since collapsed to 8.7 kilometers and drilling has stopped. The deepest U.S. hole was 9.6 kilometers, made by Oklahoma gas drillers in the 1970s. Ambitious projects to go deeper always hit the same basic limits: heat, pressure, and money. Temperature mounts quickly and wrecks equipment—at 9 kilometers down, the temperature reaches 260°C or more, and the pressure can crush the metal casings that line drill holes. Moreover, many holes that technically could be drilled just cost too much; the main obstacle to affordable geothermal power, for example, is the fact that it lies under hard igneous rocks that are costly to penetrate.

    Much of the impetus and funding to develop new methods may soon come from NASA, which needs new tools to search for microbes below the surface of Mars and, later, below the icy crust of Jupiter's moon Europa. “We won't have the luxury of rotary drills,” says physicist Geoff Briggs, head of NASA's Center for Mars Exploration. “They're heavy, complicated, and energy intensive.”

    One new technology that may be useful in space is the subterrene—an electrical resistor capped by a bullet-shaped ceramic tip that simply melts its way down. You just turn on the juice to heat the resistor and stand back, says James Blacic, a researcher at Los Alamos National Laboratory in New Mexico. Once the molten rock cools, it forms a strong, glassy lining for the hole, a result that may do away with cumbersome metal casing.

    Such a machine might be powered by solar energy and so could be useful on Mars. Planetary geologists think that traces of liquid water rather than ice—thought necessary for life—may start around 3 kilometers below Mars's frozen surface. Before going that far, NASA officials are planning a first exploratory hole of 200 or 300 meters, perhaps as soon as 2009. “That means we have to think about the digging method right now,” says Briggs. Europa is also thought to harbor liquid water—an ocean of it, below perhaps 10 kilometers of ice. “Figuring out how to get at that has become even higher on the list than Mars,” says John McNamee, project manager for the NASA Solar Probe.

    Subterrenes may have earthly uses too. The Los Alamos researchers have already used one to make a drainage hole near an ancient kiva at nearby Bandelier National Monument in New Mexico; a conventional drill might have destabilized the structure. But the machine has its own limits. It currently goes only about a meter per hour—too slow for most commercial uses—and tests so far have sent it down only 30 meters.

    For faster work, the U.S. Department of Energy (DOE) has already field tested a technique called thermal spallation. The idea is to heat rock so suddenly and severely that it expands and shatters, explains Jeff Tester, head of the Energy Lab at the Massachusetts Institute of Technology. The heat can be applied with many tools, including electric arcs, microwaves, and flames. And researchers already know it works: Back in 1988 a DOE team at the Rock of Ages granite quarry in Barre, Vermont, dug a 300-meter hole with what was essentially a rocket engine suspended in the hole like a plumb bob. The engine exhaust gets rid of the pulverized rock bits by blowing them to the top. “You don't want to hang your head over the hole—it's pretty aggressive,” says Blacic. Spallation seems limited to rocks with big crystals that come apart in chunks, such as granites—but those are the kind that typically overlie geothermal sources.

    Perhaps the sexiest new drilling technologies are spin-offs from Star Wars lasers, originally designed to blow enemy satellites out of the skies. Congress ordered Star Wars technology opened to peaceful uses in 1996, and now the Gas Research Institute (GRI), the scientific arm of the U.S. gas industry, is aiming the lasers at rocks in the lab. In one recent laboratory test, the Army's million-watt Mid-Infrared Advanced Chemical Laser at White Sands, New Mexico, cut through a 15-centimeter piece of sandstone in 4 seconds—10 times faster than a rotary drill. Other tests suggest that these lasers can cut granite, which is much harder than sandstone, 100 times faster than drills do. Lasers can pulverize, melt, or vaporize rock, depending on the intensity and duration of the beam. Says petroleum engineer Darien O'Brien, a GRI investigator, “You might see a hot light coming out of the hole, or sometimes just a white fluffy powder we call dust bunnies.” Tests of a truck-mounted device that could be taken into the field may come within a year, says O'Brien.

    However, lasers need lots of energy, so they may be best suited for small-diameter research holes on Earth rather than commercial operations or space missions. They may help, for example, to retrieve samples of deep microbes. Today, such samples are often plagued by contamination from surface organisms because drillers must circulate slurry up and down the hole to lubricate drill bits and carry up rock cuttings; the mud inevitably carries bacteria. Lasers and other high-temperature methods automatically sterilize the walls of the holes they make; small exploratory side passages could then be made to pick up biological samples, says microbiologist Thomas Phelps of Oak Ridge National Laboratory in Tennessee.

    New digging methods such as a semi-robotic thermal spallation device may also allow deeper mines with less risk to people—although working conditions won't be pleasant. The deepest mine in the world is the 3777-meter Western Deeps gold mine outside Johannesburg, South Africa, where the rock reaches 60°C; the only thing keeping the miners from cooking is cold air pumped from the surface. The veins may continue to 7 kilometers, but to keep expanding downward, companies must deal with daily earthquakes and high-pressure outbursts of rock. “We feel we can get further,” says Ray Durrheim, head of deep-mining programs at the South African Council for Scientific and Industrial Research in Johannesburg. “Rock is rock. Further down is more of the same, just hotter and hotter.”

    • * New York journalist Kevin Krajick is writing a book about diamond prospectors.

    • ** “Revolutionary Drilling Technologies,” 27–29 January, sponsored by the National Advanced Drilling and Excavation Technologies Institute of MIT.


    Pulsar Weather Map Shows Storms on a Strange World

    1. James Glanz

    Flickers in the radio beam from a spinning neutron star may reveal a procession of compact, energetic storms marching around its polar cap

    Two radio astronomers, peering at the polar cap of a dead star 3000 light-years away, have put to shame the weather report graphics that show cold fronts and low-pressure zones rambling across the earthly continents. The spinning, collapsed star, called a pulsar, could easily fit within the city limits of Chicago, and the astronomers may have mapped as many as 20 individual “microstorms”—each probably an explosive hail of gamma rays and subatomic particles—circulating around a region of the cap that is no more than 400 meters across. The storms appear as transient, drifting flashes within the beam of radio waves that emanates from the pulsar's cap.

    Astronomers had puzzled over the flashes before. Now, by observing a pulsar with an especially regular sequence of them, Joanna Rankin of the University of Vermont, Burlington, and Avinash A. Deshpande of the Raman Research Institute in Bangalore, India, may have traced them back to features near the stellar surface—like keen listeners deducing the pattern of holes on a player-piano roll from the tune it plays. “This is the first time that any analysis of [the subpulses] has been possible to such remarkable accuracy,” says Raman Institute astronomer V. Radhakrishnan. And some astronomers think the work, presented at an American Astronomical Society meeting in Austin, Texas, last month, holds a clue to precisely how and where pulsars generate their intense radio beams in the first place.

    The spinning object at the heart of a pulsar is believed to be a neutron star, a superdense ball of matter left behind after an ordinary star explodes as a supernova. As gravitational forces crush the neutron star to roughly 20 kilometers across, its magnetic field gets compressed and amplified as well, becoming more than a trillion times as intense as Earth's.

    The fields sprout like cowlicks from each cap, and as they are whipped around by the rapid spin, they act like the whirling dynamos in a terrestrial power station, creating electric fields that accelerate electrons, positrons, and other charged particles. Accelerated charges radiate, and somehow they act in concert to generate “lighthouse beams” of radiation. Because the caps are offset from the spin axis, the beams rotate through space like a lighthouse warning distant ships of a reef, and astronomers pick up blips of radio waves if a beam's orientation allows it to sweep past Earth.

    That much seems clear, but not much more. “Pulsars have been around for a long time,” says Jonathan Arons, a theorist at the University of California, Berkeley, “so people assume the problem must be solved.” But theorists have struggled to explain the great intensity of the beams and debated whether they originate close to the surface or thousands of kilometers up, where the magnetic fields are whipping around at nearly the speed of light. And they have had even less success in explaining the subpulses that dance, drift, and disappear within the clocklike blips of the main radio emission.

    One problem was the fickle nature of the subpulses, says Rankin. But she and Deshpande focused on a pulsar in the constellation Leo whose subpulse behavior, while complicated, was regular and reproducible. It was as if a dancer were going through the steps of a complicated waltz, then repeating them over and over. The team mapped this choreography in vivid detail with the 305-meter-diameter radio dish in Arecibo, Puerto Rico, then analyzed the results.

    Rankin and Deshpande found that the data pointed to a train of radio-emitting hot spots, which were marching around the outside of the polar cap. The pulsar they chose, which spins on its axis once every 1.1 seconds, was ideal for analyzing such behavior, says Rankin, because its lighthouse beam just grazes Earth. Thus, each time the beam sweeps past, Arecibo observes a different slice through the slowly circulating train, allowing the full pattern to be mapped.

    The map showed that a sequence of 20 hot spots, circulating about once every 37 seconds, remains stable for minutes at a time, then jumps to a pattern with just two spots. “They've seen a pattern that they can follow all the way around,” says Alice Harding, a theorist at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

    Rankin and Deshpande think they are seeing an effect first predicted by Malvin Ruderman of Columbia University and Peter Sutherland of McMaster University. In their theory, the electric fields generated by a pulsar's spin are strong enough to cause “sparks” near the surface, like breakdowns above an ordinary electrode. Charged particles generated in these electrical storms could stream up the magnetic field lines and eventually act as a kind of antenna to send out the radio waves that are observed as the subpulses. Meanwhile, according to some well-known physics of charged particles in electric and magnetic fields, the microstorms—each about 100 meters high—would drift around the polar cap.

    That picture has implications for the source of the main beam, says Harding: It suggests that the underlying physics could be similar and, she says, “argues for radio emission near the surface of the star.” Ruderman, however, isn't convinced that the new work says anything about the main beam. “I personally don't see how it reflects directly on the mechanism for radio emission,” he says. And among other criticisms, Arons argues that even the source of the radio hot spots may not lurk right at the stellar surface. The action could be taking place anywhere along the line of sight, he says.

    Rankin replies that Ruderman and Sutherland's picture of electrical storms near the surface is the only reasonably complete explanation for the hot spots at the moment. “It would be absurd not to map these emission centers back onto the surface,” she says. But she concedes that the real test will come in studies of the “weather maps” on other pulsars. Astronomers will be watching as closely as picnickers glued to their TV sets during a bad-weather weekend.


    New Minister on a Mission to Modernize

    1. Robert Koenig

    Three months into her new job, Germany's research and education minister is stirring up the system, while promising not to go too Green

    BONN—When Germany's new Social Democrat-led government swept into power last fall, after 16 years of rule by the conservative Christian Democrats, the nation's scientists greeted the change with mixed emotions. The new government, elected on a tide of high expectations, formed a coalition with the eco-friendly Green Party, and some scientists fretted that the Greens would be put in charge of the research and education ministry. After a few weeks of nail biting, however, they breathed a sigh of relief when the job went to Edelgard Bulmahn, the Social Democrats' parliamentary “shadow minister” for science and education. But that doesn't mean that the research community is in for a smooth ride.

    In a wide-ranging interview with Science last week, Bulmahn made it clear that she intends to shake up the research system, with a long list of goals that include loosening bureaucratic restraints on German research institutes, modernizing the university system, bolstering programs to promote women and independent young scientists, and more closely linking research efforts at universities and independent research institutes. “We now have a research system that is supported by separate columns: Max Planck Society basic research institutes, applied research institutes, university research, and national research centers,” Bulmahn says. “We must significantly strengthen the connections among these different research organizations.”

    Bulmahn has also inherited some thorny political problems from her predecessor, Jürgen Rüttgers, and some controversial policies of her new government are creating still more. The Social Democrats and their Green coalition partners have pledged to phase out nuclear power in Germany, prompting fierce debate over the future of the nuclear industry and nuclear research. A recent revival of militant animal-rights activism in Germany has led to demands for stricter laws in that area, while activists have threatened some prominent researchers. And federal and state officials continue to struggle over how to restructure the nation's troubled system of higher education and how to improve research.

    Bulmahn, who studied political science at the University of Hannover and then rose through Germany's political landscape, has so far drawn qualified praise from the research community. Although many scientists are wary of some positions being taken by the new government, she is given credit by research and university leaders for moving quickly to tackle some difficult problems. “She has the courage to try to modernize an old-fashioned system,” says Klaus Landfried, president of the HRK, the conference of university rectors and presidents.

    One reason for nervousness in Germany's laboratories is concern over how far the government will lean toward Green Party policies. Bulmahn has sought to reassure the scientific community that the Greens' presence will not lead to a sea change in research policy in fields such as biotechnology and nuclear fusion. “I don't foresee any fundamental change” in policy on gene technology, she says. “This government believes that biotechnology is a key technology for Germany.” However, Bulmahn says the government feels that more research is needed into “the long-term impact of genetically modified plants and microorganisms on the environment.” On the animal-rights issue, Bulmahn notes that “we already have strict laws about the treatment of animals in research,” and says she does not support an effort by activists to amend the German constitution to guarantee animal rights.

    The new government's stance on nuclear power is also causing jitters. Although the research ministry no longer has jurisdiction over applied research in nuclear energy and other energy sources, it remains responsible for federal policy on research reactors, including the controversial FRM-II neutron source in Munich (see sidebar), and for those facilities—mainly national research centers and Max Planck institutes—that carry out fusion research. Bulmahn says that, at present, she does not foresee any major change in Germany's support for fusion research, in part because the program is largely coordinated and partly funded by the European Union's Euratom program. “The budget for the new European research program already has been decided,” she says.

    That's reassuring to researchers such as Klaus Pinkau, director of the Max Planck Institute for Plasma Physics, who helps coordinate Germany's fusion research program. Pinkau told Science that the new government has not backed down on commitments that include the construction of an ambitious new fusion research facility in Greifswald. “Germany sees itself as an important part of the international effort on fusion research, and I am told the government has no intention of withdrawing from that effort,” he says.

    Bulmahn also has inherited Germany's substantial commitment to the international space station—at $1.5 billion, about 40% of Europe's contribution to the project. “I disagree with the decision to concentrate so many resources on crewed space projects,” she says, but acknowledges that she has no choice but to stand by Germany's commitment. Her own preference, she says, is for robotic missions and Germany's own space projects.

    Reforming the rigid higher education system—and related research—is also high on Bulmahn's agenda. Scientific research at some German universities is in difficult straits, partly because the number of students has grown faster than have public resources for new buildings and staff. In her first budget battle, Bulmahn convinced the Cabinet last month to approve a modest increase in federal funding for a program to improve the higher education infrastructure. She also backs the HRK's call for universities to have more flexibility in spending federal funds; in return, she wants research at higher education institutes to be subjected to outside evaluation. “We don't have a good evaluation system for universities,” she says.

    Sharing the concerns of other experts, Bulmahn worries that many talented young German researchers become frustrated during the difficult years between getting a Ph.D. and becoming a professor. In German-speaking Europe, that involves a long “Habilitation” process, during which postdocs do major research projects under the strict supervision of professors. Bulmahn and key allies, such as biochemist Ernst-Ludwig Winnacker—president of Germany's DFG granting agency—are pushing hard to switch to a U.S.-style “assistant professor” system, a step that the HRK has endorsed for natural science faculties. And the proposed new research budget includes about $18 million for a new DFG program to promote more independent research by postdocs.

    Treading lightly on the male-dominated turf of Germany's scientific elite, Bulmahn also wants to ensure that women have a better chance to rise to leadership in science and higher education. “The number of women in leading scientific positions in Germany is too low—far lower than the numbers of women in such positions in the U.S. or even in Britain or in France,” she complains. “We want to encourage scientific organizations to employ more women, and to give more women a chance to excel.”


    What to Do With FRM-II?

    1. Robert Koenig

    BONN—One of the thorniest issues on the slate of Germany's new research and education minister, Edelgard Bulmahn, is what to do about Munich Technical University's (TU's) planned neutron source, a research reactor dubbed FRM-II, now under construction in the Munich suburb of Garching. Many members of Bulmahn's Social Democratic Party and the eco-friendly Green Party have for years opposed FRM-II's design because it will use highly enriched uranium (HEU) fuel. Use of HEU as a reactor fuel is now discouraged by most governments because the material is the key ingredient of some types of fission bombs.

    When the two parties formed the nation's new coalition government last fall, activists pressed for a quick decision to delay or even cancel construction of the new reactor. But last month, the government opted to put off any action on FRM-II until a new “expert group” assesses whether it is technically and financially feasible to redesign the reactor to use low-enriched uranium (LEU) fuel. The expert group, whose members were named last week, includes four German physicists and two nonproliferation experts.

    The panel is expected to submit its findings this summer, and Bulmahn told Science that no decision on converting FRM-II will be made until then. “My ministry and the foreign ministry, which is responsible for nonproliferation policy, are discussing the details,” she says. Also involved in that decision will be the environment ministry and the Bavarian state government, which strongly backs the project.

    Meanwhile, construction of the $500 million neutron source continues, and TU officials confidently predict completion on schedule in 2001. Physicists in Garching say they believe the expert panel will agree with them that a redesign—now that construction of FRM-II is nearly two-thirds complete—would delay the project by several years, cost tens of millions of dollars, and result in a less desirable neutron source. “To use LEU fuel, we would have to double the power, and we would end up with a second-class neutron source,” says Wolfgang Gläser, a prominent TU experimental physicist who helped develop the concept for FRM-II.

    The Garching physicists have allies among U.S. neutron researchers, including Nobelist Norman F. Ramsey of Harvard University, who wrote recently in a letter to the president of TU that “many of [the FRM-II's] most interesting research applications are dependent on the use of highly enriched uranium.” But experts at Argonne National Laboratory in Illinois have argued for years that FRM-II could be redesigned to use LEU fuel. Dave Baurac, an Argonne spokesperson, says “we are convinced that it is possible to build a research reactor that can meet all the FRM-II performance goals while using LEU fuel. But we have not been provided with the information required to assess what is technically or economically feasible” at this point in FRM-II's construction.

    Wolf-Michael Catenhusen, the German research ministry official who will lead the expert panel, told Science that the committee will likely “integrate Argonne's information and expertise” in its deliberations, but he noted that Argonne's previous recommendations had assumed a redesign before—rather than in the midst of—construction.

    TU officials were a bit miffed that none of their scientists were included in the expert panel, but took heart that four prominent nuclear physicists with experience in converting reactors—including Eckehard Bauer of the ILL neutron source in France and Peter Armbruster of Germany's GSI heavy-ion research institute—will bring expertise to the group. Says Gläser: “This decision should be made on the basis of science, not politics.”
