News this Week

Science  28 Jan 2000:
Vol. 287, Issue 5453, pp. 558


  2001 BUDGET

    Clinton Seeks 'Major Lift' in U.S. Research Programs

    Andrew Lawler

    Sometimes good news travels so fast that it gets there before its scheduled delivery date. Two weeks before the official roll-out of his proposed 2001 budget, President Bill Clinton unveiled his request to Congress for a hefty 7% increase in programs that account for the bulk of government spending on civilian science and technology. The proposed $2.8 billion hike, which would prove a windfall to researchers exploring everything from the sun to atomic-level machines on Earth, represents a strong commitment to academic research. It also challenges Congress to ease up on spending limits in favor of boosting science.

    “Science and technology have become the engine of our economic growth,” Clinton told students and faculty at the California Institute of Technology (Caltech) in Pasadena in a brief visit on 21 January. “We're going to give university-based research a major lift.” That lift includes a proposed $675 million increase for the National Science Foundation (NSF)—double its previous largest boost and the biggest percentage request since former president George Bush sought 18% in 1992. It would also add $733 million to the National Institutes of Health (NIH), well under its most recent $2.3 billion increase, and jump-start research in the hot field of nanotechnology. The proposals, for the fiscal year that starts on 1 October, are winning cautiously positive reviews on Capitol Hill even among Clinton's Republican opponents.

    Among the highlights:

    · NSF: In recent years, NSF's modest increases have been overshadowed by the astonishing growth at NIH. But Science Adviser and former NSF chief Neal Lane argued successfully within the White House that the foundation should receive a larger slice of the R&D pie. Half of the $675 million increase in the agency's current $3.9 billion budget proposed by Clinton would be allocated to core programs in its six discipline-based directorates. NSF is also asking for $120 million more as the biggest single player in the nanotechnology initiative and $223 million more—including a second 5-teraflops (trillion operations per second) supercomputer—for its continued leadership of an information technology initiative begun last year. “It reflects their faith in NSF that we will address our objectives,” said a happy NSF director Rita Colwell after the president's speech. “And it's necessary to achieve a balance [between biomedical and] nonbiomedical science.”

    · NIH: The Administration's 4.5% boost for NIH research likely will be bettered by lawmakers, but it's higher than last year's anemic 2% proposed increase. One Republican staffer speculates that the bigger request for the $17.9 billion agency was calculated to be “respectable enough so that the Parkinson's and cancer lobbies aren't screaming” but small enough not to interfere with the president's other priorities. Vice President and presidential candidate Al Gore also was behind the proposed increase, a politically popular stance. At the same time, some researchers complain about the Administration's touting of a $1 billion boost, without mentioning that $267 million would simply be passed along to other agencies, notably $182 million to the $200 million Agency for Healthcare Research and Quality for studies on the efficacy of various treatments and procedures.


    · Nanotechnology: The Administration is touting a new National Nanotechnology Initiative that would boost spending for this area, in which researchers manipulate individual atoms and molecules, by 84% to $497 million. Although researchers say that nanotechnology is still in its infancy—and really should be called nanoscience—the promise of creating tiny computer chips or tiny biological machines has caught the attention of White House officials. NSF would get the most money, some $217 million, while the Defense Department, which is eager to exploit nanotechnology for military systems, would spend $110 million, a 57% increase. “This moves in the right direction,” says George Whitesides, a Harvard biochemist. “We need to build a research base—and it's a big, big deal for the country,” he adds, noting that Japan and Europe are likewise increasing their spending on the field.

    · Information Technology (IT): The Internet's success continues to keep IT research and development on the Administration's front burner. For 2001, Clinton wants $2.27 billion—a $605 million boost over this year. Lead agency NSF's share would increase 43% to $740 million, with the Department of Energy (DOE) getting $667 million—a 29% boost. This year, however, Congress took away DOE's added portion of the IT pie while funding NSF's entire request.

    · Bioenergy: The 2001 request would increase spending on technologies to convert biomass into fuels and products. Department of Agriculture spending on this initiative would increase 62%, to $115 million, while DOE's share would climb 39%, from $125 million to $174 million.

    · NASA: The space agency's stagnant budget—which has hovered around $13.6 billion for the past few years—would finally rise if Clinton gets his way. “There are a lot of smiles here,” says one NASA manager. Nearly half of the $650 million boost would be spent on science efforts, including a major new initiative to study the sun. The bulk of the remainder would go to a space launch effort designed to come up with a replacement for the 20-year-old space shuttle system.
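    The percentage changes quoted in the budget items above follow directly from the dollar figures cited; a minimal sanity-check sketch, using only the numbers quoted in the text:

```python
def pct_increase(old_millions: float, new_millions: float) -> float:
    """Percentage increase implied by two budget figures."""
    return (new_millions - old_millions) / old_millions * 100

# DOE bioenergy: $125 million to $174 million, quoted as a 39% climb.
print(round(pct_increase(125, 174)))   # prints 39

# NSF's IT share rising 43% to $740 million implies a prior-year base
# of roughly $740 million / 1.43, i.e. about $517 million.
print(round(740 / 1.43))               # prints 517
```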

    Administration insiders credit Clinton's chief of staff, John Podesta, for raising the visibility of science and technology spending within the White House. “This will be viewed as an age of great investment in science and great investment in NIH, and I think we're doing it all within the fiscally prudent policies the president brought to town,” Podesta told reporters on 16 January. And they credit Lane with keeping the pressure on. “In his quiet way, Neal Lane made it his mission” to win across-the-board increases for basic science rather than just for NIH, says one White House staffer. Both Lane and Colwell have been pushing hard for such a balance, and they were joined by Office of Management and Budget officials. Gore and his staff focused on a boost for NIH, Administration officials add.

    The reaction on Capitol Hill was markedly different from the scathing reviews of Clinton's previous budget requests. “I am confident that together we can make fundamental research and development a real priority,” declared Representative James Sensenbrenner (R-WI), chair of the House Science Committee, although he added that science priorities should not be wrapped in “a larger government spending spree.” Representative Nick Smith (R-MI), who chairs the House basic research subcommittee, says he supports the proposed increases as long as they are compatible “with other priorities like strengthening Social Security, paying down the debt, and providing tax relief for working families.”

    Substantial increases may be easier to win this year, because Republican and Democratic leaders have tacitly agreed that a booming economy will allow them to bury the 1997 balanced budget agreement, which set strict spending caps. And a similarly bipartisan agreement that the rosy economy is due in large part to research appears to be emerging. “If you can afford it,” Lane told reporters before Clinton's speech, “you want to increase the federal investment in R&D, because the payoff to the whole country is so high.”

    In his Caltech speech, Clinton foresaw “an era of unparalleled promise—fueled by curiosity, powered by technology, driven by science.” R&D advocates hope to turn those lofty words into hard cash in the months ahead.


    Company Gets Rights to Cloned Human Embryos

    Gretchen Vogel

    A U.S. company has received two British patents that appear to grant it commercial rights to human embryos created by cloning. The precedent-setting patents, issued last week on the cloning method that produced Dolly the sheep, have sparked protests from groups concerned about the ethics of biotechnology patents, especially those covering human genes or cells.

    The British government is “the first government in the world that has issued patent protection on a human being at any stage in development,” claims author-activist Jeremy Rifkin of the Foundation on Economic Trends in Washington, D.C. He said he will challenge the patent, arguing that British law forbids giving someone property rights to a human even at the blastocyst stage. The patent, he says, is “breathtaking and profoundly unsettling.”

    The patent gives California-based Geron Corp. exclusive rights to “a reconstituted animal embryo prepared by transferring the nucleus of a quiescent diploid donor cell into a suitable recipient cell” up to and including the blastocyst stage. That claim includes human embryos, says David Earp, Geron's vice president of intellectual property. Last summer, Geron bought Roslin Bio-med, the commercial arm of the government-funded Roslin Institute outside Edinburgh, Scotland, where Dolly was born in 1996.

    The application process was fairly smooth, says attorney Nick Bassil of the London firm Kilburn and Strode, which represented Roslin and Geron. He says the patent office allowed the claim because it is consistent with recommendations from the U.K. Human Fertilisation and Embryology Authority and the Human Genetics Advisory Commission that cloning technology be permitted on human embryos for the development of treatments for disease. The government has imposed a moratorium on any such experiments, however, while an expert advisory group reviews that recommendation. U.K. patent office spokesperson Brian Caswell says European Union directives forbid patents on human cloning, but he suggests that the patent was allowed because it only covers embryos in “the very early stages of development” that would not result in a live birth. “The exercise of any invention would have to be in accordance with the law,” he adds.

    Geron, a biotechnology company that has also supported much of the work on human embryonic stem cells, hopes to develop so-called “therapeutic cloning” to treat human diseases. The process would involve transferring the nucleus of a patient's skin, muscle, or other cell that has been made “quiescent,” or nondividing, to an egg cell to create an embryo. The embryo would be allowed to develop for a few days and then harvested for its stem cells, which would be used to treat the patient.

    The patent surprised others in the cloning field, including commercial competitors. Michael West, president and CEO of the Massachusetts-based Advanced Cell Technology, says the claim is “extremely broad.” “If Geron is right and its claim is to a human embryo, then to my knowledge it's the first time that anyone has claimed ownership of a human embryo,” West says. Advanced Cell Technology received a U.S. patent on nuclear transfer from nonquiescent cells last year, but the patent covers only nonhuman mammalian species.

    Earp says Geron has received word from the U.S. patent office that its claims have been allowed, and he expects the patent in the next few months. However, the U.S. patent office has been reluctant in the past to issue patents covering human material, and the company's U.S. application only covers cloning of nonhuman mammals. Although Earp says Geron also plans to pursue therapeutic cloning in the United States, he says the company “is pursuing a different strategy” to protect its commercial claims.


    Blue Semiconductors Settle on Silicon

    Robert F. Service

    Researchers trying to coax light from semiconductors have a case of the blues, but they couldn't be happier. For the past several years they've managed to cajole semiconductor devices containing gallium nitride to emit blue light when pumped with electricity. That has opened a world of possible applications, including converting some of that blue light to other colors and combining them to make a chip-sized replacement for the light bulb. But blue semiconductor lights are still too expensive for such uses, in part because they're grown on expensive substrates such as sapphire. Now a team of U.S. researchers reports progress on a cheaper alternative.

    In the 17 January issue of Applied Physics Letters, Asif Khan and colleagues at the University of South Carolina, Columbia, along with co-workers at Wright Patterson Air Force Base in Ohio and Sensor Electronic Technology Inc. in Troy, New York, describe a scheme for producing blue and green light-emitting diodes (LEDs) on a base of silicon, the cheap and ubiquitous substrate for microelectronics. They've also managed to make small LEDs just where they want them on the chip, an important step toward complex displays made up of thousands of separately controlled lighting elements. The new LEDs atop silicon aren't yet as bright as those grown on sapphire. Still, “it's a good development,” says Fred Schubert, an electrical engineer at Boston University in Massachusetts. “Silicon substrates are cheap and big. So if a silicon LED technology succeeds, it would mean the technology could be very cheap.”

    The garden-variety light bulb, little changed over the last century, costs just pennies to produce but is expensive to run. It pushes electricity through a tungsten filament, turning it white hot and producing 15 lumens/watt of soft white light and a lot of waste heat. Newer compact fluorescents, which excite gases to emit light, do better, turning out 60 lumens/watt. But LEDs, which inject energetic electrons into a solid semiconductor, have the potential to put them all in the shade. As these charges move through, they shed some of their excess energy as photons of light, the color of which depends on the exact combination of materials used in the device. And because this process generates far less heat, it can theoretically produce 250 lumens/watt, says Schubert.
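    Those efficacy figures translate directly into power terms. A small illustrative calculation, using only the lumens-per-watt values quoted above:

```python
# Luminous efficacy figures quoted in the text (lumens per watt).
EFFICACY = {
    "incandescent": 15,
    "compact fluorescent": 60,
    "LED (theoretical limit)": 250,
}

def watts_for(lumens: float, source: str) -> float:
    """Electrical power needed to produce a given light output."""
    return lumens / EFFICACY[source]

# A 60 W incandescent bulb puts out about 60 * 15 = 900 lumens;
# matching that output with each technology:
for source in EFFICACY:
    print(f"{source}: {watts_for(900, source):.1f} W")
```

At the theoretical LED limit, the light of a 60 W bulb would take under 4 W of electricity.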

    But so far reality has fallen far short of theory. Billions of tiny cracks and other defects form in the gallium nitride as it is grown, and they resist the flow of electrical charges through the device, generating heat instead of light. To minimize the number of cracks, researchers grow their gallium nitride devices atop sapphire in part because it has a roughly similar crystal structure, making it easier for the gallium nitride to form an orderly lattice. Silicon's lattice is slightly different, and it suffers from other drawbacks as well. Most notably, at the temperatures usually used to vaporize gallium nitride and deposit it on the substrate—around 1000°C—silicon atoms evaporate off and get mixed up in the gallium nitride lattice, causing more defects that mar its optical properties.

    Still, the lure of silicon's low cost has kept researchers searching for a way to make blue LEDs work. In 1998, an IBM team made some initial progress using a growth technique known as molecular beam epitaxy that works at a relatively cool 750°C. This produced LEDs that turned out ultraviolet and violet light, albeit about 1/15th the brightness achieved by devices grown on sapphire. Researchers from the New Jersey-based Emcore Corp. improved matters last September by using a technique known as metal-organic chemical vapor deposition to lay down a 20-nanometer-thick layer of buffering material on the silicon substrate and then grow the gallium nitride on top.

    Khan and his colleagues combined the two approaches. First, they used a 700°C epitaxy technique to lay down a buffer layer of aluminum nitride. They then raised the temperature to 900°C and used vapor deposition to create gallium nitride. They also went beyond the other teams and used the masks and etching of conventional photolithography to place the gallium nitride only where they wanted it. The result was an array of tiny blue LEDs.

    The new LEDs still only put out about one-fifth the light of those grown on sapphire. But Khan and others say they have other ideas up their sleeves to improve efficiency. The patterning technique also opens up the possibility of making full-color gallium nitride LED displays. Khan says that his team has already managed to make green LEDs simply by adding a little indium to the gallium nitride mix. If the South Carolina team or their competitors can figure out a way to also get red gallium nitride LEDs, it would allow them to integrate both the light emitters and the electronics needed to drive them on the same silicon substrate, drastically cutting production costs. That promise is enough to keep the lights burning late into the night at semiconductor labs around the globe.


    Discovering the Original Emerald Cities

    Erik Stokstad

    Emeralds have turned many an eye green with envy. The ancient Egyptians forced slaves to dig for the precious stones, prized as a symbol of immortality. Centuries later, Romans dominated the trade, setting the gems in gold jewelry. And when conquistadors in the 16th century captured mines in Colombia, they shipped back chests full of eye-popping emeralds that were snapped up by royalty, from Indian maharajahs to Turkish sultans. Even today, dealers have no trouble spotting the exceptional clarity and intense color of the Colombian gems. But it's been notoriously difficult to track down the birthplaces of the murkier Old World emeralds.

    Now on page 631, scientists describe a kind of atomic birth certificate that can peg where emeralds were grubbed from the ground. The technique might help dealers authenticate top-quality stones, and it could clear up the mysterious origins of Old World emeralds, including some famous gems. This new kind of detective work “is just the beginning,” says Dietmar Schwarz, a mineralogist with Gübelin Gemmological Laboratory in Lucerne, Switzerland. Indeed, the approach is already providing information on ancient trade routes, and it might someday offer tantalizing hints of long-lost mines.

    Emeralds are a kind of beryl, a mineral made when molten granite thrusts up into Earth's crust, cools, and hardens. Normally drab white or pale green, beryl can acquire a striking verdancy if the granite first muscles through rocks bearing chromium and vanadium. Hot water soaks up these and other elements, then crystallizes. Almost all the world's emerald deposits were formed this way.

    Except in Colombia. There, hundreds of millions of years ago, black shale containing traces of chromium and vanadium washed off the west coast of South America. As the Caribbean Plate pushed eastward against the Brazilian Plate, it shoved the shale-covered sea floor onto the continent and twice created faults in the shale: first 65 million years ago, then again 38 million years ago. The squeezing and folding acted like a giant squeegee, forcing hot water into the black shales where the fluids picked up chromium, vanadium, and other ingredients of emerald. This brew percolated beneath impermeable shale layers until the pressure grew so great it ripped apart the rocks. The solution shot into the cracks, cooled, and gave birth to clear, blue-green emeralds, according to a scenario developed since the mid-1990s. But as researchers reconstructed this geologic history, they discovered more than a recipe for radiance: Colombian emeralds, it turns out, have unique oxygen isotope ratios that depend on where the stones were mined. So do emeralds from many mines elsewhere in the world.

    Intrigued, Gaston Giuliani of the Petrographical and Geochemical Research Center (CRPG)-CNRS in Vandoeuvre-lès-Nancy, France—along with CRPG colleague Marc Chaussidon and Didier Giard and Daniel Piat of the French Association of Gemology—decided to see if they could use this isotopic tag to trace the origins of emeralds in artifacts. First they had to persuade the relics' wardens that they would do no harm. “No one wants you to touch [a precious specimen], no scratching, nothing,” says Fred Ward, an independent gemologist in Bethesda, Maryland, and author of the book Emeralds. But the French group wasn't intending to hack off a piece. To measure oxygen isotopes, the researchers fire a beam of cesium atoms at the emerald, vaporizing a few atoms and leaving a hole a mere 20 micrometers wide and a few angstroms deep.

    Reassured that samples weren't visibly marred in a test run, Henri-Jean Schubnel, the director of the National Museum of Natural History in Paris, and curators elsewhere let the team have a crack at a handful of gems spanning the history of emerald trading—from a Gallo-Roman earring to a thumb-sized emerald set on the Holy Crown of France to treasure from a Spanish galleon. “Gems with this pedigree are jealously guarded by museums, so to get access is quite an accomplishment,” says Terri Ottaway, a geochemist and gemologist with the Royal Ontario Museum in Toronto, who has worked on Colombian emeralds. As expected, the emeralds from the wrecked galleon bore the isotopic signature of Colombian mines. But surprisingly, the stone in the earring turned out to come from the Swat River in Pakistan, demonstrating that the Romans had access to gems from much farther afield than Egypt. And the 13th century French crown, it turns out, is graced by an emerald from the Austrian Alps—one that appears to have been unearthed more than 500 years before the first known description of these deposits.

    For gem dealers, isotopes may help tell Colombian emeralds from top Afghan stones, which sometimes resemble each other, says Schwarz, who's working with Giuliani's team to see if oxygen isotopes can pinpoint the origins of rubies and sapphires. Customers care about an emerald's source, Schwarz says, because it helps determine value. Isotopes could also provide an additional tool for spotting synthetic emeralds, which are hard to distinguish from flawless gems. “We have big-time problems with fraud,” says Ward.

    The technique may offer an important new tool for archaeologists, too. They have a hard time tracing stony emeralds, the opacity of which tends to obscure microscopic drops of fluid or other telltale inclusions of a source region. Oxygen isotopes may lift this veil. “It's a great idea,” says Ottaway, “but I'd like to see it tested with more samples.” And who knows: If some ancient emerald turns out to be an isotopic orphan, it may point the way to a mine not found on any map.


    Generating New Yeast Prions

    Michael Balter

    For a controversy that many insist is settled, the long-running argument over whether abnormal proteins called prions act alone to cause disease has had amazing staying power. The stakes in the debate are high, because prions are implicated in several fatal neurodegenerative diseases, including human Creutzfeldt-Jakob disease and bovine spongiform encephalopathy, or “mad cow disease.” But while most researchers now support the so-called “protein-only” hypothesis, a vocal minority insists that it is not yet proven—and that an as-yet-unidentified microbe, such as a virus, may team up with the prion protein to devastate the nervous system (Science, 22 October 1999, p. 660). New findings in yeast, reported in this issue and in the January issue of Molecular Cell, may now provide additional comfort for the prion proponents, although skeptics are unlikely to be convinced.

    Prion doubters find the hypothesis hard to swallow because it holds that the proteins can infect tissues and reproduce themselves, thus violating long-standing dogma that a DNA- or RNA-based genome is necessary for such autonomous behavior. During the 1990s, geneticist Reed Wickner at the National Institutes of Health in Bethesda, Maryland, suggested that some bizarre patterns of yeast inheritance might be explained if these single-cell organisms also harbored prions. Soon afterward, University of Chicago geneticist Susan Lindquist's team showed that such so-called “yeast prions” do exist and that they appear to behave similarly to mammalian prions. They cause changes in the yeast's biochemical properties that can be passed on to daughter cells when the yeast cells divide, apparently independently of any infectious microbe or the yeast's own genome (Science, 2 August 1996, p. 580).

    In the current work, reported on page 661, Lindquist and postdoc Liming Li created an artificial prion by fusing part of a yeast prion protein called Sup35 to a normal cellular protein from the rat. Like known yeast prions, this chimeric protein altered the biochemical properties of yeast cells in ways that could be inherited by progeny cells. “The astonishing thing is that the prion property can be transferred to a totally different protein,” says neuropathologist Adriano Aguzzi of the University of Zurich, Switzerland. Indeed, this view is bolstered by a second paper, published in the current issue of Molecular Cell, in which the Lindquist lab has identified a new yeast prion, a protein called Rnq1, and shown that a segment of this protein also confers prionlike activity.

    The new experiments build upon previous work showing that Sup35, like prion proteins that infect mammals, exists in two states: a normal, soluble form, and an abnormal, insoluble conformation. When the abnormal protein contacts its normal counterpart, it can trigger a structural change that converts the normal protein to the insoluble form, causing the prions to clump. In nerve cells, this causes permanent damage. In yeast, normal Sup35 helps control the yeast's genetic machinery, telling it when to stop translating messenger RNA into proteins. But when abnormal Sup35 dominates, this translational control is lost.

    Researchers have found that a specific segment of Sup35 and Ure2, a yeast prion identified by Wickner, is needed for the proteins to act as prions. Lindquist and Li attached this “prion domain” from Sup35 to the rat glucocorticoid receptor, which controls the transcription of DNA into RNA—an entirely different function from Sup35's. In its soluble form, the new protein, called NMGR, could still induce transcription, as demonstrated by its ability to turn on a gene coding for the enzyme β-galactosidase. But when switched to its prion form, the hybrid protein was no longer capable of turning on the gene. Most importantly, this inactivated phenotype could be transmitted in a heritable fashion between mother and daughter yeast cells.

    To find the new yeast prion, Lindquist and graduate student Neal Sondheimer searched gene databases for sequences sharing the characteristic features of the prion domains of Sup35 and Ure2, which both have large amounts of the amino acids glutamine and asparagine. They hit upon a protein they called Rnq1. Although Rnq1's function is as yet unknown, it can exist in normal and abnormal states like other prions. In addition, when the team substituted Rnq1's prion domain for that of Sup35 and introduced the altered protein into yeast, it had the same biochemical properties as Sup35, thus proving, the authors say, that the prion domains are alone responsible for perpetuating the prion behavior.

    “The new experiments provide an almost incontrovertible argument in favor” of the protein-only hypothesis, at least in yeast, Lindquist says. “One has to come up with some very implausible scenarios to explain all of this with a virus.” But some researchers argue that the new work may not be relevant to mammals. Yale University neuropathologist Laura Manuelidis, a leading prion skeptic, says Lindquist's work with yeast has put the prion within a “more acceptable, experimentally testable paradigm.” But she notes that the yeast prion model “has nothing to do with infectious disease.”

    Despite these criticisms, researchers agree that genetically engineered prions might help resolve the debate over the protein-only hypothesis in mammals, particularly if pure prion preparations unassociated with any suspected virus or other microbe could re-create prion diseases in test animals. So far, attempts to do this with genetically engineered versions of the human prion protein PrP have failed, although this might be due to difficulties in coaxing the protein into the exact conformation necessary for infectivity.

    But in work reported in the October 1999 issue of Nature Cell Biology, the Lindquist group did succeed in expressing mouse PrP in both yeast and cultured nerve tumor cells and getting it to convert to an abnormal form close to that adopted by naturally occurring mouse prions. The team is now testing whether these transgenically produced prions can infect mice. “If they can put in a pure or recombinant PrP protein, made in a virus-free cell, and get something that replicates infectivity in mammals, then I would be convinced [that the protein-only hypothesis] is correct,” says Manuelidis. “I've been waiting 20 years to see that experiment.”


    Publishers Discuss European E-Print Site

    Robert Koenig

    While U.S. organizers were putting the finishing touches on a new Web site known as PubMed Central, a group of scientists and publishers met in Heidelberg last week to plan a European counterpart called “E-Biosci.” The U.S. project, due to go online within a week, is billed as a free archive of biomedical papers. It catalyzed the European initiative but will not be exactly the same. E-Biosci, according to organizers, is likely to require tougher peer reviewing for unpublished articles and may allow publishers to charge some fees for access to their papers.

    The prime mover behind E-Biosci, Frank Gannon, executive director of the European Molecular Biology Organization (EMBO), believes it is “quite feasible” for the European site to begin operating this year, but he acknowledges that no final plan has been agreed on, and long-term funding has not yet been secured.

    The 30 key European players who met on 19 January in Heidelberg to discuss E-Biosci did not set its exact contours. Participants in the meeting included representatives of EMBO, European science publishers, European research organizations, and national science ministries. “Everyone agreed that something has to be done, and quickly,” said Gannon, an Irish molecular biologist. “But follow-up meetings will be needed to decide the best way of solving the problems.” Gannon says organizers must now find long-term financing, iron out technical problems, and drum up support from the European Union (E.U.) and national research agencies.

    The biggest challenge appears to be reaching agreement among Europe's scientific publishers. Stefan von Holtzbrinck, managing director of Macmillan's Nature Publishing Group—which publishes Nature and five related journals—told Science that he supports the idea of such a Web site, but that E-Biosci should give publishers the option of charging fees for access to their publications. “We would not go with any venture that would require you to make all of your content cost-free from day one,” he said. Another European science-publishing executive, who asked not to be named, said he expects that several European publishers “will participate in some way”—perhaps by giving cost-free access to certain journals or articles that were published more than a year earlier. Martin Richardson, publishing director of the Oxford University Press, said he supports “the idea of setting up a European archive” and noted that Gannon and other E-Biosci organizers “are trying very hard to come up with a proposal that will be acceptable to publishers.”

    Gannon said E-Biosci may not insist on entirely free access: “We and PubMed have the same aims. But I think that PubMed will not be able to offer the complete literature,” because publishers may not be willing to share text for free, “and I don't think that E-Biosci will be completely free.” Gannon was pleased by what he called “the positive input” from the publishers represented at the meeting: Macmillan, Elsevier Science, Springer Verlag, Oxford University Press, and Blackwell Science.

    One way in which E-Biosci probably will differ from the U.S. Web site is in limiting the posting of unpublished papers. Gannon said that everyone at the Heidelberg meeting agreed that unpublished drafts and preprints “would have to be seriously peer reviewed,” not simply screened, before being put on the site. PubMed Central, in contrast, may have an adjunct site called “PubMed Express” that will include unreviewed papers. And another European Web site, a private venture called “BioMed Central,” is planning to make draft papers available. Vitek Tracz, chair of the Current Science Group in Britain, issued a press release last week saying that the site, funded by advertising and service fees, would be launched in May.

    Financing is another major issue for E-Biosci. Last summer, EMBO agreed to allocate $511,000 to start the venture, but the project does not yet have any other long-term funding commitments. Although the E.U.'s research commissioner, Philippe Busquin, says he fully supports the concept, Brunno Schmitz—who represented the Research Directorate in Heidelberg—cautions that the European Commission at most “could only provide seed money” for E-Biosci. Gannon said other revenues might come from advertising, science trusts, or from national research councils.

    Representatives of science councils in Scandinavia were among the most enthusiastic about E-Biosci at the Heidelberg meeting, with Finnish molecular biologist Marja Makarow calling the Web site “a very good project that should get started as soon as possible.” But some worry that E-Biosci might hurt scientific societies, which rely on journals for revenue. The European Science Foundation (ESF) may sponsor a symposium on the impact of e-publications on public trust, patenting, and scientific societies. Said Tony Mayer, who heads the ESF secretary-general's office: “We support the E-Biosci concept, but we are concerned about the implications of e-publication in general on the scientific system.”

    Meanwhile, Gannon predicts that E-Biosci “will collaborate very actively with PubMed Central” as part of “a wider global effort” to make scientific publications more accessible on the Web. And the chief organizer of PubMed Central—David Lipman, director of the National Institutes of Health's National Center for Biotechnology Information (GenBank)—says he strongly supports the EMBO initiative and hopes that “Europe will participate as an equal partner.”


    Cubic Compound Makes a Bigger Bang

    1. Robert F. Service

    Alfred Nobel would no doubt be intrigued by a feat of organic chemistry reported in this week's international edition of Angewandte Chemie: the synthesis of what may be the most powerful nonnuclear explosives ever made. If they can be produced in bulk, the new compounds would put dynamite—Nobel's patented invention—to shame.

    The new explosives—heptanitrocubane and octanitrocubane—have been on the drawing board for more than a decade. Their inspiration was a compound with a molecular core consisting of a cube of eight carbon atoms studded with hydrogens, first synthesized in 1964 by Philip Eaton, an organic chemist at the University of Chicago, and his colleagues. Eaton and others later realized that if they could replace the hydrogens with reactive nitro groups—each containing a nitrogen and two oxygens—they'd have an ultradense, and therefore ultrapowerful, explosive.

    But swapping nitros for the hydrogens proved a Herculean task. Eaton's team struggled with the synthesis for some 15 years until at last, in 1998, they found a reaction that tacked on all but the eighth nitro. Now Eaton—along with chemist Mao-Xi Zhang and crystallographer Richard Gilardi of the Naval Research Laboratory in Washington, D.C.—has discovered a more efficient way to construct the seven-nitro heptanitrocubane, as well as the magic mix of ingredients and conditions that tacks on the eighth to form octanitrocubane. “I think it's fantastic,” says Leo Paquette, an organic chemist at Ohio State University in Columbus. “To get all the way to eight nitro groups is clearly a feat. I really had serious doubts that he'd ever get there. It's asking a lot of the molecule to squeeze all those nitro groups into a limited space.”

    That tightly packed structure gives the new nitrocubanes a density of about 2 grams per cubic centimeter (g/cm3), a number closely tied to the explosive power of compounds. By contrast, the density of TNT is 1.53 g/cm3; that of HMX, the most powerful conventional military explosive in regular use, is 1.89 g/cm3; and that of CL-20, another experimental explosive, is closer to the nitrocubanes at 1.96 g/cm3. Eaton notes that other factors also play important roles in the power of an explosive, such as how completely the material combusts when triggered. But because the explosiveness of a compound grows as the square of its density, even small changes in this number can have a dramatic effect. Calculations suggest the new explosive may deliver up to 30% more bang than HMX.
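    The density-squared scaling can be illustrated with a back-of-the-envelope calculation using the densities quoted above. This is a simplified proxy, not a detonation model; as the article notes, real performance also depends on other factors, such as how completely the material combusts.

```python
# Back-of-the-envelope comparison of explosive power, assuming
# power scales with the square of crystal density (a simplified
# proxy; real detonation performance depends on other factors too).
densities = {              # g/cm^3, as quoted in the article
    "TNT": 1.53,
    "HMX": 1.89,
    "CL-20": 1.96,
    "nitrocubanes": 2.00,  # "about 2 grams per cubic centimeter"
}

HMX = densities["HMX"]

def relative_power(rho, reference=HMX):
    """Power relative to the reference, assuming power ~ density^2."""
    return (rho / reference) ** 2

for name, rho in densities.items():
    print(f"{name:12s} {relative_power(rho):.2f}x HMX")
```

    On this crude measure, a density of 2 g/cm3 gives only about a 12% edge over HMX, so the article's "up to 30%" estimate evidently folds in the other factors Eaton cites, including the more compact crystal form he hopes to achieve.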

    Eaton's team hasn't produced enough of either compound to test their blasting power. But they have made enough to ensure that they're likely to be stable when jostled, a vital trait for any widely used explosive. What's more, Eaton notes that the eight-nitro compound should be able to adopt a more compact crystalline structure than the one they've observed in samples so far. If they manage to coax it into that tighter structure, they should be able to wring out even more explosive power.

    For now, the synthesis of octanitrocubane remains too impractical to ramp up for military-scale production. But Eaton says his team is already pursuing the possibility of tacking nitro groups onto cheap and abundant acetylene, or ethyne, gas (C2H2) and then assembling four of these dinitroacetylenes to produce single molecules of octanitrocubane. Acetylene's high reactivity means that such an assembly won't be easy, says Eaton. But if it works, it's likely to have a powerful impact on both chemistry and explosives.


    Bringing the Salton Sea Back to Life

    1. Jocelyn Kaiser

    The U.S. government has given the nod to what could become one of the most ambitious ecological restoration projects ever attempted: rescuing the Salton Sea, a giant lake in Southern California that has become a deathtrap for wildlife. On 13 January, the Interior Department released a blueprint for healing the lake, now on a fast track to looking as lifeless as the Dead Sea. But Congress must come up with $1 billion or more to pay for a full-scale restoration.

    Created 95 years ago when engineers accidentally diverted the Colorado River into a desert trough, the Salton Sea once thrived as a resort. But years of agricultural drainage made the 984-square-kilometer lake ever saltier and loaded it with nutrients that spur oxygen-depleting algal blooms. Nowadays it's the scene of fish kills and bird die-offs. Despite its woes, many biologists say, the Salton provides critical habitat for birds moving along the Pacific Flyway, a major migratory pathway, as well as for endangered species such as the brown pelican. The lake's boosters succeeded in convincing Congress to pass a 1998 law that directs Interior to consider solutions for freshening the water, now 25% saltier than seawater, and improving it as a habitat (Science, 2 April 1999, p. 28).

    Congress also funded $5 million in studies to reconnoiter the lake's chemistry and biology. The just-released results have “dispelled a lot of perceptions” about the sea's health, says wildlife disease biologist Milton Friend, chair of the multiagency Salton Sea Science Subcommittee. “For the first time, we have some good, solid information” that eases concerns that the lake is too polluted to bother saving. Absolved as suspects in the die-offs are pesticides and the element selenium (concentrations of both are too low), and algal toxins, which so far in lab tests do not appear to harm vertebrates. However, many fish are covered with parasitic worms, reflecting unhealthy conditions that might make them more susceptible to other pathogens. Its penchant for poisoning its inhabitants aside, the lake teems with a remarkable array of life-forms—scientists have counted over 300 organisms not previously reported there, including many microbes new to science. Their studies will appear later this year in Hydrobiologia.

    Having concluded that the Salton Sea is worth salvaging as a resource for wildlife, recreation, and agriculture, Interior officials endorse building an evaporation plant and ponds to remove salts, and they have suggested schemes for pumping in fresher water or moving salty water out. Their plan also calls for a permanent science office that would fund studies and work with management on solutions. Congress will need to appropriate money for these projects, which Interior officials admit could cost $1 billion or more over the next 30 years.

    In the meantime, Salton managers have $8.5 million in hand to move ahead with a pilot project—an evaporation tower that will spray a fine mist of lake water into a holding pond, where salt will precipitate. They're also seeking to pay a commercial trawler to harvest fish, since removing the nutrients sequestered in the fish's bodies should lead to a healthier ecosystem, and they've hired a wildlife biologist whose job is to anticipate—and take preemptive measures to alleviate—disease outbreaks.

    Some critics say the plan doesn't go far enough to tackle tough issues such as stemming the flow of nutrients into the lake. “Birds and fish are going to continue to die unless they address these other problems,” says Michael Cohen of the Pacific Institute, a think tank in Oakland, California. The plan does leave many issues unresolved, says Stuart Hurlbert, a limnologist at San Diego State University and staunch restoration advocate, but undertaking a pilot project first, he says, “seems a reasonable way to go.”


    FDA Halts All Gene Therapy Trials at Penn

    1. Eliot Marshall

    The death of a volunteer in a gene therapy experiment at the University of Pennsylvania in September triggered a flood of publicity; now, the consequences have landed on researchers and other patients at Penn. On 19 January, the Food and Drug Administration (FDA) stopped all seven clinical trials run by Penn's Institute for Human Gene Therapy—perhaps the most respected and best funded center of its kind—after finding “serious deficiencies” in the way the institute monitors its trials. The FDA had already halted the trial in which 18-year-old Jesse Gelsinger died.

    Penn had not calculated at press time how many patients might be affected by the shutdown. But it noted in a statement that five “active trials” are on hold, including experimental therapies for cystic fibrosis, mesothelioma (a cancer of the chest lining), melanoma and breast cancer, muscular dystrophy, and glioma (brain cancer). University President Judith Rodin has asked the provost, physician Robert Barchi, to oversee two reviews of all of Penn's clinical research. One panel, chaired by Barchi, includes “distinguished members of the Penn faculty,” and the other, whose chair has not been named, will use outside scientists. The director of the gene therapy institute, James Wilson, a key investigator on all the trials, had no comment on FDA's decision. In December, during a public review of the case in Bethesda, Maryland, Wilson defended the institute's record and argued that the accident was unforeseeable (Science, 17 December, p. 2244).

    The FDA did not release conclusive findings. But it did release an eight-page report offering preliminary “observations” that help explain the suspension order. The report lists 18 problems, some well publicized already. For example, FDA inspectors found that physicians had not filled out volunteer eligibility forms in advance, as required, for any of the 18 patients enrolled in the fatal trial, which was testing a new therapy for a genetic disorder that overloads the body with ammonia. The FDA learned that undated forms were filled out for these patients after Gelsinger's death. In addition, the report says that Penn failed to document adequately the consent process for nine patients, failed to inform FDA rapidly that two monkeys in a similarly designed experiment had died, and failed to develop a “standard operating procedure to conduct a clinical study.” FDA has asked Penn to respond and explain how it intends to comply with regulations in the future. Until FDA is satisfied, the clinical trials will remain suspended.

    Meanwhile, members of Congress and other federal officials have jumped in. Senator William Frist (R-TN) was planning a public hearing in early February on the topic, “Gene Therapy: Is There Oversight for Patient Safety?” And the federal Office for Protection From Research Risks has launched its own broad review, which, according to the office's director, may take 18 months to complete.


    NIH Cuts Deal on Use of OncoMouse

    1. Eliot Marshall

    A new agreement cuts away some red tape that has been a serious annoyance for cancer researchers. The policy, announced by the National Institutes of Health (NIH) on 19 January, allows NIH-funded scientists doing noncommercial research to use patented transgenic animals without obtaining specific approval of E. I. du Pont de Nemours and Co. of Wilmington, Delaware.

    “This is a significant deal,” because it removes legal worries for many labs, says David Einhorn, counsel to The Jackson Laboratory of Bar Harbor, Maine, a major supplier of research animals. Equally important, experts say, DuPont agrees not to seek broad commercial rights to discoveries in nonprofit labs just because patented animals were used in the research.

    In 1988, Harvard University was granted a patent on the OncoMouse and broad claims on any other mammal (except humans) containing foreign genes implanted to express a tumor. Harvard licensed the technology to DuPont, which didn't start enforcing its legal rights vigorously until the mid-1990s, one expert in the field says. By then, the technology was widespread. Scientists were not only publishing papers on tumor-ridden mice and other animals, but breeding and sharing them with colleagues. When DuPont demanded that they submit papers for review by the company and stop sharing animals, many scientists got upset. “In a sense, we were all violators of the patent” and all at risk of being sued, recalls Harold Varmus, the former NIH director who this month took over as president of the Memorial Sloan-Kettering Cancer Center in New York City.

    Varmus says he personally appealed to DuPont CEO Chad Holliday about the OncoMouse problem. After many months of negotiations, the DuPont and NIH technology licensing staffs reached an agreement that would exempt nonprofit researchers from restrictive licensing provisions. (The text of the agreement is available online.)

    “It will be a great relief for many people to know they are not violating the law” by sharing animals with a colleague down the hall, says Varmus. DuPont “deeply appreciates the importance of wide dissemination of tools for basic research and is committed to making [OncoMouse] available to the academic community,” corporate intellectual property manager Tom Powell commented in a written statement. However, the company will retain tight control of commercial uses.


    New Chief Promises Renewal and Openness

    1. Michael Balter

    PARIS—Philippe Kourilsky has wasted little time making his mark on the illustrious Pasteur Institute. Just days into his term as its new director-general, Kourilsky spelled out plans for big changes in a meeting here last week with the research center's nearly 1100 scientists. His strategy includes a stronger concentration on public health and applied research as well as a stiffer evaluation process that some Pasteur researchers say could lead to a major reshuffling of lab heads in coming years. Yet despite this potential shake-up, Pasteurians who spoke with Science are casting their lot with their new boss. “Kourilsky's speech was positively perceived by most of the people here,” says molecular geneticist Fredj Tekaia.

    Since it was founded in 1888 by Louis Pasteur, the institute has gained a worldwide reputation in biomedical research, and its scientists racked up eight Nobel Prizes during the past century. But Kourilsky's new orientation may help resolve some long-standing debates at Pasteur, many of which revolve around just how hard basic scientists should be trying to make their research pay off in medical applications (Science, 15 October 1999, p. 382). Reminding his colleagues that the statutes of the institute—which is run by a private foundation partly supported by the French government—put a clear emphasis on microbiology and public health as top priorities, Kourilsky said that a lack of cooperation between labs at Pasteur has created an “intolerable” gap between basic and applied research. “Is it acceptable, for example, that collectively the immunologists are so little involved in the study of several major pathologies … [such as] Helicobacter pylori and even HIV?” asked Kourilsky, an immunologist himself.

    Some Pasteurians say that because most of Kourilsky's talk was short on details, they would wait and see how effectively their new chief implements his priorities before passing judgment. But one important change outlined by Kourilsky is far from abstract: Pasteur's 110 research labs will now be allowed to exist at most 12 years at a stretch unless they pass a rigorous scientific evaluation. Similar policies are already in force at some of France's large public research agencies, such as the CNRS and INSERM. Kourilsky intends to carry out this policy retroactively, meaning that the future of about half of the labs—as well as their chiefs—will be up in the air during the next 2 years. Pasteur molecular biologist Moshe Yaniv says the 12-year rule would help address a common problem in French research: “the problem of seniority. Once you establish a lab, you rarely lose it.” Adds Pasteur immunologist Antonio Freitas, “This is an excellent idea. Some labs have clearly lost steam. In a more competitive situation, they will have to improve considerably to survive.”

    Kourilsky tried to reassure his colleagues that the rule “has nothing to do with the guillotine” and that lab chiefs forced to step down would be given other functions. Perhaps most importantly, some Pasteur researchers say, Kourilsky has promised that these scientific evaluations—as well as the other changes he intends to make—will be carried out openly, in contrast to the behind-closed-doors approach that has often plagued the institute in recent years (Science, 13 November 1998, p. 1241). Says Tekaia, “This is one of the most important points, for which people have been waiting for some time.”


    Graduate Educators Struggle to Grade Themselves

    1. Jeffrey Mervis

    As the NRC prepares for another survey of U.S. graduate schools, educators find themselves in a heated debate about how to measure—and define—a quality education

    David Schmidly of Texas Tech University in Lubbock can't wait for his next report card. Five years ago his school's 17 graduate programs earned a dismal composite score—92nd out of 104 comparable institutions—in a National Research Council (NRC) assessment of U.S. academic programs. But Schmidly, vice president for research and graduate studies, parlayed that low ranking into a successful bid for more state money for faculty and help in boosting the school's research budget by 50%. He expects that the NRC's next survey, in 2003-04, will validate his efforts, boosting the school's reputation and making it easier to attract more money and better students and faculty.

    Top-tier schools rely on rankings, too. Just last week, Yale University President Richard Levin cited the university's #30 ranking in engineering as proof of the need for a half-billion-dollar science and engineering construction binge (see p. 579). And holding the top ranking in physiology and pharmacology in the 1995 NRC study, he noted, “makes it imperative that we invest enough to stay at the forefront in those fields.” But not everyone is so enamored of grades for graduate programs. Patricia Bell, chair of Oklahoma State University's sociology department—ranked dead last out of 95 participating programs in the NRC's last survey—scoffs that the reputational rankings, which are based entirely on scholars' opinions of their peers, are a “popularity contest” that “dismisses the value of teaching.”

    Such conflicting opinions are part of the debate over how to rank graduate research programs, a debate that has sharpened in recent months as the NRC gears up for its third attempt since 1982 to plumb the world's best academic research system. Almost everyone agrees that assessing graduate research programs is a useful tool for administrators who manage the programs and provides an important window into the system for students, faculty, funding agencies, and legislators. But that's where the agreement ends. All previous U.S. assessments—there have been 10 major attempts since 1925, most of them heavily dependent on reputational rankings—have been controversial, as is an ongoing exercise in the United Kingdom (see sidebar). And now, just as college football fans argue endlessly about the relative importance of such factors as won-lost record, coaches' ratings, and strength of schedule in choosing the #1 team, those who follow graduate education are weighing the merits of reputation vs. quantitative measures—such as numbers of published papers and amount of research funding—in assessing research and debating how to measure the caliber of teaching and the fate of graduates.

    The stakes are high. Top-ranked programs attract more funding as well as high-quality faculty and students, while “low rankings can shrink or even kill off a program,” notes Vanderbilt University historian Hugh Graham, author of a well-regarded 1997 book on the history of U.S. research universities. And the impact extends far beyond the campus. “It's a spiraling effect,” says a spokesperson for Arizona State University (ASU) about its well-regarded business school. “High-achieving alumni are valuable to their companies, who see ASU as a good place to invest their money. Corporate donations allow us to offer talented faculty the salaries that attract and retain them, which contributes to higher rankings.”

    NRC officials hope to begin exploring these issues later this year with a series of pilot studies that will culminate in a full-blown survey in 2003-04. An added factor is a highly visible rating system that already exists: The news magazine U.S. News & World Report publishes best-selling annual issues that tout the country's best graduate and professional programs and the best undergraduate institutions via a reputational ranking. Most university officials say the magazine doesn't capture the opinions of real peers, and they accuse it of deliberately shaking up the ratings to retain reader interest—a charge that editors hotly deny. But universities are also quick to cite flattering results in press releases and recruitment ads.

    The magazine's popularity makes it imperative for academics to stay in the game, says John Vaughn of the Association of American Universities. The AAU hopes this spring to begin its own 5-year effort to collect graduate education data from its 59 members, which include most of the country's research powerhouses. “We shouldn't cede our capacity for thoughtful analysis to a commercial operation that must put business first,” says Vaughn.

    And although most administrators prefer the more sober NRC effort, many worry about how it will turn out. “There should be a study,” says graduate school dean Lawrence Martin of the State University of New York, Stony Brook, who is also head of a panel of land-grant colleges that has drafted a position paper urging coverage of more fields, greater use of objective research criteria, exploration of some measures of program outcome, and ranking institutions by cluster rather than individually. “The fundamental issue is whether they will do it right.”

    Despite the clashing opinions about what it means to “do it right,” educators agree that an assessment is no trivial matter. The last NRC survey covered 3634 programs in 41 fields at 274 institutions; this time, it will have to do all that and more, says Charlotte Kuh, head of the NRC's Office of Science Education Programs. She hopes to raise at least $5 million, four times the cost of the 1995 survey and 25 times the 1982 price tag, for the two-phase study.

    Not by reputation alone

    By tackling these thorny issues early, Kuh hopes to avoid the blizzard of criticism directed at the previous survey for flaws ranging from factual errors to a disregard for applied fields. First up is the charge that the NRC relied too heavily on research reputation, one of many categories of data but the sole source for the numerical rankings of programs. For the reputational rankings, NRC asked more than 16,000 scientists to assess the quality of the faculty and the relative change in program strength over the past 5 years for as many as 52 programs. Each rater was provided a list, supplied by the university, of faculty members in each program.

    Many academics believe that approach is badly flawed. Oklahoma State's Bell, for example, argues that relying on reputations penalizes what novelist Tom Wolfe has called “flyover universities” like hers that don't have national reputations but emphasize teaching. And it's not just those on the bottom who complain. “Most people think that it was a mistake,” says Jules LaPidus, outgoing president of the 400-member Council of Graduate Schools (CGS), about the NRC's decision to gather lots of kinds of data, but to rank programs simply by reputation. “It legitimizes a flawed concept, that there is a single ‘best’ graduate program for all students. But graduate education is not a golf tournament, with only one winner.”

    Vanderbilt's Graham and others argue that reputational rankings have become obsolete, as fields expand too rapidly for anyone to remain familiar with all the players. He favors quantitative measures of research productivity that do not rely on the memories of beleaguered reviewers. Such measures as citation impact, levels of funding, and awards, when applied on a per capita basis, he argues, would provide a more accurate picture of the current research landscape. “There are a lot of rising institutions that are being ignored,” he says. “The next NRC study should help to reveal this layer of excellence that is waiting to be tapped.”

    Critics also note that larger departments have an unfair advantage in reputational rankings because of the bigger shadow cast by their graduates, as do those with a handful of standout performers. “The best way to improve yourself quickly is to hire a few faculty superstars,” says David Webster, an education professor at Oklahoma State who has written about both NRC studies. But superstars don't necessarily enhance the educational experiences for grad students, he says.

    Yet few administrators are willing to jettison reputation. The reputational ratings “don't capture the whole picture, but they capture people's perceptions, and that's important,” says Yale's Levin. And even Webster believes that “reputational rankings, for all their faults, provide a type of subtlety that you don't get in more objective measures.” Paraphrasing Winston Churchill's views on democracy, he says that reputational rankings of academic quality “are the worst method for assessing the comparative quality of U.S. research universities—except for all the others.”

    Don't forget the students

    Focusing on reputation, however, ignores the question of how to calibrate many of the other complex elements that make up a graduate education. “The previous [NRC] survey was misnamed,” says the AAU's Vaughn, echoing the views of many. “It was an assessment of the quality of research faculty, not of graduate programs. And we don't really know how to measure the quality of graduate education.” Kuh plays down the distinction. “I don't think you can separate the two,” she insists, adding that she thinks previous surveys got it right in emphasizing research.

    Still, many administrators feel that the next NRC survey must do a better job in exploring the quality of education. That includes such factors as the time to degree, dropout rate, and starting salary of graduates, as well as such intangibles as the quality of mentoring, opportunities to attend meetings, and the extent of career advice offered students. “It's not easy to do, but without it the community support [for the next survey] will vanish,” says Debra Stewart, vice chancellor and graduate dean at North Carolina State University in Raleigh, who served on the advisory panel for the 1995 study and who in July becomes CGS president.

    Joseph Cerny, vice chancellor for research and dean of the graduate division at the University of California, Berkeley, and his Berkeley colleague, Maresi Nerad, took a first crack at the issue by surveying some 6000 graduates a decade after they received their Ph.D. The study, carried out in 1995 and still being analyzed (Science, 3 September 1999, p. 1533), surveyed graduates on the quality of the training they received and whether they would do it again, among other questions. The results were quite different from when the NRC asked peers to rate the quality of both faculty and programs.

    “The [NRC] found an almost perfect correlation,” says Cerny, who as a member of the advisory panel lobbied unsuccessfully for outcome data to be collected in the 1995 survey. “But when we asked graduates to rank such things as the quality of the teaching, the graduate curriculum, and the help they received in selecting and completing their dissertation, we got dramatic differences. Instead of a slope of 45 degrees, indicating a perfect fit, we got a 20% fit. The graph looked like it had come out of a shotgun.”

    The data on whether students would repeat their training are also eye-opening. “Computer science ranked the highest, at 85%, and biochemistry was the lowest, at 69%,” he says. And the performance of individual programs varied wildly, including two biochemistry programs that scored 100% and one that received only 15%.

    Kuh argues that such ratings from graduates have limited value because the information quickly becomes dated and doesn't take into account the variation among students. She adds that a stressful graduate experience could still lead to a successful career. Cerny argues, however, that even stale information on student outcomes would be extremely valuable to the university administrators who run the programs—and to the federal agencies that fund graduate training. “I'd certainly want to know if I was the dean at a school [where only 15% of students would redo their training],” he says. “Even if you consider 60% to be a passing grade, we found that only one-third of the programs scored at or above that level.”

    Ultimately, say Kuh and others, the key to a successful assessment is giving customers something they need. Bell and Webster of Oklahoma State say that the NRC and U.S. News surveys have had little impact on their university's research policies not because they fared badly but because the yardstick—the research reputation of its faculty—was seen as tangential to the university's main mission of educating students. “It's like MIT's [the Massachusetts Institute of Technology's] reaction to the weekly Associated Press football polls,” says Webster. “We're just not a big player in that sport.” Kuh hopes that the next NRC survey is good enough to generate as much interest at Oklahoma State as it does at MIT or Yale, setting the standard for anybody interested in assessing graduate education. “The U.S. is at the top of the world in higher education,” says Graham, “and it's too important a topic to produce reports that aren't used.”


    Support Grows for British Exercise to Allocate University Funds

    1. Michael Hagmann*
    1. With reporting by Dennis Normile and Richard Stone.

    Next year British universities go under the microscope for the country's fifth Research Assessment Exercise (RAE), an attempt to rank departments' research output and help the government invest wisely in academic infrastructure. And even some of its most persistent critics acknowledge that the RAE, which has drawn international attention, has overcome a rocky start and is working increasingly well. “Each successive RAE is moving closer to the consensus [of what constitutes high-class research],” says Paul Cottrell, the assistant general secretary of the Association of University Teachers, which opposed RAE's introduction in 1986.

    The RAE began as a way to funnel dwindling resources into the best research programs at a time when severe cuts in public spending raised fears of a major brain drain. Unlike the U.S. National Research Council (NRC) ratings (see main text), a school's RAE ranking has a direct impact on government funding. “The better you do [in the RAE], the more money you get,” says John Rogers, RAE manager at the Higher Education Funding Council for England, which oversees the exercise. The process, which covers 68 fields, puts every university department or program under scrutiny by an independent panel of peers. Each department receives a score, from 1 up to 5*, that is supposed to be based on four pieces of work submitted by every participating researcher, along with such information as prizes, outside funding, and research plans. “The gold standard is always international excellence,” says Rogers. That score, adjusted for the number of participating researchers, determines funding levels.
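The allocation logic Rogers describes, a quality score scaled by the number of research-active staff, can be sketched in a few lines. The weights below are invented purely for illustration; the actual conversion of scores to money is set by the funding councils and is not given in the text.

```python
# Hedged sketch of "score, adjusted for the number of participating
# researchers, determines funding levels." The quality weights here are
# illustrative assumptions, not the councils' real formula.
ILLUSTRATIVE_WEIGHTS = {"1": 0.0, "2": 0.0, "3": 1.0, "4": 1.5, "5": 2.5, "5*": 3.5}

def relative_funding(score: str, n_researchers: int) -> float:
    """Relative funding share: quality weight times research-active staff."""
    return ILLUSTRATIVE_WEIGHTS[score] * n_researchers

# Under score-times-volume logic, a small top-rated department can
# out-earn a much larger low-rated one:
print(relative_funding("5*", 10))  # 35.0
print(relative_funding("2", 50))   # 0.0
```

The zero weights at the bottom of the scale mirror the exercise's deliberate elitism noted below: most of the money flows to a small fraction of institutions.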

    Despite its narrower focus and deliberate elitism—last year about 75% of the council's $1.3 billion budget went to the top 13% of Britain's 192 institutions of higher education—the British assessment exercise has raised some of the same concerns as the NRC surveys. “Teaching is not esteemed as highly as research and always gets a back seat,” Cottrell argues. Although teaching skills are evaluated in a separate exercise, the Teaching Quality Assessment, the outcome is not linked directly to funding. The question of how much panels are affected by a researcher's reputation also remains an issue, although Rogers says attitudes will play a smaller role next year than in earlier exercises.

    Rogers calls RAE “the longest standing assessment process on this scale worldwide,” and its influence may soon spread beyond Britain's borders. In Japan, where university funding has been based on precedent and enrollment and there is little oversight of performance, the government is moving slowly toward greater accountability. This year the Ministry of Education, Science, Sports, and Culture (Monbusho) hopes to create an evaluation organization that will serve initially as an accreditation board to review curricula and to prod universities to raise education standards. But an advisory panel has also recommended that Monbusho begin evaluating university research efforts on a departmental level, with the results somehow tied to funding for new buildings and large-scale equipment.

    The RAE approach has also found a home in Eastern Europe, where this spring the Czech Republic hopes to begin a long-awaited review of academic research at its 27 universities. The reports from the visiting panels, which will include foreign scientists, are expected to lead to a two-tier university system that favors a handful of elite schools.


    An Integrative Science Finds a Home

    1. Elizabeth Pennisi

    ATLANTA—This year's annual meeting of the Society for Integrative and Comparative Biology (SICB), held here earlier this month, marked a milestone for the fledgling discipline known as evo-devo biology. Beginning about a decade ago, modern biologists realized that they might glean clues to how organisms evolved by studying the genes that control development (Science, 4 July 1997, p. 34). Now, the discipline is so strong that last year it gained its own division within SICB and was invited to present its inaugural symposia in Atlanta. Prominently featured were new findings on the genes needed for butterfly wing development and on the homeobox genes, a key group of developmental genes involved in organizing animal body plans.

    Hox and the Simple Hydra

    Evolution, like development, entails a transformation of shape and form, but on a much longer time scale. The discovery by developmental biologists that the Hox (for homeobox) genes play a major role in guiding this transformation in fruit fly and vertebrate embryos posed an intriguing question. Researchers wondered whether the appearance and proliferation of these genes 550 million years ago made possible the transformation of early life into the wide range of complex shapes and forms seen today. Results presented at the meeting supported this idea.

    The work, which comes from the labs of two independent investigators, John Finnerty of Boston University and Hans Bode of the University of California, Irvine, deals with the homeobox genes of cnidarians. These animals—a group that includes corals, hydra, sea anemones, and jellyfish—resemble the fossils of some relatively simple pre-Cambrian creatures that existed before the large-scale diversification took place. Both groups of organisms lack a key feature of most modern animals—bilateral symmetry, a balanced shape that distributes the body in mirror-image halves along a line drawn from head to tail. However, in contrast to the simplest multicellular organisms, such as sponges, cnidarians do have a body axis: They have a top, defined by the mouth, and a bottom. And the new results suggest that the Hox genes may have been instrumental in establishing that axis.

    Whereas sponges have at most one Hox-like gene, Finnerty has found that sea anemones and other cnidarians can have seven, a mix of Hox genes and related paraHox genes. The Hox genes also appear to be organized much as they are in vertebrate genomes. Bode's results suggest that the genes have similar functions in primitive and advanced organisms, such as helping establish the head region. Taken together, Finnerty suggests, these molecular data say that “[cnidarians] are much closer to bilateral organisms than we have thought. I think these genes are absolutely required for the establishment of the body axis in all animals above a sponge-grade construction.” Later in evolution, the genes may also have helped organize tissues along that axis into discrete and ever more complex body regions.

    The new findings come from a collaboration involving Finnerty and Mark Martindale, formerly at the University of Chicago and now at the University of Hawaii, Honolulu. They conducted an exhaustive search for Hox genes in two sea anemone species and also searched the literature for cnidarian Hox-like genes. Based on an analysis of the sequences of 13 candidate genes, Finnerty and Martindale concluded that seven are either Hox or paraHox genes. Their studies also showed that the sea anemone Hox genes are clustered closely together in one chromosomal location, much as they are in vertebrates.

    The sea anemone is less richly endowed, however. Whereas vertebrate Hox gene clusters can have up to 13 genes, and even simple invertebrates such as flatworms have at least seven, the sea anemone has only four. By assessing the degree of similarity between each cnidarian Hox gene and Hox genes in other organisms, Finnerty was able to match the cnidarian genes with their equivalents in bilateral animals and figure out which were missing. And as far as he can tell, Cnidaria “lack central Hox genes,” the ones that guide the establishment of the central part of the body. That's logical, Finnerty added, because, “In Cnidaria, the center part of the body is relatively unspecialized.” Overall, he concluded, Cnidaria “seem to be at an intermediate stage of evolution” between organisms with no Hox genes and bilaterally symmetrical species.

    There is some evidence that Hox genes carry out similar functions in cnidarians and higher organisms. Bode has identified six Hox-like genes in hydras and finds that at least two are involved in determining where and when the tentacled head of the hydra forms. In earlier work, for example, he reported that a gene called Cnox2, which is active in the body column but not in the cells of the head, appears to suppress head formation. If you expose the hydra to a chemical that seems to shut this gene down, he says, “you get tentacles in the wrong places and heads along the body.”

    More recently, he found that a gene called Cnox3 has the opposite expression pattern. It is weakly expressed in newly formed young cells in the body column, but as the tissue approaches the top of the hydra, where the tissue stops dividing and begins to differentiate, Cnox3 gets more active. “[Expression] is very strong just below the tentacle,” he reported. Combined with the activity of another gene that turns on briefly at about the same time, the Cnox3 activity, he says, may coordinate tentacle development.

    Because Cnox3 looks like the head-defining Hox gene in other organisms, Bode thinks that it's playing a similar role in the hydra. If so, it would mean that even before organisms had evolved strongly defined head, thorax, and tail regions, homeobox genes might have been working as they do in higher organisms—sequentially, in a “head-to-tail” way.

    Other researchers aren't so sure, however. The work in cnidarians “shows Hox genes have always had a role in specifying cells, but [they] might not have been [used to] specify the body axis,” cautions evolutionary developmental biologist Rudolf Raff of Indiana University, Bloomington. Moreover, the body parts may not be parallel: The head of a hydra might not really correspond to the head of a lizard. Nevertheless, Finnerty notes, “we are starting to get enough comparative data that we can fill in the details of the evolution of the clusters.” And that, says John Postlethwait of the University of Oregon, Eugene, is what makes cnidarian work “great.”

    Genetic Diversion, Evolutionary Diversity

    When Sean Carroll set out more than 5 years ago to find out how the buckeye butterfly got its spots, he had only a slight inkling of what this tale could contribute to an emerging discipline called evo-devo. But he soon discovered that the eyespot that adorns many butterflies' wings—where it helps confuse predators looking for a tasty snack—evolved through the reuse of genes already known to be important for the development of the wing. Because this result demonstrated that nature could co-opt genes for completely different purposes, the work guaranteed the story a place in the evo-devo history books.

    Since then, the plot has become even more intriguing. About a year ago, Carroll, an evolutionary developmental biologist at the University of Wisconsin, Madison, and his team showed that not just single genes, but an entire developmental control pathway involving a suite of genes—the one through which the key gene hedgehog exerts its effects—had been recruited to specify where eyespots would appear. This suggested that evolution of new features doesn't require the evolution of new genes or pathways, just a change in how those pathways are used.

    In new results presented at the meeting, Carroll and his colleagues have now taken the work a step further, identifying some of the genes that take over after the eyespot location has been established to determine the sizes, and, very likely, the colors, of the central spot and any surrounding rings. The work fills in a “missing link” in understanding how the butterfly sets up the details of spot formation, says Scott Gilbert, a developmental biologist at Swarthmore College in Pennsylvania.

    The results show that some of the same genes involved in determining the eyespot locations are called upon again, this time to set up the exact eyespot pattern. There's also a great deal of flexibility in how the genes are used. Whereas all four butterfly species studied appear to use the same patterns of gene expression to set up a spot, they each use the genes differently to determine the spot's details. “Everything looks very fixed and conserved to a certain step, and then there's a little riot going on,” Carroll said.

    For the current work, Carroll and Wisconsin's Craig Brunetti decided to track the activity of three genes, called engrailed, spalt, and Distalless, during the stage of development when the outlines of the spot and its rings are actually defined. They just happened to pick those genes from among the many that help the wing form. “We got really lucky,” Carroll reported, as all three proved active at this time.

    The genes' expression patterns indicated that they help define the spots and rings. For example, in the East African butterfly, Bicyclus anynana, all three genes are active in what becomes the white center of the spot, while just spalt and Distalless are turned on in the black ring flanking it. And in the outer ring, only engrailed was active.

    In contrast, in the buckeye butterfly, Distalless, spalt, and engrailed are turned on in both the central white spot and its adjacent ring, leaving it up to another, still unidentified, gene to set up the ring. The combinations of active genes were different yet again in two other species examined. “What we've lifted the lid on is a very flexible system,” Carroll concluded.

    He suggested that such flexibility in gene usage is tolerable because the butterfly has already set up the wing and other critical aspects of its body, so it can tolerate deviations in more superficial characteristics, such as the appearance of the wing decorations. What results is a rapid and continual experimentation with new eyespots and eyespot patterns, some of which persist because, in the context of entire populations, one distracts predators better than another.

    Despite the progress, many questions remain about eyespot evolution. For one, Carroll has yet to identify the mutations that enabled butterflies to co-opt the same genes for so many different functions. Still, he and others are pleased with what's been learned so far. “[The work] is a very good example of comparative developmental studies and how it is now possible to do detailed studies down to the molecular level in nonmodel organisms like butterflies,” notes Lennart Olsson, an evolutionary developmental biologist at Uppsala University in Sweden. “Studying later parts of development will become more common, I hope, [because] from an evolutionary perspective, later parts of development are more interesting because this is where the viable variation occurs.”


    Heretical Idea Faces Its Sternest Test

    1. Mark Sincell*
    1. Mark Sincell is a science writer in Houston.

    Upcoming studies of the big bang's afterglow should make or break MOND, an equation that many cosmologists love to hate

    When Vera Rubin began mapping massive hydrogen gas clouds that swarm around spiral galaxies like sluggish electrons around a nucleus, the astronomer at the Carnegie Institution in Washington, D.C., had a good idea of what she should find. Moving out from the galactic center, the gravitational pull of the ever-larger amount of mass within a cloud's orbit should make the cloud wheel faster and faster around the hub. But very distant clouds, orbiting beyond a certain radius that encompasses virtually all of the galaxy's mass, should chug more slowly, as the galaxy's gravitational pull diminishes with distance. Although Rubin did see the predicted rise in cloud velocity away from the center, the farthest clouds—to her astonishment—never did slow down.

    To Rubin and most other experts, the explanation for this curious behavior, first noticed more than 20 years ago, had to lie in the presumption that galaxies are far more massive than meets the eye. Indeed, they say, the gravitational pull of invisible dark matter—thought to make up as much as 90% of the universe's mass—should account for the zippiness of clouds skirting the fringes of galaxies. But an alternative notion says that dark matter has nothing to do with this phenomenon. Instead, it argues that when mass is spread thinly across space, the local gravitational force—that exerted by a galaxy, say—is stronger than Newton's law of gravity predicts it should be. This gravitational fudge factor, called Modified Newtonian Dynamics (MOND), flies in the face of modern physics. But to the horror of many scientists, “it works amazingly well,” says Princeton University cosmologist David Spergel.

    It was working well, anyway, until researchers took a closer look at a new class of lightweight galaxies. There, they have found, clouds are trucking along more slowly than they should be according to MOND. “I wouldn't say it is a death blow,” says astronomer Julianne Dalcanton of the University of Washington, Seattle, “but MOND is staggering and bleeding.” The coup de grâce—or MOND's resurrection—may arrive after what could be a decisive test: measurements of fluctuations in the faint afterglow of the big bang, the cosmic microwave background, that should force scientists to choose between a universe dominated by dark matter and one that obeys the weird rules of MOND.

    At stake is a pillar of modern physics, Einstein's hallowed general theory of relativity. According to Newton and Einstein, gravity is a simple creature. An object's gravitational attraction, they showed, is proportional to its mass divided by the square of the distance from the object. To calculate gravitational force, all you need is the number that sets that proportion: the gravitational constant, which is the same everywhere in the universe.

    Challenging that fundamental idea about gravity, Moti Milgrom, an astrophysicist at the Weizmann Institute of Science in Rehovot, Israel, proposed MOND in 1983 as a way to explain the surprising speed of Rubin's clouds without having to resort to cramming galaxies full of invisible dark matter. Milgrom conceived another universal constant, the MOND critical acceleration. The idea is that once the pull of gravity falls below this threshold, it no longer diminishes with the square of the distance but declines less precipitously, in inverse proportion to the distance itself. Milgrom and others have pegged a minuscule value for the critical acceleration: about one-trillionth the force of gravity we feel on Earth.
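The consequence of Milgrom's modification for orbiting gas clouds can be made concrete with a short calculation. This is only a sketch: the galaxy mass, the commonly quoted value of the critical acceleration, and the abrupt switch between the two regimes are illustrative assumptions (Milgrom's actual formulation interpolates smoothly between them).

```python
import math

# Illustrative comparison of circular orbit speeds under plain Newtonian
# gravity and under the deep-MOND modification, where pulls weaker than
# the critical acceleration A0 fall off as 1/r instead of 1/r^2.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10     # commonly quoted MOND critical acceleration, m/s^2
KPC = 3.086e19   # one kiloparsec in meters

def newton_speed(mass, r):
    """Circular orbital speed under Newtonian gravity: v = sqrt(GM/r)."""
    return math.sqrt(G * mass / r)

def mond_speed(mass, r):
    """Circular speed with a sharp switch to the deep-MOND regime below A0."""
    g_newton = G * mass / r ** 2
    g = g_newton if g_newton > A0 else math.sqrt(g_newton * A0)
    return math.sqrt(g * r)

M = 2e41  # roughly 10^11 solar masses (an assumed, Milky-Way-like galaxy)
for r_kpc in (20, 50, 100):
    r = r_kpc * KPC
    print(r_kpc, round(newton_speed(M, r) / 1e3), round(mond_speed(M, r) / 1e3))
# Newtonian speeds fall with radius (147, 93, 66 km/s) while the MOND
# speeds stay flat near 200 km/s, mimicking Rubin's clouds.
```

In the deep-MOND regime the predicted speed works out to (GMA0)^(1/4), independent of radius, which is exactly the flat rotation curve that refused to slow down in Rubin's data.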

    Needless to say, most cosmologists hate the idea. Not only does it fly in the face of Einstein's theory, but unlike general relativity, MOND offers no insights into how the rapidly accelerating early universe evolved, says Spergel. And cosmologists are quick to point out that if there is no dark matter, galaxies shouldn't bend light as much as they appear to. Nevertheless, the idea has garnered adherents, including reputable scientists who didn't want to believe it at first. “I sometimes wish I hadn't gotten involved with MOND,” says astronomer Stacy McGaugh of the University of Maryland, College Park, a former student of Rubin's and a specialist in charting the speeds of galactic gas clouds. “But then it showed up in my data.” Indeed, for 16 years, MOND equations have explained data collected on clouds in orbit around more than 100 galaxies.

    That was until a new player in the galactic lineup threw MOND a curve. Low surface brightness (LSB) galaxies are loose collections of stars weighing in at less than 20% of the Milky Way's mass (and the Milky Way is no heavyweight in its own right). Applying MOND's equations to these galaxies, one should see gas clouds racing at speeds that, if MOND were wrong, would require a titanic dark matter halo to achieve.

    When the data on LSB clouds began trickling in, MOND at first seemed on target. Then last year, Dalcanton and Rebecca Bernstein of the Observatories of the Carnegie Institution of Washington in Pasadena, California, used optical and infrared telescopes to map the velocities of gas clouds orbiting 50 newly identified LSB galaxies. At a July 1999 meeting of the Astronomical Society of the Pacific in Paris, they reported that the clouds were moving at velocities that could be explained by reasonable amounts of dark matter. MOND, on the other hand, could explain the cloud movements only if the galaxies were much lighter than expected—that is, if their stars were lighter than predicted by current theories of stellar evolution. Dalcanton thinks that's very unlikely. Milgrom, however, is withholding judgment until the data appear in a peer-reviewed journal. “That claim is only the latest in many, many such earlier attacks, all of which turned out to be based on errors in the data or misinterpretation,” he says.

    Realizing that further data on gas cloud velocities are unlikely to settle the debate, McGaugh has proposed a make-or-break test for MOND. Cosmologists think that hot bubbles of matter and light in the early universe left behind a pattern of bright and dim patches in the cosmic microwave background. Both MOND and standard theory predict that most patches of microwave light should span about 1 degree of sky as seen from Earth, about twice the apparent width of the full moon. However, the competing approaches disagree on the odds of finding patches smaller than 1 degree. If the universe is mostly cold dark matter, one should expect to find many more minute patches than MOND predicts, McGaugh wrote in the 1 October 1999 issue of Astrophysical Journal Letters.

    Measurements of the microwave background by the Cosmic Background Explorer satellite and by several recent balloon flights have mapped the 1-degree patches, but the extent of smaller patches is still in doubt because it's difficult to count them. Upcoming satellite missions in the next few years should provide a patch census accurate enough to discriminate between MOND and dark matter.

    Physicist Michael Turner of the Fermi National Accelerator Laboratory in Batavia, Illinois, who makes no secret of his dislike for MOND, is confident about the final outcome: “I think that MOND will turn out to be the Bode's law of our era,” he says. Bode's law, proposed by the 18th-century German astronomer J. D. Titius, held that the distances of all the known planets followed from a sequence in which each number after 3 doubles: 0, 3, 6, 12, 24, and so on. Adding four to each number in the sequence and dividing by 10 gives the orbital radii of all the planets out to Saturn in astronomical units. (One unit equals the distance from Earth to the sun.) It was only when Neptune and Pluto were found far from their predicted positions that astronomers recognized Bode's law for what it is: an amusing coincidence.
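The Titius-Bode arithmetic Turner invokes is easy to check; the sketch below simply encodes the doubling, add-four, divide-by-ten recipe described above (the function name is ours).

```python
# The Titius-Bode recipe: start the sequence 0, 3, then keep doubling;
# add 4 to each term and divide by 10 to get orbital radii in AU.
def bode_radii(n):
    seq = [0, 3]
    while len(seq) < n:
        seq.append(seq[-1] * 2)
    return [(k + 4) / 10 for k in seq[:n]]

# Mercury (0.39 AU), Venus (0.72), Earth (1.0), Mars (1.52), the asteroid
# belt (~2.8), Jupiter (5.2), and Saturn (9.5) all land close to the rule:
print(bode_radii(7))  # [0.4, 0.7, 1.0, 1.6, 2.8, 5.2, 10.0]

# The ninth term predicts 38.8 AU, but Neptune actually orbits near 30 AU,
# which is how the "law" was unmasked as coincidence.
print(bode_radii(9)[-1])  # 38.8
```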

    Although even enthusiasts are puzzled by the meaning of MOND, they aren't ready to write it off as mere coincidence. “People have taken potshots at it for 16 years,” says Jerry Sellwood, a physicist at Rutgers University in New Jersey. “They haven't killed it off yet.”


    Returning America's Forests to Their 'Natural' Roots

    1. Keith Kloor*
    1. Keith Kloor is a freelance writer in New York City.

    New data on how North American forests looked centuries ago are fueling a debate on what ecologists should aim for when restoring ailing ecosystems

    The flames licked high into the sky above a stand of Utah junipers in the foothills of Wyoming's Bighorn Mountains. Phil Shepard stood for a while admiring his handiwork, which ended up incinerating 283 hectares of Tensleep preserve. No arsonist, Shepard, an ecologist who manages the preserve for The Nature Conservancy, had set the fire to beat back the junipers that had overrun lush grassland and to thin out a ponderosa pine forest that had accumulated dead and diseased wood. One lightning strike could have touched off a devastating blaze, he says, if his group had not struck first.

    Shepard views the Bighorn's overgrown forests as more than a fire hazard, however. The juniper and pine, he says, are driving out aster and other plants that flourished before people of European descent and their livestock settled this rugged land more than a century ago. The controlled burn was part of an effort to turn back the clock on the preserve's ecosystem. And Shepard isn't alone in his nostalgia for a wilderness of yore: Plenty of other ecologists view prescribed fires and other interventions, such as selective logging, as essential stratagems for repairing ailing ecosystems. The goal, as Interior Department Secretary Bruce Babbitt puts it, is to restore these lands “to a presettlement equilibrium.”

    The need to reduce the risks of widespread conflagration in the nation's forests is adding urgency to such efforts: Last month, the U.S. Forest Service reported that about 17 million hectares of national forest in the western United States are at “high risk of catastrophic wildfire,” a fragility brought on by years of zealous efforts to stamp out natural fires. The forests “have become a tinderbox ready to explode,” says the service's Chris Wood. Last year, federal agencies torched 931,000 hectares, more than twice the average annual burning over the last decade.

    But as scores of projects to save the forests get under way, a debate is smoldering among ecologists about how much controlled burning and logging is needed. The core issue is just what Babbitt's presettlement forests ought to look like. In the Bighorn basin, for instance, Julio Betancourt, a paleoecologist with the U.S. Geological Survey in Tucson, Arizona, and his colleagues have found that although grazing and fire suppression may abet the rapid advance of Utah juniper and ponderosa pine, the major culprit is probably climate changes spanning 5000 years. “Many land managers and ecologists tend to assume that all change in the 20th century is unnatural,” Betancourt says. “That's not necessarily true.”

    In trying to reconstruct how ecosystems looked centuries ago, Betancourt and others hope to offer a handle on how much change is natural. He and two paleoecologists at the University of Wyoming, Laramie—Stephen Jackson and Mark Lyford—took a stab at this for the lowlands of the central Rocky Mountains. They examined plant remains embedded in fossilized packrat middens, or dung-filled nests, preserved in caves and rock shelters. After analyzing hundreds of middens, they discovered that regional warming about 4500 years ago drove Utah juniper northeast into dry canyon habitats in Wyoming and Montana. Cooling over the next 2 millennia took out much of these stands, leaving isolated patches of juniper in Wyoming. Then another warming trend kicked in about 2000 years ago, apparently allowing the species to recapture lost ground.

    A recent acceleration of the juniper's migration, they say, may be due to three possible factors: grazing, fire suppression, and warming since the Little Ice Age ended around 1850. “If the trend is because of grazing and fire suppression, you can try to reverse it,” Betancourt says. “But if it's a natural invasion because of climate change, how do you restore a migrating ecosystem?” Their as-yet-unpublished findings have persuaded Shepard to cut back on burning Utah juniper in Tensleep preserve. Betancourt applauds that decision, arguing that managers should first consider a forest's ecological history before taking aggressive measures to reduce a fire hazard or to restore a presettlement look.

    Another hot spot where critics question the methods and goals of restoration is the 890,000-hectare Coconino National Forest, which surrounds Flagstaff, Arizona. Decades of fire suppression have left the Coconino's ponderosa pine itching to burn, says Wallace Covington, a restoration ecologist at Northern Arizona University in Flagstaff. His solution is the Fort Valley Restoration project, a massive effort that aims to return a 40,470-hectare swath of the Coconino to the way it looked before European peoples first settled the area around 1876.

    Back then, Covington says, pine forests in Arizona were sparser—averaging 57 trees per hectare, according to population analyses based on annual tree rings. Holding the trees in check over the millennia, he presumes, were wildfires that kill off seedlings. Today, with as many as 2100 trees per hectare in some areas, the tightly packed stands are prone to disease and catastrophic wildfires that could even threaten Flagstaff.

    The Fort Valley project, slated to get under way this spring, would use controlled fires and selective logging to thin out ponderosa pine. Testing this approach in the Coconino several years ago, Covington found that culling led to a greater diversity of plants in the forest understory, healthier old-growth trees, and a reduced risk of crown fires, in which flames race through a forest by leaping from treetop to treetop. Based on these findings, the project calls for logging up to 90% of the trees on some acreages. The plan has strong support. Not only the Coconino, but many North American forests must be thinned drastically if they are to be restored to their pre-European luster, argues Thomas Bonnicksen, a forest ecologist at Texas A&M University in College Station. “And I don't care if they use burning, girdling, or a laser beam to do it.”

    Although nobody disputes the need to reduce the risk of fire, some scientists question Covington's presettlement model of the Coconino. Based on analyses of weather data and tree rings, Melissa Savage, a geographer at the University of California, Los Angeles, has deduced that populations of ponderosa pine near Flagstaff surged, or regenerated, in the early 1800s, then again in the early 1900s. “If you pick a particular date for restoration, as [Covington] has with 1876, then you don't account for any episodic regeneration that may have occurred since then,” she says. A better approach, Savage says, would be to nudge an out-of-control forest back into “the envelope of variability,” rather than push restoration further toward a perceived ideal. That would be enough to reduce the fire hazard and let nature take over from there, she says. Covington counters that the presettlement model is merely a starting point. Although their prescriptions may differ, Covington says he shares Savage's goal of healing the forest: “We just want to get the ecosystem into the ballpark of natural conditions and then let nature take care of the rest.”

    Dogging the debate is whether restoring a forest to its presettlement state is even a legitimate goal, considering that Native Americans were shaping the land long before European settlers arrived. “There is no such thing as presettlement, at least not the way most people define it. It's all been settled since 10,000 to 20,000 years ago,” says Charles Kay, a wildlife ecologist at Utah State University in Logan. Bonnicksen agrees, arguing in a new book, America's Ancient Forests: From the Ice Age to the Age of Discovery, that “the forests and the people who lived there formed an inseparable whole that developed together over millennia.” Other experts, however, play down the impact of Native Americans. “It's an overgeneralization to say that everywhere you look is the hand of man in the presettlement era,” says Thomas Swetnam, a fire ecologist at the University of Arizona in Tucson. In particular, he takes umbrage at Bonnicksen's view that Native Americans thinned Northern Arizona's pine stands: Rather, he says, fires triggered by lightning strikes were the major landscape architect before European settlers arrived. The best strategy, Swetnam says, is to use fires and logging judiciously to return forests to a state in which frequent, small, natural fires can help tend the forests.

    Although scientists debate what forces, in what proportions, kept North American forests healthy in presettlement times, they concur that each restoration project must be justified in part based on the emerging data about regional ecological histories. The questions, says Swetnam, have become “where and how to do [restoration], and to what extent should you use the past?” Scientists and policy-makers must act quickly to try to achieve a consensus on these issues, says Savage, before it's too late for ailing ecosystems: “The next 10 years will determine the future of the forests.”


    Quakes Large and Small, Burps Big and Old

    1. Richard A. Kerr

    SAN FRANCISCO—The fall meeting of the American Geophysical Union held here last month attracted 8200 earth scientists and covered a dizzying variety of topics, including the prospects for another large earthquake in Turkey, bursts of methane from the sea floor, and the use of microearthquakes in the study of fault behavior.

    Istanbul's Next Shock

    The metropolis of Istanbul, 13 million strong, is in the cross hairs of the next major Turkish earthquake, and seismologists are worried. They calculate that the magnitude 7.4 quake that struck 90 kilometers east of Istanbul last August near Izmit and killed 17,000 people has tightened the squeeze on the trigger for the next temblor. Since 1939, seven major earthquakes have marched along the North Anatolian fault toward Istanbul, rupturing one segment of the fault after another (Science, 27 August 1999, p. 1334). Using the new science of fault-to-fault communications, seismologists have found that the Izmit quake triggered the magnitude 7.2 earthquake that struck Düzce east of Izmit in November—and it also loaded the fault to the west of Izmit with additional stress. “There are large uncertainties,” says seismologist Nafi Toksöz of the Massachusetts Institute of Technology (MIT), but “the likelihood of a magnitude 7 [quake] or larger has increased” just a few kilometers south of Istanbul.

    Researchers can talk about the prospects for the next Turkey earthquake because they have realized in recent years just how effectively quakes pass stress along faults. A quake releases stress on the section of fault it breaks but also transfers stress to adjacent sections. Once they know exactly where and how the fault broke in a quake, seismologists can calculate the location and magnitude of increased stress. At the meeting, seismologist Tom Parsons of the U.S. Geological Survey in Menlo Park, California, and his colleagues reported that according to their calculations, the Izmit quake increased the stress at the starting point of the Düzce quake by 1 to 2 bars. Because stress increases as small as 0.1 bar can sometimes trigger major quakes, the Düzce quake “is an induced quake if ever there was one,” says geodesist Robert Reilinger of MIT.

    What Izmit has wrought to the east, say researchers, it could wreak to the west. The North Anatolian fault splits west of Izmit into two strands that run beneath the Marmara Sea, which lies between the Black Sea to the north and the Aegean Sea to the south. The last major quake in the Marmara Sea was in 1894; six have struck there in the past 500 years. To judge by historical accounts, some struck the northern fault strand that passes within 15 kilometers of Istanbul on the north coast.

    Parsons and his colleagues calculate that the Izmit quake increased stresses beneath the Marmara Sea by 1.5 to 5 bars. Making similar calculations, geodesist Geoffrey King of the Institute of Physics of the Globe of Paris, seismologist Aurelia Hubert-Ferrari of Princeton University, and their colleagues also reported at the meeting that Marmara Sea stress increased up to 5 bars. “We expect the next segment to rupture to the west [of Izmit] near Istanbul,” says Hubert-Ferrari. “You can expect heavy damage in Istanbul, even a tsunami.” Just how soon a Marmara Sea quake might strike depends on the poorly understood history of fault ruptures under the Marmara Sea, but Parsons and his colleagues estimate that a stress increase of merely 1 bar could increase the odds of a large quake by 20% to 50% in the next few decades. And recent history is not reassuring: In the sequence since 1939, no quake waited longer than 22 years and some came within a year of the one before.
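
    The arithmetic behind such forecasts can be illustrated with a toy calculation. All the numbers below are assumed for illustration, not the values Parsons's group actually used: a static stress step delivers years' worth of tectonic loading in an instant (a "clock advance"), and under a simple Poisson model a boosted long-term quake rate translates into higher odds within a fixed time window.

```python
# Toy illustration (assumed numbers, not the published model): how a
# static stress step advances a fault's "clock" and raises the odds.
import math

stress_step_bar = 1.0        # stress added by the Izmit quake (assumed)
loading_rate_bar_yr = 0.05   # tectonic stressing rate (assumed)

# "Clock advance": years of tectonic loading delivered all at once.
clock_advance_yr = stress_step_bar / loading_rate_bar_yr   # 20 years here

# Poisson odds of at least one large quake in the next 30 years,
# before and after boosting the long-term rate (assumed values).
rate_before = 1 / 150.0           # one large quake per ~150 yr (assumed)
rate_after = rate_before * 1.35   # ~35% rate increase (assumed)
window_yr = 30.0

p_before = 1 - math.exp(-rate_before * window_yr)
p_after = 1 - math.exp(-rate_after * window_yr)

print(f"clock advance: {clock_advance_yr:.0f} years")
print(f"P(quake in 30 yr): {p_before:.2f} -> {p_after:.2f}")
```

    With these assumed inputs the 30-year probability rises from about 0.18 to about 0.24, the same order of change the researchers describe.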

    This threat of another, possibly even more damaging, quake is being viewed with concern in Turkey. Toksöz says that the municipality of Istanbul was due to sign a contract last week for a first-ever evaluation of the likely effects of nearby quakes—none too soon given the tightening grip on the Marmara Sea trigger.

    Gas Blast For the Dinosaurs?

    A catastrophic meteorite crash wiped out the dinosaurs 65 million years ago, but they may have benefited from a gentler catastrophe 55 million years earlier. Rummaging in sediments laid down in the Cretaceous period, paleoceanographers have found signs of massive outpourings of methane gas from the sea floor that could have helped create the hottest climate of the past 150 million years. And there are hints of another methane burst 90 million years ago that may help account for a mass extinction in the ocean. These results join previous strong evidence of a methane burst 55 million years ago, which marked a major turning point in mammal evolution.

    To track down methane bursts, paleoceanographers must dissect the sedimentary record finely enough to see each geologic moment, something they have begun to do only recently. For example, 55 million years ago, as the Paleocene epoch was slipping into the Eocene, the proportion of the rare isotope carbon-13 relative to carbon-12 dropped abruptly (Science, 19 November 1999, p. 1465). This isotopic shift was big—three parts per thousand—but it had gone unnoticed because the drop, most of which occurred over only a few thousand years, had slipped between widely spaced samples taken along sediment cores.

    Such a large, abrupt shift points to a methane burp. Only the methane of methane hydrate—a combination of ice and methane formed by sub-sea-floor microbes—would have been poor enough in carbon-13 to drive such a shift; volcanoes could never have emitted enough carbon dioxide in such a short time. Apparently, a few thousand gigatons of methane escaped from the sea floor.
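
    The "few thousand gigatons" figure follows from a simple isotopic mass balance. As a back-of-the-envelope sketch with assumed round numbers (not the published calculation): mix isotopically light hydrate methane, at roughly −60 parts per thousand, into the exchangeable ocean-atmosphere carbon pool and solve for the mass that drives a 3-parts-per-thousand drop.

```python
# Back-of-the-envelope mass balance for the 3-per-mil carbon-13 drop.
# A sketch with assumed round numbers, not the published calculation.
# Mixing two carbon reservoirs: delta_mix = (M1*d1 + M2*d2) / (M1 + M2).

ocean_carbon_gt = 38000.0    # exchangeable ocean-atmosphere carbon (assumed)
delta_background = 0.0       # per mil, background delta-C-13 (assumed)
delta_methane = -60.0        # per mil, typical hydrate methane (assumed)
target_shift = -3.0          # per mil, the observed excursion

# Solve (M*d_bg + x*d_ch4) / (M + x) = target for the added mass x:
x = ocean_carbon_gt * (target_shift - delta_background) / (delta_methane - target_shift)
print(f"methane carbon needed: ~{x:.0f} gigatons")
```

    With these assumptions the balance gives about 2000 gigatons of carbon, consistent with the "few thousand gigatons" inferred from the sediment record.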

    At the meeting, a group of paleoceanographers reported the discovery of another large, abrupt drop in the proportion of carbon-13, this one in 120-million-year-old marine sedimentary rock drilled in northern Italy. Bradley Opdyke of the Australian National University in Canberra and his colleagues described a 1.5-parts-per-thousand drop that took perhaps 20,000 years. Paleoceanographer Hugh Jenkyns of the University of Oxford has found the same isotopic change in the Pacific but with a magnitude of three parts per thousand. Most researchers see this drop as evidence of another burst of methane from sub-sea-floor methane hydrates. “I'm not 100% convinced myself” that some other mechanism won't turn up, says Jenkyns, “but it fits the facts.” If there was a methane burst 120 million years ago, Opdyke has an explanation: the largest volcanic event of the past 160 million years. In his “hot LIPs” scenario, the volcanic eruption of large igneous provinces, or LIPs, onto the floor of the Pacific about 120 million years ago warmed midocean waters, breaking down methane hydrates and releasing their methane, which oxidized to carbon dioxide. Once in the atmosphere, the gas helped fuel a supergreenhouse warming.

    The LIP-induced midocean warming would also have triggered an explosion of near-surface sea life, which would have used up the oxygen in the deep sea. Such anoxia would have ensured the preservation of organic matter in a layer of “black shales” that immediately overlies the isotopic drop. Two other such oceanic anoxia events mark the mid-Cretaceous, when soaring temperatures allowed the dinosaurs to expand into polar latitudes. One was at the transition from the Cenomanian age to the Turonian 90 million years ago, when another, smaller isotopic shift just preceded the anoxia and came in the midst of a major extinction event in the ocean.

    Confirming the role of sea-floor methane in these mid-Cretaceous anoxia events as well as their triggering mechanisms will require more work, says paleoceanographer James Zachos of the University of California, Santa Cruz, but “there should be more of these; it's a matter of looking more closely.”

    Fuzzy Faults Sharpened Up

    Seismologists have long tried to get a picture of earthquake fault behavior by looking for patterns in the tiny quakes of magnitude 1 to 3, which constantly shiver along most active faults. Those microearthquakes have been hard to pinpoint, but now an improved technique is bringing the view into focus. Instead of fuzzy clouds of microearthquakes, researchers are seeing clusters of repeating quakes, mysterious streaks, and ominous holes that offer deeper understanding of how faults work. “It's like putting the corrective lens on the ailing Hubble Space Telescope and finally seeing what's out there,” says seismologist William Ellsworth of the U.S. Geological Survey in Menlo Park, California. Through the new lens, seismologists seem to be witnessing the progressive failure of a fault that should lead to a long-predicted quake in Parkfield, California.

    Seismologists locate quakes by triangulation. The distance between a seismograph and a quake can be calculated from the time it takes for the first seismic signal to arrive, and the quake's location can be determined by seeing where the distances from three or more seismographs converge. But until recently, even with dense networks of seismographs, the error margins were large—up to several hundred meters—for quakes of magnitude 1 and 2 that break patches of fault only 10 to 200 meters across.
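
    The triangulation idea can be sketched in a few lines of code. The station positions, the flat two-dimensional geometry, and the simple Gauss-Newton solver below are illustrative assumptions; real seismic location also solves for origin time and depth through a layered Earth model.

```python
# Minimal 2-D sketch of locating a quake from station distances
# (assumed geometry; real routines also fit origin time and depth).
import math

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # km, assumed positions
true_quake = (4.0, 6.0)                            # for generating test data

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# "Observed" distances, as if derived from P-wave travel times.
observed = [dist(s, true_quake) for s in stations]

# Gauss-Newton iterations from a rough starting guess.
x, y = 5.0, 5.0
for _ in range(20):
    # Accumulate normal equations for residuals r_i = observed_i - dist_i.
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (sx, sy), d_obs in zip(stations, observed):
        d = dist((sx, sy), (x, y))
        gx, gy = (x - sx) / d, (y - sy) / d   # gradient of distance
        r = d_obs - d
        a11 += gx * gx; a12 += gx * gy; a22 += gy * gy
        b1 += gx * r;   b2 += gy * r
    det = a11 * a22 - a12 * a12
    dx = (a22 * b1 - a12 * b2) / det
    dy = (a11 * b2 - a12 * b1) / det
    x, y = x + dx, y + dy

print(f"estimated epicenter: ({x:.2f}, {y:.2f})")  # converges near (4, 6)
```

    With perfect data the solver recovers the epicenter exactly; the several-hundred-meter error bars in practice come from picking errors and imperfect velocity models, not from the geometry.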

    Seismologists have now improved their accuracy by more than an order of magnitude by using a second type of information. They also examine seismic waveforms, the wiggly traces of seismograms ultimately produced by the ground motions of fault rupture. The closer two microearthquakes are on a fault, the more similar their waveforms. If they are similar enough, picking the time of one quake's arrival relative to the other's becomes far easier, allowing seismologists to locate quakes relative to each other with a precision of 10 meters or less. Although the technique is not new—it has been around for a couple of decades—only recently have seismologists' computers been powerful enough to apply it to thousands of microearthquakes on a fault. “It looks like a very, very promising technique and very important,” says seismologist Yehuda Ben-Zion of the University of Southern California in Los Angeles.
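
    The waveform trick can be illustrated with synthetic seismograms; the damped-sine wavelet and the 7-sample delay below are assumptions for demonstration. Cross-correlating two nearly identical traces pins down their relative arrival time to the sample, far more precisely than picking each onset by eye.

```python
# Sketch of the waveform trick: nearly identical wiggles from two quakes
# let cross-correlation measure their relative delay. Synthetic data.
import math

n = 200
def wavelet(t0):
    # A damped-sine "seismogram" starting at sample t0 (synthetic shape).
    return [math.exp(-0.05 * (i - t0)) * math.sin(0.3 * (i - t0))
            if i >= t0 else 0.0 for i in range(n)]

trace_a = wavelet(50)
trace_b = wavelet(57)   # same waveform, arriving 7 samples later

def best_lag(a, b, max_lag=20):
    # Find the shift of b that best matches a (peak cross-correlation).
    best, best_c = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        c = sum(a[i] * b[i + lag]
                for i in range(n) if 0 <= i + lag < n)
        if c > best_c:
            best, best_c = lag, c
    return best

print("relative delay:", best_lag(trace_a, trace_b), "samples")
```

    The relative delays measured this way feed the relative-relocation step, letting thousands of microearthquakes be positioned against one another to within about 10 meters.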

    At the meeting, one speaker after another showed examples of how they could use the higher-resolution technique to snap a fault into focus. For example, groups headed by Ellsworth, Gregory Beroza of Stanford University, and Allan Rubin of Princeton University are studying the central San Andreas fault and two of its branches in northern California, the Calaveras and the Hayward. Quakes that used to seem scattered over large parts of these faults collapse into a few tiny areas covering only 1% to 5% of the fault. Some are tight clusters of similar quakes; others are identical ruptures that repeat time after time at the same spot.

    Most surprisingly, many of the microearthquakes in some areas form streaks along the fault that run parallel to the fault motion and have been dubbed skid marks by one seismologist. Several kilometers long and about 100 meters high, they remain enigmatic, but they may indeed be skid marks of a sort, where slippage on the fault drags rock of a different composition against the opposing fault face. The rock could smear along the fault, causing it to stick there and break in microearthquakes.

    The skid mark explanation won't work, however, for streaks that run across the direction of fault slip. On the San Andreas near the town of Parkfield in central California, two such inclined but roughly parallel streaks pass beneath Middle Mountain, where repeating magnitude 6 quakes have struck over the last 150 years—the most recent in 1966.

    At Parkfield, streaks may be pointing to how a fault prepares to break. At the meeting, Ellsworth reported that a 1993 magnitude 4 quake that struck the same spot on the fault where three other magnitude 4 quakes have hit since 1934 clearly began on the lower of the two inclined streaks. It then ruptured upward toward the point where the 1966 Parkfield quake got started. At a depth of more than 12 kilometers, the lower streak may well mark where deep, hot rock cools enough to become brittle and break in earthquakes, says Ellsworth. Above it, the fault could be locked tight until a magnitude 6 Parkfield quake ruptures it.

    Apparently, microearthquake-free holes between streaks or clusters mark fault patches that break in larger earthquakes. Ellsworth and his colleagues have found that coincidence not only at Parkfield but also in the case of the 1984 magnitude 6.2 Morgan Hill earthquake and the 1989 magnitude 6.9 Loma Prieta quake. “The streaks do seem to be telling us a lot about [fault] structural controls” on larger quakes, says Ellsworth. Holes in seismic activity may thus direct seismologists to the most dangerous parts of faults.

    Researchers do not yet know what touches the big quakes off, however. At Parkfield, both streaks seem to have given rise to quakes that are “knocking on the door” without triggering the expected quake. Or as Ellsworth wonders, “what makes that '66 spot so hard to knock off?” Perhaps he'll get his answer as more faults get defuzzed.