News this Week

Science 19 Dec 1997:
Vol. 278, Issue 5346, p. 2038

    CLONING: The Lamb That Roared

    Elizabeth Pennisi


    A year ago, few researchers would have guessed that science's most stunning achievement in 1997 would come from a barn. But in late February, a bleating, white-nosed lamb swept into the public eye: 7-month-old Dolly, the first animal cloned from an adult cell. She electrified both the research community and the general public, for although animals had been cloned before, creating a sheep from a single cell of a 6-year-old ewe was a stunning technological feat that many had thought impossible.

    Cloning is of practical import, as it can be used to quickly create herds of identical animals that churn out medically useful proteins; the first such animals—a handful of transgenic sheep clones—are described on page 2130 of this issue. But the implications of cloning technology go much further, opening up new avenues of research in cancer, development, and even aging. Indeed, Dolly forces a reexamination of what it means to grow old, for although she is now 18 months old, her DNA, taken from the donor cell, may be almost 8 years old.

    According to conventional wisdom, adult cells cannot give rise to new, mature organisms. So after Dolly's debut, researchers scrambled to understand how she was created. Scientific societies convened their own impromptu meetings to discuss both the scientific and ethical implications of the work, and companies specializing in transgenic animals saw their stock value jump overnight.

    But despite Dolly's soft brown eyes, some feared that she was a wolf in sheep's clothing, come to steal humankind's individuality and autonomy. She sparked calls for a ban on human cloning in the United States, Switzerland, China, and other nations, and to some, she raised the sci-fi specter of cookie-cutter clones grown for spare parts. But whether welcomed or feared, cloning in 1997 forced scientists and the public alike to rethink their basic ideas about life, and to confront the implications of our growing ability to manipulate life's blueprint.

    As is true for many breakthroughs, cloning represents the convergence of advances in several disciplines over several decades. Painstaking progress in sheep reproductive biology, genetic manipulation, and cell culture all paved the way for Dolly. But the critical technique is nuclear transfer, in which the intact nucleus of one cell is transferred into an egg whose own nucleus has been removed.

    Researchers seeking to unlock the secrets of embryonic development had been working to perfect this technique for more than 40 years, starting with experiments in frogs in 1952. They transferred the nuclei of embryonic or tadpole cells into frog eggs and succeeded in raising cloned tadpoles and even adult frogs. But the older the frog cell donating the DNA, the less likely the resulting clone was to develop normally. When donor cells from an adult were used, no frog clone ever developed beyond the tadpole stage. And in mice, the typical mammalian model organism, results were even more discouraging: researchers couldn't get viable young from anything but nuclei taken from very early embryos—the two- to four-cell stage. So most biologists came to accept that mature cells could not give rise to entire organisms—especially in mice. Only an egg cell possessed that mysterious power, called totipotency.

    But those working with cows and sheep were not quite persuaded. A team of researchers at the Roslin Institute outside Edinburgh, Scotland, for example, suspected that previous failures were caused by donor DNA that was in a different stage of the cell cycle than the recipient egg cell. They used nuclear transfer to clone sheep from embryonic cells, and in 1996 announced the birth of two cloned lambs. Next, they cloned sheep from fetal fibroblast cells. And in partnership with a local biotechnology company, they attempted what everyone had said was impossible: to clone a sheep from adult cells.

    To do this, the team used cultured udder cells, taken from a 6-year-old ewe, and then starved them, forcing most of their genes to enter an inactive phase that the researchers hoped would match the cell-cycle stage of the recipient eggs. Once the udder-cell nuclei were transferred into the eggs, still-unknown factors coaxed that “inactivated” 6-year-old DNA to go back in time, so to speak, and apparently become totipotent once more, directing the eggs to develop into lambs. Out of 277 such eggs, only one produced a healthy living animal: Dolly.

    To a startled public, Dolly made the horrors of science fiction clones seem all too possible. If she could be cloned from an udder cell, people wondered, then why not a dictator from his nose, as was attempted in the movie Sleeper, or a spare self as a reservoir of replacement body parts? Such things are safely in the realm of fiction, of course, but many people, scientists included, became concerned that cloning people would dehumanize our species and spoke out against it.

    Yet upon reflection it's clear that just as identical twins grow up to be individuals, clones would never be truly identical. Even Dolly is not an exact replica of the ewe used to clone her, because she neither developed in that ewe's uterus nor received its genes in the cellular organelles called mitochondria.

    For now, Dolly stands alone. No one, not even the Roslin team, has made a second animal from an adult cell. Of course, most biomedical researchers work with mice—and mouse nuclear transfer results are still dismal. So attention is focused on the handful of labs worldwide working on cloning in livestock. Most are starting with fetal cells, whose DNA can more easily be made totipotent. So far, several firms say they too have cloned either sheep or cows from fetal cells, and one group has cloned monkeys from embryonic cells.

    Nuclear transfer experiments are under way in other species too, ranging from zebrafish to rabbits. Among basic researchers, the Scottish group's success has inspired new experiments looking at how DNA changes as a cell matures. Clarifying the nature of totipotency may spark insight into what makes cells and organisms age, and how cell growth can go awry, as in cancer. Researchers are watching Dolly closely, for although so far she seems the 18-month-old she's supposed to be, her DNA may make her age prematurely.

    Cloning experts point out that the true identity of Dolly's progenitor cell is not known for sure—it's possible that it was a stem cell, known to be able to develop into several kinds of tissues. Even if that's true, the ability to restore totipotency to easily harvested adult cells would offer a potentially simple method to replace lost or damaged cells.

    Meanwhile, the Roslin team has taken the next step toward making cloning economically useful by cloning sheep carrying foreign genes. Three sheep carry a marker gene, and two have both the marker gene and the gene for the human factor IX protein, which some hemophiliacs take to aid blood clotting. These sheep were cloned from transgenic fetal fibroblast cells, not adult cells, so they are most remarkable not as clones, but because they developed successfully despite having undergone genetic manipulation.

    On the drawing board are flocks of sheep that make factor IX and other useful proteins in their milk. Other scientists are developing nuclear transfer techniques to create other types of genetically tailored livestock, opening the door to better animal models of genetic diseases, animals as organ donors, and possibly leaner, faster growing livestock.

    Indeed, as with all breakthroughs, it's not possible yet to foretell exactly where cloning will lead. Although initial reactions were universally against all human cloning, there have been whispers that such cloning may one day have a place in giving infertile couples genetic offspring. Whatever direction the research takes, however, the public is likely to demand a say in how cloning is applied. Biologists, ethicists, and others will be wrestling with the implications of this birth in a barn for years to come.



    1. I. Wilmut et al., “Viable offspring derived from fetal and adult mammalian cells,” Nature 385, 810–813 (27 February 1997).

    2. K. H. S. Campbell et al., “Sheep cloned by nuclear transfer from a cultured cell line,” Nature 380, 64–66 (7 March 1996).

    3. E. Pennisi and N. Williams, “Will Dolly send in the clones?” Science 275 (5305), 1415 (7 March 1997).

    4. H. T. Shapiro, “Policy Forum: Ethical and policy issues of human cloning,” Science 277 (5323), 195 (11 July 1997).

    5. E. Marshall, “Mammalian cloning debate heats up,” Science 275 (5307), 1733 (21 March 1997).

    6. E. Marshall, “Clinton urges outlawing human cloning,” Science 276 (5319), 1640 (13 June 1997).



    The Runners-Up

    The News and Editorial Staffs

    When it comes to Mars exploration, getting there has always been half the battle. From the start of the space age to the beginning of 1997, 19 missions set out—and more than half failed. No mission had touched down on martian soil since 1976. So on the 4th of July this year, when the Mars Pathfinder lander sent back images of the first robot to roam the surface of another planet, jubilation was the order of the day. But Pathfinder was more than just a stunning technological achievement: It returned a bounty of scientific information from an intriguing part of the Red Planet and did so on the cheap. As the first of NASA's “faster, cheaper, better” Discovery missions, Pathfinder broke through a history of martian jinxes and high costs. The landing captivated audiences worldwide and vindicated NASA's gamble on a new, smaller scale approach to solar system exploration.

    The big picture.

    Pathfinder scanned Mars from horizon to horizon.


    Pathfinder's technological victory was all the more impressive given its Rube Goldberg approach to landing on Mars. Previous landers had relied on an expensive rocket to first get into orbit around their targets, but Pathfinder blazed straight into the martian atmosphere behind a simple heat shield, popped open a parachute while still at supersonic speeds, and then began searching for the surface using onboard radar. At literally the last second, the craft fired its three small retrorockets and cut the parachute loose from the lander. Encased in airbags, the lander plummeted the last 30 meters in free fall, finally bouncing to a stop—right side up.

    When the rover started returning data, color images of the martian landscape—released in real time on CNN and in almost real time on the World Wide Web—entranced the public. To geologists, the images also revealed traces of the great flood that a billion years earlier had sculpted a distant hill, stacked nearby boulders, and left meter-scale ripples in the ground. Compositional measurements made by Sojourner on rocks such as Barnacle Bill and Yogi revealed that they contain a surprising amount of silica, suggesting that geological forces had reprocessed the rocks sometime in martian history. And Sojourner's discovery of rounded pebbles that may have been water-worn in flowing streams added evidence that Mars's surface was warmer and wetter in its earliest days—when life might have gotten a start.

    Mission engineers at the Jet Propulsion Laboratory in Pasadena, California, managed all this with less than $270 million and within a 3-year development schedule. Such economy runs counter to the old bigger-is-better culture of spacecraft design, which produced, for example, the $3.3 billion Cassini mission to Saturn. And Pathfinder's achievements are the first in what NASA hopes will be a series of blockbusters from the Discovery program. Next up, on 5 January, Lunar Prospector lifts off for its mapping orbit around the moon.

    Synchrotrons Shine New Light

    Beams of x-rays and ultraviolet light have long been used to reveal the atomic structure of materials. But by all accounts 1997 was a banner year for a new generation of stadium-sized machines known as synchrotrons, which produce the brightest beams yet and can illuminate the structural secrets of matter—both living and inert—down to individual atoms. This year saw the commissioning of SPring-8, the world's most powerful synchrotron, in Nishi-Harima, Japan, and marked the first full year of operation of the second most powerful machine, the Advanced Photon Source in Argonne, Illinois. And synchrotrons around the globe yielded some striking breakthroughs in the structure of materials.

    Shining light

    Bright beams at the Advanced Photon Source reveal the structure of matter.


    Among the highlights: An international team used the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, to produce an atomic-scale map of the nucleosome core particle, thus gaining new insights into how this DNA-protein complex manages to coil meters of DNA inside each cell. A Swiss and French team used the ESRF beam to solve the structure of bacteriorhodopsin, a membrane protein whose small crystals had defeated previous attempts to determine its structure. And a group at Oxford University solved the largest x-ray crystal structure to date, that of the bluetongue virus, made up of more than 1000 separate proteins.

    Despite such smashing results, budget woes threaten a new ultraviolet synchrotron, the Advanced Light Source, in the United States. But worldwide, synchrotrons show no signs of slowing—another 26 are in the works.

    Keeping Time

    As the days ticked by in 1997, those studying organisms' internal clocks marked time with periodic bursts of discovery. Researchers isolated several new genes that help keep daily rhythms, including the first two mammalian clock genes. And one fruit fly clock gene was found to be active throughout the fly, suggesting that many cells, not just those of the brain, can keep time.

    Clocking in.

    Active mammalian clock gene (yellow) lights up a mouse brain at 11 a.m.


    Before this year, only three clock genes had been identified, two in the fruit fly (per and tim) and one in bread mold, called frequency (frq); these genes code for proteins whose concentrations rise and fall on a cycle set to 24 hours by sunlight. Then, last May, researchers reported two new genes in bread mold, white collar-1 and -2, which turn on the transcription of frq, thus participating in a feedback mechanism that keeps the clock ticking. Later that month, another team isolated the first timekeeping gene from a mammal—a mouse gene called Clock—which also appears to regulate circadian rhythms.

    In September, two teams independently discovered a gene that resembles per in mice and humans. Having similar genes in such divergent species as humans and flies implies that the genes that make up the clock's gears and springs may have been conserved since the earliest days of biological time. Finally, in the closing days of 1997, researchers found that per is active not only in the brains of flies, but in many other tissues as well, suggesting that many independent clocks are ticking away in the body—and that the brain is only one of many timekeepers.

    Violence at a Distance

    They are the most violent events in the universe, and for 30 years they ranked high on the cosmic mystery top 10. Gamma ray bursts, sudden explosions of high-energy radiation occurring almost daily at random positions in the sky, were first detected in the late 1960s. But gamma ray detectors couldn't accurately determine their location or distance from Earth. Some astronomers thought the bursts were far-off events; others, that they occurred in our own galaxy.

    Then on 28 February, the Italian-Dutch satellite BeppoSAX simultaneously detected a burst in both gamma ray and x-ray wavelengths, using two separate detectors. An x-ray telescope aboard the satellite was also able to catch the afterglow of the burst and pinpoint its position. Alerted via the Internet, Dutch astronomers at an observatory in the Canary Islands found a dimming optical light source at the burst position, coinciding with what appeared to be a distant galaxy.

    Ten weeks later, on 8 May 1997, it happened again—a second BeppoSAX detection was linked with an optical source. This time, American astronomers using a Hawaii-based telescope were able to study the light of the burst in great detail and to peg its distance at several billion light-years. It seems that gamma ray bursts occur in the far reaches of the universe, making them the most energetic events in the cosmos, exceeded only by the big bang itself. But what distant cataclysms cause these bursts? They might be the collision of two dense neutron stars, but their true nature remains a mystery to be solved another year.

    A Glimpse of Neandertal DNA

    Ever since the first skeleton of a Neandertal was discovered in Germany's Neander Valley in 1856, anthropologists have wondered whether this burly human was ancestral to living humans or an evolutionary dead end. Many more Neandertals have turned up since, but the bones alone haven't settled the mystery.

    Then in July a team in Munich announced that it had recovered and analyzed a snippet of DNA from the arm bone of that original Neandertal. The researchers had pieced together a 379-base-pair sequence from the mitochondrial DNA (mtDNA) in the cell's energy-producing organelles. This Neandertal mtDNA spelled out a sequence very different from that in living humans, supporting the view that Neandertals were not our ancestors, but a separate species that went extinct.

    Only one small part of the genome was reconstructed, but at 30,000 to 100,000 years old this is the oldest DNA extracted from a human. And it is a triumph for the besieged field of ancient DNA. Earlier, spectacular claims of analyzing DNA from insects in amber have not been replicated, and many had almost given up on the idea of extracting useful DNA from ancient fossils. But this work was replicated in a U.S. lab, convincing even skeptics that it is the real thing.

    Nanotubes on a Roll

    Since their discovery in 1991, nanometer-sized tubes of carbon have been seen as the right stuff for everything from future electronic devices to ultrastrong materials. In 1996, researchers developed a laser-based method to produce high yields of single-walled nanotubes (SWNTs), and in 1997 they brought the tubes closer to their potential by testing, tweaking, and filling them.

    Tubing it.

    Nanotubes are in demand.


    Cousins of the spherical buckminsterfullerene (C60, Molecule of the Year in 1991), nanotubes are sheets of graphite—carbon atoms arrayed in adjoining hexagons—that are rolled up and capped at the ends. Those made of only a single wall of carbon are prized for their regular structures and predictable behavior. For example, theorists predicted early on that depending on their architecture, SWNTs should be semiconductors or metals, key building blocks for electronic devices. Scientists confirmed both predictions in 1997. Individual nanotubes were found to be excellent conductors, a property that could be enhanced by doping their outer surfaces. And a slightly different bonding arrangement between a single pair of polygons was shown to turn a nanotube into a simple semiconducting electronic device. Other researchers demonstrated their ability to manipulate the tubes by stuffing them with gas or with gallium nitride rods.

    Also this year, a French and U.S. team came up with a cheaper way to produce SWNTs using a simple electric arc discharge, a feat that's likely to make these cylinders easier to come by and therefore study. But no known method can produce the tons of SWNTs needed for a commodity material, so the push to make larger batches of tiny tubes will continue.

    The Other Ocean

    This year planetary scientists gathered solid evidence that our ocean is not alone. The Galileo spacecraft orbiting Jupiter returned images of the surface of the jovian moon Europa, revealing what looks like an icy crust floating on a watery ocean. Absolute proof of a deep sea might be beyond Galileo's abilities, but the newly credible case for liquid water—the crucial ingredient for life—raises the prospect of alien stirrings beneath Europa's icy surface.

    Deep water.

    Europa's jumbled, icy surface hints at an ocean below.


    The signs of a europan ocean were varied. Faults, rifts, and jumbled crustal blocks suggest that the surface layer of ice was thin when last disturbed; in one spot, iceberglike blocks seem to have floated in a now-frozen sea. And some large impact craters appear to have punched through thin ice, leaving flat blemishes rather than rimmed craters.

    These and other clues suggest that when the europan surface was last disrupted, the ice was only 10 to 20 kilometers thick, leaving 100 kilometers or more below for liquid water. And large areas nearly unblemished by the drizzle of small impactors suggest that all this disruption was recent or even ongoing. An intensive campaign by Galileo during the next 14 months may turn up more evidence, but look for proposals to send a spacecraft to orbit this moon to probe for the final answer.

    Genomes Galore

    The growing tower of microbial genetic data was buttressed by two more cornerstones this year, and geneticists also pushed closer to what once seemed a pie-in-the-sky goal—analyzing whole genomes.

    Researchers sequenced the entire genetic codes of two well-studied microbes, the common gut microbe Escherichia coli and the soil bacterium Bacillus subtilis. These bacteria—each with a genome more than 4 million bases long—have been laboratory workhorses for generations of biologists. Now, scientists can link decades of physiological and biochemical work to the genes involved.

    This was also the year when whole-genome sequencing took off. Once a maverick approach, this shotgun method has become commonplace in organisms with fewer than 2 million bases. This year, it yielded genomes of three archaea (primitive microbes often adapted to extreme environments) and several pathogens, such as Helicobacter pylori, infamous as the cause of ulcers, and Borrelia burgdorferi, the spirochete behind Lyme disease. More than 40 other microbial genome efforts are under way.

    This output has been both empowering and humbling. With whole genomes to compare, researchers can classify genes into functional or ancestral families. From these studies, biochemists are tracking down new proteins in key metabolic pathways, providing insight into such questions as how pathogens gain access to their hosts. Yet even in familiar E. coli, about a third of the putative new genes are of unknown function, which stakes out a new challenge for the years ahead.

    Neurons in the News

    In 1997, researchers homed in on clues to the workings of the central nervous system, clues that may one day lead to new treatments for ailments ranging from Parkinson's disease to spinal cord injuries.

    This year, scientists fingered the first genetic cause of Parkinson's, tying a mutation in a gene called α-synuclein to a heritable form of the disease in a large Italian family. The work not only put to rest a long-standing debate over whether genes play a role in Parkinson's, but also pointed to a possible mechanism for the disease, involving abnormal processing of the protein.

    Other research shed light on the biology of the dopamine-producing brain cells that die in Parkinson's. Work from a Swedish lab using mice showed that a receptor protein called Nurr1 is needed for both proper development of dopamine-producing cells and the synthesis of a healthy amount of dopamine.

    There were advances in Alzheimer's disease too: Researchers identified a potential new player, novel brain lesions called “AMY plaques,” that may contribute to the disease. And this year also raised hopes that injured spinal cords may one day be rewired. Groups from London and San Diego linked the sprouting of nerve fibers in the severed cords of adult rats with some return of function—a feat long thought impossible.



    Bumper Crop for Pop Science

    Most scientists think the general public pays scant attention to research, but in 1997 several science stories became hits in popular culture, as the mass media—particularly television and the Internet—discovered that the process of discovery itself can be a rich source of entertainment as well as information.


    Space science was perhaps the most popular story, as a string of marvels, mishaps, and milestones drew eyes upward in numbers probably not matched since the days of the Apollo moon project. In March and April, there was Hale-Bopp—for once, a comet whose brightness exceeded its ballyhoo. The comet turned even urbanites into backyard astronomers, and nearly doubled previous records for visibility and endurance. Lasting even longer, however, were the travails of Russia's dilapidated Mir space station. This space-based soap opera repeatedly stopped the hearts of Mir watchers, as a rotating cast of astronauts battled fires, computer breakdowns, and other calamities.

    Then on the 4th of July, the Mars Pathfinder lander bounced down on the Red Planet, announced on CNN with much fanfare and 24-hour live coverage from NASA's mission control room. Pathfinder's intrepid Sojourner rover wasn't equipped to search for signs of life, but Sojourner and its lively band of handlers at mission control charmed TV and World Wide Web audiences anyway. The “hits” pummeling Pathfinder Web sites peaked at millions per hour, with the grand total now nearing 1 billion. As with many hit TV shows, there was a toy tie-in: Mattel's Hot Wheels version of the rover went flying, not crawling, off store shelves.

    Space exploration may have mimicked high-adventure science fiction, but reproductive biologists' exploits in a petri dish in Scotland evoked, for many, the genre's cautionary side. Dolly—the ewe cloned from an adult mammary-gland cell—sparked hasty calls from politicians for a moratorium on human cloning, forced ordinary citizens to reconsider the meaning of individuality, and became a staple of late-night TV comedians. Like headless tadpoles in Britain, genetically engineered soybeans in Germany, and the new film Alien Resurrection, the cloning story tapped public fears of genetic technologies. Scientists may wish that ordinary citizens knew more about science than they do—but they can hardly deny the strength of its grip on the public imagination.


    Politicians Sweat Over Global Warming

    Scientific uncertainty is nothing new to policy-makers. But this year it occupied center stage as the global warming issue, long smoldering on the scientific sidelines, spread to the political arena. Earlier this month, politicians, scientists, and lobbyists gathered in Kyoto, Japan, to face up to a burning question: If human emissions of carbon dioxide and other greenhouse gases will warm the world, what should be done about it? While the answer depends in part on science, 10 days of arduous debate at Kyoto—as delegates hammered out a historic accord—made it plain that the science is only the start.


    Well before Kyoto, scientists had already aired their views in a report by the Intergovernmental Panel on Climate Change (IPCC)—the most comprehensive international assessment ever on an environmental question. But while the effort was impressive, it was hardly conclusive. The report said warming due to a doubling of greenhouse gases, expected sometime late in the next century, could range from 1.5 to 4.5 degrees Celsius—from moderate to outright catastrophic. And scientists still can't be sure just what the effects of climate change will be on various regions of the world.

    Given that uncertainty, most policy-makers agreed that emissions need to be restricted—but how much, and who will bear the cost? Kyoto provided a few answers, including emission reductions ranging from 6% to 8% below 1990 levels by Japan, the United States, and Europe, as well as the inclusion of all six major greenhouse gases in the tally. Of special interest to scientists, a clean development fund would channel new energy-saving technologies to developing countries. Such a fund—intended as the first step toward participation by poorer nations—may thrust science back into the center of the debate. But it won't provide any quick fixes for negotiators when they reconvene in November 1998 in Buenos Aires to tackle several unresolved issues, including whether and how developing nations will participate in the treaty.



    New Research Horizons

    What fields will rise to prominence in 1998? Science surveyed the research world and found six likely prospects.


    Forecasting future shocks. Climate researchers made a sharp call on 1997's massive El Niño, but the burgeoning field of long-range climate prediction has its reputation on the line once more: New predictions for this winter have been posted, from warmth in Minnesota to drought in southern Africa. Such seasonal forecasts pale before the new frontier of decadal predictions, based on understanding the slow mood swings of the oceans.

    The expanding universe. Views of a handful of distant stellar explosions, taken this year by the Hubble Space Telescope, suggest that the ballooning of the universe has slowed so little over time that it may expand forever, rather than going to blazes in a final collapse. As more observations come in, expect these data to have a chilling effect on mainstream theories of the universe's birth.

    Personalized prescriptions. The surge of interest in genotyping technologies—and in the firms developing them—is reviving pharmacogenetics, a decades-old vision of tailoring drugs to a patient's genetic makeup. Thanks to DNA chips and arrays, rapid DNA analyses may soon be able to reveal genetic variations affecting drug metabolism and side effects. These technologies are already changing how clinical trials are conducted and may eventually revolutionize how medications are prescribed.

    Ribosomal inspection. At long last, researchers are beginning to get a detailed look at the inner machinery of the ribosome, the cell's protein factory. Advances in the techniques of structural biology are revealing the intricate dynamics of this large complex of RNA and proteins. Higher resolution images of the ribosome are likely to yield the exact nature of some of the many steps involved in protein production.

    Diversity debate. Experiments with artificial ecosystems have bolstered the traditional view that biodiversity improves ecosystem functions. But recent work on natural ecosystems suggests that biodiversity may not be crucial to such functions as how nutrients cycle, while research in microbial communities suggests that high biodiversity makes ecosystems more predictable. As humans continue to wipe out species, expect more research on the science of why biodiversity matters.

    Designer crops. The United States leads in transgenic crops, but Europe is fast catching up, with more than 100 trials—ranging from maize to strawberries—approved in Britain. A few transgenic crops have earned European Union approval and will soon be on the market. But will reluctant European consumers bite? Expect a battle for acceptance, as lobbyists press for new labeling rules.

    Scorecard '96

    Last year, Science's editors gazed into the future to predict the hot fields of the next 12 months. Here's how our favorites fared this year, showing whether our crystal ball was cloudy or clear.

    The cure for cancer. Promising new results boosted bold new therapies, such as blocking the growth of blood vessels that feed tumors, or designing a virus to kill cancer cells. But clinical trials are proceeding slowly, and it may be years before such tactics pan out.

    Advanced Photon Source. Several beamlines saw first light this year, while other groups used this x-ray synchrotron to probe everything from protein structure to the behavior of catalysts in water (see second runner-up, p. 2040).

    Computer security. Although worried Web users kept a wary eye on information security, there were no major new breaches of the encryption codes used to safeguard business data.

    Synthetic carbohydrates. New clinical trials were launched to see if these sugar-based molecules may work as a cancer vaccine, coaxing the immune system into attacking the natural carbos that decorate tumor cells.


    Quantum error correction. New schemes for correcting errors in unimaginably fast quantum computers leaped forward, as theorists moved beyond correcting mistakes in memory to protecting logic operations. And the first experimental realizations of quantum error correction are in the publication pipeline.


    Supersymmetric particles. A fire at CERN slowed the search for the elusive signs of supersymmetry. Optimists still see hints of the particles in the modest amount of data gathered and predict that their signatures will turn up next year at CERN.


    'Playing Chicken' Over Gene Markers

    1. Eliot Marshall


    The U.S. government is getting set to launch a big addition to its human genome project this winter, and it is doing so with “some urgency,” says geneticist Francis Collins, the chief architect of the new initiative. Details of the venture—which involves sequencing snippets of DNA from hundreds of individuals of different racial backgrounds and putting them in a public repository—were discussed at a meeting at the National Institutes of Health (NIH) last week, and a strategic plan will be put together over Christmas. NIH hopes to issue a funding announcement in December or January, winners will be selected by summer, and investigators are supposed to begin putting out data in less than a year. “Some urgency” seems to be an understatement.

    Unofficially, it's known as the SNPs project (for single nucleotide polymorphisms, pronounced “snips”). The basic strategy, decided upon at last week's meeting, is to collect at least 100,000 single-base variations in human DNA donated by 100 to 500 people in four major population categories: African, Asian, European, and Native American. These variable sites will serve as landmarks for creating a new, very fine-grained map of the genome, says Collins, director of the National Human Genome Research Institute (NHGRI), which will help investigators track down elusive genes that cannot be found using family studies. The collection would also provide some data on variable forms of known genes, and it may breathe life into the beleaguered human diversity project, which was supposed to survey DNA variation in populations around the world, but has become bogged down in politics (Science, 24 October, p. 568). Most important, according to Collins and other advocates, the project should encourage researchers to adopt a common format and quality-assurance methods when collecting and reporting DNA variations, which should make the data more useful to other scientists.

    After organizers identify a reliable way of finding new SNPs, they hope to move quickly into production mode, collecting thousands of SNPs before they are locked up in a hodgepodge of other DNA collection schemes—many of them commercial (see sidebar). The effort is likely to cost $20 million to $30 million over the next 3 years—and probably tens of millions more after that. Collins has already received the green light from NHGRI's advisory council, and he says that 18 NIH institute directors have promised to share the costs. NIH is now providing seed money with funds from its current budget.

    Funding may not be the biggest problem, however. Concerns over how to protect confidentiality and ensure that DNA donors have given informed consent for the use of their genetic data could present a far more difficult set of obstacles. Indeed, there were some tense moments at last week's meeting—a gathering of an ad hoc working group of scientists, bioethicists, and government officials who are advising Collins on the project—over a proposal to use some existing DNA collections to kick-start the venture. Some felt that it would be improper to use these collections without going back to the original donors for their consent, yet without a ready-made source of DNA, the new project will be slow to get started.

    Why hurry?

    Collins first began promoting this project about 3 months ago (Science, 19 September, p. 1752), and it is already on the verge of getting under way. The main reason it is moving at warp speed, say scientist-advisers on the project, is that big academic labs and companies have jumped into genomic data collection in the past year, and U.S. genome project leaders want to make sure that they don't patent most SNPs before the smaller labs have a chance to use them as low-cost genomic mapping tools. Or, as one geneticist said: “It's a game of chicken between Francis and the companies.”

    Aravinda Chakravarti, a geneticist at Case Western Reserve University in Cleveland and the most prominent academic champion of the SNP project, co-authored with Collins a Policy Forum in Science last month in which they warned that if this effort doesn't get public support, much of the SNP data will be socked away in “private collections” (28 November, p. 1580). The information might then be subject to “a tangled web of restrictive intellectual property attachments … inhibiting many researchers from using these powerful tools.”

    Besides, says Chakravarti, the “cottage-industry style” of gathering such data is coming to an end. Today, much information about genetic variation is acquired haphazardly, as researchers collect information about polymorphisms in families or explore clinical data about a particular gene. For example, the intense focus on two genes linked to breast cancer (BRCA1 and BRCA2) has turned up hundreds of alleles, scores of which have been patented. But new technology may make it possible to sift thousands of small variations out of a collection of DNA in minutes, simply by using a mutation-sensing “chip” to scan a person's genome for anomalies. Such devices could enable companies to scoop up massive amounts of data on DNA variation. “When you consider the magnitude of what's coming down the pike,” says Chakravarti, “we will lose information if we don't combine it all in one place.”

    Companies that are generating their own SNP collections don't buy the argument for a big public investment in a database, however. “There's a lot of me-too-ism in this field right now,” says Gualberto Ruaño, a Yale University geneticist and founder of Genaissance Therapeutics Inc. in New Haven, Connecticut. His company is attempting to develop what Ruaño calls “personalized medicine” by identifying variant forms of important human genes—such as those that code for the estrogen receptor—patenting them, and designing drugs that conform to the particular molecular structures associated with the most common types of genes.

    Ruaño says it might be wiser for the public agency to “keep its focus” on completing the full sequence of the human genome. Later, he said, it can add variation data. Fred Ledley, president of another small company that's exploiting human DNA variation for drug development—Variagenics Inc. of Cambridge, Massachusetts—says, “We welcome efforts by the NIH to systematize sequence variation in a public database.” But, like Ruaño and other industry people, he says that to secure private investment he must continue to patent human genetic variations.

    That argument bothers Kenneth Weiss, an anthropologist at Pennsylvania State University in University Park, an adviser to Collins on the SNP project and a strong supporter of it. “For the good of science, [this information] should be made available as widely as possible so as many scientists as possible can think about it,” he says. Weiss personally opposes patenting any human genetic sequences.

    Although many academic scientists agree with that sentiment, some question the need to move ahead so rapidly with the SNP project. “This is incredibly hasty,” says one researcher who asked not to be identified. “Why can't they slow down and improve the science?” he asks, arguing that polymorphism data would be far more valuable if it were linked to detailed biomedical information about the donors. Such information (and permission to use it) would take more time to obtain.

    But Collins and his NHGRI staff say that they want quick access to the best DNA samples currently available, because this may prove to be the biggest hurdle to getting started. The grant money for analyzing the DNA won't begin flowing until October 1998. By then, NHGRI needs to have access to a large source of DNA from individuals of diverse backgrounds, ensure that donors have given proper consent, and place the DNA into immortalized cell lines. These issues occupied most of the working group's meeting last week.

    DNA on tap

    The strategy session, chaired by Chakravarti, came up with a scheme for sampling four major population groups, leaving it to a technical subcommittee to devise numerical targets. For example, evolutionary geneticists have shown that there is far greater genetic variation within African populations than in non-African populations, and also that certain alleles considered rare in European DNA samples are common in African samples. This reflects the greater age and diversity of the African gene pool. Yet publicly available DNA collections contain little African material; Native American and Asian contributions are similarly scant. As a result, say NHGRI staffers, it will be essential to collect DNA from a racially structured set of donors. Once the DNA has been sampled, however, all personal and racial data will have to be removed to protect privacy—diminishing the scientific value of the project, but bolstering its ethical foundation.

    Where those samples would come from—and how to ensure that donors have given appropriate consent and that their privacy is safeguarded—prompted the most intense debates. NHGRI staffers had set their hopes on getting a set of ready-made cell lines containing DNA from a broad sample of the U.S. population, created by the National Center for Health Statistics (NCHS). From 1989 to 1994, that agency's National Health and Nutrition Examination Survey (NHANES) collected blood from more than 17,000 representative individuals around the country to obtain a snapshot of the health of the U.S. population. Karen Steinberg of the Centers for Disease Control and Prevention later converted more than 8000 of these samples into immortalized cell lines. But Chakravarti warned that “it is not a done deal” that genome researchers would be allowed to use this material. The reason: Because DNA studies had not been foreseen, NHANES staff did not ask the donors for permission to put their DNA in a database.

    Managers of the NHANES data asked their human subject research panel for guidance on this issue more than a year ago. In September, the panel said it would be acceptable to run some health studies on the cell lines, but only if the samples were made completely anonymous by stripping them of all identifiers (Science, 21 November, p. 1389). Edward Sondik, director of the NCHS, has interpreted this ruling to mean that DNA data from these samples cannot be put in a database—even with anonymous identifiers—without consent, because a third party who knows the name and genes of an individual might still be able to ferret out unique genetic information.

    But the prospect of asking for fresh consent from donors concerned NHANES officials. As Diane Wagener of NCHS explained, they worry that if people receive a letter informing them that their DNA has already been immortalized in cell lines and requesting approval for hard-to-understand genetic studies, they might say, “I don't want to have anything to do with [NHANES].”

    As the cell lines seemed to become less accessible by the hour, an annoyed Collins declared that, after months of discussion, “I am very troubled to learn that there still doesn't seem to be a clear answer” about whether they can be used. After a coffee break, Sondik announced that 600 DNA samples that are not part of the primary NHANES set will be made available for the SNP project, and possibly a small fraction of the primary set, as well. Consent will have to be obtained from the donors, only 75% of whom are expected to be reachable through old addresses. But even with a strong response, the NHANES contribution will not be enough. For example, it is short on Asian and Native American DNA. NHGRI will therefore have to find other donors, perhaps among patients in ongoing NIH projects.

    The NHANES samples will, at least, allow the SNP project to get started. Now NHGRI staffers must lay out the technical parameters, the sequencing goals, deadlines, and cost limitations. They hope to complete all of this by January. Then, if the whole scheme doesn't run into a wall, the genome community will witness a brand-new competition in gene discovery.


    The Hunting of the SNP

    1. Eliot Marshall

    The government's plan to create a genomewide catalog of human DNA variation (see main text) represents a twist on the way big research projects like this usually get started. Often, government-sponsored research galvanizes industry, but this time the impetus is going the other way. The academics are worried that private companies wielding new technologies for scanning the genome will snap up the SNPs (single nucleotide polymorphisms) and patent them.

    Two projects in particular have caught people's attention. One is a 5-year, $40 million collaboration between biochipmaker Affymetrix Inc. of Santa Clara, California, two pharmaceutical companies, and geneticist Eric Lander's lab at the Massachusetts Institute of Technology's Whitehead Institute Center for Genome Research. They are creating a sensor that identifies 2000 or more SNPs. It's an electronic device coated with a preset DNA sequence that probes a test material, yielding a readout of variants in shaded dots (see photo). The output is easy to scan robotically. While it includes more genomic SNPs than have been screened in parallel before, the chip holds just a fraction of the 100,000-plus SNPs the National Human Genome Research Institute (NHGRI) is seeking.

    Last month, Whitehead staffer David Wang told a symposium at NHGRI that a prototype chip has proved its ability to find new SNPs. In one recent test, Wang said, it achieved a 90% accuracy rate. Affymetrix staffer Robert Lipshutz says the gadget will be ready for distribution next year.

    The second initiative is a $20 million to $43 million joint venture between the drug manufacturer Abbott Laboratories of Chicago and Genset of Paris, a company that acquired the gene-mapping resources of the former nonprofit Centre d'étude du Polymorphisme Humain. Genset has published few technical details, but genetics chief Daniel Cohen and other officials have sketched out the plan. They aim to identify 60,000 SNPs distributed over the entire human genome, patent them, and create a SNP map of the genome. Genset intends to sell this map to researchers who want to scan a person's genome for basic genetic studies or for drug research. Abbott wants specialized SNP maps to screen out patients who are less likely to respond to drugs in clinical trials because of the variant genes they carry. Many other companies are jumping into the SNP race as well.

    At the same time, academic labs are developing their own collections of SNPs and SNP-hunting systems. At Stanford, for example, David Cox is involved with one group, and Ronald Davis, Peter Oefner, and Luca Cavalli-Sforza are in another group, both of which are tinkering with fast methods to identify genetic variation. Pui-Yan Kwok of Washington University in St. Louis has been funded by NHGRI to look for 3500 SNPs in genomic data generated by standard methods. And Debbie Nickerson at the University of Washington, Seattle, is also refining standard methods. All are potential competitors for funding under the new NHGRI project, whose first goal is to develop an efficient way of locating new SNPs.


    Thirty Kyotos Needed to Control Warming

    1. David Malakoff
    1. David Malakoff is a writer in Bar Harbor, Maine.

    When exhausted delegates emerged from round-the-clock negotiations in Kyoto, Japan, last week with a global agreement to curb emissions of heat-trapping gases, some observers hailed the deal as a diplomatic miracle. Climate scientists say, however, that it will be miraculous indeed if the Kyoto pact—which calls for 38 industrialized nations to cut their emissions by an average of 5.2% from 1990 levels by 2012—even temporarily slows the accumulation of warming gases in the atmosphere. The cuts are too small and are likely to be overwhelmed by developing nations, they say. “It is a laudable and reasonable first step,” says Jorge Sarmiento of Princeton University, “but much deeper emissions cuts will be needed in the not too distant future if we are going to meaningfully reduce the rate of warming.”

    If approved by major industrialized nations such as the United States and Japan—and that is far from certain—the new Kyoto Protocol to the 1992 Climate Change Treaty could reduce emissions of six greenhouse gases, including carbon dioxide, methane, and nitrous oxide. The United States—the world's leader in greenhouse emissions, with 25% of the total—agreed to a 7% cut from 1990 levels by 2012, while the 15 nations of the European Union committed themselves to an 8% reduction and Japan to 6%. If the cuts are achieved, industrialized nations will reduce their collective greenhouse emissions to two-thirds of what they would be in 2012 without action, according to the United Nations.

    Even this significant reduction, however, won't prevent total global greenhouse emissions from rising, analysts predict. The cuts will be swamped early in the next century by increases in emissions from developing nations, such as China and India, which successfully resisted being bound by the protocol. For example, China, currently in second place, is expected to overtake the United States as the world's leading emitter of carbon dioxide within decades, as it burns more of its massive coal reserves. From 1990 to 2015, the U.S. Energy Information Administration predicts, carbon dioxide emissions from developing countries—including Russia and Eastern European nations—will nearly double, and will account for 58% of the global total even if industrial countries do not cut back.

    At that rate, the 1992 treaty will not achieve its major objective—stabilizing atmospheric concentrations of carbon dioxide—says Tom Wigley, a climate researcher at the National Center for Atmospheric Research in Boulder, Colorado. Currently, carbon dioxide levels stand at about 360 parts per million (ppm), up from preindustrial levels of 280 ppm. Computerized climate models created by Wigley and others suggest that concentrations will at least double by 2100 unless developing nations hold their emissions steady and industrial nations progressively reduce emissions.

    “A short-term target and timetable, like that adopted at Kyoto, avoids the issue of stabilizing concentrations entirely,” Wigley says. As a result, it will at best delay the predicted warming trend by just a few decades. Jerry Mahlman, director of the Geophysical Fluid Dynamics Laboratory at Princeton, adds that “it might take another 30 Kyotos over the next century” to cut global warming down to size.

    Still, says Sarmiento, “you have to start somewhere, and the protocol at least provides a framework for revisiting the issue as our understanding improves.”


    Funder Rejects RGO Rescue Plan

    1. Judy Redfearn
    1. Judy Redfearn is a science writer in Bristol, U.K.

    A business plan drawn up by the staff of the Royal Greenwich Observatory (RGO) to privatize their institution and thus save it from closure was rejected last week by the RGO's funder, the Particle Physics and Astronomy Research Council. PPARC said that the plan was too risky and costly, and that it threatened to turn the RGO into an unwelcome competitor for the new Astronomy Technology Centre (ATC), which PPARC is setting up in Edinburgh. The decision almost certainly means that the RGO will cease to exist in anything resembling its current form. “We are very disappointed indeed,” says RGO director Jasper Wall. Martin Rees of Cambridge University, Britain's Astronomer Royal, says he has lost confidence in PPARC's decision-making.

    Currently, both the RGO and the Royal Observatory, Edinburgh (ROE), provide technological support for Britain's ground-based astronomy program. But faced with a dwindling budget, PPARC has wanted to cut back on both observatories once construction of the twin 8-meter Gemini telescopes, in which the United Kingdom is an international partner, nears completion. To do this, PPARC announced earlier this year that the RGO would be closed and its technological capabilities, which are mainly in telescope design, would be transferred to the new ATC based at the ROE (Science, 13 June, p. 1641). The move was expected to save $4.1 million annually for the first 4 years, and $6.8 million annually thereafter.

    To try to save the RGO, the staff proposed setting up a not-for-profit company which would receive about half its revenue from PPARC for providing astronomical services, such as data archiving, not transferred to Edinburgh. A third would come from establishing a business to build small, robotically controlled telescopes for the international market in collaboration with John Moores University in Liverpool. And the remainder would be provided by PPARC grants for astronomical research. PPARC was also asked to invest $2.2 million to set up the company, but RGO staff estimated that it would save more than that by not having to pay for redundancies and broken contracts.

    Two committees set up by PPARC to look into the plan expressed concern about its financial viability and the element of competition it would pose to the ATC. The RGO would have retained some of its capacity to design telescopes and build charge-coupled device instruments, but PPARC's chief executive, Ken Pounds, says, “we would be retaining some of the technological activity in Cambridge that we want in Edinburgh.” The $2.2 million payment was also deemed unacceptable.

    The RGO has not given up and is seeking sponsorship or support from others. If that fails, it will cease to exist as a major institution, although its name might live on. Pounds says it might be linked in some way with astronomy research at Cambridge University, where the RGO is now, or it might be attached to a new museum for the public understanding of science in Greenwich. But the latter would be very unattractive to RGO staff, says Wall: This would be “theme parks instead of real work.”


    Panel Proposes Ways to Combat Fraud

    1. Robert Koenig
    1. Robert Koenig is a writer in Bern, Switzerland.

    Munich—Germany's main granting agency has taken a tentative step toward a more systemic approach to the problem of scientific fraud by floating a set of proposals developed by an international panel. The proposals, released this week, recommend ways for universities and research institutions to investigate alleged misconduct and foster ethical conduct, and suggest that grants be denied to organizations that do not adopt effective procedures.

    The proposals are the work of a 13-member “Self-Control in Science” panel created by the Deutsche Forschungsgemeinschaft (DFG), the country's main granting agency for basic research. The panel was formed in the wake of the country's most notorious scientific scandal in the postwar era, a case involving alleged falsifications in publications by two university professors.

    DFG President Wolfgang Frühwald said that he “had not expected that this group would end up so united in its approach to such difficult issues.” Frühwald anticipates “a vigorous discussion” on the idea of denying grants to universities that do not adopt satisfactory procedures when the DFG's governing body, the Senat, debates the panel's recommendations early in 1998. One question is whether the DFG has the legal authority to deny grants based on such criteria.

    The panel, which met only twice, took a first stab at several issues that have confounded inquiries in other countries, notably the United States. Among its 16 recommendations are:

  1. German universities and scientific institutions should name outside ombudsmen to hear “whistle-blower” complaints;

  2. Researchers and scientific publications should tighten co-authorship standards, eliminating “honorary co-authorships”;

  3. Primary research data used as the basis for publications should be preserved for at least 10 years; and

  4. Quantitative measures such as “impact factors” should not take precedence over qualitative assessments in decisions on grants and hirings.

    At the same time, the panel rejected the notion that Germany set up a separate government bureaucracy to investigate misconduct, as was done for U.S. biomedical research through the Office of Research Integrity, to carry out investigations that are not resolved by the institution. “No one supported that concept,” says Frühwald.

    Frühwald formed the panel last summer after two German biomedical researchers—Ulm University professor Friedhelm Herrmann and former Lübeck University professor Marion Brach—were accused of falsifying or manipulating data in some three dozen publications resulting from research at universities and a national research center. Herrmann, who denies committing or knowing of any falsifications, has been barred from DFG advisory boards and temporarily suspended from the university while he fights the state science ministry's disciplinary proceedings. Brach, who had admitted to falsifying data in “two or three cases,” lost her professorship at Lübeck.

    Meanwhile last month, Germany's most prestigious scientific organization, the Munich-based Max Planck Society—which operates 73 basic research institutes—adopted its own new regulations on handling misconduct cases. Max Planck will set up a new standing committee, headed by an outsider, that will investigate any misconduct complaints and recommend sanctions, which will range from a warning to dismissal. The final decision on sanctions will be made by the Max Planck president.

    “I hope universities or other institutions will look closely at our new rules and perhaps use them as an example,” says Max Planck President Hubert Markl. “Scientific misconduct is never wanted and never expected. But when it happens, you are apt to do things wrong if you haven't developed procedures on how to handle it.” In addition to these new procedures, Markl has asked the Max Planck scientific council to develop a new educational program that will “help sharpen the awareness” of ethics issues at institutes.

    The DFG also would like its panel's recommendations to reach a wider audience. It plans to send them to international science organizations, including the European Heads of Research Councils group, the European Science Foundation, and the scientific section of the G-8 organization of industrialized nations. “We hope each nation's scientific institutions [will] make use of these recommendations to examine their own rules,” says Frühwald.

    The only U.S.-based scientist on the DFG panel was Lennart Philipson, director of New York University's Skirball Institute of Biomolecular Medicine and a former director of Heidelberg's European Molecular Biology Laboratory. “It is healthy for the DFG, the Max Planck Society, and, ultimately, the German universities, to develop clear rules on how to deal with such issues,” he said, adding that he particularly applauds the idea of an ombudsman. “Because the hierarchy at universities and at research institutes in Germany is so strong, it is extremely important to have neutral boards to examine such problems.”


    B Cells May Propagate Prions

    1. Gretchen Vogel

    No one knows exactly what causes “mad cow disease” and related neurodegenerative conditions, such as Creutzfeldt-Jakob disease (CJD) in humans. But this uncertainty hasn't kept researchers from wondering how the agents that cause these diseases spread from the site of infection to the brain. New work by neuropathologist Adriano Aguzzi of the University of Zurich in Switzerland and his colleagues now suggests that B cells, a type of immune cell carried in the blood, play an important role in this propagation.

    In this week's issue of Nature, the Aguzzi team reports that mice lacking B cells are resistant to infection with scrapie, a sheep condition similar to mad cow disease, when they are inoculated with infectious material in areas outside the brain. If B cells are necessary for the disease to propagate, the authors reason, they may also carry the infectious agent.

    In Britain, where 20 young people have already died from a new variant of CJD, possibly originating in meat and other products from cattle infected with mad cow disease, the finding has sparked fears about the safety of donated blood. Experts on the diseases say that no case of CJD in humans has ever been linked to blood transfusions. But the news—which Aguzzi presented at a closed meeting in November—has prompted calls for hemophiliacs in Britain to receive blood clotting factors made by recombinant DNA technology instead of prepared from the pooled blood of many donors. It has also led to suggestions that blood banks should remove white blood cells from donors' blood.

    Nailing down how the agents that cause CJD and scrapie travel through the body has been difficult because researchers don't know exactly what to look for. Many believe that the agents are misfolded proteins called prions that propagate themselves, while others think an as yet unidentified slow-acting virus is to blame. But scientists showed as long ago as the 1970s that a wide variety of immune tissues can be infective, especially tonsils, thymus, lymph nodes, and spleen, when injected into animals' brains.

    To try to pin down what particular component of the immune system carries the agent, the Aguzzi team tried to infect a number of mouse strains that had been engineered to lack specific immune cells or molecules. They found that mice lacking T cells or the immune-system protein interferon γ were infected as easily as normal mice, but mice that had no B cells were resistant. Out of 27 such mice, none developed symptoms of scrapie after more than a year (up to 534 days), although at least four of them did show evidence of scrapie in their brains. All other mice developed scrapie within 8 months.

    Other researchers do not find the results particularly surprising. They note that the tissues previously shown to be infective are rich in B cells. More recent, unpublished work by neuroscientist Paul Brown of the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland, and Robert Rohwer of the Molecular Neurovirology Unit at the Veterans Administration Medical Center in Baltimore shows that both white blood cells and plasma from infected animals can transmit disease. And at a number of meetings Rohwer has reported that he and his colleagues have infected at least one hamster—out of 22 tested—with scrapie through a transfusion.

    The researchers caution, though, that the results only show that blood-borne transmission is possible in the laboratory, and say nothing about its likelihood in humans or animals. They point out that no human case of CJD has been traced to a blood transfusion. There is a “tremendous amount of epidemiology that all speaks against the possibility of blood-borne transmission of the agent,” Aguzzi says. Rohwer points out, however, that while those results apply to classic CJD, with which “we have been living since the very first transfusion,” the situation may be different for the U.K.'s new CJD variant.

    There's at least one indication that the immune system could play a bigger role in transmitting the new variant. While doctors have never spotted abnormal prion proteins in the tonsils of patients with classic CJD, in new variant patients, tonsils are “full of abnormal proteins,” Aguzzi says.

    Aguzzi suspects that the B cells in tonsils carry the prions. But it is not clear whether all B cells harbor the infectious agent, or if only a subset do so. Says Aguzzi, “There is still a tremendous amount of work to be done.” That's surely one of the few things on which all researchers in the field can agree.


    Deformed Frogs Leap Into Spotlight at Health Workshop

    1. Jocelyn Kaiser

    Research Triangle Park, North Carolina—When rumors of malformed frogs in America's northern Great Plains started circulating a few years ago, most wildlife scientists sang the same sad tune: The abnormalities, like the worldwide decline in amphibian numbers that gained attention in the mid-1980s, suggested an early warning of environmental deterioration. But soon after scientists began wading into lakes and wetlands to hunt for a cause of the malformations, the tune dissolved into dissonance. From similar field and lab data, three now hotly contested theories have emerged: Depending on who you listen to, the main culprit is either a parasite, ultraviolet radiation, or an environmental chemical.

    To help determine whether what's hurting the frogs could hurt people too, and to perhaps defuse some of the tension, the National Institute of Environmental Health Sciences (NIEHS) held a workshop here last week to take a critical look at data that have been debated in the press long before they could reach the scientific journals. “I have never seen a scientific or biological phenomenon grow so fast with so few publications,” says Stanley Sessions, an amphibian researcher at Hartwick College in Oneonta, New York. By the time the workshop ended, however, it was clear that far more questions than answers remain. Some frog researchers even argue that the alarm could be much ado about nothing: No one knows whether abnormalities are truly on the rise or if people have just become more assiduous at finding and reporting them. As Greg Hellyer of the Environmental Protection Agency's (EPA's) New England Regional Laboratory in Lexington, Massachusetts, put it, “Is it a [real] phenomenon, or is it just because we're looking?”


    Victim of new environmental scourge or of a mere predator?


    The hullabaloo began in August 1995, when some Minnesota schoolchildren found dozens of multilegged and misshapen leopard frogs in a farm wetland. A couple of years earlier, state officials had looked into a report of malformed frogs, but after news broke nationwide of the children's cache, reports of misshapen frogs began pouring in from all over. Scientists reported that in some ponds and lakes, the rate of malformations ran as high as 67% of frogs surveyed; overall, in Minnesota, Quebec, and Vermont, about 8% of frogs sampled (mainly northern leopard frogs, but also other species) appear to be afflicted with abnormalities—most often missing or malformed hind limbs (see chart).


    Among the first to weigh in with an explanation was Sessions, who had studied some salamanders and Pacific tree frogs with extra legs found in northern California in 1986. He and a colleague noticed that the amphibians' hind limbs were packed with encysted trematodes, parasitic flatworms that burrow into amphibians. The researchers found that when they embedded plastic beads—meant to resemble cysts—in the limbs of developing frogs and salamanders, the limbs would sometimes split and form two. Sessions thinks natural, sporadic peaks in populations of pond snails, the trematodes' primary host, may explain the apparent rise in deformed frogs.

    “I'm quite satisfied that the parasite hypothesis does a lot of the work for us” in explaining the extra limbs being reported, says herpetologist David Wake, director of the University of California, Berkeley's Museum of Vertebrate Zoology. But many experts are unconvinced, noting that abnormal frogs in Minnesota and Vermont are no more likely to have cysts than normal ones. Sessions suggests, however, that cysts may disappear as a young frog's immune system develops and eventually destroys the parasite.

    Other researchers have fingered a different culprit, the increase in ultraviolet B (UVB) light reaching Earth's surface because of the thinning ozone layer. UVB light, the theory goes, could be damaging amphibian embryo DNA, resulting in abnormalities during metamorphosis; it could also be transforming pesticides into teratogens, chemicals that can interfere with development and cause birth defects. Scientists at EPA's Mid-Continent Ecology Division in Duluth have reported at recent meetings that northern leopard frog embryos develop abnormalities—including deformed and missing limbs—after being exposed to about 30% of natural UVB levels for at least 24 days, according to EPA's Gary Ankley. “We wouldn't say it explains what happens in the field, but it's similar enough that we think it deserves further study,” he says.

    This theory was strengthened earlier this month when new findings suggested that UVB rays can indeed penetrate the murk of the average North American pond and cause developmental changes. A team led by ecologist Andrew Blaustein of Oregon State University in Corvallis reported in the Proceedings of the National Academy of Sciences that long-toed salamander eggs in pond water exposed to the sun are more likely than those shielded from UVB to develop abnormalities such as edema—fluid-filled areas under the skin—and bent tails.

    A third possibility is that certain watersheds may harbor an unidentified teratogen, presumably an environmental contaminant. Some scientists suspect retinoids, a class of chemicals, including vitamin A, that signal embryo cells to grow. Experiments have shown that retinoids can cause limb malformations in frogs. One candidate might be an insecticide such as methoprene, a retinoid mimic widely used to kill mosquitoes.

    A potential waterborne threat has grabbed the most headlines so far. Last September, NIEHS and the Minnesota Pollution Control Agency issued a press release detailing unpublished findings that water from two Minnesota ponds with abnormal frogs caused embryos of the African clawed frog to develop abnormally in the test tube. The findings were particularly alarming, as some Minnesota families draw their drinking water from the ponds. Although standard tests found the water safe to drink, the state offered bottled water to these families.

    Other scientists assailed these provocative claims. Some questioned the relevance of abnormalities in African frogs to those in their far-removed Minnesota kin. And EPA-Duluth scientists argued that the NIEHS team had overlooked a technical point: African frog embryos won't grow in ordinary water unless certain salts are added. The NIEHS group, they charged, had failed to correct natural ion imbalances in the Minnesota well water. When EPA scientists repeated the experiment after adding the ions to one site's water, they say, embryos developed normally.

    But at the meeting, the NIEHS's Jim Burkhart said his team also performed the assay after adding back the ions, and the embryos still developed abnormally. He and his colleagues also presented data at the workshop showing that when organic compounds and metals were filtered out of the Minnesota water, it caused fewer deformities. NIEHS toxicologist George Lucier says his group has now independently tested water from 10 affected sites and from 10 reference sites, all but one in Minnesota, and found that the deformities appear only in water from the affected sites. “It's something in the water,” concludes Lucier, adding that it's too early to say whether that something is natural—such as a plant steroid—or synthetic, such as a pesticide.

    Some observers wonder whether the scale of the problem has been overblown. Experts note that naturalists have been reporting five-legged frogs and salamanders for at least 250 years. Yet it is difficult to know how common these abnormalities were in the past because museum collections tend to keep only scientifically interesting or pretty specimens. Without long-term monitoring, “there's virtually no way to figure out what the background levels are,” says Smithsonian Institution ecologist Jamie Reaser.

    Indeed, some ecologists suspect that normal predation attempts could spawn high background levels of deformity. Reaser says that in her fieldwork, she often notices tadpoles damaged by aquatic predators: “chunks taken out of tails, eyes damaged, legs damaged.” Sessions adds that he has found that bullfrog tadpoles bred in cramped quarters will nibble off each other's limbs. As for the 1600-and-counting abnormalities confirmed since August 1995 and compiled so far by the North American Reporting Center for Amphibian Malformations, “I will not be surprised at all if it turns out this has been an enormous Chicken Little thing,” says Sessions.

    However, David Gardiner, a developmental biologist at the University of California, Irvine, disagrees. He recalls a family whose Minnesota farm he visited to collect water. As he was leaving, Gardiner says, a teenager remarked that his family is still waiting for an explanation. They are justifiably worried, he says, and “they'd like to figure this out.” But that answer may be a long time in coming. Solving this puzzle, predicts Wake—who thinks multiple causes are at work—will be “a scientific nightmare.”


    Qualified Thumbs Up for Habitat Plan Science

    1. Charles Mann,
    2. Mark Plummer
    1. Mann and Plummer are co-authors of Noah's Choice: The Future of Endangered Species.

    Santa Barbara, California—For 2 months earlier this fall, University of Washington, Seattle, grad student Amanda Stanley sifted through reams of documents—from dense scientific articles to denser federal reports—bearing on the fate of northern spotted owls, marbled murrelets, and other imperiled icons of the Pacific Northwest. Her task was to assess the quality of the science underpinning the state of Washington's 1996 plan to protect these species while allowing some logging on the state's land. Sound like a nightmarish grad school assignment? “I had no idea what I was getting into,” Stanley admits.

    The slogging was no mere academic exercise: Stanley was one of 106 grad students from eight universities who played a role in one of the most unusual ecological studies ever undertaken. Led by Peter Kareiva of the University of Washington and 12 other ecologists, the students last week completed an initial assessment of the science behind a divisive environmental policy tool: habitat-conservation plans (HCPs), agreements that allow developers to harm endangered species in return for specified efforts to protect habitat.

    A growing chorus of environmentalists and scientists has argued that many HCPs—usually drafted by scientific consultants hired by landowners—are flawed and effectively promote extinction (Science, 13 June, p. 1636). But at a meeting here last week at the National Center for Ecological Analysis and Synthesis (NCEAS), Kareiva's group offered a different view: HCPs are far from the junk-science giveaways to developers depicted by their harshest critics. Although the group also reported that the plans are frequently plagued by inadequate monitoring and a lack of key data, the analysis puts HCPs in a better light. “What did surprise me,” says meeting attendee Ron Pulliam of the University of Georgia in Athens, “was how well the science was used when it was available.”

    Critics rebuffed.

    HCPs did better than some assert at addressing habitat loss and other threats to species.


    Created by a 1982 amendment to the Endangered Species Act (ESA), HCPs are meant to resolve conflicts between conservation and commercial interests. Because the ESA generally forbids harming listed species, it has led to frequent clashes between property owners and the U.S. Fish and Wildlife Service (FWS), which administers the act. The amendment gives landowners some leeway by allowing FWS to issue a permit for activities that harm a listed species. To receive a permit, developers must devise an HCP that will, “to the maximum extent practicable, minimize and mitigate the effects” on endangered species. The amendment was rarely used during the Reagan and Bush years; by 1992, only 11 HCPs had been signed.

    That has changed. Since taking office in 1993, U.S. Interior Secretary Bruce Babbitt has championed HCPs, arguing that the amendment was the key to avoiding “environmental train wrecks” over endangered species. To make HCPs more attractive to landowners, Babbitt unveiled a “no surprises” policy 3 years ago, promising that “unforeseen circumstances”—for example, species being added to the endangered list that might trigger new protective steps—would generally not force modifications to existing HCPs. When a draft Senate ESA reauthorization bill with this change appeared last February, scientists and environmentalists were galvanized into scrutinizing the burgeoning HCP program, which now numbers more than 200 plans.

    Most observers didn't like what they saw. The opening salvo came last April from nine well-known conservation biologists, led by Dennis Murphy of the University of Nevada, Reno, who charged that “many recent HCPs have been developed without adequate scientific guidance.” A National Audubon Society task force drew similar conclusions soon after, claiming that a “disturbing number of HCPs are based upon questionable or risky scientific assumptions.” The Administration, meanwhile, had few data supporting its rosier view. At a meeting on HCPs last April sponsored by the Environmental Defense Fund (EDF), Babbitt asked scientists to offer improvements to these controversial compromises.

    One attendee was Frank Davis, deputy director of the NCEAS, an ecological think tank at the University of California (UC), Santa Barbara. The center has an unusual mission: Instead of sponsoring experiments, it tries to resolve important ecological questions by analyzing existing data (Science, 17 January, p. 310). Believing that a synthesis of HCP data fit its mission, Davis urged the center to support a study, which it agreed to do with the American Institute of Biological Sciences (AIBS).

    To lead the effort, they tapped Kareiva, who is well known for applying sophisticated mathematical modeling to ecological problems. Kareiva had no previous involvement in the politically charged subject of HCPs. “He was not identified with any one position,” says AIBS President Frances James, an ecologist at Florida State University in Tallahassee. “He fit the bill.”

    Kareiva soon discovered—to his shock—that FWS has no central repository of HCP documents. Getting data from far-flung agency offices, consultant firms, and environmental groups would be a major project in itself. Rather than ask senior ecologists to embark on a grinding paper chase, Kareiva decided to marshal an army of grad students: “grassroots science,” as he calls it.

    Last August, he contacted colleagues at eight universities—UC Berkeley, Florida State, North Carolina State, UC Santa Barbara, UC Santa Cruz, Virginia, Washington, and Yale—with a novel proposition: Would they conduct a graduate seminar that would produce a major analysis of an urgent scientific issue—by Christmas? The ecologists, he said, quickly “put up notices in the hallways,” attracting scores of eager students.

    After reconnoitering 206 HCPs, Kareiva's team of senior ecologists selected 44 for in-depth analysis in late September, choosing a range of sizes, locations, and landowner types. Next, they divvied the plans among the universities and created a form with hundreds of questions for assessing each plan. The questionnaire covered the plans' designs, not their implementation. The students, after becoming familiar with the HCP process, spent the rest of their seminars evaluating plans based on official documents, scientific literature, consultants' reports, expert opinions—whatever they could lay their hands on.

    They were asked to weigh the scientific evidence behind four key aspects of an HCP: the overall health of a species' population; the extent to which a species would be “taken”—harmed or killed—by an activity allowed under an HCP; how this taking would affect a species' long-term viability; and the actions prescribed to mitigate harm. They also evaluated any long-term monitoring, by the landowner or a government agency, spelled out in an HCP. “By the end of the seminar,” says Yale seminar leader David Skelly, the students “were pretty sophisticated” at evaluating HCPs.

    Questionnaires completed, the students converged on NCEAS earlier this month. Kareiva gave them less than a week to merge the data sets, produce descriptive analyses, summarize major findings, and begin writing papers for publication and a “gray literature” report as a source book for further work.

    On 9 December, the students presented their findings to a small, invited audience of biologists experienced with HCPs. Few attendees were startled to hear that the plans' authors had been hampered by a lack of data. For example, marbled murrelets are covered in several Pacific Northwest HCPs, yet there are few data available to estimate the number of birds harmed by an activity such as logging.

    The big surprise was how well the HCPs were perceived to have used available data. Two-thirds of the plans, the students judged, could reliably determine a population's health before implementation. About half the plans made a reasonable guess as to the harm the landowners would cause species (although the Kareiva team's study did not evaluate what actually happened). And half adequately gauged primary threats, such as habitat loss, to species (see graph above).

    Territorial gains.

    More than 200 HCPs are now in action, with another 200 in the works.


    But many plans had serious shortcomings. Nearly two-thirds were deemed “insufficient” to determine how the actions allowed by an HCP would affect species' viability as a whole, rather than merely the local population in the plan. In 60% of HCPs, the monitoring was inadequate, because either the plans were poorly crafted or the students couldn't tell from the documents whether the monitoring was sufficient. And again and again, the students lamented the absence of basic natural history on organisms covered by HCPs.

    The work groups uncovered a few horror stories; prime examples were two HCPs covering the Utah prairie dog. In both cases, property owners were allowed to build on habitat if they relocated the prairie dogs to public land. Scientists had long known, however, that relocation fails—only 3% of prairie dogs survive such a move. Still, FWS approved the plans because relocation was the agency's own strategy for recovering the species.

    Overall, the analysis offers some “counterintuitive surprises that will make us think,” says Mike Scott of the U.S. Geological Survey's Biological Resources Division, formerly the National Biological Service. He cites, among others, the puzzling finding that medium-sized HCPs—not the largest ones, which generally are the most elaborately prepared—appeared to have the best scientific grounding. In addition, the analysis found that the science underlying multispecies plans seemed as sound as that behind single-species ones, despite the former's greater complexity.

    Some observers, however, question the usefulness of an assessment by scientists with little hands-on HCP experience. “You simply cannot do this kind of evaluation if you have not been on the ground trying to put these plans together,” says Nevada's Murphy, the only HCP critic invited to the meeting. According to Murphy, HCPs “are essentially political documents” that should not be evaluated by questionnaire.

    To give other HCP critics and proponents alike a chance to review the analysis, Kareiva hopes to have it posted on the NCEAS Web site by the end of March. By then, Congress may have renewed its debate over ESA reform, and the students' data could “become a focus of analysis and discussion,” Kareiva says, “getting us away from silly ideology and storytelling.” But even if politicians ignore the data, EDF's David Wilcove believes the effort will have produced a benefit: “100 grad students and 20 professors who know a lot more about HCPs than they did before.” He adds, “I don't know whether they found the project inspiring or a never-to-be-repeated nightmare, but I hope they'll stay interested.”


    Plants Decode a Universal Signal

    1. Elizabeth Pennisi


    When winter's bitter tongue lashes south of the frost line, Florida vacationers can always head to a Caribbean island. Plants don't have that luxury. Palm trees and citrus groves have to tough it out right where they are, by activating a stress response that makes them more tolerant of cold or drought. But orange trees and tourists aren't as different as they seem. A team led by plant molecular biologist Nam-Hai Chua of Rockefeller University in New York City has identified a signaling molecule that helps trigger plant stress responses to low temperatures and drought. In so doing, the researchers may have opened a window on how animal cells regulate all sorts of things, from heartbeat to insulin secretion.

    Fitting the pieces.

    Working through an as yet unidentified receptor, the stress hormone abscisic acid (ABA) uses cyclic ADP-ribose to release calcium ions, activate a kinase enzyme, and turn on certain genes, as evidenced by luminescence in Arabidopsis.


    Plant biologists already knew that the stress responses are turned on when a specific plant hormone triggers a surge in calcium ions inside the cell. They did not know, however, exactly how the hormone does this. On page 2126, the Chua group now reports that a molecule called cyclic ADP-ribose is what relays the hormone's signal to the stores of calcium in the interior.

    Researchers already suspected that the same molecule helps to control calcium in animal cells. Cyclic ADP-ribose, says Anthony Galione, a pharmacologist at the University of Oxford in the United Kingdom, “looks like a ubiquitous messenger across the plant and animal kingdoms.” The Chua group's finding will help researchers pin down in animals how other signal molecules interact with cyclic ADP-ribose to control calcium. “It's the first definitive evidence of a significant role for cyclic ADP-ribose,” says Dale Sanders, a cell biologist at the University of York in the U.K.

    Besides tracing what may be a striking example of a single molecular mechanism conserved across the tree of life, the discovery could have practical payoffs. In plants, it might open the way to producing crops that resist stresses better, perhaps by engineering them to respond more quickly in tough conditions. And in humans, it could help investigators understand disorders ranging from heart arrhythmias to diabetes.

    Although the focus of the current work is plants, animal cells—sea-urchin eggs—provided the first evidence that cyclic ADP-ribose regulates calcium. Almost 10 years ago, Chua's collaborator Hon Cheung Lee, a cell physiologist at the University of Minnesota, Minneapolis, was studying the wave of calcium ions that touches off the embryo's early development. At the time, cell biologists thought that a compound called inositol trisphosphate (IP3) was the cell's internal calcium dispatcher. But Lee noticed that adding a substance called NAD could also stimulate a calcium wave. The NAD did not seem to be acting by triggering the IP3 signal, because the calcium buildup was slower than the rise seen when he added IP3, and it occurred even in cells that no longer responded to IP3. Puzzled, Lee hypothesized that something else—presumably a breakdown product of NAD—was freeing calcium from its internal stores. In 1989, he found that “something else”: cyclic ADP-ribose.

    At first, other cell biologists were skeptical. But time has proved that cyclic ADP-ribose does regulate calcium. “Since then, about 40 different cell systems from 14 different species—ranging from plants and protozoa to mammals—have been shown to be responsive to [cyclic] ADP-ribose,” says Lee. Yet no one had been able to place this calcium regulator into a specific pathway until now.

    Knowing that calcium ions are involved in plants' stress responses, Chua decided in 1995 to find out whether cyclic ADP-ribose was behind the increases. The team approached the problem by injecting the stems of tomato seedlings with either of two genes that respond to stress, or with a third gene that is activated by light. All three were attached to marker genes that enabled the researchers to monitor them when activated. As expected, injections of the plant stress hormone abscisic acid (ABA) turned on the two stress-responsive genes, rd29A and kin2, while the light-responsive gene remained off. Cyclic ADP-ribose by itself, without the hormone, could also activate rd29A and kin2. But inhibitors of cyclic ADP-ribose or of calcium-ion release prevented the genes from being turned on, even when the stress hormone was present. “We show that cyclic ADP-ribose mediates the [stress response] signal,” Chua concludes.

    The researchers also monitored kin2 expression in the tiny plant Arabidopsis thaliana. To trace the gene's activity, they linked the DNA sequence that regulates it to luciferase, the gene responsible for the firefly's glow, so that the firefly gene would be turned on whenever kin2 was active. They then introduced the hybrid gene into Arabidopsis. After adding ABA, the researchers watched the luminescence of the plant tissue rise to a peak after about 3 hours. The rise came shortly after increases in both cyclic ADP-ribose concentrations and calcium ions. “The [researchers] show a causal relationship,” comments Dierk Scheel, a plant molecular biologist at the Institute for Plant Biochemistry in Halle, Germany. “These experiments are wonderful.”

    At the University of York, Sanders has unpublished evidence that cyclic ADP-ribose triggers another stress pathway: the closing of the microscopic openings on the leaf surface known as stomata in response to drought. Working with Alister Hetherington at Lancaster University in the U.K., Sanders's team coinjected cyclic ADP-ribose and a dye that tracks calcium release into the guard cells, which surround the stomata and control their size. “We showed that not only did the guard cells close, but that [the closure] was preceded by an increase in calcium,” notes Sanders. The researchers also demonstrated that they could block both responses of the guard cells, even in the presence of stress hormone, by adding a cyclic ADP-ribose inhibitor.

    In animals, researchers have not yet worked out specific pathways to the extent the Chua team has in plants, but they are making progress. Over the past 2 years, Galione's team has shown that cyclic ADP-ribose helps control one calcium-dependent process: cardiac muscle contraction. Working with heart muscle cells isolated from the guinea pig, he has found that too little of the messenger can lead to inadequate contractions, while too much can make the heart beat out of control.

    Other work has implicated cyclic ADP-ribose in glucose-stimulated release of insulin from the pancreas. “There is the intriguing possibility that defective signaling could be one way diabetes arises,” says Galione. This messenger also seems to be part of signal-transduction pathways in the reproductive and immune systems, and in the regulation of smooth muscle cells by thyroid hormone.

    To be sure, calcium levels in cells respond to many other signals besides cyclic ADP-ribose. “There are actually multiple mechanisms for the releasing of calcium,” stresses Galione. In nature, these multiple mechanisms may cause calcium to be released from different storage sites and in different patterns, generating a calcium “signature” unique to each signaling pathway. Galione and Chua suggest that this may be how the cell can keep its signals straight even though the same ion, calcium, is involved in many different signaling pathways.

    Even for the plant stress pathways that depend on cyclic ADP-ribose, many details remain to be ironed out. Researchers do not yet know, for example, how the hormone signal leads to the activation of ADP-ribosyl cyclase, the enzyme that generates cyclic ADP-ribose. However, Chua's work provides a clue. He has found evidence that the activation step requires the addition of a phosphate group, possibly to the cyclase.

    Chua now hopes to pin down the other details of this pathway. “It has tremendous agricultural implications,” because it could be a key to designing hardier crops, he explains. But “in order to design methods to engineer plants to resist drought, you need to know about the signal to be transmitted.”

    Like Chua, Galione is working hard to identify all the components in the pathways between, for instance, the signal to contract and the heart contraction. Others are trying to understand the steps between glucose's arrival at a pancreatic β-cell and the release of insulin. And because molecular biologists are now able to make molecules that mimic or block cyclic ADP-ribose activity, Galione expects rapid progress. “We should really be able to nail these pathways down,” he predicts—and explore an unexpected similarity between plants and people.


    Black Hole Lurks in Miniature Quasar

    1. Alexander Hellemans
    1. Alexander Hellemans is a writer in Naples, Italy.

    Everything about quasars and active galaxies is outsized. These brilliant objects lie millions to billions of light-years from Earth and, at their centers, may harbor black holes with masses millions of times that of the sun. But astronomers are getting a closeup view of quasar behavior in a scaled-down model—a so-called microquasar in our own galaxy.

    Like its full-sized relatives, the microquasar is thought to contain a black hole—in this case just a few times more massive than our sun—and it shows similar violent behavior, sometimes ejecting high-speed streams of ultrahot gas. Because of the microquasar's small size, its jets and flares develop and die out much faster than those of larger quasars, in minutes or hours rather than millions of years. This fast-forward view of quasar behavior is yielding new clues to how quasars work, including new evidence for the reality of the black holes at their centers. As Michael Garcia of the Harvard-Smithsonian Center for Astrophysics puts it, “You can actually see things happen in a minute” in the microquasar. “You can't see things like that happen in a [full-sized] quasar.”

    A team led by Felix Mirabel of France's Atomic Energy Commission at Saclay and Luis Rodriguez of the Institute of Astronomy in Morelia, Michoacán, Mexico, discovered the quasarlike behavior 3 years ago. With the Very Large Array radio telescope in New Mexico, they observed jets of matter coming from a known x-ray source called GRS 1915, just 40,000 light-years from Earth. After other observers spotted radio outbursts from the same source, astronomers concluded they were seeing a system containing a small black hole—probably the legacy of a collapsed star—surrounded by an “accretion disk” of gas and dust dragged from a companion star. The interplay between the accretion disk and the black hole somehow powers the outbursts, just as in a full-sized quasar.

    Last month, Ralph Spencer of the University of Manchester in the United Kingdom and several colleagues followed the evolution of GRS 1915's jets for 2 weeks using MERLIN, a set of six electronically linked radiotelescopes spaced across England. They found, reports Spencer's co-worker Rob Fender of the University of Amsterdam, that “the jets appear to be generated in less than a day, and we see the material speeding out over something like 2 weeks.” Mirabel says that he observed the same event on 31 October with the Very Long Baseline Interferometer, which combines telescopes from Hawaii to the Virgin Islands. Once the data are processed, he says, “we will be able to see the blobs with a resolution equivalent to the size of the solar system.”

    By studying these fast-changing jets, astrophysicists are already gaining clues to their origin. In a paper scheduled to appear in the February issue of Astronomy and Astrophysics, Mirabel and his colleagues report that they traced jet formation by monitoring the microquasar at many different wavelengths—x-ray, infrared, and radio—sensitive to different parts of the accretion disk and jets. First, he says, “x-radiation from the inner accretion disk disappeared,” presumably because superhot material fell into the presumed black hole. Then a new x-ray burst indicated that the inner disk was refilling with new material, and a dozen minutes later the first hints of a jet appeared. The findings, he says, support a picture in which the spinning black hole ejects the jets whenever material is building up in the inner disk.

    The abrupt cutoff of the x-rays at the beginning of this process—often in a matter of seconds—is a strong hint that a black hole really is present, says Mirabel. The cutoff would not be so sudden if the material landed on the surface of a star, he says. Mirabel argues that the only explanation is that the globules of matter must suddenly be disappearing beyond the “event horizon” of the black hole. Garcia calls that picture “a very good model,” although not “absolute direct evidence” for a black hole.

    Still, says Martin Rees of Cambridge University in the U.K., further study of the microquasar is sure to cast more light on its shadowy heart and those of full-sized quasars: “In the next year or two, we [may] actually learn something about the nature of black holes.”


    Materials Researchers Pick Up the Pace of Discovery

    1. Robert F. Service

    Boston—Speed was a common theme among some 4400 scientists who gathered here from 1 to 5 December for the fall meeting of the Materials Research Society (MRS). Among the highlights: a rapid-fire approach for creating and testing hundreds of slightly different electronic devices simultaneously, a faster way to make high-temperature superconducting wires, and a quick approach to durable coatings.

    Shotgun Chemistry Takes Aim at Devices

    Combinatorial chemistry, the shotgun approach to synthesizing and testing hundreds or even thousands of slightly different compounds at one time, is already the rage in drug discovery. Over the last couple of years, the technique has also started to catch on in the materials science community as a way to speed the discovery of everything from new catalysts to polymers. At the MRS meeting, a team of researchers from Lawrence Berkeley National Laboratory in California reported taking the approach one step further by creating the first-ever combinatorial library of electronic devices, each made with a slightly different mix of materials that influenced its performance.

    For their library, the team—led by Berkeley Lab postdoc Ichiro Takeuchi, physicist Xiao-Dong Xiang, and chemist Peter Schultz—made simple charge-storing devices known as dielectric capacitors. By keeping the configuration of each capacitor the same while varying the materials used to build it, the researchers were able to compare the electronic performance of all the devices in the library and identify the most promising combination for further study. The new work “is looking very good,” says Randolph Treece, a chemist and device expert at Superconducting Core Technologies in Golden, Colorado. “Technologists will probably see this as a good screening approach” to evaluate a wide array of possible electronic materials quickly.

    Dielectric capacitors, which are the basis of a type of computer memory known as dynamic random access memory (DRAM), consist of a sandwich of two flat electrodes separated by a layer of insulating material known as a dielectric. The capacitors for the current generation of DRAMs are made with silicon dioxide as the dielectric, but it is not the most efficient of insulators. To create the next generation of DRAMs, which will be smaller but will have to store the same amount of charge, researchers must find a material that is a better insulator than silicon dioxide. One front-runner is a ceramic compound called barium strontium titanate (BST). But although this material is more effective, BST devices that have been filled with charge commonly have trouble holding onto it. So researchers have been exploring a number of approaches to reducing this current leakage, including adding impurity atoms, or dopants, to the BST. It is the effect of dopants on the performance of BST capacitors that the Berkeley Lab team studied with their library.

    To create the library, the researchers first laid down a heat-stable ceramic that served as the bottom electrode for all the devices on the chip. They then deposited the different BST materials on top, varying the concentrations so that the final capacitors were divided into three main groups, each having a different ratio of barium to strontium. Within each of these groups, they spiked the BST layer with varying concentrations of one of four different dopants, in the end producing a total of 240 different dielectric compounds. Next, they heated the entire chip to allow the component materials in each of the dielectric layers to mix fully; that step also caused the BST to align its crystalline lattice with that of the underlying electrode, a key to improving device performance. Finally, the researchers deposited a layer of platinum atop each device to serve as the top electrode.
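
    The combinatorics of the library can be sketched from the counts given above: three barium-to-strontium ratios and four dopants, for 240 compounds in all, imply 20 concentration steps per ratio-dopant pair if the splits are even. The short sketch below assumes that even split; the ratio labels, dopant names (which the team did not disclose), and step values are placeholders, not details from the report.

```python
from itertools import product

# Hypothetical enumeration of a 240-member combinatorial library like the
# one described above: 3 Ba:Sr ratio groups x 4 dopants x 20 concentration
# steps. The 20 steps are an inference (240 / (3 * 4) = 20); names and
# values are illustrative placeholders only.
ba_sr_ratios = ["ratio_A", "ratio_B", "ratio_C"]            # 3 Ba:Sr groups
dopants = ["dopant_1", "dopant_2", "dopant_3", "dopant_4"]  # 4 undisclosed dopants
conc_steps = range(1, 21)                                   # 20 concentration steps

# Every (ratio, dopant, concentration) triple defines one capacitor's dielectric.
library = list(product(ba_sr_ratios, dopants, conc_steps))
print(len(library))  # 240 distinct dielectric compositions
```

    Screening then amounts to measuring one figure of merit (here, leakage current) per library entry and ranking the triples, which is the efficiency the combinatorial approach buys.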

    To test the devices, the researchers applied a voltage across each one while monitoring the effects. The differences were readily detectable. “Most of the dopants didn't help,” says Takeuchi. But particular metal dopants—which the researchers decline to specify for proprietary reasons—“seemed to reduce the leakage current a lot,” as much as three orders of magnitude over other dopants. Down the road, Takeuchi says that the team hopes to improve the various metal-doped dielectrics further, as well as try out the combinatorial approach with transistors and other types of devices. If the strategy catches on, it could end up boosting the electronics industry's already brisk pace of advances.

    HTS Wires in a Hurry

    Wires made from high-temperature superconductors (HTS) are great for carrying electrical current without resistance at the relatively accessible temperatures of liquid nitrogen. But the current-carrying ability of the most common HTS material, a ceramic made of bismuth, strontium, calcium, copper, and oxygen, plummets in strong magnetic fields, severely limiting its use in applications such as electric motors and magnetic resonance imaging. New results reported at the MRS meeting should help overcome this problem.

    For several years researchers have investigated using another type of HTS ceramic, made from yttrium, barium, copper, and oxygen (YBCO), that is more resistant to magnetic fields. YBCO is brittle and difficult to form into wires, however, so researchers have been coating YBCO on flexible metal wires covered with an intermediate template layer, which is needed to control the growth of the YBCO (Science, 5 May 1995, p. 644). But the best scheme for producing the template layer is exceedingly slow, which has so far kept YBCO wires off the market.

    Now there is new hope for YBCO wires. Physicist Robert Hammond reported that he and his colleagues at Stanford University were able to kick the templating method into high gear by replacing the normal slow-growing templating material known as yttria stabilized zirconia (YSZ) with fast-growing magnesium oxide. “It's a major step forward that gives [YBCO]-coated conductors a fighting chance,” says Paul Grant, a superconductivity expert at the Electric Power Research Institute in Palo Alto, California.

    Like other high-temperature superconductors, YBCO is a crystalline ceramic that grows as a myriad of tiny crystalline grains. The orientation of these grains relative to one another controls how well electrical current passes through. Within each grain, YBCO is composed of layers of copper and oxygen atoms separated by layers of other elements. These layers are akin to stacked sheets of grid-lined graph paper. Because superconducting electrons travel most easily through the copper-oxygen sheets and along the grid lines, YBCO wire-makers have to align all the neighboring grains in the material in two directions. All the grains must lie flat, so that the copper-oxygen sheets in neighboring grains line up, and in addition, the grids of neighboring sheets should have more or less the same orientation to allow electrons to travel unimpeded in the same direction as they propagate down the wire.

    If left to their own devices, YBCO grains neither lie flat nor align their grids. But in 1993, a team of Japanese researchers got YBCO grains to do both by growing them on top of a thick YSZ template. They first had to pull off a trick with the YSZ grains themselves, however. Normally, these grains also grow randomly, but the Japanese team found that bombarding the YSZ layer with a beam of argon ions while it was growing resulted in grains that lie flat and have a common orientation. When YBCO grains are grown on top, they follow YSZ's example. Yet despite further improvements over the years, “ion beam assisted deposition [IBAD] is still too time-consuming,” taking hours for just short lengths of wire, says Hammond.

    Hammond's team improved matters by switching to magnesium oxide for the template layer. The key is that grains of this material prefer to lie down flat when grown. While IBAD is still needed to orient the grids of the crystals, the job can be done in less time, because as Stephen Foltyn, a YBCO wire-maker at Los Alamos National Laboratory in New Mexico puts it, “with magnesium oxide, you've already got half the battle won.” What's more, a much thinner layer of magnesium oxide seems to control the YBCO grain alignment, further helping to reduce the time needed for producing the template. Indeed, the Stanford team found that it could produce a template layer of magnesium oxide with the desired texture in a single minute, compared to the hours it takes to produce a YSZ template.

    When Foltyn and his colleagues at Los Alamos produced short wires of YBCO grown on top of magnesium oxide IBAD templates, they found that the wires could conduct only about one-third the current that a YSZ/YBCO combination has achieved. But Hammond points out that YSZ/YBCO wires actually contain other thin layers above the YSZ, such as cerium oxide, which act to improve the wire's conductivity. And it's likely, he believes, that a similar strategy could improve the magnesium oxide/YBCO wires as well. If so, IBAD may have a commercial future after all.

    Laser Process Coated in Mystery

    Hard coatings are vital for protecting everything from surgical blades and computer disc drives to the heads of golf clubs. But conventional coatings, painted or evaporated onto a surface, are often short-lived and eventually flake, chip, or peel away. In the last few years, however, a little-known company called QQC Inc. in Dearborn, Michigan, has come out of nowhere with a novel process that plays a set of lasers across steel and other materials to transform their outer layers into durable superhard coatings.

    Light work.

    Overlapping laser pulses vaporize atoms in making a superhard coating.

    QQC INC.

    Nobody knows for sure how the process works, as the technology has preceded the science. But an independent researcher, materials scientist Rustum Roy of Pennsylvania State University in University Park, has now taken a close look at some of QQC's metal coatings using a variety of techniques including cross-sectional scanning electron microscopy (SEM) and x-ray diffraction. The laser process, Roy reported at the Boston meeting, rearranges the atoms of the metal itself into new, high-strength configurations, and can blend added coating material into the underlying metal, atom by atom, to form an alloy that is less likely to degrade. Roy concludes that the technique “is a major discovery.”

    Already, QQC has shown that the technique can harden not only metal, but also ceramics, and even plastic. “It's so generic, everyone and his brother will use it,” says Roy. Prashant Kumta, a materials researcher at Carnegie Mellon University in Pittsburgh who led the MRS symposium that featured the new work, adds that unlike techniques for hardening a metal without applying a coating, which normally require very high temperatures, the lasers can easily be targeted to transform only desired areas, with nanometer precision. “It's not possible to do this with another technique,” says Kumta.

    According to QQC co-founder Pravin Mistry, the new metal-transforming process works by overlapping the light pulses from at least three types of high-powered lasers—excimer, yttrium-aluminum-garnet, and carbon dioxide—each of which produces a different wavelength of light. When scanned across a steel surface, for example, these lasers vaporize a thin layer of material, creating an electrically charged, superheated plasma of iron atoms. When the beams pass on, the atoms rain down again onto the surface and then bond together. QQC can add other materials to the coating by sending a stream of particles across the surface of the steel. The lasers break apart the particles, adding their constituent atoms to the plasma, which rains down to form the new surface.

    Roy analyzed the effect that this process had on a couple of different samples. The first was a standard fuel injector piece made from ferrosilicon, an alloy of silicon and iron, which QQC had coated with superhard titanium carbide. SEM and other techniques showed that the piece ended up with a top layer of graded titanium-carbide and ferrosilicon alloy, with two phases of steel underneath, both of which are harder than the conventional phase known as ferrite. First was a thin layer of martensite, and below this was a layer of amorphous steel. Creating just amorphous steel by itself, notes Roy, usually requires heating the part up to 1000 degrees Celsius and then cooling it instantly, a process known as quenching. “[Mistry] can quench it without quenching,” says Roy.

    Roy also looked at a hot forging punch, a cylindrical rod of coated steel used to stamp out wheel hubs for cars. QQC coats the steel rods with a layer of high-strength chromium cobalt alloy. In this sample Roy found that the laser transformation process creates a surface alloy of chromium cobalt and steel backed by a 2- to 3-millimeter-thick layer of martensite. The durability of the new coating, says Mistry, increases the industrial lifetime of the punches from a mere 1 hour of pounding out wheel hubs, to 12 days.

    At present, the new coatings are not inexpensive. QQC has spent about $15 million developing its laser setup. But Roy believes that if researchers can get a better picture of how the laser transformation method works, future versions of the setup may bring down the cost. Says Roy: “It's incredible what it's going to do.”


    Counting Creatures of the Serengeti, Great and Small

    1. Virginia Morell

    Serengeti National Park, Tanzania—It's just after dawn, and in the dense woodlands along the Grumeti River the birds are beginning their morning chorus. This is prime data-collecting time for ecologists A.R.E. Sinclair and Simon Mduma of the University of British Columbia (UBC) in Vancouver, and they bend their heads and cup their ears to better hear the bird calls. “That's two gonoleks singing their duet; the cooing is a green pigeon; and that harsh cry is a gray-headed kingfisher,” says Sinclair. Mduma and Sinclair's wife, Anne, jot down the species on their data sheets. Crouching low, Sinclair listens again, then creeps forward, following a hippo's trail through the dark stands of fig and African olive trees. Silently, he points to an oval patch of mashed grass and ferns where the hippo had rested, then raises his eyebrows in mock alarm at the bus-length crocodile stretched on a sandy beach across the river. But it's the birds Sinclair is after, and a few moments later when he first hears and then sees an olive-green bulbul, he smiles broadly. “They are rare and hard to see,” he whispers, “and so are exactly the kind of bird we're trying to find by creeping around like this. It's the kind of species most likely to vanish if we lose these forests.”

    Canopy count.

    Mduma is exploring the remote Serengeti.


    Counting olive-green bulbuls in a forest may seem odd in the Serengeti, an ecosystem best known for vast herds of antelopes and large carnivores racing across wide-open grasslands. Indeed, most scientific research in these famed East African plains has focused on grassland plants and the “glamour” animals—lions, rhinos, buffaloes, and cheetahs. Sinclair himself, who directs UBC's Centre for Biodiversity Research, has done several such studies since he began working in the Serengeti in 1965.

    Master surveyors.

    Sinclair and Mduma count Serengeti birds and small mammals, not just big cats and wildebeest.


    But even the man whom other researchers sometimes refer to as “Mr. Serengeti” says that his past work has not been comprehensive enough, and that it's time to pay attention to the other creatures of the park—the smaller mammals, birds, and insects—and their diverse habitats, such as this little-known riverine forest. With a small grant from the Natural Sciences and Engineering Research Council of Canada, Sinclair has just launched the first systematic biodiversity study of the park. “We need that baseline data: the variety of species, their numbers, habitats, and how all these are interrelated,” he says. “Without it, we have no way of knowing what generates the Serengeti's diversity, how the park is changing, or how people are affecting the environment outside the park.” For example, riverine forests such as this one along the Grumeti are shrinking, but no one knows why—whether it's part of a natural cycle, or due to human influence.

    Armed with new—if meager—funds from government research agencies, scientists worldwide are setting out on similar surveying missions in key areas. “There's still no part of the planet where we know every species,” says Joel Martin, program director for the U.S. National Science Foundation's (NSF's) biotic surveys and inventories. “And we're losing species and habitat at an unprecedented rate, which makes these kinds of surveys imperative.” But there are no set rules about the best way to tackle this enormous task. Some surveys, such as those funded by Conservation International, are quick-hit projects, with a team of scientists sampling as much as they can in a region in a 4-week period. Others target all the plants of a country, or a specific group of animals in a region and their parasites. A few projects try to identify every organism in a given area, although such all-taxa surveys are costly and difficult to coordinate. (See Science, 9 May, p. 893, on the breakup of one such planned survey.)

    For Sinclair, a simple approach is best. With two men and two Land-Rovers—plus a lifetime of natural history observations—he hopes to understand, if not count precisely, the densities, habitats, and cycles of the Serengeti's flora and fauna. Although some note that the shoestring approach may miss crucial information about, for example, all the insects, Sinclair and Mduma argue that the Serengeti Biodiversity Project is a more realistic model for the developing world than all-taxa surveys. “It's a tremendously exciting approach and just what we need more of,” says Peter Raven, director of the Missouri Botanical Garden in St. Louis, “because he's looking for cycles of change” and “setting this up with Tanzanians.”

    Money for such work wasn't available even in the Serengeti until recently, say Sinclair and others. “Everybody's idea of the Serengeti is a big acacia tree with a leopard hanging in it,” says Peter Arcese, an ecologist at the University of Wisconsin, Madison, who has also worked extensively in the Serengeti. “So that's where the grants went.” The emphasis on the big glamour pusses only began to shift around 1990, says Sinclair, when researchers began to hammer home the fact that Earth is now in the grip of a sixth great extinction, following the five recorded by fossils (Science, 21 July 1995, p. 347). Suddenly, documenting the remaining species became an urgent task, and funding agencies began putting up money to make it happen. The NSF's inventory program, for example, was launched in 1991; this year's budget is about $2.4 million. Conservation International had begun its Rapid Assessment Program 2 years earlier.

    Often, these projects focused on areas that had not been studied at all. But now, researchers are beginning to discover the scientific value of long-studied national parks such as the Serengeti, Kruger Park in South Africa, and Yellowstone, says Sinclair: “They're the places that hold a record of our natural world—and in about as pristine a condition as we can now find.” But even though they are protected, they continue to suffer decline—which makes documenting what's in them even more important, adds Keith Langdon, a biogeographer at Great Smoky Mountains National Park in Tennessee, who's coordinating a proposed survey there—the first ever all-taxa inventory of any U.S. park (Science, 12 December, p. 1871).

    Tackling a project of this size seems at first glance a logistical nightmare—particularly with a staff of two and only $60,000 a year. In the 14,673-square-kilometer Serengeti, habitats range from grasslands to various kinds of acacia forests to stony outcrops, and organisms range from a minimum of 28 acacia species to untold numbers of beetles. So instead of visiting each square kilometer, Sinclair and Mduma are studying samples of each habitat. And instead of collecting thousands of specimens and then enlisting dozens of taxonomic specialists to identify them, as is done in the all-taxa surveys, the pair is simply “starting with the species we know or can easily identify,” Sinclair says. Mduma is a small-mammal specialist, and ecologist Sinclair's first task as an undergraduate researcher in 1965 was to learn all 517 known bird species in the Serengeti by sight and song. As for the butterflies, the team is identifying them like any old-fashioned natural historian—by using a just-published field guide.

    Sinclair acknowledges that this approach won't document every organism from microbe to mammal, but it will give the big picture—and how that picture is changing (see sidebar). “A list [of species] by itself isn't that helpful for conservation purposes,” says Sinclair. “What we want to know is where do you find the species, what habitats are they living in, and how do these habitats change over time? The Serengeti isn't like a museum. It has cycles; it does change.”

    He and Mduma sample along transects—the main roads—at least once a year, spotting large birds such as shrikes, starlings, and doves. Smaller birds, such as warblers and finches, are identified by both sight and sound. Later in the survey, they will also set mistnets in the dense forests to catch more secretive bird species. For the small mammals, they will put out live traps and cover animal trails with wet sand to record tracks.

    Sinclair is also drawing on his 32 years of working in the park and making informal observations. He has a habit of jotting down sightings of particular species in a pocket notebook and photographing key areas annually, giving him a huge data bank to draw on. Anne Sinclair is even combing the letters the couple wrote home over the past 30 years, pulling out notes on animal and plant sightings. “It's not very high-tech,” says Sinclair, “but if you're diligent about it, in time, you can pick up patterns that you'd otherwise miss.”

    Among those patterns are the densities of species in their particular habitats and how these change over time. “Thirty to 40% of the park has changed its vegetation community in the last 25 years,” says Sinclair, “and that change should bring an accompanying change in the fauna.” For example, he and Mduma hope to identify healthy stands of forest with abundant animals and compare them to more fragmented, disturbed forests. “That may give us some idea of how much forest is necessary to maintain the species, and an inkling about why this fragmentation is occurring,” says Sinclair.

    Once they have detailed data on certain habitats, they can spot-check other examples of the same habitat to see if the diversity patterns hold true. Then they can get a sense of how the whole park works by putting it all together.

    The transect results, coupled with Sinclair's notebooks, are already turning up new long-term biodiversity patterns. For example, certain shrikes and thrushes have moved into the park, says Sinclair—ones “that I know for certain weren't here 30 years ago, because they are so visible. We don't yet know why this has happened.” Similarly, the visit to the forest along the Grumeti River turned up a small population of black-and-white colobus monkeys, the farthest west these monkeys have ever been seen. Again, at the moment, no one knows why.

    But species are disappearing, too, particularly in the riverine forests. One site had seemingly healthy populations of trogons and large-casqued hornbills in 1965. Now it has lost much of its tree cover, as well as many species of birds, including these two. “It's quite clear that some [bird] species will exist only in intact forests, ones with a complete canopy cover,” says Sinclair. “We want to measure how much canopy cover is necessary for maintaining species like those.” So far it seems that the canopy needn't be wide—perhaps only 50 to 100 meters—but it must extend for some distance along the river.

    Eventually, the duo plans to extend the faunal inventory to the park's insects, reptiles, and fish with the aid of specialists in those fields. But because of limited funds, Sinclair imagines that only a handful of people will be involved, so many organisms will still be left out.

    Tropical ecologist Daniel Janzen of the University of Pennsylvania in Philadelphia, who pioneered the idea of all-taxa inventories, cautions that to achieve his goals, Sinclair may need to identify more species. That means more people and much more money. “If you're going to invade Normandy, you're going to have to pile on the resources,” says Janzen. “It's OK to start out small like they're doing, but to do the full inventory requires a massive attack. And it's going to be expensive.” Without that full inventory, including, for example, the “gut flora of the buffalo,” Sinclair won't have the “full Yellow Pages” of the Serengeti, adds Janzen.

    Other experts, including James L. Patton, an evolutionary biologist at the University of California, Berkeley, who has begun the first small-mammal survey of the Amazon Basin, think Sinclair's more scattershot approach is feasible. “To get down to the soil microorganisms isn't always necessary,” says Patton. Besides, he adds, Sinclair's focus on the Serengeti's dynamism “as the key to its biodiversity is what makes this project so neat.” Raven, who is investigating the possibility of having his institution team up with Sinclair, adds that “Sinclair is gathering baseline data that will help people manage that park; you don't need every microbial organism for that.”

    The debate doesn't trouble Sinclair, who thinks that their count will take a minimum of 10 years and “in some ways, given the park's cycles, it will never be complete.” Nor does this open-ended aspect of his research worry him: “It's a record of what we see here today in the Serengeti. It's what I wish the first European explorers in East Africa had recorded. If they'd done this, we'd have a much better understanding of how the Serengeti changes over time—and a far better idea of how to preserve it.”


    Return of the Forest

    1. Virginia Morell

    Serengeti National Park, Tanzania—Back in 1980, when the acacia and bush forests of the Serengeti National Park were shrinking, ecologist A.R.E. Sinclair tried to take notes on the “last tree in the Serengeti.” “I remember climbing to the top of that ridge,” he says, pointing to a steep hill. “There was then one acacia growing at the very top.” The forest decline had been going on since before Sinclair started working in the park in 1965—and elephants were held responsible. “Everyone blamed the elephants, which was easy to do since you could see them eating the trees,” he says.

    Desert or forest?

    Although the Serengeti's acacia forests declined in the 1970s, the trees returned, as seen in photographs taken in 1980 (top) and 1991 (above).


    But now, in the late 1990s, Sinclair has a very different perspective on the comings and goings of Serengeti species. His long-term monitoring of the park's ecosystem (see main text) has persuaded him that rather than having a single cause, such changes are driven by complex interactions among such factors as the life-span of acacia trees, the numbers of wildebeest, and the influence of humans. The loss of forest—and its recent return—is a case in point.

    By the mid-1970s, some conservationists feared that the Serengeti was doomed to become a desert unless elephants were culled. “People thought the forests and bush were the way ‘pristine Africa’ was supposed to be,” recalls Sinclair. In fact, the “pristine” woodlands of the 1930s and '40s were new, themselves part of a larger trend. They had grown up after cattle belonging to the local people had been devastated by an outbreak of rinderpest disease in the 1890s—leading to mass human starvation.

    People burn grasslands, in part to create fresh pasture for cattle, and fewer people meant fewer fires. As a result, the acacia seedlings grew up, ultimately creating forests in the midst of the grasslands. But by the 1920s and '30s, the human population had begun to recover, although the Serengeti's vast wildebeest herds—also devastated by rinderpest—did not. Without the wildebeest munching their way across the plains, the grass grew tall, and people—from Masai herders to park rangers—once again began setting fires. Now, however, they set more and hotter fires to control the grasses.

    In the 1960s, several factors came together: A wet climate favored the grasses, so even more fires were lit, scorching new tree seedlings. The older acacias, which live only 60 to 70 years, began to die. And although elephants also fed on the young trees, Sinclair and his then-graduate student Holly Dublin did field experiments in the 1980s showing that fire—not elephants—killed off most of the seedlings. The result was a nearly treeless Serengeti.

    By the mid-1970s, the ungulates had at last recovered, following the establishment of a successful rinderpest control program in 1963. Because wildebeest were again cropping the grass, people slowly began setting fewer fires. Today, the park is dramatically different from what Sinclair saw when he first arrived 32 years ago. Numbers of buffalo and elephants are far lower due to heavy poaching (although elephants have been increasing since the 1990 ivory ban). The wildebeest population has soared to about 1 million; human-set fires are down to about a quarter of what they were—and the acacias have returned.

    Dense young stands of forest cover most of the park's hillsides and flank its grasslands. “That shows just how easy it is to get these things wrong,” says Sinclair. “The Serengeti didn't turn into a desert, but a forest.” And the “last tree on the Serengeti”? It's surrounded by 3-meter-tall acacias, standing limb to limb from the top of the ridge to its base.


    Cool Sounds at 200 Decibels

    1. Dana Mackenzie
    1. Dana Mackenzie is a science and mathematics writer in Santa Cruz, California.

    The loudest controlled sounds ever made by humans were produced earlier this month—not by a rock band, but by a physicist. At the Acoustical Society of America meeting in San Diego, Timothy Lucas of MacroSonix Corp. in Richmond, Virginia, demonstrated a new “acoustic compressor” that uses ultraintense sound waves to do the work of a mechanical pump. The technology may soon be used in everyday appliances such as refrigerators and air conditioners.

    Sound concept. Cycles of low and high pressure driven by sound can draw a fluid into a compressor (left) and expel it at high pressure.


    The idea of the compressor is simple: You shake a can back and forth to create vibrations in the air inside. Just as a child can produce huge waves in a bathtub by sloshing back and forth at just the right rate (a phenomenon called resonance), the air vibrations become especially intense if the can is agitated at a certain frequency. But the water in the child's bathtub will splash out if the waves start to crest. For acoustical engineers, the analogous problem is shock waves, which dissipate the sound energy as heat. By making his compressor just the right shape—essentially that of a bowling pin—Lucas was able to keep the shock waves from forming, even as the can vibrated at about 600 times a second.

    How loud are the resulting sounds? The pain threshold is about 120 decibels, and a jet engine produces 150 decibels. If you stand next to a sound of 165 decibels, it will ignite your hair. The sound waves inside Lucas's compressor are about 3000 times more powerful, or about 200 decibels. But because the can's own vibrations are much smaller than the vibrations of the air, on the outside it sounds just like an ordinary compressor.
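    The "about 3000 times more powerful" figure follows from the logarithmic decibel scale: a difference of D decibels corresponds to a power ratio of 10^(D/10). A minimal sketch of that arithmetic (the function name is ours, not from the article):

    ```python
    def power_ratio(db_high, db_low):
        """Power ratio implied by a difference in decibel levels.

        The decibel scale is logarithmic: a difference of D dB
        corresponds to a power ratio of 10 ** (D / 10).
        """
        return 10 ** ((db_high - db_low) / 10)

    # The article's comparison: 200 dB inside the compressor
    # versus a 165 dB hair-igniting sound.
    # 10 ** 3.5 is roughly 3162 -- "about 3000 times more powerful."
    print(round(power_ratio(200, 165)))
    ```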

    The intense sound waves oscillate between low and high pressure in certain regions; with the help of valves that open and close at the right moments, these pressure differences can suck gas into the compressor and shoot it out at high pressure. Lucas's compressor could be especially useful for refrigerators and air conditioners, which work by compressing a refrigerant—traditionally a chlorofluorocarbon. Steve Garrett, a physicist at Pennsylvania State University in University Park, explains that some of the ozone-sparing refrigerants now being used break down in the oil that lubricates a conventional compressor. But Lucas's compressor has no moving parts inside and therefore requires no lubrication. MacroSonix has already signed a licensing agreement with an appliance manufacturer.

    Other specialists in acoustics call Lucas's compressor a breakthrough. “What Timothy Lucas has done is shift the debate from whether acoustic compression can be done to who can do it better,” says Garrett. Lucas himself thinks his sound waves will ultimately find many other roles. “Electromagnetic waves have been commercialized for over 100 years,” he says, “but the commercial application of sound waves has only scratched the surface.”
