News this Week

Science  18 Aug 2000:
Vol. 289, Issue 5482, pp. 1118
  1. BIOTECHNOLOGY

    Perseverance Leads to Cloned Pig in Japan

    1. Elizabeth Pennisi,
    2. Dennis Normile

    Last March, PPL Therapeutics made international headlines when it announced, by press release, that it had finally succeeded in cloning pigs. At the time, the Scottish company attributed its success to a new approach—one that the scientific community is still waiting to read about in a peer-reviewed journal. But PPL wasn't the only group racing to overcome low success rates and the often-unpredictable results that have plagued other cloning researchers, particularly those trying to clone pigs (Science, 9 June, p. 1722). In fact, Akira Onishi, an animal breeder at Japan's National Institute of Animal Industry in Tsukuba, and his colleagues have managed to sneak under the wire ahead of PPL. On page 1188, they offer the first scientific report of a cloned pig, named Xena.

    “It's a huge success,” says Philip Damiani, a reproductive physiologist at Advanced Cell Technology in Worcester, Massachusetts, one that bodes well for the cloning field. “Most likely, there will be more pigs on the way,” adds Damiani—although it will be some time before pig cloning becomes routine.

    [Figure: A study in contrasts. Black as only a clone would be, Xena nursed from her white surrogate sow until she was ready for green pastures. Credit: Dennis Normile]

    Cloned sheep such as Dolly, who set off the cloning frenzy in 1997 (Science, 7 March 1997, p. 1415), are impressive scientifically and hold the potential to become bioreactors that produce human proteins for medicine. But pigs are an even hotter commodity: They promise an unlimited supply of organs for transplantation, the result of their close physiological relationship to humans. Thanks to Xena and the piglets cloned by PPL (whose work is forthcoming in Nature), xenotransplantation has moved one step closer to reality. But this week the field was dealt a blow by more evidence suggesting that pig retroviruses can infect human cells, fueling concerns about the safety of such transplants.

    Onishi's group produced Xena by blending the procedures used to clone mice with those used to produce clones of other livestock, such as Dolly. To make Dolly, researchers replaced the genetic material from a mature egg, or oocyte, with the nucleus from a cell of the ewe to be cloned. They activated development with electrical pulses, then implanted the embryo into a surrogate mother ewe.

    With pigs, getting a mature egg has been problematic, as has activating development. Furthermore, once implanted in a surrogate mother, a pig embryo has an even higher chance of failing than a sheep or a cow embryo, because its early fetal development requires at least three other embryos in the womb with it. “For the pig, it seems to be a real numbers game,” says Damiani.

    Onishi's team set out to improve the odds by doing trial runs with eggs that had not received donated genetic material. They found, for example, that eggs were more likely to become embryos if stimulated to divide by a single, strong pulse of electricity rather than by multiple, gentler shocks. They also found that mature eggs taken directly from a female pig worked better than the more readily available immature eggs coaxed to maturity in the lab.

    With these conditions established, they selected fibroblast cells from 24-day-old fetuses of a black breed of pig as the source of new DNA. They deprived those cells of nutrients for 16 days to shut down cell division and most gene activity—a step some cloning experts believe increases the chances of successful nuclear transfer. Their biggest change from standard livestock cloning was in the next step, nuclear transfer. To clone Dolly, the researchers fused the donor cell with an egg whose own genetic material had been removed. “A lot of researchers tried this with pigs but without success,” says Onishi. His team instead adopted the approach pioneered by a team in Hawaii to produce Cumulina, the first cloned mouse (Science, 24 July 1998, p. 495). Cumulina, and now Xena, were created by drawing the donor nucleus into a very fine needle and injecting it into the enucleated egg. “I'm thrilled to see that they basically repeated [the Hawaii team's] stuff in another species,” says Robert Wall, a geneticist at the U.S. Department of Agriculture (USDA) in Beltsville, Maryland. Adds Damiani: “It proves that microinjection can be used in large animals and livestock.”

    Onishi thinks that piezo-actuated microinjection, as this technique is called, worked in pigs where fusion had failed because it separates into two steps the insertion of new DNA and the reactivation of development. In cloning Dolly, researchers used an electrical shock both to fuse the donor cell to the egg and to activate development. Microinjection also ensures that very little extraneous donor cell material—material that could adversely affect development—winds up in the egg.

    Producing litters of pigs was yet another obstacle to overcome. An attempt to add cloned embryos of black pigs to pregnant white pigs resulted in only white piglets, indicating that none of the cloned embryos survived. The team then put about 110 cloned embryos into four surrogate sows that were not pregnant. One sow, which received 36 embryos, gave birth to Xena on 2 July. Not only was she black—born to a white sow—but independent DNA analyses confirmed that her DNA matched that of the donor fetal cells.

    “It's a big achievement,” says Hiroshi Nagashima, a molecular biologist at Meiji University in Tokyo, who had also been trying to clone pigs. Still, he says, getting one piglet from 110 cloned embryos could be “a matter of luck.” Onishi agrees: “More work is needed to refine the techniques and secure a higher success rate.” In the meantime, Onishi's successful use of an approach familiar to other scientists “will make people happy,” predicts James Robl, a reproductive physiologist at the University of Massachusetts, Amherst.

    In contrast, PPL's approach is a much greater departure from the norm, because the company added a second nuclear transfer to its protocol to produce its five piglet clones. The PPL researchers fused the donor cells (in this case, adult granulosa cells) with unfertilized oocytes whose own genetic material had been removed, following in the footsteps of the Dolly team. Like Onishi, they got their mature oocytes straight from a female pig. Then, in an added step, as soon as the transferred nucleus expanded, as it typically does, the PPL team moved that nucleus into a newly fertilized egg. Although they had first removed the egg's DNA and the DNA of the sperm, the egg was still primed for cell division. In this way, “they are using the oocyte as a temporary reprogramming vehicle” in which the oocyte enables the donated DNA to direct development, Robl explains. They then circumvented the need for artificial activation by inserting the nucleus into the fertilized egg.

    The jury is still out on which technique works best. What is clear, however, is that naturally matured oocytes gave both groups an edge over other would-be cloners who harvest immature oocytes from slaughterhouse pigs, says USDA's Kevin Wells. Yet neither PPL nor Onishi's team can really say they've nailed cloning for pigs. Instead, “what we're learning,” says Wells, “is that anyone who suggests they know the secret to cloning is naïve.”

  2. MARS EXPLORATION

    Plan for Two Rovers Squeezes NASA Budget

    1. Andrew Lawler,
    2. John MacNeil

    NASA's decision last week to send two rovers to Mars in 2003 is being hailed by researchers as affirming the agency's commitment to exploring the Red Planet. But once the applause dies down, cash-strapped space science managers will be forced to make tough decisions about how to shoulder the added $200 million cost of a second mission, starting with $96 million that must come out of NASA's 2001 budget.

    [Figure: Travel costs. Two martian rovers will mean more data—but who will pay the piper? Credit: JPL/NASA]

    Two failed missions in the past year have put the agency's martian strategy under intense scrutiny. So after NASA space science chief Ed Weiler initially approved a plan to send one rover in 2003, NASA Administrator Dan Goldin pushed hard for two, to improve the chances of success, say agency sources (Science, 28 July, p. 521). Favorable orbital mechanics in 2003—based on a planetary alignment that won't be repeated until 2018—provided an additional incentive. “This is just an unusually good opportunity to go to the surface of Mars,” says Cornell University's Steven Squyres, principal investigator for the mission's science program and chair of NASA's space science advisory panel.

    The rovers will have the same landing mechanism—a parachute and a cushion of air bags to soften the impact—as the famous Mars Pathfinder mission in 1997. But the new rovers will travel much farther from their landing sites—up to 100 meters a day, about the total distance covered by Pathfinder's rover—and carry a rack of new instruments, including a microscope and imager to analyze rocks, a device for grinding away the outer layers of rocks, and several spectrometers. Their primary job will be to understand martian geology and the possible role that water has played.

    But paying for all this will entail sacrifices. The White House refused a NASA request for additional funding in 2001, and Congress seems unlikely to approve a significant boost next month in the agency's $2 billion space science budget. Weiler's office will have to cut $20 million from other programs to cover the additional costs associated with the first rover and obtain another $76 million from other parts of the agency, in a plan still being worked out. The new Mars program will cost about $600 million total through 2003—$200 million more than originally envisioned.

    Weiler doesn't expect any major missions to be canceled as a result of the added Mars costs. But he acknowledged some difficult choices. “Do I try to keep all of the programs in the basket, and let launch dates slip,” he wonders, “or do some of the projects have higher priority, and [do I] let one drop?”

    Another agency manager says that “space science already has too much on its plate” and that big cuts are inevitable. A budget table presented by Weiler last month to NASA advisers notes “significant new content” that will grow substantially over the next 5 years. Astrobiology is slated to triple, the small Discovery missions are expected to double, and NASA's new Living With a Star program is supposed to grow from $20 million to $177 million (Science, 28 July, p. 528). Existing efforts, such as the Europa orbiter, face technical problems that will cost time and money to fix. And increased testing of new spacecraft will cost more money.

    Politics also plays a role: A new Administration could undo all these funding decisions. In the meantime, NASA is rethinking a mission to Pluto later in the decade, and some current efforts, such as the Extreme Ultraviolet Explorer, seem vulnerable. A more expensive Mars program only makes the situation worse. “There's no question that a second rover makes a lot of sense, but we don't know what the trade-offs will be,” says Bill Smith, president of the Washington-based Association of Universities for Research in Astronomy. “There could be a price to pay.”

  3. TUMOR ANGIOGENESIS

    Gene Expression Patterns Identified

    1. Jean Marx

    Researchers have obtained the most detailed sketch yet of how cancerous tumors secure the blood supplies that nourish their growth. On page 1197, a team led by Bert Vogelstein and Kenneth Kinzler of Johns Hopkins University School of Medicine reports the results of a large-scale comparison of the genes expressed in the blood vessels of human colon cancers and of normal colon tissue. They've found that the gene expression patterns of the two types of vasculature are distinctly different.

    Researchers had previously identified a few “markers” that differ between normal blood vessels and the presumably newly formed vessels of tumors, but nothing on the scale seen by the Hopkins workers, who picked up differences in the activity of nearly 80 genes. “It is an important paper,” says Erki Ruoslahti of the Burnham Institute in La Jolla, California, who studies tumor vasculature. “I found it surprising that there would be so many markers with such big differences.”

    Ruoslahti and other researchers interested in developing new cancer therapies are intrigued by the results. Philip Thorpe of the University of Texas Southwestern Medical Center in Dallas notes that the work could point the way to compounds that home in on the protein products of genes that are overexpressed in tumor vessels, in hope of shutting off the growth of blood vessels the tumor needs to survive. Angiogenesis researcher Judah Folkman of Harvard Medical School in Boston, who pioneered the idea of killing tumors by targeting their blood vessels, describes the paper as a “landmark.”

    Before achieving that landmark, the Hopkins team had to overcome a serious obstacle—learning how to isolate pure populations of the endothelial cells that form blood vessels. These cells are only one of many cell types found in tumor and normal tissues, and they are present in relatively small amounts. The task of sifting them out fell to postdoc Brad St. Croix, who spent 2 years on the problem. He eventually succeeded by using a protein called P1H12, which occurs mainly on endothelial cells, as a handle to separate out the cells. Angiogenesis researcher Noel Bouck of Northwestern University Medical School in Chicago describes the feat as a “tour de force.”

    Once the isolation method was worked out, Kinzler, Vogelstein, and their colleagues used a technique developed in their lab about 5 years ago to compare the gene expression patterns of endothelial cells from colon cancers and from normal colon tissue from the same patients. Called SAGE, for serial analysis of gene expression, the method involves making DNA copies of the messenger RNAs present in each type of cell, cutting a short nucleotide “tag” from each of these cDNAs, and stringing up to 50 tags together into longer DNAs for sequencing (Science, 20 October 1995, pp. 368 and 484).
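
    In essence, SAGE turns expression profiling into a counting problem: each tag stands in for one transcript, so a tag's relative frequency in a library estimates how active the corresponding gene is. Below is a minimal Python sketch of that counting logic; it is illustrative only, and the fold-change cutoff, the add-one smoothing, and the toy tag sequences are placeholder choices, not the significance tests the Hopkins team actually used.

        from collections import Counter

        def expression_differences(normal_tags, tumor_tags, min_fold=10.0):
            """Compare SAGE tag frequencies between two libraries.

            Each argument is a list of short sequence tags (strings), one per
            transcript observed. Returns {tag: tumor/normal frequency ratio}
            for tags whose relative abundance differs by at least min_fold.
            """
            normal, tumor = Counter(normal_tags), Counter(tumor_tags)
            n_total, t_total = len(normal_tags), len(tumor_tags)
            flagged = {}
            for tag in set(normal) | set(tumor):
                # Add-one smoothing so a tag absent from one library
                # doesn't produce a zero numerator or denominator.
                f_normal = (normal[tag] + 1) / (n_total + 1)
                f_tumor = (tumor[tag] + 1) / (t_total + 1)
                ratio = f_tumor / f_normal
                if ratio >= min_fold or ratio <= 1.0 / min_fold:
                    flagged[tag] = ratio
            return flagged

        # Toy libraries: one hypothetical tag is strongly enriched in "tumor."
        normal = ["AGCTAGCTAG"] * 50 + ["TTGGCCAATT"] * 1
        tumor = ["AGCTAGCTAG"] * 50 + ["TTGGCCAATT"] * 60
        print(expression_differences(normal, tumor))  # flags TTGGCCAATT

    A real SAGE analysis works the same way but at scale, tallying tens of thousands of distinct tags from the roughly 100,000 sequenced, and it judges each difference statistically rather than by a raw ratio.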

    The Hopkins team found some 100,000 sequence tags, representing more than 32,500 active genes, in both normal and tumor endothelial cells. The expression patterns of 79 of these genes were significantly different in the two cell types: 46 were substantially more active, and 33 were less active, in tumor cells than in normal colon endothelial cells. What's more, the researchers found similar expression patterns for many of the same genes in vessels from other tumors, including lung, brain, and metastatic liver cancers. “It's clear that the tumor vasculature is different from normal vasculature,” says Douglas Hanahan of the University of California, San Francisco.

    Many of the genes expressed at higher levels had not been previously identified. But others are known genes, involved in such activities as forming or remodeling the extracellular matrix—necessary events in the formation of the new blood vessels thought to be present in growing tumors. In fact, the results indicate that the tumor vasculature is very similar to newly forming blood vessels elsewhere, such as in healing wounds. “Virtually all the genes expressed in the tumor vasculature were expressed in neovasculature that was not neoplastic,” Vogelstein says.

    In theory, this could pose problems for drug development. Compounds that block the overactive tumor vessel genes might also disrupt normal blood vessel growth, or angiogenesis. But, says Kinzler, “I don't think it means that there won't be good targets for therapy.” Others agree. Hanahan notes, for example, that early studies of antiangiogenesis drugs haven't picked up any particular problems with normal angiogenesis, which in the adult is mainly confined to wound-healing and the growth of the uterine lining during the menstrual cycle.

    Still, researchers have a great deal of work to do to find out what all the genes identified by the Kinzler-Vogelstein team do in normal angiogenesis, and whether any will be suitable targets for drug therapy. But the Hopkins workers are likely to have lots of help. “This paper will no doubt set off a flurry of work by other investigators,” predicts Folkman.

  4. MARINE CONSERVATION

    Virginia Gets Crabby About Harvest Limits

    1. Erik Stokstad

    Virginia is at odds with other Atlantic coastal states over a plan to protect horseshoe crabs. Virginia officials have refused to accept a quota on their harvest, arguing that it's not based on good science. Now, the Department of Commerce has scheduled a moratorium, to go into effect next month, that is aimed at preserving what many believe is a dwindling population.

    The horseshoe crab is not really a crab at all, but a distant relative of spiders. Birdwatchers prize horseshoe crabs because their eggs provide nourishment for hungry migratory birds. Medical companies use crab blood to test injectable drugs for contamination. And the fishing industry uses crabs as bait for conch and eel. As demand from the growing conch fisheries has increased, crab harvest has skyrocketed, growing from 500 tons in 1990 to 3000 tons in 1997.

    The Audubon Society and other conservation groups fear that this demand may account for what appeared to be a sharp decline in horseshoe crab populations, originally noticed by volunteers in 1992. In response, Delaware and New Jersey officials in 1997 instituted stringent restrictions on their harvest. When fishers began landing their crabs in Maryland to avoid the restrictions, that state imposed its own 75% cut. That shifted the trade to Virginia, which now accounts for about a quarter of the harvest. In 1998, Maryland, Delaware, and New Jersey decided a coastwide management plan was needed, so they asked the Atlantic States Marine Fisheries Commission (ASMFC) to design one.

    The scientists charged with the task soon realized that there was a lack of good data on horseshoe crab populations, says Jim Berkson, a fisheries scientist at Virginia Polytechnic Institute and State University in Blacksburg who participated in the stock assessment committee. But they were alarmed by the still-increasing harvest, fearing long-lasting effects on a species that takes 9 to 11 years to reach sexual maturity.

    In February, ASMFC voted for a 25% reduction of the average harvest levels from 1995 through 1997. This was a compromise between the 50% cut desired by Maryland and other states and the status quo sought by Virginia. Virginia officials objected to what they said amounted to a 75% cut in what the state's conch industry needed. They argued that state laws require them to base their decisions on good science—which, they said, was absent here. State officials also argued that the problem needed to be quantified before a quota was established.

    That position didn't pass muster with the commission, which saw it as a delaying tactic. In May, it found Virginia “out of compliance” and asked the Department of Commerce to shut down Virginia fisheries for not adhering to the commission's quota. “The bottom line is that decisions are made with whatever information is available,” says Dieter Busch, director of ASMFC's Interstate Fisheries Management Program. Virginia's Marine Resources Commission has since cut the legal harvest in half, to 355,000 crabs. But that still isn't good enough for federal officials. Last week the Department of Commerce proposed a moratorium for September, the start of the fall harvest.

    Virginia hopes to convince the Atlantic commission at a meeting next week to ease its quota, and the fishing industry is watching closely. “We're hopeful,” says Rick Robins, who runs Chesapeake Bay Packing in Newport News, Virginia, the largest exporter of conch. “But we're prepared to seek an injunction,” he says, if the commission stands firm.

  5. AGRICULTURE

    Variety Spices Up Chinese Rice Yields

    1. Dennis Normile

    The results of Chinese field trials reinforce the accepted scientific wisdom that planting different varieties of a crop in the same field holds down the spread of certain diseases and improves yields. And this time researchers seem to have convinced farmers, too.

    Zhu Youyong, a plant pathologist at the Phytopathology Laboratory of Yunnan Province at Yunnan Agricultural University in Kunming, China, and colleagues report in the 17 August issue of Nature on a 2-year experiment that involved mixing two varieties of rice in the same field. Their work, involving thousands of local rice farmers, found an 18% rise in overall productivity, as well as greater profits for a premium-priced variety that is particularly susceptible to rice blast, a fungal disease.

    Most Yunnan farmers plant one variety of hybrid rice, with a few devoting some land to a more glutinous rice used for desserts and other regional specialties. Following Zhu's suggestion, however, farmers planted a single row of glutinous rice in the middle of a group of either four or six rows of hybrid rice. The experiment started on 812 hectares in 1998 and expanded to 3342 hectares in 1999. Monoculture control plots were grown at 15 small sites throughout the region.

    The results show the power of variety. Researchers calculated that it would take an average of 1.18 hectares of monoculture cropland to produce the same amounts of hybrid and glutinous rice produced in 1 hectare of mixed crops. The most striking change was for individual glutinous plants grown in a mixed environment: They yielded up to 89% more rice than their monocultural cousins. What's more, because the glutinous rice fetches a premium price, the value per hectare of the mixed fields was 14% greater than that of the hybrid monoculture plots and 40% greater than that of the glutinous monoculture plots. In both years, blast destroyed about 20% of the glutinous rice grain in the monoculture plots but only 1% in the mixed plots. Blast damage in the hybrid rice, although much lower in general, also dropped, with a grain loss of only 1% in the mixed plots versus 2.3% in monoculture plots. The damage from blast was so reduced in the mixed plots that farmers stopped their periodic fungicide spraying. “The farmers are very happy,” says Zhu.
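
    The 1.18-hectare figure is what agronomists call a land equivalent ratio (LER): the monoculture land needed to match 1 hectare of the mixture is the sum, over the two crops, of each crop's per-hectare yield in the mixture divided by its per-hectare yield in monoculture. As a sketch, with Y standing for yield per hectare (this account does not give the split between the two terms):

        \[ \mathrm{LER} \;=\; \frac{Y_{\mathrm{hybrid}}^{\mathrm{mix}}}{Y_{\mathrm{hybrid}}^{\mathrm{mono}}} \;+\; \frac{Y_{\mathrm{glutinous}}^{\mathrm{mix}}}{Y_{\mathrm{glutinous}}^{\mathrm{mono}}} \;\approx\; 1.18 \]

    Any LER above 1 means the mixture uses land more efficiently than growing the two varieties in separate fields.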

    Christopher Mundt, a plant pathologist at Oregon State University in Corvallis and a co-author of the paper, explains that different types of rice blast attack different varieties of rice. In a monoculture field of rice, he says, the blast can spread “like a fire through a field of dry grass.” The fungus has a harder time finding a compatible host in a mixed environment.

    Martin Wolfe, a plant pathologist and research director of the Elm Farm Research Center, an organic farming research center in Hamstead Marshall, Newbury, U.K., supports the approach but notes that the mixture must be tailored to local growing conditions. “This is a useful tool,” says Wolfe, who has written a commentary in the same issue. “But you can't just rush in and plant together anything you like.”

    The message from Zhu's study appears to be spreading through Yunnan Province, where this year 40,000 hectares were planted in the mixed pattern, he says. The payoff, he adds, is easy to measure for farmers: “more rice and more money.”

  6. MICROBIOLOGY

    A Weak Link in TB Bacterium Is Found

    1. Laura Helmuth

    Easily the most successful human pathogen in the world, the bacterium that causes tuberculosis infects one-third of the world's population. Often acting in deadly combination with AIDS, TB kills 2 million to 3 million people per year, more than any other infectious disease. The secret of the pathogen's success is that it can linger undetected in the lungs for decades, hiding from the macrophages that aim to chew it up and spit it out. Now a team of researchers has uncovered a vulnerability in this resilient bug that suggests new ways to starve it out of its bolt-hole.

    When Mycobacterium tuberculosis infects a person for the first time, it proliferates for a few weeks until the immune system marshals its defenses. The two then reach a stalemate, says John McKinney of The Rockefeller University in New York City, part of a four-institution team reporting its findings in the 17 August issue of Nature. This persistent state—the pathogen population doesn't increase, but the immune system can't get rid of the bacteria already ensconced—can last a lifetime, with the person suffering no obvious ill effects. But in 10% of those infected, TB will erupt into full-blown disease in response to various stresses or if the immune system is compromised.

    During its latent days inside macrophages, the bacterium is stuck with a restricted diet: It extracts carbon from lipids via the glyoxylate shunt, a metabolic pathway found in bacteria and plants. The TB bacterium also builds amino acids via the oft-memorized Krebs cycle, explains McKinney, but “we went after the glyoxylate shunt because it's the only [pathway the bacteria use for metabolism] not found in humans.” Working with William Jacobs Jr. at the Albert Einstein College of Medicine in the Bronx, he created a knockout strain of M. tuberculosis lacking isocitrate lyase (ICL), an enzyme critical for this pathway. Study collaborator David Russell of Cornell University in Ithaca, New York, discovered that ICL levels are elevated in M. tuberculosis during its latent phase. Normal TB bacteria burrow into macrophages in mice and make themselves at home indefinitely, but McKinney's altered bacteria, unable to produce ICL, were wiped out by the animals' immune systems.

    “One of the things we don't understand is how M. tuberculosis can sit around in tissue for years or decades,” says Jo Colston, an expert on microbial pathogenesis at the National Institute for Medical Research in London who was not involved in the study. “Obviously, if you can hit a protein that enables [the bacterium] to survive, that represents a potential therapy target.”

    McKinney and colleagues are searching for such compounds. In a second publication in the August issue of Nature Structural Biology, they describe the protein structure of ICL. They also identify two compounds that smother the active end of ICL and shut down the enzyme, thus preventing it from playing its part in the glyoxylate shunt. X-ray crystallographer James Sacchettini of Texas A&M University in College Station, a collaborator on both publications, says his group, working with research sponsor Glaxo Wellcome in the United Kingdom, will screen hundreds of thousands of additional compounds. Those that stymie ICL have potential to serve as drugs that can starve TB while it's hiding in macrophages, he says.

    The need for new TB drugs is urgent, McKinney says, as multidrug-resistant TB is on the rise. Current drugs swat the bug when it's replicating, by interfering with nucleotides or with the building of new cell walls, but they can't really harm the TB bacterium while it's inside the macrophage. As a result, even the most potent medications have to be taken for 6 months. Most people don't finish the whole course, a practice that promotes drug resistance. A treatment regimen that lasts only 2 weeks would greatly reduce that problem, says McKinney. But that will require a drug that kills M. tuberculosis while it's resting inside the macrophage.

  7. PLANETARY SCIENCE

    Newfound Worlds Hint at Hard-Knock Life

    1. Mark Sincell*
    1. Mark Sincell is a science writer in Houston.

    The art of planet hunting took a big step toward becoming a science last week, when three teams of astronomers announced nine newly discovered planets orbiting other stars. Headline grabbers from the meeting* included the lightest known “exoplanet,” the one closest to Earth, the one farthest from its parent star, and the second extrasolar system to contain more than one planet.

    To scientists, though, the real news was not novelty but numbers. As the roster nears 50, extrasolar planets are ceasing to be curiosities and turning into data. “They are coming in bunches now, and we can start to do real science,” says astrophysicist Debra Fischer of the University of California, Berkeley. The expanded sample offers scientists the first real evidence that stars are fecund breeding grounds for worlds. But it also hints that the planetary nurseries may be hellish places to grow up. Some astronomers now suspect that most of the planets they can detect are veterans of ancient combat against hordes of rivals for a handful of stable orbits.

    Until recently, astronomers had no clue that such celestial arenas existed. Inside our own solar system, planets trace out neatly spaced, near-circular orbits, with small, rocky bodies near the sun and cold, gaseous giants farther out. It's an arrangement that seems to have coalesced directly from the swirling pancake of leftover gas that encircled the newborn star. The first trace of mayhem came 5 years ago, when Michel Mayor and Didier Queloz of the Geneva Observatory in Switzerland spotted evidence of an exoplanet in the wavering light of 51 Pegasi. The planet was far too faint to be seen by telescope; picking an exoplanet out of the glare of its star is “like trying to find a firefly in the glow of a nuclear explosion,” says Geoff Marcy, a planet-hunter at the University of California, Berkeley. Instead, Mayor and Queloz looked for indirect evidence of tiny stellar wobbles triggered by the gravitational tug of the planet. As the star oscillates toward Earth, its light Doppler-shifts to higher frequencies, analogous to the rising pitch of an approaching train whistle. By timing the shifts in frequency, the Swiss astronomers could infer the shape and period of the presumed planet's orbit and estimate its mass.
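
    In outline, the inference runs from the measured wobble back to the planet. For a planet much lighter than its star on a roughly circular orbit, the standard relation of the Doppler method ties the wobble's velocity semi-amplitude K to the orbital period P, the stellar mass, and the planet's mass:

        \[ K \;=\; \left(\frac{2\pi G}{P}\right)^{1/3} \frac{m_p \sin i}{M_*^{2/3}} \]

    Here M_* is the star's mass, m_p the planet's, and i the unknown tilt of the orbit to our line of sight. Because only the product m_p sin i can be recovered, Doppler masses are minimum masses, one reason planet-hunters hedge their claims.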

    The planet they turned up was unlike anything astronomers had ever imagined. Its mass, half that of Jupiter, and circular orbit seemed ordinary enough. But the planet's location, a tenth as far from its star as Mercury is from the sun, was all but impossible to explain. Doppler searches of other stars added more “hot Jupiters” to the exoplanetary zoo, along with new oddities such as planets that hurtle along in narrow elliptical orbits or that circle their stars at dizzying speeds. And one star, Upsilon Andromedae, harbors three Jupiter-sized planets locked together in a fierce gravitational wrestling match.

    The nine newly unveiled planets continue the trend. In light from Epsilon Eridani, only 10.5 light-years from Earth, Marcy's team found evidence of a Jupiter-sized body with an orbital radius of 478 million kilometers, the largest of any known exoplanet. Around a star called HD 83443, Mayor's team turned up a second planet, half as massive as Saturn, making it the second multiple-planet system outside our own.

    Astrophysicists are still struggling to make sense of the extrasolar bestiary, but they will venture a few conclusions. For one thing, stars appear to be prolific planet factories, with lighter bodies outnumbering heavier ones; if the pattern holds, droves of even smaller planets are awaiting discovery. For another, multiple-planet systems are probably common. Right now, the 49 known exoplanets are scattered among 46 different stars. But at the conference, Fischer presented evidence—tiny leftover fluctuations in the data—that suggests many of the apparent loners might have partners in their cosmic dance.

    Some of those shadow partners could make the waltz of the planets look more like a mosh pit. The three planets orbiting Upsilon Andromedae, for example, are poised on the brink of chaos; if the system had even one more planet, interplanetary gravitational forces would tear it apart, sending planets shooting out into interstellar space. Astrophysicists say that such gravitational powder kegs are in a state called dynamic saturation. “If more systems are dynamically saturated, it probably means that planets formed in bunches and then shook out later,” Fischer says. The star-hugging orbits and tight ellipses of many exoplanets, she adds, may be the remnants of such violent shake-outs.

    So far, such ideas are still informed hunches. To confirm them, astronomers will need to track down more, and smaller, planets. That won't be easy, they caution. To detect Epsilon Eridani's relatively hefty planet, even at close range, astronomers had to scrutinize 20 years' worth of measurements of the star's spectrum and 34 years of star-spot observations. Star spots, the extrasolar equivalent of sunspots, are magnetic storms that roil a star's surface, creating cool regions that subtly alter the star's spectrum. On a young, magnetically active star like Epsilon Eridani, cycles of high and low star-spot activity can create a “false Doppler shift” that mimics or fuzzes the effect of stellar wobble. “I'd say there is a 75% chance that it is real,” Marcy says about the purported planet.

    The same problem stymies astronomers looking for low-mass planets. The solution is new methods and equipment, such as NASA's planned Space Interferometry Mission (Science, 25 September 1998, p. 1940) or the ground-based Large Binocular Telescope under construction in Arizona, which combine information from multiple telescopes to achieve the power of one much larger telescope. The Large Binocular Telescope will also incorporate flexible adaptive-optics mirrors to remove atmospheric distortions. Astronomers hope such devices will provide them with a firsthand glimpse of these rough-and-tumble planetary nurseries.

    • *International Astronomical Union 24th General Assembly, Manchester, United Kingdom, 7–18 August 2000.

  8. PHYSICS

    Will Livermore Laser Ever Burn Brightly?

    1. Charles Seife,
    2. David Malakoff

    The National Ignition Facility is supposed to allow weapons makers to preserve the nuclear arsenal—and do nifty fusion science, too. But a new report that examines its troubled past also casts doubt on its future.

    LIVERMORE, CALIFORNIA The easiest way to start a fusion reaction is to detonate an atomic bomb. However, the United States swore off thermonuclear weapons testing in 1992, leaving scientists to find other ways to create and study fusion reactions. The National Ignition Facility (NIF), a superlaser being built at Lawrence Livermore National Laboratory in Livermore, California, is designed to overcome that problem by using lasers, rather than nuclear explosions, to create a fusion reaction. At an estimated cost of nearly $4 billion, it's the most expensive single project in the Department of Energy's (DOE's) research portfolio.

    But NIF is over budget and way behind schedule. Originally, DOE officials estimated that the project, approved in 1993 and due to be finished in 2002, would cost about $2 billion. But not long after Secretary of Energy Bill Richardson announced at a June 1999 ribbon-cutting ceremony for NIF's target chamber that the project was “on cost and on schedule,” officials were staggered by a string of revelations that stunned supporters and critics alike. First, NIF chief Michael Campbell resigned after admitting that he had never finished a claimed Ph.D. from Princeton University. Then an angry and “gravely disappointed” Richardson announced that Livermore officials had withheld news of serious technical and managerial problems (Science, 10 September 1999, p. 1647). The cover-up eventually cost several Livermore employees their jobs—and lab chief Bruce Tarter his annual raise. Finally, after months of review and a sweeping reorganization, DOE officials concluded in June that the laser's costs would jump to $3.26 billion, and that completion would be delayed until 2008.

    The turmoil is far from over. Next month, just as DOE delivers a long-awaited final update on the project's cost overruns, congressional critics are likely to try to kill or cut back the project, the latest in a long series of attacks. Even some of NIF's scientific and political allies are beginning to talk openly of a scaled-down version of the original 192-beam design. More than laser science is at stake: NIF's demise could drag down the Clinton Administration's $4.5-billion-a-year stockpile stewardship program, which was sold as a way to maintain the nation's nuclear arsenal without testing. Loss of the laser could also gut the Livermore lab, which depends on the project to pay salaries and attract new talent.

    The critics have a new piece of ammunition: Last week, the General Accounting Office (GAO), Congress's investigative arm, completed a much-anticipated analysis of NIF's problems that finds plenty of things wrong. In a report to the House Science Committee, the GAO criticizes DOE and the University of California (which operates Livermore) for poor management and inadequate oversight. It also assigns the project a new, higher price tag of $3.89 billion. GAO chides lab officials for beginning construction of NIF's stadium-sized building before final plans for the laser were finished, producing a space that proved smaller than ideal. Beneath these administrative missteps, GAO says, lie engineering and physics challenges that are draining budgets. In addition to tighter fiscal reins, GAO recommends an “outside scientific and technical review of NIF's remaining technical challenges.”

    [Figure: Ultraviolet optics. Before striking the target, infrared beams are converted to ultraviolet rays, wreaking havoc on optical components; a better understanding of materials may be needed. Credit: NIF]

    Although public officials are likely to talk mostly about money and management in the coming battle, NIF will ultimately stand or fall on whether scientists and engineers can overcome the daunting technical challenges. So far, NIF backers are confident. George Miller, an associate director at Livermore who oversees the project, says the lab can handle whatever problems arise. But he admits there are no guarantees. “In science, there is no sure thing,” he says. “There are risks and uncertainties. We do our best to ensure that the risks are understood and acceptable.”

    Explosive science

    At its core, NIF is an expensive—and poor—substitute for an atomic bomb. After all, even a weak nuclear explosion can set off a healthy fusion reaction. In contrast, when NIF is fired up and ready to go, the world's most powerful laser will focus its stupendous energy on a BB-sized pellet of heavy hydrogen that might or might not fuse (see page 1128).

    That is because fusion reactions, unlike fission ones, are notoriously tough to start. Whereas fissile atoms, like uranium-235, break apart with the slightest nudge, deuterium and tritium need a huge kick to overcome their mutual repulsion and fuse, releasing energy. In the core of our sun, this kick is supplied by the collective gravitational attraction of hydrogen. On Earth, however, it's a much trickier thing to do.

    Without the luxury of nuclear tests, scientists have been forced to look for another way to study fusion. The most promising method, and the one with the most direct applications to weapons research, uses lasers instead of bombs to start a reaction. This is what NIF is meant to do. When fully outfitted, NIF's stadium-sized laser will shoot 192 beams onto the inner surface of a small gold cylinder, called a hohlraum, that is smaller than the cap of a pen. The intense laser pulse will flash-fry the gold so that it radiates x-rays. Those x-rays, in turn, will smash into a hydrogen-filled pellet in the middle of the hohlraum, causing the outer layer of the pellet to evaporate. Just as a rocket is driven upward by hot gas exploding out of its nozzle, the pellet will be crushed inward by the recoil as its own surface explodes outward in all directions. Ideally, the hydrogen inside will be crushed so densely that it will ignite.

    NIF packs nowhere near the energy of a nuclear weapon; the Hiroshima bomb, for example, was 30 million times more powerful than NIF's target energy. Still, NIF is incredibly potent by laser standards. Its 1.8 megajoules are nearly 60 times the energy of the previous laser record holders: NOVA at Livermore and OMEGA at the University of Rochester. Because all that energy is discharged in a few nanoseconds, the peak power will be 500 terawatts, more power than the entire world uses at any given moment.
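
    The power figure is just the pulse energy divided by the pulse length. Taking “a few nanoseconds” to be 3.6 nanoseconds, an assumed round value chosen so the arithmetic comes out exactly:

        \[ P \;=\; \frac{E}{t} \;=\; \frac{1.8\times10^{6}\ \mathrm{J}}{3.6\times10^{-9}\ \mathrm{s}} \;=\; 5\times10^{14}\ \mathrm{W} \;=\; 500\ \mathrm{TW} \]

    The total energy itself is modest, roughly the kinetic energy of a car traveling at 160 kilometers per hour; the staggering power comes entirely from releasing it in billionths of a second.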

    Although many take issue with the assertion, DOE claims that this brief flash of fusion energy will allow scientists to model the workings of the nuclear weapon's “secondary,” the fusion part of a hydrogen bomb. In DOE's eyes, this makes it a cornerstone of stockpile stewardship (Science, 18 July 1997, p. 304).

    Technical challenges

    Building components that can handle NIF's power, from laser glass to the tiny targets, is the challenge at the heart of the project's technical problems. Even the hardiest materials degrade, explode, or eventually fail when subjected to such energy densities. Any one of the laser's more than 200 capacitors, for instance, “can vaporize and turn into a gas,” as several did during tests at Sandia National Laboratory in New Mexico in 1998, says physicist David Smith of Sandia: “The shock wave breaks the insulation on the capacitor, which shoots out as shrapnel.” As a result, engineers had to redesign the capacitor's shielding, putting each capacitor in a three-eighths-inch (0.95 cm) steel shell with flapper doors at the bottom. When one explodes, the doors pop open and the debris sprays toward the floor.

    Producing laser glass in the necessary quantities and of sufficient quality is another formidable and costly task. Early attempts to produce the 150 tons of neodymium-doped glass needed for NIF were marred by moisture- and platinum-based impurities. The problems forced Livermore and the manufacturers—Schott Glass Technologies in Pennsylvania and Hoya Corp. in California—to redesign their manufacturing methods, with some success. After much effort, Schott recently produced more than a ton of glass that meets the proper specifications.

    Even with glass that's up to snuff, the slightest speck of dust can burst into flame and destroy components. As a result, NIF components are assembled in clean rooms and are toted around by robotic trucks with superclean interiors. After initial difficulties with cleanliness, the problem appears to be under control, but only by straining an already overtaxed budget and further extending the construction schedule.

    Other technical hurdles are equally daunting. The biggest is overcoming the so-called “three-omega” problem. (Omega is the symbol used by physicists to denote frequency.) The neodymium-doped glass produces light in the infrared region of the spectrum. According to John Lindl, a key NIF physicist, too much energy in the beam at long wavelengths produces a situation in which “you can't get light” to the target due to scattering effects. The energy is wasted and doesn't contribute to compressing the target. The problem is much smaller at higher frequencies, however, such as the ultraviolet region of the spectrum. Thus, NIF will use slices of enormous potassium dihydrogen phosphate (KDP) crystals to triple the infrared beam's frequency (the three omegas) into the ultraviolet range. This solution predates NIF, but producing the crystals at the required size forced NIF scientists to invent yet another manufacturing process.

    Unfortunately, ultraviolet beams are deadly to optical systems, especially at high power. As a result, these powerful beams of light cause small defects in the optics assemblies that handle the frequency-converted light, such as the lenses that focus the beam onto the target. These defects grow exponentially with each shot. At NIF's full power, the optics will have to be replaced every 50 to 100 shots or so.

    “The three-omega optics were designed to be replaced,” says Ed Moses, NIF's project manager. “The issue is how frequently you need to replace them.” At a rate of once every 50 to 100 shots, operating NIF at full power—the requirement for ignition—would be extremely expensive. “This does not mean that NIF does not work or will not work,” says Moses. “The question is how much will it cost to work?” He hints that an etching process might extend the optics' lifetime to about 1000 shots. But last year a NIF review committee suggested that the problem was so severe that it required a “significant materials science R&D program.” The GAO also thinks that the three-omega problem poses “… a major technical challenge …” and notes that “… there is currently no solution for this problem.” Moses doesn't think the problem is a showstopper, although he acknowledges its impact on the budget. “Again, it does not prevent NIF from working; it just doesn't meet spec with respect to the operational costs,” he says. “[But] we have several years to work it through.”

    Even if the optics function properly and last long enough to keep the project within its budget, there are more fundamental physics problems that might derail NIF. Many involve the last stage of a laser shot—the moment when the pulse of light strikes the hohlraum and the resulting x-rays strike the supercooled target capsule full of hydrogen. At that instant, it's important that the x-rays compress the target in an entirely symmetric fashion. Even a tiny asymmetry in the implosion will cause the contents of the BB to squirt out in all directions, rather than to compress and ignite.

    Tiny lumps on the target capsule's surface, uneven distribution of deuterium ice inside the target, and minute asymmetries in the incoming x-rays all create tiny ripples in the imploding plasma. As the capsule gets ever more compressed, those ripples get bigger and bigger. The result is a series of cold fingers invading the central core of hydrogen, causing the hot plasma to squirt in all directions. “We're basically trying to shrink a basketball-sized thing into a pea—it's a 30- to 40-fold convergence,” explains Livermore's Lindl. “It's an inherently unstable process. [The plasma] doesn't want to be squeezed at these high densities and high velocities; it wants to break up into a bunch of droplets.”

    Removing imperfections in the target eases the problem. “The outside of the [target] has to be smooth to within 50 nanometers on a millimeter-scale object,” says physicist Steve Haan of Los Alamos National Laboratory in New Mexico. Scaled up to the size of Earth, that tolerance would allow bumps no taller than a large building. The layer of deuterium and tritium ice on the inside has to be similarly smooth, to within about one micrometer.
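
    The Earth analogy follows from simple proportion. Assuming a 1-millimeter capsule and Earth's mean radius of about 6.4 × 10^6 meters (both round numbers chosen for illustration):

        \[ \frac{50\ \mathrm{nm}}{1\ \mathrm{mm}} \;=\; 5\times10^{-5}, \qquad 5\times10^{-5} \times 6.4\times10^{6}\ \mathrm{m} \;\approx\; 320\ \mathrm{m} \]

    That is about the height of a skyscraper; by the same proportion, the roughly 1-micrometer tolerance on the ice layer corresponds to terrain features of about 6 kilometers, the scale of Earth's tallest mountains.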

    Happily, the laws of physics offer some help. It turns out that a deuterium-tritium ice layer smoothes itself out. Because tritium is radioactive, it constantly emits electrons that warm the surroundings; the more ice there is in a region, the hotter that region gets, so ice sublimes away from thick, warm spots and redeposits on thin, cool ones. “It naturally produces a nice spherical shell,” explains Haan. “It's really pretty cool, like a gift from nature.”

    But what nature gives, it can also take away. NIF's target capsule will be made of plastic or doped beryllium, but each type has its problems. Plastics, such as polystyrene, are relatively easy to manufacture to the required smoothness—and they allow scientists to see the layer of hydrogen ice inside so they can check the quality of the fill. Unfortunately, plastics aren't very good for the implosion itself, because they don't ablate, or vaporize, very well.

    Beryllium is much better, especially when a small amount of a heavier metal such as copper is added. But the metal is opaque, so it's impossible to assess whether a beryllium target has been filled properly. Worse yet, it's difficult to make a perfectly round and hollow beryllium sphere.

    “We could not, this instant, build a target that meets all of the NIF specs,” concedes Livermore's Miller. “But we have demonstrated the technology that we're confident will allow us to make those targets when we need them.”

    Political realities

    In the wake of NIF's problems, however, such assurances are less than totally convincing to the members of Congress who must approve NIF's funding. Spending panels in both houses have so far declined to give NIF the extra $150 million that DOE officials have requested to keep the project from falling ever farther behind schedule. Instead, they are waiting for the results next month of what DOE bills as an “exhaustive” review. And some lawmakers, including Senator Harry Reid (D-NV), have already said that NIF's poor track record has made them more stingy.

    At a budget hearing last month, Reid tangled with Senator Dianne Feinstein (D-CA), who argued that any shortfall would trigger layoffs and harm recruiting. That debate is likely to be repeated next month, when the full Senate debates DOE's funding bill. With the added ammunition of the GAO report, several senators are rumored to be preparing amendments that would kill the project outright. But aides predict that NIF supporters will eventually prevail. “There is a great hesitation to imperil stockpile stewardship and Livermore, even though many members don't like what's happening with NIF,” says one House aide.

    One potential hitch, however, involves how DOE proposes to pay for NIF's overruns. If NIF takes too much money away from other parts of the stockpile stewardship program, for instance, it could anger Defense Department officials and scientists at Livermore's sister labs—Sandia and Los Alamos—in New Mexico. That might lead Senator Pete Domenici (R-NM), a powerful player in lab budgeting, to take a dim view of NIF. So far, however, DOE has not publicly shared its funding plans, and Domenici has expressed only “concern” about NIF's future.

    Concern about NIF may spread once the community digests the GAO report. Agency investigators, for instance, fault the lab for shunting aside fundamental scientific questions in favor of short-term engineering and construction fixes: “The laboratory [focused] too much attention on meeting construction goals at the expense of conducting and integrating necessary research and development solutions,” the report concludes. Outside scientists agree. “Instead of pouring glass, they should be poring over data,” says Bedros Afeyan, a plasma physicist at Polymath Associates, an independent consulting firm in Livermore.

    Amid the uncertainty, Livermore chief Bruce Tarter is upbeat. “If we can get through the [DOE] review and get an initial OK by Congress to at least proceed,” says Tarter, “… then I think I'm pretty—no, I'm very—confident that we're going to succeed.”

    But success may be hard to define. In 1995, Livermore envisioned carrying out six experiments a day at NIF; now they hope for two. And even achieving ignition is far from assured. “Half the value of NIF is in ignition,” says Miller. “For me, if it doesn't get ignition, that will be a major disappointment.”

    Even without achieving ignition, however, NIF is expected to reveal properties of hydrogen at high temperatures and pressures that should be useful to astrophysicists as well as weapons designers. “NIF will provide good science,” says Sandia physicist Rick Spielman. “It can do some really, really slick stuff.” But without ignition, NIF won't be able to look at even higher pressure and temperature regimes that are critical to nuclear weapons designs. “It would have a significant impact” on the weapons program, says Spielman.

    Miller agrees that if NIF fails to achieve ignition it could jeopardize political support for stockpile stewardship. “If we're not good enough to do NIF,” he asks, “why should [Congress] believe us when it comes to stockpile stewardship?”

  9. PHYSICS

    Will NIF Live Up to Its Name?

    1. Charles Seife

    The National Ignition Facility (NIF)—if it works as advertised—will pump almost 2 million joules of laser energy into a BB-sized pellet of hydrogen. Given all of NIF's problems (see main text), that feat would be a significant achievement in itself. But the bigger question is whether the result will justify the facility's name: Will the pellet actually achieve ignition—that is, undergo a sustained nuclear reaction that gives off as much energy as was put in? Most scientists on the project are cautiously optimistic. But others, some of whom are reluctant to speak publicly, have grave doubts.

    “From my point of view, the chance that this reaches ignition is zero,” says Leo Mascheroni, a laser physicist formerly at Los Alamos National Laboratory in New Mexico and an outspoken critic of the project. “Not 1%. Those who say 5% are just being generous to be polite.” Sandia National Laboratory physicist Rick Spielman sees it differently, however. “There is no unified position, and Leo is on one fringe.” Not surprisingly, George Miller, NIF's associate director, stands at the other end. “When the last national-level study was done [in 1997], I think people put the odds [of achieving ignition] at better than 50-50. And in the intervening years, we have actually learned stuff that makes us more optimistic.”

    Why the great uncertainty? One reason is that scientists have very little data on how hydrogen behaves at the temperatures and pressures that NIF will achieve. It's an environment less extreme than the interior of a nuclear weapon, but well beyond the conditions other lasers can create. Without testing data, scientists must use calculations and computer simulations to determine whether a capsule will ignite. “To keep the codes honest, you need to continually [gather data],” says Bedros Afeyan, a plasma physicist at Polymath Associates, an independent consulting firm in Livermore, California. “It needs some benchmark with reality.”

    One yardstick that does exist is clouded in controversy. In the 1970s and '80s, scientists at the Livermore and Los Alamos labs conducted a series of classified nuclear tests dubbed Halite-Centurion. Some of these tests involved sets of gold tubes, called hohlraums, with hydrogen pellets inside. After being bombarded with x-rays from nuclear explosions, the hohlraums reradiated x-rays that, in turn, crushed the hydrogen pellets. The hohlraums received varying amounts of energy—from tens to hundreds of megajoules, much greater than the NIF laser could ever dump on a target. But even with that much energy driving them, 80% of the capsules failed to ignite, says Mascheroni. Worse yet, he says, the failures couldn't be predicted with the computer codes that simulate nuclear explosions—and NIF dynamics.

    Again, others dispute his claims. Because the experiments are classified, “I can't comment in detail,” explains John Lindl, a NIF physicist, “but we learned what we set out to learn.” Miller says that Mascheroni's analysis is flawed because it's based on his work with hydrogen fluoride (HF) lasers, which operate at a much longer wavelength than the neodymium-doped glass lasers of NIF. “Sure, if NIF were an HF laser, it wouldn't get [ignition],” Miller says. The different frequency of HF lasers alters the laser's effectiveness; furthermore, Miller says that Mascheroni's calculations also ignore the “exquisite pulse shaping” that controls when the energy gets dumped into the capsule. “All these details matter,” Miller adds. Mascheroni, in response, says he's taken these effects into account, and that pulse shaping won't have a dramatic effect on performance.

    Another complication is that the amount of energy needed to achieve ignition has been a moving target. In the 1970s, Department of Energy (DOE) scientists predicted that their experiments would achieve ignition at 1 kilojoule. Over the years, according to Chris Paine, an analyst at the antinuclear Natural Resources Defense Council, that estimate grew steadily to 5 kilojoules (Livermore's Argus laser) and then to 200 kilojoules (the original design of the NOVA laser at Livermore) before reaching the current level of 1.8 megajoules. But in every case, an unexpected problem prevented ignition. “Given the state of knowledge in the late 1970s,” Lindl confesses, “it looked like NOVA would be able to do ignition.”

    If all goes well, NIF is expected to reach ignition in 2010, about 2 years after it begins full operation and some 7 years after the target set by Livermore officials in 1995. Even such a belated success would be a surprise to some—and a huge relief to Livermore.

  10. EARTH SCIENCE

    Did Volcanoes Drive Ancient Extinctions?

    1. Richard A. Kerr

    Episodes combining climatic warmth, massive volcanic eruptions, oceanic anoxia, and bursts of methane may lie behind major extinctions

    Massive extinctions involving many or most existing species, although not much fun for the organisms involved, are a boon to paleontologists, who use them to mark the passing of geologic time. For most of paleontology's history, however, its practitioners didn't know why life occasionally took a tumble. By the turn of this century, they had just one certain example of extinction cause and effect: An asteroid or comet surely did in many species 65 million years ago when the dinosaurs perished.

    Now, with the publication in recent weeks of two papers on an extinction 183 million years ago, researchers can add five suggestive cases to the list. These extinctions coincide with massive outpourings of lava, accompanied by signs that global warming threw the ocean-atmosphere system out of whack. Although no one can yet pin any of these mass extinctions with certainty on the volcanic eruptions, “it's getting a little hard to believe they're all coincidences,” says geologist Paul Olsen of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York. The episodes “may reflect some poorly understood instability of the ocean-atmosphere system.” And it's possible, notes Olsen, that humankind with its volcano-like outpourings of greenhouse gases could trigger a similar climatic catastrophe.

    The latest example of a possible volcano-climate disaster befell life in the Jurassic period 183 million years ago. The immediate cause of this modest loss of species early in the Toarcian stage was likely the absence of oxygen in bottom waters, says paleoceanographer Hugh Jenkyns of Oxford University. This anoxia left its mark in 2 meters of black, organic-rich sediments laid down during the extinctions. It is one of the three big oceanic anoxic events of the past 200 million years.

    Just when the Toarcian extinctions and anoxia occurred, and thus what might have triggered them, is now much clearer thanks to a geochronology study published in the August issue of Geology. Geochronologist József Pálfy of the Hungarian Natural History Museum in Budapest and paleontologist Paul Smith of the University of British Columbia in Vancouver reported that they have improved the dating of both the extinctions and an accompanying large volcanic outpouring. Uncertainties in the age of the extinctions had ranged as high as 15 million years, but by using the uranium-lead radiometric technique to date layers of volcanic ash from local European eruptions, they pinned down the start of the Toarcian to 183.6 million years, give or take a million years or so.
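
    A bit of textbook background, not a detail from Pálfy and Smith's paper, shows where such numbers come from. The uranium-lead method rests on the radioactive decay law: measuring the ratio of daughter lead-206 to parent uranium-238 in a mineral such as zircon yields an age t via

        t = \frac{1}{\lambda_{238}} \ln\!\left(1 + \frac{{}^{206}\mathrm{Pb}}{{}^{238}\mathrm{U}}\right),

    where \lambda_{238} \approx 1.55 \times 10^{-10} per year is the decay constant of uranium-238. Because the decay constant is known very precisely, the quoted uncertainty of about a million years is dominated by the isotope-ratio measurements themselves.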

    Putting the early Toarcian extinctions at about 183 million years brought them tantalizingly close to a million-year-long set of volcanic eruptions that spewed a few million cubic kilometers of dark basaltic lava across South Africa and Antarctica when they were joined in the supercontinent of Pangea. Pálfy and Smith gathered the latest argon-argon and uranium-lead dates, recalculated them to account for uncertainties peculiar to each, and found a peak in this so-called Karoo-Ferrar volcanism at 183 million years ± 2 million years.
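
    The recalculation's exact statistics aren't spelled out here, so take the following only as an illustrative assumption about the kind of computation involved. A standard way to average dates t_i that carry different uncertainties \sigma_i is the inverse-variance weighted mean,

        \bar{t} = \frac{\sum_i t_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}, \qquad \sigma_{\bar{t}} = \left( \sum_i 1/\sigma_i^2 \right)^{-1/2},

    in which the more precise dates pull hardest on the average. A hypothetical pair of dates of 184 ± 3 and 182.5 ± 2 million years, for example, would combine to about 183 ± 1.7 million years under this scheme.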

    The rough coincidence of a large volcanic outpouring and a sizable extinction event brings to five the number of examples of apparent volcanic-extinction correlations, including three of the big five mass extinctions. As Olsen pointed out recently (Science, 23 April 1999, p. 604), increasingly abundant and reliable radiometric dating techniques have linked in time three of the largest mass extinctions—the Cretaceous-Tertiary 65 million years ago, the Triassic-Jurassic 200 million years ago, and the Permian-Triassic 251 million years ago—with three of the largest flood basalts: the Deccan Traps of India, the Central Atlantic Magmatic Province of northeastern South America, and the Siberian Traps, respectively. And deep-sea extinctions and a turning point in mammal evolution 55 million years ago at the Paleocene-Eocene boundary (Science, 19 November 1999, p. 1465) coincide with the massive lavas laid down when Greenland and Europe parted tectonic ways. The coincidences are within a million years or so, as tight as current dating allows.

    “The very big flood basalt provinces are remarkably correlated with extinctions,” says geochronologist Paul Renne of the Berkeley Geochronology Center in California. “The correlation is suggestive, intriguing,” says paleontologist David Jablonski of the University of Chicago. “There are just enough clues to suggest you should go after it.”

    A second recent paper on Toarcian events 183 million years ago should help in the pursuit. In the 27 July issue of Nature, geologist Stephen Hesselbo of Oxford University, Jenkyns, and their colleagues presented evidence that a huge amount of methane gushed into the atmosphere and ocean 183 million years ago over a period of less than 5000 years. They found that the carbon isotopic signature of bits of fossil wood preserved in early Toarcian rock in England and Sweden changed so much and so rapidly that only methane could be responsible; the methane had presumably been trapped below the sea floor as icy methane hydrates. Earlier research had identified similar methane bursts 55 million years ago at the Paleocene-Eocene boundary, 90 million years ago at the Cenomanian-Turonian oceanic extinctions and anoxic event, and 120 million years ago during a massive submarine basalt eruption (Science, 28 January, p. 576).
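
    The “carbon isotopic signature” is conventionally reported as \delta^{13}C, the deviation of a sample's carbon-13 to carbon-12 ratio from that of a reference standard, in parts per thousand; the definition is standard geochemistry rather than a detail of the Nature paper:

        \delta^{13}\mathrm{C} = \left( \frac{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{sample}}}{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right) \times 1000.

    Methane from hydrates is exceptionally depleted in carbon-13, with \delta^{13}C near -60 per mil versus roughly -5 per mil for volcanic carbon dioxide, which is why a large, abrupt negative shift recorded in fossil wood points to a hydrate source rather than to the eruptions themselves.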

    Researchers are still “stumbling around,” as Olsen puts it, trying to make sense of the coincidences of extinctions, eruptions, and methane outbursts, but most would agree with paleontologist Peter Harries of the University of South Florida in Tampa that “there's certainly something strange going on.” Exactly what, no one is willing to say, but a plausible scenario being discussed begins with the generally warm climate of most of the past 250 million years. The eruption of millions of cubic kilometers of gas-laden magma over a million years or less boosts the greenhouse, warming the world further. That warming destabilizes sub-sea-floor methane hydrates, releasing methane, a powerful greenhouse gas that oxidizes to another greenhouse gas, carbon dioxide. The planet gets warmer still. Then the scenario gets really fuzzy. The heat itself might drive species to extinction on land and sea, or the altered climate might induce changes in ocean circulation and biological productivity that bring on the suffocating oceanic anoxia, an outcome that could conceivably be in our own greenhouse future.

    Sorting out cause and effect in curious coincidences as old as 250 million years is going to be a challenge, researchers note, particularly given the less-than-perfect pattern of coincidences. The Paleocene-Eocene event has no anoxia; the 120-million-year event has methane, anoxia, and a flood basalt eruption but no extinctions; and the Triassic-Jurassic event has as yet no sign of methane. A large impact looks simple by comparison.

  11. EVOLUTION 2000 MEETING

    Evolutionary Trends From Bacteria to Birds

    1. Elizabeth Pennisi

    BLOOMINGTON, INDIANA—Talks at Evolution 2000, held here in late June, ran the gamut of evolutionary biology, from life history changes in bacteria to the origins of modern bird diversity.

    Stalked Microbe Shows That Even Bacteria Get Old

    They may not get gray hair, wrinkles, or stiff joints, but some bacteria apparently age. Until now biologists have considered bacteria immortal: When a single cell divides, it becomes two seemingly identical daughters, making parent and offspring indistinguishable and, in a sense, rendering each cell perpetually young. But by working with a stalked microbe called Caulobacter crescentus, evolutionary biologists have now shown that these simple organisms slow down in at least one very critical life function: reproduction. Moreover, when populations of this microbe are forced to evolve, they change in much the same way as do multicellular organisms subjected to similar evolutionary pressures, reported Martin Ackermann of the University of Basel at the meeting.

    Ackermann, a Ph.D. student in evolutionary biology, studies how reproductive activity, age of maturity, and life-span respond to altered environmental conditions. Working with his adviser, Steve Stearns, he first looked at the effect of high and low mortality rates on the timing of those traits in fruit flies. Then 2 years ago, Basel microbiologist Urs Jenal introduced Ackermann to C. crescentus. Most bacteria reproduce by dividing symmetrically, making it impossible to know which is the “older” or parent cell. In contrast, C. crescentus puts down roots, then that sessile “parent” buds off new, mobile cells. Moreover, the simple microbe is a far more appealing candidate than the complex fruit fly for studying the genetics of life history traits, says Ackermann.

    It took a while to figure out how to study individual C. crescentus microbes, however. Eventually, Ackermann came up with a special microscopy chamber, a modified microscope slide containing a small channel. He inoculates the channel with lots of C. crescentus, 20 to 100 of which grow a stalk and take up residence there. Then he keeps a constant flow of medium through the channel to remove newly budded progeny. In this way, he and his colleagues can monitor aging in the stationary cells. “The system is awesome,” comments Bruce Levin, an evolutionary biologist at Emory University in Atlanta. “It gives us a first chance of looking at senescence in bacterial populations”—work that may later be applicable to understanding aging in stem cells.

    The bacteria take about 3 hours to build their stalks and bud off their first swarmer cells. Thereafter, budding takes about 2 hours and is repeated over and over, Ackermann reported. He did four experiments with this chamber, following each cell through more than 100 reproductive cycles. In each, he found that reproduction slowed as the cells got older. “The old cells have a large variability in cell cycle length,” says Ackermann, with some taking 50% longer and some failing to reproduce at all.

    His colleagues are quite impressed. “He's demonstrated senescence in the simplest, most stripped-down form,” comments Marc Tatar, an evolutionary biologist at Brown University in Providence, Rhode Island. Even though it lacks mitochondria or meiosis in its life cycle—two putative contributors to aging—C. crescentus still loses its functionality over time. Says David Reznick, an evolutionary biologist at the University of California, Riverside: “Aging may be a more fundamental feature of how organisms work than people thought.”

    Ackermann has also begun experimental evolution studies on C. crescentus. In one, he subjected populations to high mortality rates for more than 2500 generations, effectively cutting short the lives of many individual microbes. In theory, organisms with shortened life expectancies should evolve faster maturation times, so they reproduce as much as possible before they die. And Ackermann believes that's exactly what's happened in the three lines of C. crescentus he is evaluating in this experiment. By the 500th generation, the populations' doubling time had decreased by an average of 34%. A close look revealed that the stalked microbes began budding off new cells sooner and that the organisms spent less time in the mobile swarmer stage.
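
    As a rough check on what such a shift means, and not a calculation presented in Ackermann's talk: for a population growing exponentially, the doubling time T_d and the growth rate r are linked by

        T_d = \frac{\ln 2}{r},

    so cutting the doubling time to 66% of its former value raises the growth rate by a factor of 1/0.66, or roughly 1.5. The evolved populations, in other words, grow about 50% faster than their ancestors under the experimental conditions.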

    Eventually, Ackermann wants to track down the genes underlying these changes. And with the genome sequence of this organism already in hand (Science, 3 March, p. 1572) and the powerful molecular biology techniques available for bacteria, “I think he's going to come up with tremendous insights into senescence,” Tatar notes.

    Southern Origins for Modern Birds

    As the bitter debate about the connection between birds and dinosaurs continues to roil the paleontology community, feathers are about to fly over another aspect of avian evolution: when and where modern birds first took wing. At the evolution meeting, Joel Cracraft of the American Museum of Natural History in New York City introduced his latest genetic evidence—again, challenging conventional wisdom—that modern birds arose in the Southern Hemisphere.

    If he's right, then many birds common throughout the world trace their roots to a time when the southern continents were amassed around the South Pole. Furthermore, some of those roots would reach back millions of years earlier than paleontologists generally believe. “Joel now has really good sequence data, and that is really shifting everything back [in time],” says David Penny, a theoretical evolutionary biologist at Massey University in Palmerston North, New Zealand.

    Until recently, most biologists thought that birds diversified into their many modern forms in the past 60 million years. Scant fossils exist from earlier eras, suggesting to paleontologists that few modern birds existed prior to the mass extinction that occurred at the Cretaceous-Tertiary (K-T) boundary 65 million years ago. And because most post-K-T fossils are found in the Northern Hemisphere, these researchers assumed that modern birds arose in the giant continent called Laurasia, which included what is today North America, Europe, Asia, Greenland, and Iceland.

    Cracraft is something of an iconoclast. In the 1970s, he first suggested that at least one group of flightless birds had a southern heritage, an idea that only gradually gained acceptance. Now, after more than 2 decades of gathering data, he asserts that heritage “is even more pervasive than we thought.” He thinks most bird lineages got their start on the great southern continent called Gondwanaland, which encompassed what is now Antarctica, South America, Australia, Africa, and India.

    Support for Cracraft's hypothesis comes from several fronts. For one, over the past 5 years, research combining fossil records with genetic data, including mitochondrial DNA, has indicated that some bird lineages date back to before the K-T boundary. In 1996, for example, Penny and Alan Cooper of Oxford University concluded that 22 of the 42 types of birds they studied had such early origins. Those 22 represent about half the orders of modern birds.

    Another line of evidence comes from the branching pattern of the avian family tree, particularly the phylogenetic relationships at the base of this tree. Cracraft has analyzed phylogenies developed both by himself and by others. These trees are based on morphological and molecular data, including mitochondrial DNA and protein and amino acid comparisons. He then looked at the worldwide distribution of birds on adjacent branches of these trees. His conclusion: To explain the current distribution of close relatives, groups of birds that are now half a world apart had to have come from what used to be a single southern continent.

    This analysis indicates that Anseriformes, which include ducks and geese, and Galliformes, which encompass chickens and their relatives, are close kin from way back. The most primitive anseriforms are screamers in South America, and the most primitive duck/goose is the Australian magpie goose. Such a distribution requires that at one time, South America and Australia were linked. Similarly, the most primitive galliforms have an Australasian distribution, whereas related turkeylike birds called cracids occur in tropical America, again implying Gondwanaland origins.

    Cracraft has also relied on the fossil record to bolster his case, citing finds from Madagascar (Science, 15 May 1998, p. 1048) suggesting that a number of other organisms in that country have relatives in both India and South America. This distribution suggests that these land masses were linked longer than many geophysicists suspect. As India broke away from Antarctica, Cracraft speculates, the Kerguelen Plateau formed a land bridge between them that lasted until 80 million years ago. Such a bridge could explain how the flightless elephant bird wound up in Madagascar, even though it's not that closely related to flightless birds elsewhere in Africa.

    There were other ancient land connections as well. Between 80 million and 40 million years ago, the Norfolk Ridge stretched between what is now New Zealand and New Caledonia. Its existence may explain why the closest relatives of an extinct flightless bird from New Caledonia called the kagu are in New Zealand. And another close cousin of the kagu, the sunbittern, is in tropical America, perhaps because South America stayed connected to western Antarctica until about 35 million years ago.

    “As we get more of those [avian] relationships worked out, more [examples] will reveal themselves,” Cracraft predicts. Even though the data are not in yet, he posits a Gondwanaland ancestry for hummingbirds and swifts, pigeons, penguins, and owls. Cracraft stresses that these conclusions don't go against the widely held assumption that much of bird diversity arose after the K-T extinction event. But some arose earlier, he believes.

    Although Penny finds Cracraft's hypothesis about Gondwanaland plausible, he expects it to be controversial, especially among paleontologists. Nor is Cracraft sure what reception these ideas will get. He'll find out this week, when he presents his Gondwanaland hypothesis at an international ornithology meeting. He expects quite a few bird biologists there won't hesitate to let him know what they think.
