News this Week

Science  01 Oct 1999:
Vol. 286, Issue 5437, pp. 18
  1. PLANETARY SCIENCE

    No Easy Answers in Mars Probe's Fiery Death

    1. Richard A. Kerr

    Did the world's best spacecraft navigation team simply miss? When the Mars Climate Orbiter (MCO) spacecraft, scheduled to enter orbit for a 2-year mission to study martian weather, dipped too far into the atmosphere on its arrival last week and perished, officials at the Jet Propulsion Laboratory (JPL) in Pasadena, California, pointed a finger at the lab's navigation experts. “It looks like something was wrong with the ground-based navigation,” said John McNamee of JPL, the project manager for spacecraft development.

    Yet some outsiders suspect that may not be the full explanation, noting that the team's record of shepherding spacecraft to their destinations has been virtually flawless. “I've never heard of a problem like this,” says spacecraft navigation specialist Robert Farquhar of the Applied Physics Laboratory in Laurel, Maryland. “That's why I'm so amazed. It could end up it was the navigation team's fault, but it would surprise me.” He suspects that there was a good deal more to losing the $87 million spacecraft than a navigation team member slipping a decimal point.

    Over 4 decades, navigating spacecraft across hundreds of millions of kilometers of space to hit targets a few tens of kilometers across has become routine, if still spectacular. In the case of MCO, a small navigation team at JPL tracked the spacecraft using Doppler data—radio frequency changes that yield spacecraft velocity and acceleration—and range data from radarlike signals that made a round trip from Earth to MCO and back, giving the distance between the two. The team compared the craft's actual position and its intended position so that short burns of the onboard rocket engine could bring it on target.
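
    To make those observables concrete, here is a minimal Python sketch of how a round-trip light time yields range and how a two-way Doppler shift yields line-of-sight speed. It is emphatically not JPL's orbit-determination software; the carrier frequency, round-trip time, and function names are illustrative assumptions, and real navigation fits thousands of such measurements to a full dynamical model.

    ```python
    # Minimal sketch of the two deep-space tracking observables described
    # above. All numbers are illustrative, not Mars Climate Orbiter data.
    C = 299_792_458.0  # speed of light, m/s

    def range_from_round_trip(rtt_seconds):
        """Distance implied by a radar-style round-trip light time."""
        return C * rtt_seconds / 2.0

    def radial_velocity_from_doppler(f_sent_hz, f_received_hz):
        """Line-of-sight speed from a two-way Doppler shift (non-relativistic).
        On a two-way link the shift accumulates twice: f_rx = f_tx * (1 - 2v/c).
        A positive result means the spacecraft is receding."""
        return C * (f_sent_hz - f_received_hz) / (2.0 * f_sent_hz)

    rtt = 1280.0  # assumed round-trip signal time, seconds
    print(f"range: {range_from_round_trip(rtt):.3e} m")  # ~1.92e11 m, about 1.3 AU
    f_tx = 8.4e9  # assumed X-band carrier, hertz
    # A return carrier 600 Hz low implies recession at roughly 10.7 m/s.
    print(f"radial velocity: {radial_velocity_from_doppler(f_tx, f_tx - 600.0):.1f} m/s")
    ```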

    Last week, after four trajectory adjustments based on tracking data, navigation team members thought they knew the location of MCO to within 20 kilometers—and it appeared to be on target to pass 140 kilometers above the martian surface, safely skirting the atmosphere before going into orbit. But after reviewing tracking data from the final 8 hours before arrival, spacecraft controllers realized that their spacecraft had come in 80 kilometers too low—a huge, 10-sigma error. Anything more than 55 kilometers below the intended altitude would have been fatal.

    Because MCO seemed to be working perfectly as it disappeared behind the planet on its way to the planned close approach, “we're ruling out a spacecraft problem and looking at the possibility of human error and software problems,” said Richard Cook, JPL's project manager for operations, at a press conference. The next day team members, acknowledging that MCO's brush with the martian atmosphere must have overheated it or torn off parts, abandoned the search for radio signals from the craft.

    Farquhar suspects, however, that it was a lot more complicated than the simple catastrophic navigation error implied at the press conference. According to Farquhar, who has talked with JPL staff, the navigation team's calculation of MCO's trajectory “was bouncing around” during the last 2 days far more than the spacecraft itself should have been moving. That created some concern about just how low MCO was going to pass by Mars. Any number of problems could have contributed to last-minute uncertainties, notes Farquhar. Fuel jetting into space from a leak could have pushed the spacecraft off course. Adjustments for the unbalanced effect of solar radiation hitting MCO's single solar panel might have unintentionally altered the trajectory. Or the drive to economize on one of NASA's “faster, cheaper, better” missions might have left too little tracking data. At this point, says Farquhar, “I think we all have to withhold judgment as to who is at fault.”

    Whatever the ultimate cause or causes of the loss, planetary scientists will have to live without observations of clouds, dust, and water vapor that would have helped them understand the martian hydrological cycle. But it could have been worse: Mars Global Surveyor, which has been in martian orbit for 2 years, will continue to return images of clouds and dust. It will also be able to fill in for MCO in another role: serving as a radio relay station between Mars Polar Lander and Earth when that probe rockets onto the surface in December. The most immediate impact of the disaster may be felt on Capitol Hill, where scientists are trying to head off deep cuts in NASA's space science budget. “This isn't going to help,” observes one scientist.

  2. PALEOANTHROPOLOGY

    Neanderthals Were Cannibals, Bones Show

    1. Elizabeth Culotta

    Neanderthals were skilled hunters, working together to fell deer, goats, and perhaps even woolly rhinos with wooden spears. After the kill, they expertly butchered the carcasses, slicing meat and tendons from bone with stone tools and bashing open long bones to get at the fatty marrow inside. Now, on page 128, a French and American team reports that 100,000-year-old Neanderthals at the French cave of Moula-Guercy performed precisely the same kinds of butchery on some of their own kind.

    Marks on the bones clearly reveal that these early humans filleted the chewing muscle from the heads of two young Neanderthals, sliced out the tongue of at least one, and smashed the leg bone of a large adult to get at the marrow. The bone fragments were apparently then dumped amid the remains of deer and other butchered mammals. “Human and mammal remains were treated very similarly,” says first author Alban Defleur of the Université de la Méditerranée in Marseille. “We can safely infer that both species were exploited for a culinary goal.”

    Tantalizing hints of cannibalism have been spotted at other Neanderthal sites for decades, but this is far and away the best-documented case, say other researchers, who praise the team's careful comparison of breakage and cut marks in deer and human bones. “Quite convincing,” says anthropologist Fred H. Smith of Northern Illinois University in DeKalb, noting that there's little sign of gnawing or other indications that carnivores rather than people mauled the bones. “And the documented cut marks seal the deal.”

    Smith and a few others say that without an eyewitness, we may never know exactly why Neanderthals handled corpses so seemingly brutally. But most paleoanthropologists are unfazed by the idea of early humans eating each other. As Milford Wolpoff of the University of Michigan, Ann Arbor, puts it, “Why should modern humans be the only violent ones?”

    Defleur began to zero in on cannibalism after he saw cut marks on human bones from a test pit sunk into the cave at Moula-Guercy, a site that had previously yielded stone tools characteristic of the Neanderthals' Mousterian culture. He teamed up with paleoanthropologist Tim White of the University of California, Berkeley, to rigorously compare the pattern of marks on the human bones with those on bones from red deer, presumably hunted for meat, at the same site.

    The bones—78 pieces identified as belonging to at least six humans and almost 400 fragments attributed to other mammals—were scattered over 20 square meters. All the braincases and long bones of both deer and humans were smashed open, presumably to allow brains and marrow to be extracted. “In both taxa, marrow bones were systematically broken, and bones without marrow were not damaged,” says Defleur.

    Analysis of three pieces of a large thigh bone showed how, after its muscles were sliced away, it was set on an anvil stone and hit repeatedly with another stone. Telltale striations mark the bone's outer surface on the anvil side, directly opposite “percussion pits” made by the hammerstone. Cut marks on the clavicle also show where the Neanderthals disarticulated the arm at the shoulder. Others reveal where they cut out tongue and jaw muscles, severed the Achilles' tendon, and sliced other tendons below the toes and at the elbow. The bones bear few signs of burning or roasting, says White, suggesting that even though the Neanderthals had fire, they ate this flesh raw or hacked it off the bone before cooking. “The circumstantial forensic evidence [of cannibalism] is excellent. No mortuary practice has ever been shown to leave these patterns on the resulting osteological assemblages,” he says.

    In White's view, this well-documented case strengthens other reports of Neanderthal cannibalism, from sites such as Krapina and Vindija in Croatia. Modern humans ranging from Fijians (see p. 39) to ancient southwesterners (not to mention the best-selling Hannibal Lecter) apparently had a taste for human flesh. But the evidence implies, says White, that “the incidence of this behavior among the Neanderthals and their ancestors may have been higher than among modern people.” Other researchers have suggested that Neanderthals might have been desperate for dietary fat by winter's end—and brains and marrow are rich sources of fat, Wolpoff notes.

    Still, White says, “we are not claiming that all Neanderthals were cannibals, rather, that there were some cannibals among the Neanderthals.” Indeed, sometimes Neanderthals buried their dead, arranging bodies in a fetal position in semicircular graves. At the moment no one knows why the Moula-Guercy corpses were handled so differently—whether because the dead were enemies or because of some different cultural practice. “Actions fossilize, intentions don't,” says Smith.

    Far from implying that Neanderthals were brutes, Smith and others say that the finding of cannibalism may indicate sophistication of a sort. The varied treatment of the dead at different Neanderthal sites, Smith says, demonstrates cultural variation and therefore complexity: “When you see some Neanderthals practicing intentional burial and others practicing cannibalism, that is a clear indication of behavior that is multidimensional—a pattern that mirrors the behavior of more modern people.”

    “To me this is, paradoxically, a very human behavior that indicates a human mind,” says anthropologist Juan Luis Arsuaga, excavator at the Spanish site of Atapuerca, where there is evidence of cannibalism among 800,000-year-old humans. “Cannibalism is very old in human evolution.” Other animals such as chimps sometimes kill and eat parts of their own kind, but “only humans practice systematic cannibalism,” says Arsuaga. “This is the dark side of the human coin.”

  3. BIOMEDICAL FUNDING

    Senate Tops House Panel in Raising NIH's Budget

    1. David Malakoff

    Sometimes it pays to be patient. After months of delays that had made science lobbyists anxious, House and Senate spending committees this week were expected to approve hefty increases in biomedical research funding for the fiscal year that starts today. The increases for the National Institutes of Health (NIH)—$2 billion, or 13%, in the Senate and $1.1 billion, or 8.5%, in the House—would far exceed the White House's 2000 request for the $15.6 billion agency and sustain the biomedical community's drive to double the agency's budget by 2004.

    It may take at least another month, however, for Congress and the White House to agree on the exact size of NIH's raise, as Republicans and Democrats engage in last-minute budget negotiations. Still, “the omens are very good for biomedical scientists,” says an aide to one House Democrat, who predicts that “the final number will probably be at or near the Senate's mark.”

    That outcome would delight biomedical lobbyists, who have been struggling to repeat last year's record-setting $2 billion increase for NIH (Science, 23 October 1998, p. 598). Their campaign had an early setback in February, when President Bill Clinton requested only 2.1% more, some $320 million, in his budget proposal to Congress. The outlook dimmed further in recent weeks after Republican leaders shifted nearly $20 billion from the massive appropriations bill that funds NIH and a host of politically sensitive education and welfare programs to other spending measures. The borrowing allowed congressional leaders to claim that they were adhering to strict spending caps imposed by a 1997 budget-balancing law, but left Representative John Porter (R-IL) and Senator Arlen Specter (R-PA)—who lead the House and Senate subcommittees responsible for approving NIH's budget—with the nearly impossible task of recouping the funds with offsetting cuts elsewhere. Both lawmakers had repeatedly delayed scheduled votes on their bills in the hope of finding budgetary gimmicks—such as “forward funding” programs by borrowing money from the 2001 budget—that would allow Congress to break the spending caps without having to admit it.

    The fruits of that labor were revealed on 23 September, as Porter won approval, by an 8-6 party-line vote, for an $89.4 billion Labor-Health and Human Services (HHS) spending bill that bought the $1.1 billion NIH boost by forward funding some programs and designating other spending as “emergencies.” But some fiscal conservatives chafed at the additional spending, and the White House threatened to veto the bill because it would cancel a program to hire 100,000 new precollege teachers and cut welfare programs. Representative David Obey (D-WI), the appropriations panel's ranking Democrat, praised Porter for his hard work but said the bill was “a fantasy” that would never survive.

    Similar predictions accompany the Senate's version of the bill, a $91.7 billion measure that would give NIH's two dozen institutes increases ranging from 11% to 13%. Specter's subcommittee was pushing to finish its work as Science went to press, but Senator Tom Harkin (D-IA), the subcommittee's ranking Democrat, predicted that the final bill would be a “heck of a lot better” than the House version. Still, staffers were pessimistic that it would ever reach the Senate floor. Instead, they say, Congress and the White House are likely to roll the Labor-HHS bill into a huge spending measure later this year with at least six of the 13 appropriations bills needed to fund government operations.

    The coming weeks also give legislators time to ponder how to reconcile differences in their bills. The House, for instance, called for a 36% boost for NIH's controversial $50 million center for alternative medicine, to $68 million, while the Senate added only $6 million. One aide predicted that sorting out this and other differences could “take until Thanksgiving.”

  4. SPACE SCIENCE

    ESA Gets Flexible to Cut Costs

    1. Alexander Hellemans*
    1. Alexander Hellemans writes from Naples, Italy.

    Naples, Italy—As NASA braces itself for the possibility of deep cuts in its science budget next year, its counterpart across the Atlantic, the European Space Agency (ESA), is already dealing with the reality of diminishing funds. For ESA, the ax fell in the spring when a meeting of government ministers from its 14 member states voted to maintain a fixed rate of science funding that had been in place since 1995: Inflation, which has already eaten into the budget for 4 years, will continue to do so (Science, 21 May, p. 1242). Last week, both ESA's decision-making Science Program Committee (SPC) and the Space Science Advisory Committee met here to discuss how to deal with their shrinking resources.

    They voted for flexibility: In future, several options will be developed in parallel, and the decision on when to fly them will be made at a later stage in the process. Some will be put in a “mission bank” to be revived later when a launch opportunity arises. There were also calls for the world's major space agencies to coordinate missions more closely and avoid costly duplication. “With today's state of worldwide scientific budgets, we cannot afford to compete with each other,” says ESA's director of science, Roger Bonnet.

    ESA has requested proposals by next January for the first of these new “fleximissions.” By the summer, the SPC will select two fleximissions and one backup, which “will go forward in parallel,” says Bo Anderson, director of space and earth sciences at the Norwegian Space Center and newly elected SPC chair. The order in which the fleximissions will be launched will be decided later. In this way, “we have a continuously larger selection of missions which can be implemented faster,” Anderson says. This should result in projects being completed sooner, allowing the agency to disband project teams more quickly. SPC vice chair Giovanni Bignami, science director of the Italian Space Agency, says ESA's contribution to the Next Generation Space Telescope is a likely first fleximission to reach fruition.

    Previously, ESA's space science program, known as Horizons 2000, has adhered to a rigid timetable of launches: A major “cornerstone” mission is lofted every few years, interspersed with medium-sized missions—all chosen by the scientific community. It may take researchers some time to get used to a more flexible approach. Hans Balsiger of the University of Bern in Switzerland, a former SPC chair, points out that scientists building scientific payloads may have to live with extended delays if their payloads sit in the mission bank. Balsiger, a principal investigator for the Rosetta cometary rendezvous mission, thinks the situation is “survivable,” however.

    With their minds set on cost cutting, delegates at the Naples meetings also called for better coordination between the world's space agencies. Bonnet noted, for example, that the Inter-Agency Consultative Group (IACG), which brings together NASA, ESA, and the Russian and Japanese space agencies, doesn't always work very well. As an example, he points to the various programs to explore Mercury. Although a Mercury mission has long been a prospective ESA cornerstone project—and was presented to IACG representatives in Rome in 1994—“I was surprised to find out that the Japanese had included in the program a mission to Mercury without ever telling us anything,” says Bonnet. And Bonnet was “even more surprised” when he recently learned that NASA also has a Mercury mission planned, called Messenger. “This isn't justifiable in today's financial climate,” says Bonnet. NASA's representative in Paris, Jeffrey Hoffman, says the Messenger mission was proposed by groups of scientists and selected by NASA. “If Europe makes a decision to select a Mercury mission as their next cornerstone, then we will do everything possible to make sure that we take advantage of whatever synergy we can have between the two missions,” says Hoffman.

  5. CHEMISTRY

    A Cheaper Way to Separate Isotopes?

    1. Robert F. Service

    For Manhattan Project scientists racing to build the first atomic bomb during World War II, one of the biggest challenges had nothing to do with learning how to set off a nuclear explosion. They also had to devise a way to separate the fuel for the reaction, uranium-235, from its slightly heavier but far more abundant cousin, U-238. Ultimately, project scientists built a stadium-sized gaseous diffusion plant to separate the isotopes, taking advantage of the slightly higher speeds of molecules containing the lighter isotope, which let them pass through the plant's porous barriers a bit faster. Ever since World War II, separation of all kinds of isotopes has remained an industrial-scale operation. Now, new results with a tabletop laser could change all that.
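
    That speed advantage is Graham's law: a gas molecule's mean thermal speed scales as the inverse square root of its molecular mass. As a worked aside (the masses below are standard values for the uranium hexafluoride process gas, not figures from this article), the ideal enrichment per barrier stage is tiny, which is why the plant had to chain together thousands of stages:

    ```latex
    % Ideal single-stage separation factor for gaseous diffusion of UF6
    \alpha = \frac{r_{235}}{r_{238}}
           = \sqrt{\frac{M({}^{238}\mathrm{UF}_6)}{M({}^{235}\mathrm{UF}_6)}}
           = \sqrt{\frac{352.0}{349.0}}
           \approx 1.0043
    ```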

    In this week's Physical Review Letters, researchers at the University of Michigan, Ann Arbor, report using a laser that fires ultrashort, power-packed pulses to separate isotopes of elements ranging from boron to zinc. The technique isn't the first to use lasers to separate isotopes. But this one doesn't require the use of complex and expensive magnets, making it potentially far easier and cheaper, if the cost of the lasers comes down and the technique can be scaled up. Indeed, Todd Ditmire, a short-pulsed laser physicist at Lawrence Livermore National Laboratory in California, describes the new method as a “potentially big deal” that could provide a cheap new isotope source for research, industry, and medicine.

    The Michigan researchers, physicists Peter Pronko and John Nees and graduate students Paul VanRompay and Zhiyu Zhang, were initially trying to grow thin films of boron nitride, a superhard material. Researchers commonly make such films, which are used for high-tech optical and electronic devices, by aiming a laser at the material, vaporizing it, and depositing it onto a surface. Pronko and his colleagues, however, were trying out an unusual laser: one that delivers up to 1 quadrillion watts of power per square centimeter in extremely short pulses lasting just 150 femtoseconds, or quadrillionths of a second. Trained on a solid block of boron nitride, the laser deposited a film on a nearby silicon disk—and did much more besides.

    Boron comes in two isotopes, B-10 and B-11, which were randomly distributed in the solid target. But much to the researchers' surprise, when they used a device called an electrostatic energy analyzer to study the boron isotopes in the gas plume created by the laser pulse, they found that the two species of boron didn't remain mixed as they flew. “We thought our instrument was broken,” says Pronko. “So we went back and did the experiment over again.” Each time they looked, they found that most of the heavier borons landed in the outer portion of the deposited film, while the lighter ones stayed toward the middle. After a few tries, says Pronko, “we were convinced that what we were seeing was real.”

    Still, the result was puzzling. Not only did the isotopes separate, but the heavier isotope seemed to travel farther in the vapor than the lighter one—just the opposite of what happens when isotopes drift around in an uncharged gas. The answer, Pronko and his colleagues realized, lies in the electrical and magnetic storm kicked up by the potent laser burst.

    First, when the light hits the target, it kicks out electrons from the surface atoms. As the electrons fly away from the target's surface, they pull the now positively charged borons and nitrogens after them. At the same time, the energy burst at the surface creates a powerful magnetic field, projecting from the surface as a series of magnetic field lines. These lines tug on the ions as they travel, causing them to spiral around the field lines. The key to separating the isotopes is that the less massive ions fly in a tighter spiral, while the more massive ones take a wider trajectory that carries them farther out on the deposition disk.
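
    The spiral picture can be put into numbers with the Larmor radius, r = mv/(qB). Here is a minimal Python sketch assuming singly charged ions and placeholder values for field strength and ion speed, neither of which is reported in the article; the instructive part is the mass ratio, which survives whatever values are assumed.

    ```python
    # Gyroradius of an ion circling a magnetic field line: r = m*v/(q*B).
    # At a given speed and charge, the radius grows with mass, so the
    # heavier isotope traces the wider spiral and lands farther out.
    AMU = 1.660539e-27   # kilograms per atomic mass unit
    Q = 1.602177e-19     # coulombs, charge of a singly ionized atom

    def larmor_radius(mass_amu, v_perp, b_tesla, charge=Q):
        """Radius of circular ion motion perpendicular to the field."""
        return mass_amu * AMU * v_perp / (charge * b_tesla)

    v_perp, b = 2.0e4, 100.0                 # assumed speed (m/s) and field (T)
    r10 = larmor_radius(10.013, v_perp, b)   # boron-10
    r11 = larmor_radius(11.009, v_perp, b)   # boron-11
    print(f"B-10: {r10:.2e} m  B-11: {r11:.2e} m  ratio: {r11 / r10:.3f}")
    # The ratio is ~1.10: a 10% wider spiral for boron-11, independent of
    # the assumed field and speed, since both cancel in the ratio.
    ```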

    The result was that the outer region of the disk had about twice the amount of the heavy boron isotope as the inner region—enrichment that Ditmire calls surprisingly good. What's more, the Michigan team had similar results with gallium and copper, two other elements that are widely used in electronic devices. They are already planning to use their technique to make isotopically pure thin films of semiconductors, which are known to have an improved ability to conduct heat, a key requirement for today's densely packed computer chips. And Pronko says the technique may also prove useful for separating medical isotopes, such as yttrium-90, which is used to treat non-Hodgkin's lymphoma.

    For now, he adds that his group has no plans to see whether the technique can be used to purify bomb-grade uranium—and that application may not be economically feasible in any event. Gérard Mourou, who directs Michigan's Center for Ultrafast Optical Science, says that—fortunately—many laser setups would be needed to collect the kilograms of enriched nuclear material needed to build a bomb.

  6. EVOLUTION

    Handsome Finches Win a Boost for Their Offspring

    1. Gretchen Vogel

    Why one individual finds another attractive is, as the old song puts it, a “sweet mystery of life.” For species that have evolved showy feathers or fins, the thinking has been that the ornaments might signal otherwise invisible “good genes” to a potential mate. Peacocks are a classic example: Those that thrive while sporting a magnificent—but unwieldy—tail, the theory goes, must be fit in other ways as well. New results now suggest that at least for birds, the mother's contribution to the fitness of offspring fathered by attractive mates may have been overlooked.

    On page 126, evolutionary ecologists Diego Gil, currently at the Université de Paris X in Nanterre, France, Jeff Graves of the University of St. Andrews in Fife, Scotland, and their colleagues report that female zebra finches that have mated with attractive males deposit more of the sex hormone testosterone in their eggs than they do after a liaison with males they deem less attractive. Studies in canaries have suggested that developing chicks that receive more testosterone beg more vigorously for food and grow faster than other chicks. Therefore, Graves concludes, it is not clear whether the father's “good genes” or the mother's extra help should get the credit for any added success enjoyed by offspring of an especially attractive father.

    Why the offspring of attractive males should be accorded such favored treatment remains a mystery. But the finding raises a caution about other experiments meant to show that attractive males really do pass good genes to their offspring, says evolutionary ecologist Doug Mock of the University of Oklahoma, Norman. “People want to believe [the good genes theory]. It is a very sexy idea, but people will have to be careful” in testing it, he says.

    Graves and Gil, with St. Andrews University colleagues Neal Hazon and Alan Wells, took advantage of a peculiar taste of zebra finch females. The birds seem to find males wearing red leg bands particularly attractive, but they tend to ignore males wearing green leg bands. No one is sure exactly why red leg bands are the finch's equivalent of a sleek Rolex, while green labels a guy a geek. But because females also pursue males with especially red beaks, it's possible that the leg bands trigger the same reaction, says Nancy Burley of the University of California, Irvine, who was the first to document the attraction. Whatever the explanation, the female zebra finch's fetish allowed researchers to vary a male's attractiveness—and thus distinguish the effects of his sex appeal on the mother from those of his genes.

    The team randomly gave males either a red or a green leg band, and then divided 12 females into two groups of six. The researchers allowed members of one group to mate first with a green-banded male, and then, after collecting the resulting eggs as soon as they were laid, mated each female with a red-banded male. Members of the other group mated with a red-banded male before receiving a green-banded suitor.

    To see if the female's ardor had an effect on the egg content, the researchers analyzed the yolks for testosterone and its breakdown product 5α-dihydrotestosterone, which in other studies had seemed to influence a chick's eventual success. They found that the birds consistently included more of the hormone in eggs fathered by their red-banded mates than in eggs fathered by the green-banded ones. This suggests that the mothers have more influence on the fitness of the progeny of highly attractive males than scientists had thought.

    The new result “certainly raises the bar for people who want to demonstrate good-gene effects from the father in birds,” says evolutionary ecologist Carl Gerhardt of the University of Missouri, Columbia. It leaves several questions unanswered, however. Because the researchers had to destroy the relatively small finch eggs to determine their hormone levels, they cannot be sure that the differences they observed do in fact influence the success of zebra finch chicks. To answer that question, Gil is planning experiments in which he will inject finch eggs with an extra dose of testosterone.

    Nor can the scientists explain how the females control testosterone levels in their eggs, although Gil suggests that it may be due to the attractive, red-banded males increasing the females' general arousal. Other work has shown, he notes, that a female canary's overall hormone levels affect those in her eggs, and another study suggested that testosterone levels in a bird's blood increase with high levels of social interaction. But he adds, “The problem is we don't know much about [these hormones] in females.”

    The team hopes their findings will prompt others to help answer such questions—and a broader question as well. “We do have females choosing particular males,” says Graves. “The question remains, what do they get out of it? Good genes is a nice answer if it worked—and it may well work—but it's not as easy as it seemed” to solve the “sweet mystery.”

  7. EARMARKING

    NSF Shivers at Senate Arctic Research Plan

    1. Jeffrey Mervis

    When Congress earmarks federal funds for a specific institution or project, the process usually begins with the intended beneficiary bending the ear of a sympathetic legislator. But the 2000 budget for the National Science Foundation (NSF) that the Senate passed last week adds a new wrinkle to this already controversial practice: It allocates $25 million for arctic research logistics to an entity that did not request the money, doesn't want it, and says it isn't capable of administering it. NSF officials were also caught off guard by the earmark, which represents a direct assault on the agency's own activities.

    The earmark comes courtesy of Senator Ted Stevens (R-AK), chair of the Senate Appropriations Committee and a longtime critic of NSF's commitment to research in the region, which includes his home state. Stevens believes that the Arctic takes a back seat to NSF's larger and more eye-catching Antarctic program within the agency's Office of Polar Programs, so last year he added $13 million to NSF's $9.5 million request for the transportation and equipment expenses needed to do science in the Arctic. This year he went a step further, proposing that the entire program, pumped up to $25 million, be turned over to the Arctic Research Commission (ARC). Never mind that ARC is a seven-member, part-time body that gets $700,000 a year to advise the government and runs on a staff of three, only one of whom works full-time. “We felt that ARC has a better handle on what is going on in the area than does NSF,” says a Senate aide who follows the issue, “although we would expect them to cooperate fully with NSF in drawing up their plans.”

    Commission chair George Newton, a nuclear engineer with Fairfax, Virginia, consulting firm Management Support Technology, says the Senate report language was a huge surprise: “We certainly didn't ask for it.” The commission runs no grants programs and has no mechanism to do so, he adds. It also has no intention of proceeding without NSF's support and guidance. “Whatever happens, we intend to stay linked to NSF,” he says. “There isn't any other way to get things done in these remote regions.”

    Ironically, this week NSF awarded $2 million to four university-based researchers in the first installment of a 5-year, $17 million program to build environmental observatories in the Arctic. The initiative, which was heavily oversubscribed, is funded by the logistics program and could be jeopardized by a shift to the ARC.

    NSF hopes to modify the Senate language when the spending bill comes up in conference with the House, whose bill contains no such provision. And there are signs that, having sent NSF a message, Stevens may be open to compromise. The ARC language is not meant to hamper the conduct of science, notes the Senate aide: “There's a lot of good work to be done there. We just want to make sure it gets the support it needs.”

  8. RICE GENOME

    U.S. Adds $12 Million to Global Sequencing Push

    1. Dennis Normile

    Phuket, Thailand—Three U.S. agencies are preparing to announce grants totaling $12.3 million to help speed an international effort to sequence the rice genome. The new support, outlined last week by U.S. officials at a meeting here of collaborators from 10 countries and regions, will supplement a proposed big jump in spending by Japan, which is putting up the largest share of the overall funding for the project. But organizers acknowledge that some rough spots remain, and that the additional resources do not guarantee that the work will be finished by the target date of 2004.

    The U.S. funds will be divided between The Institute for Genomic Research in Rockville, Maryland, which will receive $7.1 million, and a consortium including Clemson University in South Carolina, Cold Spring Harbor Laboratory in New York, and Washington University in St. Louis, Missouri, which will share $5.2 million. The Department of Agriculture and the National Science Foundation will each contribute $6 million, and the Department of Energy will kick in $300,000. “It's very nice news,” says Takuji Sasaki, director of Japan's Rice Genome Research Program.

    Last month Japan's Ministry of Agriculture, Forestry, and Fisheries requested $28 million for rice sequencing in next year's budget, double its current spending. “This [proposal] came not from the researchers but from the prime minister's office,” says Sasaki, who sees it as part of a dramatic boost in all biotechnology-related spending (Science, 9 July, p. 183). The ramp-up is also a response to an announcement this past spring by Celera Genomics of Rockville that it could sequence the 430-megabase rice genome in 6 weeks if it received outside financing. Celera intends to use a yet-to-be-proven technique of breaking up the entire genome into small pieces, sequencing the pieces, and then using computers to sort it all out, while the consortium will work through all 12 chromosomes one by one, a more painstaking but tested approach.
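
    To see in miniature what “using computers to sort it all out” means, here is a toy Python sketch of the shotgun idea: shear a sequence into random overlapping fragments, then greedily merge the pair of fragments with the longest exact overlap. Everything here (fragment length, coverage, the greedy strategy) is an illustrative assumption; real assemblers such as Celera's must also cope with sequencing errors, paired ends, and repetitive DNA.

    ```python
    # Toy whole-genome shotgun assembly by greedy overlap merging.
    import random

    def overlap(a, b):
        """Length of the longest suffix of a that equals a prefix of b."""
        for n in range(min(len(a), len(b)), 0, -1):
            if a.endswith(b[:n]):
                return n
        return 0

    def greedy_assemble(frags):
        """Repeatedly merge the pair of fragments with the largest overlap."""
        frags = list(frags)
        while len(frags) > 1:
            best_n, best_i, best_j = 0, None, None
            for i in range(len(frags)):
                for j in range(len(frags)):
                    if i != j:
                        n = overlap(frags[i], frags[j])
                        if n > best_n:
                            best_n, best_i, best_j = n, i, j
            if best_n == 0:  # no overlaps left to exploit
                break
            merged = frags[best_i] + frags[best_j][best_n:]
            frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
            frags.append(merged)
        return max(frags, key=len)

    random.seed(0)
    genome = "".join(random.choice("ACGT") for _ in range(300))
    # Simulate shotgun reads: 60-base windows at random positions, plus the ends.
    starts = [random.randrange(0, 241) for _ in range(40)] + [0, 240]
    reads = [genome[s:s + 60] for s in starts]
    contig = greedy_assemble(reads)
    # With ample coverage this usually reconstructs the sequence exactly.
    print(len(contig), len(genome), contig == genome)
    ```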

    In addition to its scientific value, the project has enormous symbolic value for countries where rice is the most important cereal crop and an essential element of the region's culture. Being a participant is a point of national pride for these countries, as well as a chance to further their scientific capabilities. “In addition to funding rice sequencing, the government is about to launch an effort that will move on to functional genomics,” says Apichart Vanavichit, a molecular biologist at Thailand's National Center for Genetic Engineering and Biotechnology in Nakorn Pathom, which is helping to sequence chromosome 9 (see graphic). “At the end of 5 years, Thailand will have a new [tool] for its rice-breeding programs.”

    Although the U.S. support is welcome, some say it falls short of the $20 million that scientists recommended last year as a minimum contribution (Science, 23 October 1998, p. 653). “The U.S. [financial] input is disproportionately small,” says Benjamin Burr, a plant geneticist at Brookhaven National Laboratory in New York. “But it could have a disproportionate impact because the labs picked have high sequencing capacities.” In addition, two other U.S. groups intend to continue sequencing efforts on their own. The University of Wisconsin, Madison, a finalist in the competition for the new grants, hopes to sequence portions of chromosome 11, in part to demonstrate the effectiveness of its optical mapping technique. A group at Rutgers University's Waksman Institute in Piscataway, New Jersey, will link up with other U.S. labs working on chromosome 10.

    Even with the additional resources, though, Sasaki says “[he] can't promise” to complete the sequencing by 2004. For one, although groups in Canada and the United Kingdom have indicated an interest in sequencing, their governments have not yet committed money. And outside China and Japan, the other Asian groups are expected to contribute minimal amounts of sequence data because their genomics efforts are just getting off the ground. Even China's effort comes with a proviso: Its scientists are sequencing a different rice cultivar from the Nipponbare used by the rest of the international collaboration.

  9. PLANETARY SCIENCE

    Neptune May Crush Methane Into Diamonds

    1. Richard A. Kerr

    Diamonds might become as cheap as coal if miners could ever plumb the hellish interiors of Neptune and Uranus. Laboratory researchers are now creating tiny bits of those interiors, where heat and pressure can be far more intense than in the depths of Earth. They are finding, among other surprises, tiny flecks of diamond.

    On page 100 of this issue of Science, mineral physicist Robin Benedetti of the University of California, Berkeley, and her colleagues report that methane—a major constituent of Neptune and Uranus's deep interiors—decomposes far more easily than predicted when it is heated and squeezed in the laboratory. That decomposition, which produces diamonds and complex organic matter, could have altered the chemical composition and internal churning of those planets. “This is an exciting piece of work,” says mineral physicist Russell Hemley of the Carnegie Institution of Washington's Geophysical Laboratory, “because it shows the promise of this sort of experiment in studying planetary interiors.”

    Experimentalists have only recently started exploring the highly fluid interiors of the gas giants—Jupiter, Saturn, Uranus, and Neptune. They first squeezed the hydrogen that makes up the bulk of such bodies to see when it might turn into a liquid metal (Science, 22 March 1996, p. 1667). Now they're working on methane, which becomes a prominent constituent of Neptune deeper than 4000 kilometers below the planet's visible cloud tops. Benedetti and her colleagues sealed liquid methane between the tips of two gem-quality diamond “anvils” and squeezed them together to raise the pressure as high as 50 gigapascals (GPa, equal to 500,000 atmospheres). Then they shot a laser through the diamonds and the sample until the temperature of the methane rose as high as 3000 kelvin. Under such extreme conditions, equivalent to those as deep as 7000 kilometers below Neptune's cloud tops, the methane decomposed into two identifiable forms of carbon—diamond crystals about 10 micrometers in size and complex, polymerized organic matter.
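
    In its idealized, diamond-forming limit the decomposition can be written as a simple stoichiometry (a sketch only: the experiment actually yielded complex hydrocarbons as well, not pure hydrogen):

    ```latex
    % Idealized methane breakdown under Neptune-interior conditions
    \mathrm{CH_4}
      \;\xrightarrow{\ \sim 50\ \mathrm{GPa},\ \sim 3000\ \mathrm{K}\ }\;
      \mathrm{C}_{\text{(diamond)}} + 2\,\mathrm{H_2}
    ```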

    Theorists had suggested that diamonds might form in Uranus and Neptune, but only toward the center of the planets, above a pressure of 300 GPa. The shallower level for diamond formation is a surprise, says Hemley, and it means that far more of the interior could be producing a girl's best friend, with proportionately greater effects on the planet as a whole. Being denser than the fluid from which they formed, diamonds would sink, releasing heat from their store of potential energy. That heat would help churn the interior, perhaps boosting Neptune's magnetic field, which is driven by such convection. It might also add to the heat seen escaping the planet.

    Methane might also be breaking down at depths even shallower than those at which diamond forms, producing byproducts such as light hydrocarbons that telescopes and spacecraft might detect. In other diamond-anvil experiments, mineral physicist Thomas Schindelbeck and his Geophysical Laboratory colleagues found that methane is unstable at just 7 GPa and 2000 kelvin. From such shallow depths, decomposition products such as ethane could waft up to the visible cloud tops—fumes from the hell a few thousand kilometers down.

    The new diamond-anvil results are reminding researchers to take a critical look at the textbook picture of gas giants as being neatly subdivided into layers of unchanging composition. “One needs to take into account high-pressure chemistry in understanding the icy planets like Uranus and Neptune,” says Hemley—not that the experiments so far give a complete chemical picture of the planets' innards. “The real Neptune is a more complicated soup of chemical molecules” than experimentalists have cooked up in their first tentative forays, says planetary scientist William Hubbard of the University of Arizona, Tucson. There's water mixed in with the methane, he notes, as well as hydrogen. Either one might affect reactions in the planet's interior. So recreating the depths of hell on Neptune and other planets will take a while longer.

  10. NEUROSCIENCE

    India Creates Novel Brain Research Center

    1. Pallava Bagla

    New Delhi—India is hoping to break into the front ranks of neuroscience with a new National Brain Research Center (NBRC) that opens here this week. The venture hopes to capitalize on India's large population and on a pool of talent now scattered around the world: Indian researchers now working abroad are expected to fill most of the 12 new scientific slots, working in areas ranging from developmental and computational neurobiology to the effects of malnutrition on the brain.

    The center, funded by the Department of Biotechnology, will be devoted to basic research. “It will be a state-of-the-art institute … and will have no clinical facilities attached to it,” says Manju Sharma, a botanist and secretary of the biotechnology department, adding that the center will serve “as a national apex for brain research.” In another unusual twist, half of its $4 million budget over the next 3 years will be earmarked for extramural research, including scientists at labs funded by other ministries, such as the well-regarded National Institute of Mental Health and Neurosciences in Bangalore.

    India has a special opportunity to contribute to the field of neural imaging, says Vijayalakshmi Ravindranath, a neurochemist in line to be director of the center, by carrying out large-scale functional mapping studies on so-called “drug-naïve patients,” those with neurological disorders who have not yet received treatment. Supporters acknowledge that it will be a while before the center can hope to enter the front ranks of global science, however. “Catching up is a Herculean task, and it may take another 20 years,” says Prakash Narain Tandon, a neurosurgeon at the All India Institute of Medical Sciences in New Delhi. But they argue that creating the center is an important, and necessary, step.

    India is already looking for international collaborators. About 250 scientists from five countries are participating in a Colloquium on Brain Research here this weekend to showcase the new center, and Richard Nakamura, deputy director of the U.S. National Institute of Mental Health, is heading a delegation that expects to sign a memorandum of understanding for future collaborations and scientific exchanges with NBRC. The center also hopes to link up with Japan's Brain Science Institute at the Institute of Physical and Chemical Research (RIKEN), outside Tokyo. “India is not without promise in the neurosciences,” says Nakamura. “Indian scientists have always done very well in the U.S. because they are well trained and do not face a major language barrier. By setting up strong centers within India, this brain drain can be slowed, and talented scientists can help develop the economy of India and work to improve the health of its people.”

    The center is currently housed in temporary quarters at the International Center for Genetic Engineering and Biotechnology. Work is under way on a new home in Gurgaon, about 35 kilometers outside Delhi, where an unused vaccine laboratory built several years ago is being renovated.

  11. HUMAN GENETICS

    Gene Defect Linked to Rett Syndrome

    1. Trisha Gura*
    1. Trisha Gura is a writer in Cleveland, Ohio.

    As the parent of a child with Rett syndrome, Patty Campo describes the genetic disease as “a horrific nightmare.” Second only to Down syndrome as a cause of mental retardation in females, Rett syndrome left Campo's daughter, who appeared normal at first, unable to stand, talk, or use her hands by age 5. Now, a team led by pediatric neurologist and geneticist Huda Zoghbi of Baylor College of Medicine in Houston may have tracked down the gene at fault in Rett syndrome, which afflicts at least one in 10,000 girls.

    In the October issue of Nature Genetics, Zoghbi, Uta Francke at Stanford University in California, and their colleagues report that mutations in a gene called MeCP2 cause nearly a third of the Rett syndrome cases they studied. The protein encoded by the gene is known to help “silence,” or shut down, other genes that have been tagged with a methyl group during development. The group's work is the first to link a human disease to a defect in this process, says geneticist Brian Hendrich, who co-authored a News and Views editorial on the work.

    Exactly how the defect leads to the neurological decline of the afflicted girls has yet to be deciphered. Still, geneticists say they welcome the discovery. “MeCP2 silencing, which underlies the defect, is not only fascinating in its own right,” says Huntington Willard of Case Western Reserve University in Cleveland, Ohio, “but the discovery of its link to Rett syndrome also opens all kinds of doors to new [research] avenues” that might lead to a treatment.

    Although at least a half-dozen labs pursued the Rett syndrome gene, the hunt stretched on for decades. Geneticists normally locate disease genes by studying how the disease is inherited in afflicted families. But Rett syndrome rarely seems to run in families, and even when the mutation is passed down in a family, it is often hidden.

    One problem is that males who inherit the mutation, which affects a gene on the X chromosome, usually die before or shortly after birth, presumably because they do not have a second copy of the X chromosome that might compensate for the defective one. In contrast, females carry two X chromosomes. But their cells randomly inactivate one to avoid overproducing proteins encoded by X chromosome genes, and so a female may not have symptoms unless more than half her cells express the bad gene.

    Despite these hindrances, Zoghbi's group found two families with more than one affected member, and Stanford's Francke found another. They combined their samples to help Zoghbi narrow down the location of the Rett syndrome gene to a region around the q28 segment of the X chromosome. Eric Hoffman of the University of Pittsburgh also had evidence that the gene is located in Xq28. To pinpoint the gene, Zoghbi then turned to what she terms “a brute-force way.”

    She scrolled through databases to identify all the known genes in Xq28—at least 100 candidates, she notes—and then sequenced the versions carried by the patients and compared their DNA to that of unaffected individuals. After more than a dozen failed tries, her team hit pay dirt with the MeCP2 gene, which had been cloned in 1992 by geneticist Adrian Bird's team at the University of Edinburgh in the U.K.

    Still, many questions remain. The gene was mutated in only three of eight affected family members and in five of 21 spontaneous cases, raising the possibility that mutations in other X chromosome genes might also cause the disease. Zoghbi offers another possible explanation: Her team only scoured MeCP2's protein-coding regions and so might have missed mutations in regulatory regions that affect the gene's expression.

    Even more mysterious is how mutations in one gene involved in the silencing of other genes could cause the myriad of Rett syndrome defects. “We really don't know the answer to that,” says Zoghbi, although she and others have several ideas. One is that a failure of MeCP2 to prevent excess gene expression results in genetic “noise” that harms the brain, in particular.

    Studying MeCP2 function in patients' cells and analyzing the defects in mice that have had their MeCP2 genes knocked out could yield some answers. Bird and his colleagues have already made knockout mice and found, consistent with what's already known about Rett syndrome, that male mice without a functional MeCP2 gene die before birth, while females that have one bad copy of the gene may develop symptoms similar to Rett's. Only after learning the targets of the MeCP2 defect, Zoghbi says, “can one start thinking about treatments.”

  12. PHYSICS

    Beaming Into the Dark Corners of the Nuclear Kitchen

    1. Andrew Watson*
    1. Andrew Watson writes from Norwich, U.K.

    A new generation of accelerators capable of generating beams of exotic radioactive nuclei aims to simulate the element-building processes in stars and shed light on nuclear structure

    Imagine being a chef trying to re-create all of the world's cuisines with only flour, rice, potatoes, and yams as ingredients. That's the plight of physicists trying to probe the structure of the nucleus and explore the range of exotic nuclear reactions that take place in the cosmos. The hellish interiors of stars and supernovae and the surfaces of neutron stars, for example, are the scenes of frenzied nuclear cookery, where unstable nuclei overburdened with protons or neutrons collide or decay, spawning new, equally unstable nuclei. Thousands of different radioactive nuclei take part in these reactions. But as a physicist on Earth, says William Gelletly of the University of Surrey in the United Kingdom, “you are restricted to the roughly 283 nuclear species that you can dig out of the ground.”

    It's not just a taste for the exotic that's made nuclear physicists chafe at this restriction. The abundances of exotic nuclei, together with the rates at which they react, help determine how stars evolve and explode. The exotic nuclear reactions in stars also have consequences closer to home: They are the ultimate source of many of the elements we know. And the behavior of the unstable nuclei that are involved is rich in clues about the structure of the atomic nucleus, which physicists picture as being built of concentric shells of particles, like the atom itself. “All of the things we've learned about the shell structure of stable nuclei we expect to just be completely wrong in nuclei which are a long way from the stable nuclei,” says Gelletly.

    By colliding stable isotopes, physicists have been able to produce some exotic nuclei, but only in small numbers, providing tantalizing glimpses of the exotic nuclear structures and reactions that have been outside their reach. “The stable [nuclei] do not tell us too much about nuclear structure or nuclear synthesis, like in stars or supernovae,” says Victor Ninov of Lawrence Berkeley National Laboratory in California. What's been needed was a way to create intense beams of radioactive nuclei, which could then be collided with other nuclei to mimic the nuclear recipe books followed in stars.

    A new generation of accelerators is beginning to provide just that. A technique called isotope separation online, or ISOL, is being combined with a second accelerator stage to create powerful, bright beams of short-lived radioactive nuclei that can re-create previously inaccessible reactions and probe totally new nuclear species. The two-step technique essentially takes a beam of stable nuclei and collides it with a stationary target. The debris from the ensuing collisions is collected, the unstable species of interest are separated out, and in the second stage they are channeled into another accelerator to create a radioactive nuclear beam (RNB) that can collide with another target to perform an experiment.

    At present, there are only two genuine ISOL-based RNB machines with accelerated beams in the world, one at Oak Ridge National Laboratory in Tennessee and one at the Catholic University of Louvain in Belgium. But physicists have quickly realized that such machines have the capability to transform nuclear studies. “It really is brand-new territory,” says Michael Smith of Oak Ridge.

    A case in point came this summer, with the first physics results from the Oak Ridge machine, the Holifield Radioactive Ion Beam Facility (HRIBF), which opened at the end of 1996. The team members there created a beam of a short-lived fluorine isotope, fluorine-17, by slamming a beam of light ions from their cyclotron into an oxygen-rich target. The fluorine ions, magnetically separated from the debris in the ISOL stage, were reaccelerated electrostatically as a radioactive beam. By driving the beam into a proton-rich target, Smith and his colleagues measured the rate at which fluorine-17 captured protons to yield neon-18 plus a gamma ray—part of a reaction chain that influences the violence of stellar explosions such as novae and x-ray bursts. The fluorine reactions bypass a slow nuclear reaction with a faster one, says Smith. “If you do that, you can generate energy faster, and that can affect the dynamics of the explosion”—as well as the amounts of other exotic elements it produces.
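
    In standard nuclear shorthand, the capture reaction the team measured reads:

    ```latex
    % Proton capture on fluorine-17, as measured at HRIBF
    {}^{17}\mathrm{F} + p \;\longrightarrow\; {}^{18}\mathrm{Ne} + \gamma,
    \qquad \text{written compactly as } {}^{17}\mathrm{F}(p,\gamma)\,{}^{18}\mathrm{Ne}
    ```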

    Nuclear reboost

    Machines such as Holifield aren't the only routes into the realm of exotic nuclei. Four major sites across the globe are already producing a trickle of these nuclei by a technique known as fragmentation. It involves smashing a beam of fast, heavy ions into a target, then sorting out the fragments of the beam particle with a magnetic fragment selector and sending them to the experiment proper without reaccelerating them. The advantage of this approach is its ability to deal with isotopes that decay quickly, even in microseconds, while the ISOL approach needs many seconds to extract and sort the ions. The flip side is poor beam intensity and quality.

    ISOL, which originated nearly 3 decades ago at CERN near Geneva, generates radioactive nuclei by smashing protons rather than heavy ions into a target, forging new nuclei that are then sorted from the debris magnetically. The original ISOL facility, called ISOLDE, did not accelerate these ions, however, so the resultant beam had very low energy—too low to simulate hot stellar interiors or drive nuclei together with sufficient violence to spark certain nuclear reactions. About 10 years ago, researchers realized that they could beat both these problems by reaccelerating the output of an ISOL device, resulting in a worldwide program to build ISOL-based RNB machines (see table). ARENAS, at the Catholic University of Louvain, which fired up in the mid-1990s but is still not fully operational, “was the first facility to reaccelerate these particles up to energies of interest, for example, for astrophysics studies,” says Smith. Holifield, which was adapted from an existing heavy-ion facility, was hot on its heels.

    [Table in original: ISOL-based radioactive nuclear beam facilities worldwide]

    Now, researchers are converting other machines to make RNBs by bolting on a second accelerator stage. The TRIUMF accelerator center in Vancouver, for example, has already equipped an old cyclotron with an ISOL system, reports John D'Auria of Simon Fraser University in Vancouver. “In fact, it is probably the most intense ISOL system in the world,” he says. The second-stage accelerator will soon be added. A small second accelerator is currently being added to the original ISOLDE at CERN, and Surrey's Gelletly is seeking backers for an RNB facility at the Rutherford Appleton Laboratory near Oxford, which will share a proton beam from the ISIS spallation source as its first stage. At the GANIL heavy-ion research lab in Caen, France, the SPIRAL dual cyclotron ISOL system has already been built and is awaiting approval to do physics.

    Next up will be purpose-built facilities. This month, a U.S. task force led by Hermann Grunder, director of the Thomas Jefferson National Accelerator Facility in Newport News, Virginia, will recommend building a half-billion-dollar, “second-generation” national RNB facility. And across the Atlantic, Björn Jonson of the University of Gothenburg in Sweden heads a similar group looking at pan-European options for a major RNB facility. Such a facility is the “obvious next step,” says Jonson. “I think that it's a very high priority.”

    With this smorgasbord of new machines, nuclear physicists hope soon to move beyond their staple diet of stable nuclei and to start cooking up a storm of spicy new isotopes that will help them extend their theoretical picture of the nucleus. A key plank of nuclear theory is the shell model, according to which the combined quantum effect of the neutrons and protons—collectively known as nucleons—creates a set of energy levels in the nucleus, not unlike the energy levels that govern the movement of electrons orbiting it. A single shell comprises one or more levels of similar energy. This works well as far as it goes, but as soon as a nucleus gets packed with an excess of either neutrons or protons, the tidy shell picture starts to look too simple.

    Because nuclear matter keeps a pretty constant density, simple geometry predicts that extra nucleons simply enlarge the nuclear radius by a one-third power law—just as adding a given volume of water expands a water balloon. Although stable nuclei obey this rule, early glimpses of unstable nuclei, obtained at fragmentation facilities, indicate that they deviate wildly. “One of the big surprises has been that for some nuclei at the limits of stability, their sizes do not at all follow our expectations,” says Nigel Orr of France's nuclear and particle physics lab IN2P3. The isotope lithium-11, for example, is much larger than expected, compared with neighboring isotopes, says Orr.
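
    The expectation being violated is the textbook constant-density scaling (a standard formula, not one quoted in the article), with r_0 an empirical constant of roughly 1.2 femtometers:

    ```latex
    % Constant nuclear density: volume scales with nucleon number A,
    % so the radius scales with the cube root of A.
    R \approx r_0\,A^{1/3}, \qquad r_0 \approx 1.2\ \mathrm{fm}
    ```

    On this rule, stepping from lithium-9 to lithium-11 should stretch the radius by only (11/9)^{1/3} ≈ 1.07, about 7 percent, far less than the halo-driven bloat actually measured.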

    By monitoring the spread in the momentum of the fragments left when the nucleus breaks up following a collision, physicists have learned that this bloat in lithium-11 is due to the outermost or “valence” neutrons, which form a kind of neutron “halo” outside a dense nuclear core. Nuclear physicists have also found neutron halos in helium-6, beryllium-11, beryllium-14, and boron-17. Most recently, researchers from the four main fragmentation facilities have pooled results to demonstrate that carbon-19 also has a neutron halo, says Orr.
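    The logic of the measurement is essentially Heisenberg's uncertainty principle—our gloss, not a detail reported from the experiments: a neutron wavefunction spread over a large distance $\Delta x$ implies a correspondingly narrow momentum spread,

    $$\Delta p \sim \frac{\hbar}{\Delta x},$$

    so an unusually narrow momentum distribution among the breakup fragments signals valence neutrons occupying an unusually large volume.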

    One step further on from neutron halos are neutron skins, the thin, neutron-rich surface layers surrounding a tightly bound nuclear core. Physicists believe that helium-8 probably has a skin of four neutrons and, according to Orr, “there is very good experimental evidence now that nuclei such as the very neutron-rich sodium isotopes have skins.” A skin of nuclear matter would drastically alter the reactions between nuclei, says Gelletly, in ways that researchers hope to tease out at RNB machines.

    “For the moment, we are trying to understand the interplay between structure and reaction mechanisms,” says Gregers Hansen of Michigan State University in East Lansing. “With this in place, we can study the structure of as yet completely unknown nuclei far away from stability.” In the process, researchers expect to find nuclear structures far more weird than the halos and skins they are now getting glimpses of. “To me, halos and skins are like pandas in the zoo. They are interesting animals to see, but the zoo is packed with many other species which are very exciting,” says Witek Nazarewicz of the University of Tennessee, Knoxville. These exotica should have fundamental implications for physicists' understanding of the nucleus. Nazarewicz is at pains to point out that skins and halos are just the beginning, a first glimpse of a brave new neutron-rich world—“small consequences of this very big picture,” he says.

    Other items on physicists' RNB wish list include assessing the extent of the nuclear family. “What are the limits of nuclear existence?” asks Nazarewicz. On the chart of nuclei, plotted by neutron number versus proton number, stable nuclei occupy a broad stripe down the middle, with proton-rich nuclei above that stripe and neutron-rich nuclei below it. The extremes of the nuclear family are the so-called “drip lines” at the top and bottom of this broad swath of isotopes, marking out the limits beyond which protons or neutrons added to a nucleus simply “drip out.” But these drip lines are far from well charted.

    The neutron drip line is particularly fuzzy because nobody has had the means of mapping it out beyond the light nuclei, and theorists cannot agree on where it should go from there. For low-mass isotopes, the number of extra protons or neutrons needed to step out from stability to the drip lines is small—and within reach, using existing fragmentation machines. “With today's capabilities, the drip line has been explored up to isotopes of oxygen,” Hansen says.

    Beyond that, theorists predict wildly different drip lines, explains Philip Walker of the University of Surrey. “The calculations are rather delicate, and the neutron drip is not well established at all, even theoretically,” he says. There may be thousands more nuclei on the high-neutron side of the chart waiting to be discovered, says Walker. Stepping out toward the neutron drip line for the heavier elements means colliding species that between them have enough neutrons to make one of these exotics, something that is only possible using RNBs. “There is no way to approach the very neutron-rich nuclei using stable beams and stable targets,” says Nazarewicz.

    Although the neutron drip line is exotic territory, it has direct implications for the genesis of many stable elements. The lightest elements—hydrogen, helium, and lithium—date from the big bang. But others were made in the nuclear furnaces of stars, then flung into interstellar space during supernova explosions. Astrophysicists are pretty confident that they understand some of the element-forging processes, such as the slow neutron capture cycle, or s-process, in red giant stars, in which nuclei can acquire neutrons at a rate of perhaps one a year. The nuclei later beta-decay, shedding an electron and changing a neutron into a proton, on the road toward creating elements heavier than iron.

    Astrophysicists are less confident about rapid neutron capture, the r-process, which is thought to take place in supernovae. In the r-process, a nucleus can take up one or more neutrons every second and can rapidly march all the way out toward the neutron drip line before it beta-decays and moves one rung up the ladder of elements. Only by forging sufficiently neutron-rich—and therefore highly unstable—species will astrophysicists be able to explore the r-process sequence, explains Jerry Nolen of Argonne National Laboratory near Chicago. “The beams that are needed … are just not accessible with any present-day facility,” he says.

    Mapping the reactions that take place close to the drip line will help astrophysicists refine their picture of element formation in supernovae, thought to be the ultimate source of most of the heavier elements all the way out to the end of the periodic table. Understanding the complex reactions in supernovae and other cosmic processes requires knowing at least something about the thousands of contributing nuclear processes. “In the laboratory, you make nuclear measurements of how fast things fuse together … and that's basically input information into a theoretical model of how these systems might explode,” says Smith. “And then you compare the output of that model with [astronomical] observations and try to get the two to match up.”

    As well as exploring the drip lines, researchers also want to venture beyond the very topmost point of the table, the realm of so-called superheavy elements, which do not exist in nature and can only be produced in the laboratory. A number of labs across the globe are already actively trying to create new superheavy elements, and so far this year they have claimed discovery of three previously unknown elements: 114 (Science, 22 January, p. 474), 116, and 118 (Science, 11 June, p. 1751). This is not just nuclear stamp collecting, however. The teams are trying to confirm an important prediction of the shell model.

    Physicists already know that a nucleus gains stability if it has the right number of protons or neutrons to fill a shell completely. Even being close to one of these “magic numbers” confers some extra stability. Oxygen, calcium, nickel, tin, and lead all have magic numbers of protons, so these elements tend to have larger numbers of stable isotopes than do nearby elements. Some shell theorists predict a magic number at 114. By synthesizing element 114 and the nuclei around it, physicists hope to find out if there is indeed an “island of stability” in this region of the table of nuclei.

    And radioactive beams will be critical to exploring this island. That's because the most stable superheavy nuclei tend to be more neutron-rich than lighter elements are. Fusing two lighter stable nuclei—now the standard way to make superheavy nuclei—yields a neutron-poor, and hence unstable, superheavy nucleus. “The superheavies they have discovered are at the bottom edge of the predicted region of stability, where the half-lives are still extremely short,” says Nolen. “If you can really get up into the center of the region of stability that's predicted, the half-lives may become days or years.” Existing facilities can't get off the beach and into the interior of the island of stability, explains Nolen. Creating beams of neutron-rich nuclei at an RNB facility is the only way forward.

    Nucleus as superbattery

    Exploring the island of stability promises the thrill of fundamental discovery, but other goals of RNB research could have practical value: energy-dense nuclear “batteries,” for example. Pump energy into an atom to excite one of its electrons and the atom, now unstable, will dump that excess energy as fast as possible and drop to a lower energy state. A nucleus can also absorb energy—much larger amounts, because nuclear forces are vast compared to those binding electrons to atoms—which can go into winding up the spin of the nucleus. This energy and any extra spin are normally lost promptly when the nucleus emits a photon. But the nucleus of one natural isotope, tantalum-180, was left engorged with so much extra spin when it was forged and energized billions of years ago in a supernova explosion that it cannot shed the energy via a single photon. As a result, tantalum-180 is caught in a near-eternal excited state, with a lifetime so long it has never been measured. It is nature's only example of a spin trap, says Surrey's Walker.

    But researchers can make other, less permanent spin traps in the laboratory. One is tantalum-180's neighbor, hafnium-178. It has a spin trap state with a half-life of 31 years that can deliver a giant 2.4 megaelectron volts when it decays, in the form of a jolt of gamma rays, with the added bonus that the ground state is stable (Science, 5 February, p. 769). “That's the sort of state I like,” says Walker. In principle, such spin traps offer a kind of stored nuclear energy with no radioactive waste. “If you could make a kind of superbattery that you could take into space with you and power your space station for 5 years, all in a kilogram box or something, it would be pretty useful,” notes Walker.
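    Walker's figure is easy to sanity-check. The sketch below—illustrative arithmetic, not a calculation from the article—idealizes a pure kilogram of the hafnium-178 isomer releasing 2.4 MeV per nucleus:

```python
# Back-of-the-envelope energy density of a hafnium-178 spin-trap
# "superbattery." Idealized: assumes a pure kilogram of the isomer
# and complete release of 2.4 MeV per nucleus.

AVOGADRO = 6.022e23        # nuclei per mole
EV_TO_J = 1.602e-19        # joules per electron volt
MOLAR_MASS = 178.0         # grams per mole for hafnium-178

nuclei_per_kg = AVOGADRO * 1000.0 / MOLAR_MASS
energy_per_nucleus = 2.4e6 * EV_TO_J      # 2.4 MeV in joules
energy_per_kg = nuclei_per_kg * energy_per_nucleus

print(f"{energy_per_kg:.2e} J/kg")        # ~1.3e12 J/kg
```

    That works out to roughly 1.3 terajoules per kilogram—some 300,000 times the energy density of TNT—and a kilogram trickling that out over 5 years would average about 8 kilowatts, consistent with Walker's space-station scenario.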

    Almost by definition, however, trapped energy is hard to release. That's where RNBs come in, with their ability to make new spin traps. “The ones we can make at the moment are not the best ones that are predicted theoretically,” says Walker. Recent studies have hinted that it may be possible to create nuclear spin traps that could be triggered to offload their excess energy using a laser beam, he adds.

    Such nuclear cookery—and much more—will be made possible with the extra ingredients provided by RNB machines of the future. And those same machines will show nuclear structure researchers just how the nucleus, that cauldron of nucleons, seethes and stews and, in some cases, boils over.

  13. BIOMEDICAL RESEARCH

    Ethical Loophole Closing Up for Stem Cell Researchers

    1. Sabine Steghaus-Kovac*
    1. Sabine Steghaus-Kovac writes from Frankfurt.

    Embryonic germ cells, derived from fetuses, are less ethically contentious than their stem cell cousins. But they may not hold the same promise

    Munich—In the rapidly growing field of stem cell research, the demands of science and those of medical ethics are colliding head on. Ever since U.S. researchers revealed last year that they had created “immortal” lines of human embryonic stem (ES) cells—a type of cell extracted from an embryo that can be tweaked to grow into any form of human tissue (Science, 6 November 1998, p. 1014)—teams around the world have been eager to use ES cells to grow tissues for transplant. But creating ES cell lines requires researchers to destroy an embryo, so research is either heavily restricted or banned altogether in many countries. One hope was that lines of embryonic germ (EG) cells, which are taken from aborted fetal tissue, could be used instead. But results presented at a workshop on stem cell and nuclear transfer research here last month have dampened those hopes.

    EG cells are derived from primordial germ cells, which later in development give rise to eggs or sperm. Like ES cells, they regenerate seemingly forever, and researchers can coax them to differentiate into any type of tissue. Because of this apparent similarity between EG and ES cells, the DFG, the main research funding agency in Germany, where the production of human ES cells is banned, has advised scientists to use EG cells for their research.

    But work presented by Azim Surani of the Wellcome/CRC Institute of Cancer and Developmental Biology in Cambridge, U.K., casts strong doubt on the assumption that EG cells can simply be substituted for ES cells. It shows that when mouse EG cells are implanted into early mouse embryos, the tissues containing the cells develop abnormally. This happens because the genes in the EG cells lack certain modifications needed for their normal activity during development. For many at the meeting, Surani's data cast doubt on the suitability of EG cells as a source of transplant tissues. “This report has discouraged German researchers from staking everything on this one chance,” said Anna Wobus of the Institute for Plant Genetics in Gatersleben.

    As part of his ongoing studies of germ cells, Surani had decided to test whether the development potential of EG cells is equivalent to that of ES cells. As a pioneer of research into a phenomenon called imprinting, he had reason to be concerned that it might not be. During the formation of the sperm and egg, some genes undergo a type of biochemical modification known as methylation that selectively inactivates the paternal or maternal copies of a gene, so that both are not active at once in the developing embryo and adult. The gene imprints imposed by the male and female are different, and both types must be present when the egg and sperm come together if normal development is to occur. But before that imprinting can occur, the original imprints inherited by an embryo have to be erased—a change that happens in the primordial germ cells. So Surani reasoned that if an EG cell line is derived from germ cells with their imprints absent, the cells may not develop normally. And that's what he and his colleagues found.

    When ES cells are injected into early mouse embryos, the tissues appear to form normally. But when the researchers injected EG cells into the preimplantation embryos of naturally mated mice, the EG cells that became incorporated into the tissues of the developing chimeras caused them to grow bigger and heavier than controls, and the embryos also suffered from skeletal abnormalities.

    Hints that these problems are due to lack of imprinting in the EG cells came when Surani's team transplanted EG cell nuclei into egg cells that had their own nuclei removed. The resulting embryos were small and had abnormal placentas. When the researchers tested for expression of particular genes that should have been imprinted, they found that either both parental copies were completely repressed or both were active in the embryos, indicating that lack of imprints was at least part of the problem. In another experiment, Surani's team fused white blood cells and EG cells and found that several imprints were erased from the blood cell nucleus—implying that EG cells can still erase imprints even in mature cells.

    The question now is whether human EG cells will suffer from the same problems as the mouse cells. If so, it might be possible to avoid the problems by harvesting the cells while they retain their imprints. Despite that possibility, Surani's results came as a blow to German stem cell researchers for whom working on human EG cells is the only legally permitted alternative. DFG Vice President Rüdiger Wolfrum of the Max Planck Institute for International Law in Heidelberg says: “This may mean that certain research projects … have to be conducted abroad.”

  14. RESEARCH FUNDING

    Something Rotten in the State of Danish Research?

    1. Lone Frank*
    1. Lone Frank writes from Copenhagen, Denmark.

    Researchers blame political indifference for a steady decline in the fortunes of Danish science over the past few years

    Copenhagen—A drumbeat of public criticism has been building up here on a topic that's not often in the news: science funding. For years, the universities have watched their budgets dwindle, and academic and industry leaders have finally begun to speak out, filling the opinion pages of national newspapers. There is talk of the “Danish sickness”: a gradual whittling away of the country's research capacity by governments that show little interest in science. “At a time when other countries are stepping up their efforts in research and education, Denmark risks becoming a second-rate nation,” says University of Southern Denmark president Henrik Tvarnø. “Modern society is knowledge based, and Danish politicians must wake up to meet this challenge.” And Børge Diderichsen, director of research for the pharmaceutical company Novo Nordisk, warns: “In some scientific disciplines, we are already experiencing difficulties in recruiting qualified local university graduates, and this could become a general trend.”

    Fueling this outburst is a report published last month by the presidents of the country's 10 universities, which are all publicly funded. They conclude that although their nominal budgets have risen along with increasing student numbers, those budgets have been thoroughly eroded by rising costs. Between 1994 and 1998, the government provided the universities with a much-needed cash injection of $35 million for teaching and research. But, the presidents contend, a host of new taxes and related expenses imposed on the institutions during this period more than consumed the extra money. Furthermore, as part of general cuts in public spending, state university funding, apart from grants, dropped by 2% this year, and additional decreases have just been proposed in the budget for next year.

    Science teaching is already suffering. A recent 15% cut in the staff of Copenhagen University's science faculty (Science, 2 April, p. 25) sent the number of undergraduate courses into a nosedive. The biology department has eliminated entire scientific fields, such as parasitology, and cut down on expensive activities, such as lab courses. The total number of students has remained unchanged, however, so the remaining courses are forced to admit many more students. The department of physics has maintained all its courses, but faculty members are devoting less time to research.

    Academic and industry leaders lay part of the blame for this sorry state of affairs on the low political esteem in which science is apparently held. They point, for example, to upheavals at the Ministry for Research and Telecommunication, set up in 1993—supposedly to give research and development a higher profile. It has since been headed by six different ministers. “The ministry has been used as a political steppingstone, or for demotion purposes,” says Søren Isaksen, who chairs the National Research Council, a ministerial advisory body. “The continuity needed to carry out reforms and deal with difficult issues demanding a lengthy political process [has] never existed.”

    Isaksen hopes improvements will come with the appointment, in June, of Social Democrat Birte Weiss. She told Science that she considers her post a long-term engagement and lists among her most urgent problems “the fundamental need to make a university career attractive for the best and brightest.”

    Encouraging words, but researchers are looking for some sweeping changes. In a recent open letter to Weiss, for example, president of the Danish Technical University Hans Peter Jensen expressed concern about increasing political control of research funding, now running at about $1.5 billion a year. Figures published by the six research councils indicate that for years, peer-reviewed funding for basic research has been whittled away at the expense of research programs with aims that are politically defined. Such concerns were heightened when it was revealed that next year's national budget includes a proposed 11% cut in total funding for the research councils.

    Engineer Jørgen Staunstrup of the Technical Research Council complains that “the many programs serving narrow political goals are favoring second-rate science, since funds are often allocated to projects which fit the programs and not to the best researchers.” Endocrinologist Henning Beck Nielsen agrees: “In biomedicine, scientists are being forced away from doing basic work because more and more funding is directed at research with applied aspects.” As chair of the Medical Research Council, Beck Nielsen is now preparing an investigation of this issue.

    One of Weiss's first tasks is to begin reforming the university employment structure, which combines relatively few permanent positions with hardly any entry-level, tenure-track jobs. This provides little opportunity for young scientists to get onto the career ladder. Many Danish graduates do postdoctoral work at top universities in the United States and elsewhere but find few positions available back home when they finish. “There is a desperate need to create positions which allow the most talented young expatriate scientists to return and apply their skills in Danish academia,” says molecular pharmacologist Thue Schwartz of the University of Copenhagen. Weiss says she will seek an appropriation earmarked for future recruitment that “should include making more professorships available to younger scientists.”

    Unless wide-ranging changes are made in the support for science, researchers argue that Denmark risks missing a major opportunity to compete internationally in biomedical science and biotechnology. Currently, universities in Sweden and Denmark, along with various organizations, companies, and the two governments, are working to integrate the university powerhouses and biotech industry of southern Sweden with their counterparts across the Oresund Strait in the Copenhagen area to form a zone dubbed Medicon Valley. Diderichsen says that although the Swedish authorities have proved to be dedicated and professional in their efforts, “the Danish government fails to understand that it must provide substantial economic support to fully realize the region's potential.” Despite great enthusiasm among scientists and companies, he worries that Medicon Valley could lose out to biotech competitors such as Munich and London.

  15. DECLASSIFICATION

    Panel of Scientists Helps Open Lid on Secret Images

    1. Jeffrey Mervis

    Antarctic pictures are the latest in a series of releases shepherded by the low-profile but high-impact Medea Committee

    Scott Borg's eyes flit between a photograph and a geological map of Antarctica's Dry Valleys unfolded on his desk. “Look, that island is a peninsula in the photo, and there's a finger that's no longer visible,” he exclaims, comparing two images of Lake Bonney. He looks at the photo again, jabbing at a land feature, and then blurts out, “Boy, I'd really like to measure how wide that is the next time I'm down there.”

    As program manager for Antarctic geology and geophysics at the National Science Foundation (NSF), Borg helps scientists venture into one of Earth's most inaccessible and stark terrains, the McMurdo Dry Valleys region. The largest relatively ice-free area in Antarctica, this cold desert ecosystem houses one of 21 sites in NSF's Long-Term Ecological Research (LTER) network and is extremely sensitive to global climate change. Last month, those scientists also benefited from the world's changing geopolitical climate.

    On 15 September, during a visit to NSF's Antarctic staging offices in Christchurch, New Zealand, President Clinton announced the release of a clutch of previously classified photographs that will help researchers establish baseline data on the environment there (www.nsf.gov/od/opp/antarctic/imageset/satellite/start.htm). The seven pictures, taken in 1975 and 1980, offer sufficiently good resolution and digital elevation data, for example, to track the apparent rise in the levels of glacier-fed lakes in the Dry Valleys region. They are the latest in a series of declassified images from billions of dollars' worth of intelligence assets—satellites, planes, ships, and other sources (Science, 3 March 1995, p. 1260). The effort has been championed by Vice President Al Gore, who as a U.S. senator set the bureaucratic wheels in motion. But an unsung panel of scientists called the Medea Committee has done most of the heavy lifting.

    The group meets regularly with the federal intelligence community to discuss how such disclosures can make important contributions to knowledge without jeopardizing the nation's security. “All of their instincts are to be secret, and all of our instincts are to be open,” says Harvard atmospheric scientist Michael McElroy, who chairs the committee. “There has to be someone at the table to persuade them that [disclosure] is worth considering.” The scientists' security clearances let them “look behind the window,” adds remote-sensing specialist Linda Zall, technical director of the Central Intelligence Agency's (CIA's) 3-year-old environmental center. “It's the only environmental science group that has access to both worlds.”

    Medea is modeled after the Jasons, a group of scientists who for decades have advised the government on issues relating to nuclear weapons technology. Its members receive security clearances that allow them to view the fruits of intelligence gathering from the Cold War era. Gore got the ball rolling in 1990 by asking if the CIA had archival data that might shed light on a range of current environmental issues, from biodiversity to natural disasters. In 1992, some 70 scientists attended the first meeting of what was then called the Environmental Task Force, discussing what types of data might be most useful. That led to a report outlining more than a dozen possible experiments, which members later shopped around to various federal research agencies.

    Committee member William Schlesinger, a soil chemist at Duke University, studied the changing vegetation in Sudan, using satellite and aircraft pictures going back to 1940. Concluding that aerial photos could track changes in the distribution of large plants, he then requested historical pictures of an LTER site, the Jornada Experimental Range in southern New Mexico, for which he is the principal investigator (PI). The idea was backed by NSF director and Medea member Rita Colwell and later approved by the National Imagery and Mapping Agency. This fall, the government will release 37 pictures going back to the 1960s that Schlesinger hopes will help his team monitor the encroachment of mesquite into a grasslands region at Jornada, on the northern tip of the Chihuahuan desert, as a marker for a changing climate. “We're pretty excited,” he says. “This will allow us to document the rate and pattern of change for a particular parcel, including the soil chemistry, going back before we started working there in 1981.”

    McElroy says more than a year of “intense negotiations” preceded the release of images from the Antarctic and New Mexico LTER sites. Medea has also been successful in obtaining more recent records. Last month, the government declassified 57 pictures of the site surrounding the 1997–98 SHEBA (Surface Heat Budget of the Arctic Ocean) experiment, in which a Canadian icebreaker was frozen in drifting pack ice for a year. “I put in the request to acquire imagery from SHEBA long before the experiment began,” explains atmospheric scientist Norbert Untersteiner, professor emeritus at the University of Washington, Seattle, and a co-PI of SHEBA. “This was a unique opportunity, since there would be people on the ground at the same time.”

    Medea scientists foresee the release of several more caches of photographs. McElroy says that a half-dozen subgroups are exploring new potential targets, but the budget to do additional studies is tight. Despite limited resources, most Medea members seem pleased with the results of their work to date. “It's easier for them [government officials] to just say no and have no regrets,” says Schlesinger. “But it's clear that Gore's original view that there are national assets valuable to science has proven to be absolutely correct.”

  16. COMPUTER SCIENCE

    'Self-Tuning' Software Adapts to Its Environment

    1. Barry Cipra

    Programmers are making computers do the hard work of adjusting software to make the best use of the hardware it runs on

    In computation, speed is of the essence, as children quickly learn when they start taking arithmetic tests. But advances in computer capabilities, which have tempted researchers to develop more complex programs and tackle larger problems, have made speed trickier than ever to achieve. Programs tailored to run fast on a particular machine may not work at all on others, while “portable” codes often run inefficiently on all platforms. Programmers expend a lot of effort fitting software to hardware—and the target keeps moving.

    Now, some researchers are figuring out how to make the computers do the work. New “self-tuning” software probes the capabilities of the hardware it's running on and generates code that takes advantage of what it finds. The software—mostly designed for scientific computation—creates subroutines for common computational tasks, such as multiplying matrices. The gain in efficiency from such subroutines “can be dramatic,” says Jack Dongarra, a computer scientist at the University of Tennessee, Knoxville, who is developing some of these programs. “I'm not talking about 10% or 20%, I'm talking about 300% improvement.” So far, the self-tuning bandwagon has picked up only the most hardcore hardware users. But its proponents say that self-tuning software will soon be the only way for programmers to keep pace with their machines' capabilities.

    These programs address variations in the way computers cope with a universal problem: the mismatch between today's speedy processors and the sluggish rates at which data can be gathered from memory. Computer architects deal with the discrepancy by creating hierarchies of memory—from the big, slow main memory, through several levels of smaller but faster “cache,” up to the “registers,” where the processors do their thing. If a computer were a supermarket, with high-volume, impulse items placed at the checkout lines (the registers), other items stocked on shelves, and yet more in the storeroom, software would be a shopping list. But unless the organization of the list matches that of the store, the unlucky consumer may dash back and forth between the same aisles and make repeated calls for help from the storeroom. The obvious solution is to reorder the list; the hard part is to find an optimal reordering.

    “In the old days, people would get a new machine, run some experiments by hand, and make some changes to the structure of the software to get it to run very efficiently,” explains Dongarra. “What's being done in this new paradigm is to put that [kind of] smarts into a program.” He and colleagues R. Clint Whaley and Antoine Petitet have developed such a program—dubbed ATLAS, for Automatically Tuned Linear Algebra Software—to create efficient algorithms for multiplying matrices, the rectangular arrays of numbers that underlie virtually all scientific computation. Similar software, called PHiPAC (Portable High-Performance ANSI C), is being developed by Jim Demmel and colleagues at the University of California, Berkeley.

    An efficient matrix-multiplication program breaks each matrix into smaller blocks, multiplies the pieces, and then recombines the results. The trick is to decide, for example, how big the blocks should be and in what order to multiply them—and that depends on the computer's own memory structure. ATLAS and PHiPAC explore the computer's cache structure and match it to a set of parameters that describe a range of possible algorithms. They then settle on an algorithm that minimizes the movement of data up and down the memory hierarchy.
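    To make the idea concrete, here is a minimal sketch—in Python for readability—of the two ingredients such packages combine: a cache-blocked matrix-multiply kernel with a tunable block size, and a timing loop that picks the fastest block size empirically. The matrix size, candidate block sizes, and function names are illustrative; ATLAS and PHiPAC generate low-level code and search a far richer parameter space.

```python
import time

def blocked_matmul(A, Bmat, C, n, B):
    """Accumulate C += A * Bmat for n x n matrices (lists of lists),
    working on B x B tiles so data loaded from slow memory is reused
    many times before being evicted from cache."""
    for ii in range(0, n, B):
        for kk in range(0, n, B):
            for jj in range(0, n, B):
                # Multiply one tile; all three tiles stay cache-resident.
                for i in range(ii, min(ii + B, n)):
                    for k in range(kk, min(kk + B, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + B, n)):
                            C[i][j] += a * Bmat[k][j]

def tune_block_size(n=128, candidates=(8, 16, 32, 64)):
    """Empirically pick the fastest block size: the measure-and-select
    step that self-tuning libraries automate."""
    A = [[1.0] * n for _ in range(n)]
    Bmat = [[1.0] * n for _ in range(n)]
    best_B, best_t = None, float("inf")
    for B in candidates:
        C = [[0.0] * n for _ in range(n)]
        start = time.perf_counter()
        blocked_matmul(A, Bmat, C, n, B)
        elapsed = time.perf_counter() - start
        if elapsed < best_t:
            best_B, best_t = B, elapsed
    return best_B

if __name__ == "__main__":
    print("Best block size on this machine:", tune_block_size())
```

    The triple blocking loop is the same restructuring a hand-tuner would perform; the autotuner's contribution is choosing the block size by measurement rather than guesswork.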

    The tuning process is slow: ATLAS takes a couple of hours to run, and PHiPAC typically takes several days. But once the self-tuning software finds a good algorithm, the result can be used for the lifetime of the machine in any program that requires matrix multiplication.

    Another new self-tuning program, whimsically called the Fastest Fourier Transform in the West—the FFTW, for short—is virtually cost-free. It takes mere seconds to adapt itself to a given machine, then cranks out computations at a rate that rivals implementations painstakingly written for a particular computer. The program is the brainchild of two Massachusetts Institute of Technology graduate students: computer scientist Matteo Frigo, who has since earned his doctorate, and physicist Steven Johnson.

    The FFTW creates subroutines for computing Fourier transforms, an operation that is crucial in all kinds of scientific computation. A Fourier transform finds the periodic patterns in seemingly jumbled data, such as the waxing and waning of ice ages as recorded by geological formations.

    The basic procedure for computing Fourier transforms quickly, known as the Fast Fourier Transform, or FFT, was introduced in the 1960s. It works by decomposing a single transform, which analyzes a large data set, into smaller ones, each of which can be broken down still further. Small transforms are much easier to compute than large ones are, and there's a net saving even after all the small computations are counted together.

    “The FFT algorithm actually gives you a lot of freedom in how you decompose the transform,” Johnson says. A transform of size 100, for example, can be broken into two transforms of size 50, or four of size 25, and so forth. The challenge is to find a decomposition that fits neatly into the computer's memory hierarchy. The tuning phase of the FFTW solves a test problem, trying all possible decompositions until it finds the fastest one for a transform of a specified size. That's a more limited problem than the matrix programs face, which is why the FFTW is so speedy. Frigo and Johnson say they are getting thousands of hits daily on their FFTW Web site (www.fftw.org), from clients who range from astrophysicists wanting to analyze pulsar data to “people who want to tune their guitar by computer,” Johnson says.
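    For reference, the textbook radix-2 decomposition looks like the sketch below (plain Python, our illustration; FFTW's tuner instead weighs many different factorizations and generated code variants at every level of the recursion). The payoff of decomposition is the drop from roughly N^2 operations for a direct transform to roughly N log2 N—for a million data points, about 10^12 versus 2 x 10^7.

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Decompose: one size-n transform becomes two size-n/2 transforms.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # Recombine with the "twiddle factor" e^(-2*pi*i*k/n).
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# A pure tone at frequency 2 shows up as spikes in bins 2 and 6.
signal = [cmath.cos(2 * cmath.pi * 2 * i / 8) for i in range(8)]
spectrum = fft(signal)
```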

    Computer vendors, who often provide libraries of code designed to run quickly on their own machines, are also keen on the new self-tuning software. “These are great tools,” says Bruce Greer, a senior software architect at Intel. He says Intel programmers “will use FFTW as a benchmark against which to judge the quality of our software,” before tweaking their code to squeeze out additional performance. “If we can't beat FFTW, then we probably haven't tried hard enough,” he says.

  17. PALEOBIOLOGY

    Permafrost Comes Alive for Siberian Researchers

    1. Richard Stone

    Long thought to be barren, the Arctic is yielding new information about life-forms on Earth—and the likelihood of finding similar organisms elsewhere

    Point Chukochii, Russia—Hydrologist Victor Sorokovikov yanks the threaded steel rods out of the hole in the tundra like a magician laboriously pulling one knotted handkerchief after another from a hat. The sun has broken through after several days of rain and snow, but even in August, drilling for microbes above the Arctic Circle is no picnic. With help from Dmitrii Fedorov-Davydov, a red-headed soil biologist with an encyclopedic knowledge of the tundra, Sorokovikov gives a final tug and the last hunk of metal emerges from a depth of 30 meters, its tip bearing a plug of what looks no more remarkable than frozen dirt.

    But looks can be deceptive. The plug harbors millions of microbes that, once liberated from their Siberian prison, resume normal activity. This chunk of wind-blown sediment, deposited along the shores of Lake Yakutskoe 40,000 years ago and frozen ever since, may hold the key to understanding whether life could persist on Mars or other planets coated with permafrost. “If microbes can survive here for hundreds of thousands of years or more, why couldn't they survive on Mars?” asks expedition leader David Gilichinsky of the Russian Academy of Sciences' Institute for Basic Biological Problems in Pushchino, Russia.

    Researchers gearing up for missions to Mars, Europa, and other cosmic bodies prepare by studying life in some of the harshest environments at home. Their venues range from ancient arctic permafrost and ice to antarctic lakes that haven't seen daylight for millions of years (see sidebar). But the work on the microbes themselves is proving far more interesting—and in some instances, disturbing—than scientists had ever imagined. “We're seeing things we've never seen before,” says diatom expert Richard Hoover, astrobiology group leader at NASA's Marshall Space Flight Center in Huntsville, Alabama.

    The first hints that permafrost may not be a sterile wasteland came in 1911, when Russian researchers reported that they had cultured bacteria from a mammoth unearthed in Siberia. Although many scientists suspect that modern bacteria had invaded the carcass, those findings—along with indications from later research that ancient permafrost soil may contain viable life—intrigued Gilichinsky, a geocryologist. In 1979 Gilichinsky, with microbiologists Dmitrii Zvaygintsev and Elena Vorobyova of Moscow State University, began hunting for microbes near where Russia's Kolyma River empties into the East Siberian Sea. His team quickly tapped a microscopic menagerie of bacteria, fungi, yeast, green algae, cyanobacteria, and mosses.

    For years, however, he refrained from publishing out of fear that his group was inadvertently contaminating the samples. One reason for concern was the dearth of spore-forming bacteria, which cocoon themselves from freezes or droughts—the kind of hardy critters you'd expect to survive in frozen soil. So each year the researchers improved their equipment—settling on a dry drill that uses no chemicals or fluids—and refined their techniques, saving only the innermost core sections that never touch a septic surface. Gilichinsky's team is so careful, says Hoover, that “it would be hard to argue that what they're finding is a result of contamination.”

    Gilichinsky's group started publishing its findings in the mid-1980s, including a report of viable microbes from 3-million-year-old permafrost. Surprisingly, the scientists have found few microbes adapted to life in the cold. Aside from tending to be heavily pigmented, the ones they've dug up appear to be mostly run-of-the-mill species that survive in thin films of water that stay liquid even a dozen degrees below zero. Other strains may live in arctic cryopegs—underground marine ponds, tens of meters in diameter and a few meters thick, sandwiched between ice-hardened permafrost. Clinging to these life rafts, the microbes may derive nourishment from the ever-so-slow leaching of minerals and gases from the sediment into the water. But the Russian scientists, working with scientists at NASA and elsewhere, didn't think there were enough nutrients in these ponds for the microbes to reproduce and, thus, maintain a viable colony.

    The biggest puzzle is how the microbes cope with DNA damage from background radiation. Radiation levels should have been high enough to kill off half the population in 200,000 years, prompting skepticism of Gilichinsky's initial reports of million-year-old microbes. But that calculation is based on the assumption that the microbes are in a resting state with no metabolic activity, and thus are not repairing their DNA. Gilichinsky believes that the microbes have sustained a modicum of unexplained metabolic activity—primarily to repair DNA and purge toxins—over the eons, although he admits he doesn't know how. Although the idea is far from proven, says Hoover, the permafrost findings show that “a lot of things scientists thought were constraints on microorganisms were simply wrong.”
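    The arithmetic behind that skepticism is straightforward. If a dormant population really did no repair, survival would fall off exponentially with a 200,000-year half-life—a simplified model, not the researchers' own calculation:

    $$\frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}} = \left(\frac{1}{2}\right)^{3{,}000{,}000/200{,}000} = 2^{-15} \approx 3 \times 10^{-5},$$

    leaving fewer than one viable cell in 30,000 after 3 million years. Hence the suspicion that the survivors must be doing at least some maintenance.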

    The durability of the microbes raises the specter that the permafrost—a meat locker for countless carcasses of extinct mammoths and saber-toothed cats, not to mention a tomb for Stalin-era political prisoners and smallpox victims—could also harbor dangerous pathogens. During a visit to Siberia in 1990, Imre Friedmann of Florida State University in Tallahassee and a colleague came across a corpse, clothed like a native Yakutian, protruding from thawing permafrost. A local archaeologist, looking at the clothing, estimated that the person had died 100 to 300 years ago. “I immediately thought of smallpox, of course,” says Friedmann. But “the body was buried fast and the matter forgotten.” So far, says Gilichinsky, none of the microbes dug up at his site has resembled pathogens. Indeed, most bacterial pathogens—with anthrax a notable exception—do not develop durable resting forms, says Valery Galchenko of the Institute of Microbiology in Moscow. However, Gilichinsky's team has not sampled for viruses.

    A more disturbing message has come from work on ice sheets. On one hand, scientists studying ancient microbes in the antarctic ice sheet have uncovered no lurking pathogens. Indeed, “we use masks to prevent us from contaminating the ice core,” says Sabit Abyzov of the Institute of Microbiology. But in the July issue of Polar Biology, Scott Rogers of the State University of New York, Syracuse, and his colleagues report having detected RNA from the tomato mosaic tobamovirus in 140,000-year-old ice in Greenland. The RNA, more fragile than DNA, almost surely came from viral particles, although it's unclear if they are still infectious, says Rogers, whose group has also isolated more than 200 kinds of fungi, some 140,000 years old, from the ice. The find is not surprising, says Friedmann: Lacking an active metabolism, viruses “should be tougher survivors than bacteria.” But it does raise the possibility that the ice sheets may serve as a viral reservoir that, with global warming, could continuously release ancient forms of microbes.

    Gilichinsky doubts that his team will unleash any scourges, ancient or otherwise, but he's well aware of how his work can be misperceived. “Some religious people have come to me and said, ‘If the organisms are dead, they are dead; we shouldn't bring them back to life’”—even though the microbes are, despite the odds, still alive. But Gilichinsky—who in 1991 kept working in Siberia despite news of the August putsch that toppled the Soviet government—has no plans to stop now. “It's important to at least know what is out there in nature,” he says.

  18. PALEOBIOLOGY

    Lake Vostok Probe Faces Delays

    1. Richard Stone

    Cambridge, U.K.—Scientists have discovered tantalizing evidence that microbes are living under nearly 4 kilometers of antarctic ice, leaving them more eager than ever to explore a vast lake beneath the ice sheet. But a host of issues—including how best to probe for life, who should pay for the big-science project, and whether scientists should cut their teeth on smaller subglacial lakes—may delay any plunge into one of the world's most isolated ecosystems.

    The scientific drumbeat to explore Lake Vostok began 3 years ago with a report that a body of water nearly the size of Lake Ontario lay beneath Russia's Vostok station in East Antarctica. Among those eager to tap into Vostok are space scientists, who see it as a good test-bed for a mission to search for life in an ice-covered ocean thought to exist on Europa, one of Jupiter's moons. At a meeting here earlier this week, 70 researchers from 14 countries argued that the exploration of a water body sealed from the rest of the world for millions of years should be a top priority for antarctic science. “If the point is only to test technology for a Europa mission, you might as well do that somewhere else” that's easier and cheaper, says glaciologist Dominique Raynaud, director of research for France's CNRS research agency. “But from what I have seen, I'm convinced the science is worth it.” Adds Chris Rapley, director of the British Antarctic Survey, “It's one of the most high-profile and interesting projects of the next decade.”

    It's also expensive. Hopes that a Vostok mission could be mounted in 2001 for under $10 million (Science, 30 January 1998, p. 658) have been dashed as scientists learn more about the challenges it poses. The latest estimates indicate that developing the technology to avoid contaminating the lake and providing logistics for a full-scale Vostok mission could run $20 million or more, putting it beyond the reach of a single country. In addition, the U.S. planes needed for an expedition to the interior are already booked for the next few seasons ferrying construction materials for the new South Pole station. Seismic mapping surveys to pick a prime drilling spot could get off the ground within the next couple of years, but it may be 2004 or later before instruments reach the lake. Officials of NASA and the National Science Foundation (NSF) plan to meet next month to hash out a possible U.S. role.

    Scientists hope that new data pointing to microbial life at Vostok will help them sell their governments on the importance of the mission. A joint French-Russian-U.S. program, which has spent a decade extracting ice cores at Vostok Station, stopped last year within about 120 meters of the lake after passing through 100 meters of refrozen lake water. Although the primary aim of the work has been to infer ancient climate patterns from the gases and particles trapped in the ice, scientists have also sampled the core for microbes. At the meeting, microbiologist John Priscu of Montana State University in Bozeman created a buzz with electron micrographs of what appear to be rod-shaped bacteria isolated from core samples of refrozen lake water. He's now analyzing DNA to try to classify the microbes.

    Working on another piece of the same core, David Karl of the University of Hawaii, Manoa, ran a battery of tests for signs of life, such as measuring levels of ATP, an energy molecule vital to all known organisms, and tracing the incorporation of radiolabeled acetate into biomolecules. The sluggish biochemistry that his team saw, Karl says, “is consistent with a population growing very slowly.” Although researchers were unable to detect any viruses, they confirmed the presence of bacteria, estimating that there are roughly 1000 cells per milliliter of meltwater. Sabit Abyzov of the Institute of Microbiology in Moscow and Richard Hoover of NASA's Marshall Space Flight Center in Huntsville, Alabama, who were not at the meeting, have also imaged what appear to be bacterial filaments and other microbes just above the lake.

    With their appetites whetted, scientists want to know just how these and other life-forms—if they are alive—can survive. But although further ice-core studies may give clues to which bacteria may colonize the ice-water interface, lake samples are needed to reveal what organisms, if any, live deeper in the water column or in the thick sediments coating the bottom. Despite the crushing pressures beneath the ice sheet, core studies suggest Vostok contains about as much dissolved organic carbon as do temperate lakes: “There's food down there,” Priscu says. “I'd be very shocked if there is not microbial life,” adds Jim Tiedje of Michigan State University in East Lansing.

    Those who feel Vostok may be too ambitious have proposed starting with one of Antarctica's several dozen other subglacial lakes, including one under the South Pole that is 8 kilometers wide and at least 25 meters deep. “We should take into account all the possibilities,” says Martin Siegert of the University of Bristol in the U.K., who is part of a team that has mapped many of the lakes but describes himself as “not anti-Vostok.” Erik Chiang of NSF, however, discounts any cost savings from such a mission, noting that the same technology would be needed to access either lake. If that's true, say most scientists, then Vostok's bigger potential scientific payoff should make it the preferred target.

  19. PLANETARY SYSTEMS

    From a Swirl of Dust, a Planet Is Born

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Hard-won observations are at last beginning to confirm long-standing theoretical ideas of how planets form—and they suggest that the universe is full of them

    No one has ever seen the birth of a star, let alone the formation of a planetary system. The clockwork of the universe ticks far too slowly for human beings to witness either event, and 4 centuries of telescopic observations amount to no more than a snapshot, a still frame from the slow movie of cosmic evolution. Moreover, that snapshot is too blurry to reveal the details of full-grown planets outside our own solar system.

    So, to explore how a planet is born, astronomers must make their own movie, by finding and comparing stars in a wide variety of early evolutionary stages. That's not an easy task, because nearby youthful stars are relatively scarce, notes astrophysicist Jane Greaves of the Joint Astronomy Centre of the University of Hawaii, Hilo. Nevertheless, by training new and more sensitive instruments—particularly infrared and millimeter-wave telescopes that can peer through cosmic dust—at newborn stars and the clouds that spawn them, researchers are beginning to build a coherent picture of the genesis of planets.

    Star by star, these observations are providing physical evidence to support an old theoretical idea: that planets coalesce out of the dust disks that surround many young stars. Researchers have discovered one star with both a disk and a planet, and other dust-enshrouded stars show features, such as gaps in their dust disk, that are “very suggestive, although not yet conclusive evidence for the existence of planets,” says Ray Jayawardhana of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts.

    Besides firming up the link between dust disks and planets, the findings are pointing to the crucial events along the way, together with a rough timetable for planet formation. A few million years after a star's birth, the tiny particles of dust encircling it rapidly coalesce into larger bodies and eventually into a handful of full-blown planets. After a few hundred million more years, the remaining debris crashes into the planets or is flung out of the system, ultimately leaving a relatively clean and dust-free planetary system like our own.

    The process appears to be routine, cosmically speaking. After viewing hundreds of young stars, astronomers have found that many if not most are surrounded by these dust disks. So researchers now tend to believe that planets are the normal consequence of the birth of most stars—which would mean that there are billions and billions of solar systems hidden in the heavens. “It's becoming increasingly clear that the formation of our solar system is just one case of a general process accompanying the formation of a star,” says Harm Habing of Leiden Observatory in the Netherlands.

    Youth

    No telescope is powerful enough to directly image any planets circling other stars, so final confirmation that our solar system is not alone was slow in coming. But in 1995, astronomers detected the first crop of extrasolar planets by measuring the tiny wobbles these planets induce in the motion of their parent stars. So far, that technique has revealed some 20 extrasolar planets. One of these orbits a star known as 55 Cancri—which infrared observations reveal is also surrounded by a dust disk (Science, 16 October 1998, p. 395).
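    The size of those wobbles follows from momentum conservation. As a textbook estimate—not a figure from the observations—the star's reflex velocity is roughly the planet's orbital velocity scaled down by the mass ratio:

    $$v_\star \approx \frac{m_p}{m_\star}\,v_p.$$

    A Jupiter-mass planet ($m_p/m_\star \approx 1/1000$) orbiting at about 13 kilometers per second therefore sways its sunlike star by only a dozen meters per second or so—a Doppler shift of roughly one part in 25 million, which is why the technique demands exquisitely stable spectrographs.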

    How were these planets born? Although most of the observational evidence goes back just 2 decades, the original idea comes from the Prussian philosopher Immanuel Kant, who nearly 250 years ago proposed his “nebular hypothesis” for the origin of our solar system. Kant's vision—of a central star and orbiting planets condensing out of a rotating, flattened nebula of gas and dust—is remarkably similar to modern theories about the birth of stars and planetary systems.

    Imagine a vast interstellar cloud of molecular hydrogen and helium, a mere 10 degrees above absolute zero, more dilute than the best laboratory vacuum, but almost completely opaque because of a smattering of dust particles. This is a run-of-the-mill stellar nursery. Slow turbulence and magnetic pressure in the cloud tend to resist the force of gravity, but every now and then, in some particularly dense part of the cloud, gravity wins and a collapsing core of gas and dust heats up, signaling the imminent birth of a new star.

    Theory has it that this slowly rotating blob of matter speeds up like an ice skater drawing in her arms and flattens like a ball of dough flung into the air at a pizza restaurant. No one really doubted that any newborn star would be surrounded by a rotating disk of gas and dust—probably the stuff planets are made of. Nevertheless, it came as a welcome confirmation in the mid-1980s when infrared observatories on the ground and in space detected a glimmer of excess heat near some newborn stars, evidence that a disk of warm dust was swirling around them. And a few years ago the Hubble Space Telescope captured actual images of protoplanetary disks around baby stars in the Orion Nebula, an active region of star formation about 1600 light-years away.

    According to theoretical calculations, these young, gas-rich disks don't last forever. In a few million years, most of the gas in the disk is thought either to end up in the central star or to flow back into the interstellar medium through huge jets along the star's rotational axis, a process observed in many young stars but not yet completely understood. Only a relatively small portion of the gas may eventually find its way into planets. In any case, by the time the star is 10 million years old, the calculations suggest that the original gas-rich disk has almost completely vanished, leaving a rarefied disk of dust particles, mainly silicates and ice crystals.

    By then, planet formation has probably already begun. Computer simulations suggest that within a few hundred thousand years, the dust particles have already accreted through molecular forces into pebble-sized bodies. It takes another few million years for these dusty or icy golf balls to accrete into kilometer-sized bodies called planetesimals, which later collide to form planets. Observations made in 1998 at infrared and millimeter wavelengths—which can discern small dust particles but not large planetesimals—support this picture by showing that many dust disks around older stars do indeed have a central cavity about the same size as our own solar system. “The most obvious explanation for these gaps,” says theorist Peter Goldreich of the California Institute of Technology in Pasadena, “is the existence of planets.”

    To learn more about how quickly the inner parts of the disk are swept clean, astronomers would love to have a young star-forming region at their doorstep, where they could get a close look at stars in the very throes of planet formation. The TW Hya Association (TWA for short), recognized 2 years ago by a team led by Joel Kastner of the Massachusetts Institute of Technology in Cambridge, seems to be just that (Science, 4 July 1997, p. 67). It's less than 200 light-years from Earth, and some of its stars, including TW Hya itself, have been dated at about 10 million years old—exactly the age when theorists expect planet formation to occur.

    In the past 2 years, Jayawardhana of Harvard and his colleagues have searched the young TWA stars using a midinfrared camera, which can spot relatively warm dust close to the central star, on the 4-meter Blanco telescope at the Cerro Tololo Inter-American Observatory in Chile and on the 10-meter Keck II telescope on Mauna Kea in Hawaii. They already knew that several stars were surrounded by dust at great distances, as evidenced by excess radiation at longer, colder wavelengths. Now, seeking to learn how fast the inner dust disks are scoured away, they have found that an inner disk survived only around TW Hya itself. The immediate surroundings of the others were already swept clean in the midinfrared images, implying that the innermost dust had accreted into larger particles that were invisible to the telescopes—golf balls, planetesimals, or even full-blown planets. “This implies that at least the inner parts of the disks evolve fairly rapidly,” says Jayawardhana, just as theory suggests.

    Strong additional evidence that the vanishing disk material is clumping together into solid objects comes from spectroscopic observations, made by the European Infrared Space Observatory (ISO) in 1996 and 1997, which give clues to the composition of disk material. In the outer reaches of the dust disk surrounding the 10-million-year-old star HD 100546, ISO found the spectral signature of crystalline silicates such as forsterite. Silicates are also found in relatively large objects in our solar system, such as comets, but not in interstellar dust. “Without doubt, what we're seeing here is an early stage in the formation history of a planetary system, when comets and planetesimals are still very numerous,” says Christoffel Waelkens of the University of Leuven in Belgium.

    Middle age

    As a star grows to be a few hundred million years old, the remaining small dust particles should blow away, driven by radiation pressure, or spiral into the star. Yet many stars of this age still have extended dust disks—implying that the disk is continually replenished, probably by debris from collisions of icy comets and rocky asteroids. That's additional convincing evidence that accretion is happening in other systems, says Jayawardhana.
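
    The numbers behind “blow away, driven by radiation pressure” come from a standard grain-dynamics estimate that compares the push of starlight to the pull of gravity. The Python sketch below is illustrative only, assuming round values for a sunlike star and a silicate grain density; it is not a calculation from the studies described here.

        import math

        L = 3.828e26     # stellar luminosity, W (sunlike star assumed)
        M = 1.989e30     # stellar mass, kg
        G = 6.674e-11    # gravitational constant, SI units
        C = 2.998e8      # speed of light, m/s
        RHO = 2500.0     # grain density, kg m^-3 (silicate, assumed)

        def beta(s):
            # Ratio of radiation-pressure force to stellar gravity for a
            # perfectly absorbing grain of radius s (meters); both forces
            # fall off as 1/r^2, so beta is independent of distance.
            return 3 * L / (16 * math.pi * G * M * C * RHO * s)

        # Grains released from circular orbits become unbound once beta > 0.5,
        # which sets the blowout radius.
        s_blow = 3 * L / (8 * math.pi * G * M * C * RHO)
        print(f"beta for a 1-micrometer grain: {beta(1e-6):.2f}")
        print(f"blowout radius: ~{s_blow * 1e6:.2f} micrometers")

    With these inputs, grains smaller than about half a micrometer are ejected; somewhat larger grains stay bound but slowly spiral starward as Poynting-Robertson drag saps their orbital energy, the other loss channel mentioned above.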

    In the long run, however, even these debris disks should disappear, as the planetesimals and comets themselves become rarer and their collision rate drops. This winnowing takes place as the gravity of any giant planets either ejects the kilometer-sized objects from the system altogether or slings them inward, where they collide with other planets. In our own solar system, the Oort cloud of cometary nuclei, which surrounds the sun at great distances, and the heavily impact-scarred surfaces of the moon, Mercury, and Mars all offer testimony to this cleanup phase. Both the formation of the comet cloud and the so-called “heavy bombardment” occurred before the sun was about half a billion years old. Ever since, the amount of dust in the solar system has been relatively small.

    Sure enough, the debris disks surrounding other stars seem to disappear rather quickly as soon as the star is about 400 million years old, according to Habing and Carsten Dominik, also of Leiden Observatory. In work appearing in this week's issue of Nature, the two researchers and a group of French and Spanish colleagues used ISO to observe a sample of 84 bright stars at the relatively long infrared wavelengths thought to indicate cold dust far from the star.

    Of the 15 stars younger than 400 million years, 60% show evidence of a disk, but fewer than 10% of older stars have disks. The similarity to our own solar system is striking. Stars showing a debris disk “are in the cleanup phase of their planetesimal disks. If planets have formed in these disks, they are undergoing a ‘heavy bombardment’ and are generating their own Oort cloud,” according to Habing and Dominik.

    All this adds up to a convincing scenario for how solar systems are born and evolve, but the details are poorly understood and many questions remain. For instance, it's unclear how much the environment of a star can disturb or inhibit the formation of planets. Many stars are part of binaries or multiple systems. In some wide binaries, each star sports its own disk; in some close binaries, the stars share one common disk. But if the distance between the stars is comparable to the size of our solar system, the gravitational interplay between them seems to disrupt any disk, according to astronomer Eric Jensen of Swarthmore College in Swarthmore, Pennsylvania.

    Planet formation also seems to be thwarted in large, star-forming regions such as the Orion Nebula, where the energetic ultraviolet radiation of massive young stars evaporates neighboring dust disks before planet formation can commence. Thus a large gathering of young stars may not be the best place for planets to form.

    The strange bright clumps observed at millimeter wavelengths in the debris disks of the stars Vega, Beta Pictoris, and Epsilon Eridani pose another mystery. “They must be some kind of dust cloud around some kind of companion,” says Greaves of the Joint Astronomy Centre in Hawaii; they might even mark the birthplaces of giant planets like Jupiter, but there's no obvious way to find out. “It's an intriguing possibility worth exploring,” says Jayawardhana.

    Despite such remaining mysteries, one thing seems clear. Planets are the norm, not the exception, around other stars. Says Waelkens, “As soon as [planetary formation] can happen, it will.”

  20. PLANETARY SYSTEMS

    Making New Worlds With a Throw of the Dice

    1. Richard A. Kerr

    A new round of computer simulations of the formation of Earth and the other rocky planets underscores the role of chance in shaping the character of a planet and its prospects for life

    The four terrestrial planets nestled close to the life-giving sun make an unlikely family. Little, moonlike Mercury is mostly iron, covered with a bit of rock, and has no atmosphere. Venus, Earth's twin in size and composition, is smothered by a most un-Earth-like inferno of an atmosphere and is drier than any desert. On Earth, which is nearly drowning in water, continents drift across a surface infected in every crack and crevice by life. And Mars—a tenth the mass of Earth—has an ancient, immobile face, now dry and lifeless but with hints of an earlier, more hospitable era. A single family? More like a bunch of unrelated adoptees from alien planetary systems.

    Actually, as computer models of the early solar system are showing, this motley crew is a case study in the effects of chaos. In the earliest days of the nascent solar system, when dozens of Mars-sized protoplanets roamed the inner solar system and met in catastrophic collisions, tiny variations in trajectory made all the difference. These variations, as subtle and unpredictable as the factors that control a roulette ball, ultimately determined the orbits of the four planets, how big they grew, and perhaps even what they were made of. “Chance is likely to have been a very big factor” in the genesis of the planets, says cosmochemist Christopher Chyba of the SETI Institute in Mountain View, California.

    After the four terrestrial planets formed, planetary evolution amplified the effects of chance even further. A planet's size and proximity to the sun, for example, may have determined its final allotment of water, which in turn affected everything from its geology to its fitness for life. “Everything seems to influence everything else,” as planetary physicist David Stevenson of the California Institute of Technology (Caltech) in Pasadena puts it, an interdependence that complicates efforts to sort out the ultimate causes of planetary diversity. “It's frustrating,” says Chyba.

    Until the last couple of years, planetary scientists couldn't calculate the particular fate of each of the scores of miniplanet-sized bodies that had accreted from dust and gas late in the formation of the solar system. Because of limitations in computer algorithms, errors in the calculation accumulated until, long before a final set of virtual planets formed, the model's planetary embryos flew out of the solar system or fell into the sun. Modelers had to settle for statistically averaged simulations.

    Now, several groups are able to run the needed 100-million-year simulations, thanks to an error-minimizing mathematical technique called “symplectic integration,” which was originally developed by planetary dynamicists Jack Wisdom of the Massachusetts Institute of Technology and Matthew Holman of the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts. Martin Duncan of Queen's University in Kingston, Canada, and his colleagues provided a specific symplectic algorithm that is designed to handle close encounters of the most massive bodies. By running Duncan's algorithm, planetary dynamicists Craig Agnor, Robin Canup, and Harold Levison of the Boulder, Colorado, office of the Southwest Research Institute (SWRI) have modeled how 22 planetary embryos, each about one-tenth the mass of Earth (or about the size of Mars), become a few terrestrial planets. As the group will soon report in Icarus, a typical model run produced a pair of planets, each at least half the mass of Earth, orbiting between the distance of Earth from the sun (1 astronomical unit or AU) and 0.5 AU, or inside of Venus's orbit. A third, less massive planet tended to form near 1.5 AU, the orbit of Mars.
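
    To see why a symplectic scheme makes such long runs feasible, consider a minimal sketch, in Python, of the simplest symplectic integrator, the kick-drift-kick leapfrog, following a single planet around the sun. It is a toy stand-in for, not a version of, the Wisdom-Holman and Duncan algorithms the modelers actually use; the units and step size are arbitrary choices.

        import math

        def accel(x, y):
            # Gravitational pull toward the origin (the sun), in units
            # where G times the solar mass is 1 and distance is in AU.
            r3 = (x * x + y * y) ** 1.5
            return -x / r3, -y / r3

        def leapfrog(x, y, vx, vy, dt, steps):
            # Kick-drift-kick leapfrog: advance velocity a half step,
            # position a full step, then velocity the remaining half step.
            ax, ay = accel(x, y)
            for _ in range(steps):
                vx += 0.5 * dt * ax
                vy += 0.5 * dt * ay
                x += dt * vx
                y += dt * vy
                ax, ay = accel(x, y)
                vx += 0.5 * dt * ax
                vy += 0.5 * dt * ay
            return x, y, vx, vy

        def energy(x, y, vx, vy):
            return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

        # A circular orbit at 1 AU has speed 1 in these units, and one
        # orbit takes 2*pi time units; integrate about 1000 orbits.
        x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
        e0 = energy(x, y, vx, vy)
        steps = int(1000 * 2 * math.pi / 0.01)
        x, y, vx, vy = leapfrog(x, y, vx, vy, dt=0.01, steps=steps)
        print("relative energy error:", abs(energy(x, y, vx, vy) - e0) / abs(e0))

    Because the scheme is symplectic, the energy error oscillates instead of accumulating step by step, which is what keeps virtual planets from spuriously falling into the sun or flying out of the system over millions of orbits.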

    Typical doesn't mean predictable, however. “Even very slight differences in initial conditions can produce different planets” in the simulations, Canup says. Depending on exactly where each planetary embryo started out, the orbital positions of new planets varied randomly from run to run, and the total number of planets ranged from one to five, meaning that a planet's final mass could vary greatly. “This is a very chaotic process,” notes Canup.
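
    The sensitivity Canup describes shows up even in a toy model. The hypothetical Python sketch below follows two test bodies in a sun-Jupiter system whose starting positions differ by a billionth of an AU; the mass ratio, orbits, and step size are assumptions chosen for illustration, and how quickly the twins diverge depends on where they start.

        import math

        MU = 9.5e-4          # Jupiter-to-sun mass ratio
        A_J = 5.2            # Jupiter's orbital radius, AU (circular, prescribed)
        OM_J = A_J ** -1.5   # Jupiter's angular speed in G * M_sun = 1 units

        def accel(x, y, t):
            # Pull of the sun at the origin plus Jupiter on its circular orbit.
            xj, yj = A_J * math.cos(OM_J * t), A_J * math.sin(OM_J * t)
            r3 = (x * x + y * y) ** 1.5
            dx, dy = x - xj, y - yj
            d3 = (dx * dx + dy * dy) ** 1.5
            return -x / r3 - MU * dx / d3, -y / r3 - MU * dy / d3

        def run(x, y, vx, vy, dt, steps):
            # Symplectic-Euler stepping: kick from the current force, then drift.
            for i in range(steps):
                ax, ay = accel(x, y, i * dt)
                vx += dt * ax
                vy += dt * ay
                x += dt * vx
                y += dt * vy
            return x, y

        # Twin bodies at 4.5 AU, inside the zone Jupiter stirs chaotically,
        # identical except for a 1e-9 AU offset in starting position.
        v = 4.5 ** -0.5
        xa, ya = run(4.5, 0.0, 0.0, v, 0.01, 600_000)
        xb, yb = run(4.5 + 1e-9, 0.0, 0.0, v, 0.01, 600_000)
        print("separation after ~80 Jupiter orbits:",
              math.hypot(xa - xb, ya - yb), "AU")

    In a chaotic zone the twins' separation grows roughly exponentially with each close pass by Jupiter, so an immeasurably small difference at the start can decide whether a body ends up in a planet, in the sun, or ejected entirely.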

    Dynamicist Jack Lissauer of NASA Ames Research Center in Mountain View, California, agrees. He and Eugenio Rivera of Ames have been modeling the fate of the last two large planetary embryos that—according to some radiometric dating—may have remained after Mercury, Venus, and Mars had taken shape but before Earth had had a chance to take its final form. “Just trivial” changes in the starting orbital positions of these two planetary embryos in their simulations made all the difference, says Lissauer. In one simulation, the two embryos collided energetically enough to form Earth and splash off material to form the moon—re-creating what researchers suspect actually happened—but in another, the two settled into stable orbits as two smaller terrestrial planets. In other runs, one of the planetary embryos hit the simulated Venus and gave it, instead of Earth, a large moon. (Such an impact, earlier in planetary formation, may have removed much of Mercury's rocky exterior, leaving the relatively huge iron core inferred from the Mariner 10 flybys of the 1970s.) “Little changes can make a difference,” says Lissauer.

    The chance variations that shaped planet formation could have had a cascade of later effects. Take water, a key ingredient in both geology and life. Exactly where all of Earth's water came from is still debated, but in one scenario, some planetary embryos were water-rich, endowing any growing planet they happened to hit with extra water. Because the most important impacts were the last few hits by the largest remaining bodies, says Chyba, a planet's water allotment might be determined by a few very large impacts, making the difference between a wet and a dry planet even more of a roll of the dice.

    In another scenario, distance from the sun—another planetary property heavily influenced by chaos—is the critical factor determining the difference between bone-dry Venus and watery Earth. “Earth had the benevolence of having a fairly good temperature, so water could be liquid,” explains cosmochemist Akiva Bar-Nun of Tel Aviv University in Israel. Because it formed at 0.7 AU, “Venus was too close to the sun.”

    Even if the two planets had started with similar amounts of water, the sun's heat would have vaporized all of Venus's, notes Bar-Nun. That would have set in motion a “runaway greenhouse” driven by water vapor and carbon dioxide that would have ultimately driven water from the planet. On Earth, liquid water puts a brake on greenhouse warming by helping to remove carbon dioxide from the atmosphere through the weathering of rock and the deposition of carbonate minerals in the ocean. But Venus's full allotment of carbon dioxide remained in its atmosphere, warming the planet to its present scorching temperatures and causing a slow loss of water through the outer atmosphere to space.

    On Mars, in contrast, size rather than distance from the sun may have been a key to the planet's dryness. “I'm starting to think that size plays a bigger role [than previously thought] in determining how much water a planet ends up with,” says planetary scientist David Grinspoon of SWRI in Boulder. During large impacts, water on the surface of a planet, in its atmosphere, and in the impactor itself is blown into space. A larger planet's stronger gravity can hold onto the water, but Mars's gravity may have been too weak, allowing it to escape. As a result, says Grinspoon, little Mars “probably never had that much water.”

    Hydrogen isotopes in what little water is left on Mars support this scenario, he says. In the past couple of years, astronomers have measured the deuterium-hydrogen ratio in the water vapor of three comets, yielding an isotopic fingerprint of the water that comets—relatively small, icy bodies from the outer solar system—delivered to the planets after their formation. Comparing the cometary fingerprint with that of terrestrial water suggests that comets, at least of the sort studied so far, did not deliver most of Earth's water; our oceans are mostly the legacy of water in the planetary embryos. But the deuterium-hydrogen ratio in water in martian meteorites suggests that comets could have supplied the water now on Mars. Grinspoon concludes that during large impacts, Mars could have lost the water inherited from its formation; what water it has could have come in a late influx of comets.
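
    The logic of that fingerprint comparison is a simple two-component mixing calculation, sketched below in Python. The deuterium-to-hydrogen ratios are approximate, and the value assumed for planetary-embryo water is a hypothetical stand-in, since its true value was, and remains, debated.

        # If the oceans are a blend of cometary and embryo water, the
        # cometary fraction f satisfies
        #   f * R_COMET + (1 - f) * R_EMBRYO = R_OCEAN,
        # where R is the deuterium-to-hydrogen ratio of each reservoir.
        R_OCEAN = 1.56e-4    # Earth seawater (approximate)
        R_COMET = 3.1e-4     # roughly the average of the three measured comets
        R_EMBRYO = 1.3e-4    # assumed for water carried by planetary embryos

        f = (R_OCEAN - R_EMBRYO) / (R_COMET - R_EMBRYO)
        print(f"cometary fraction of Earth's ocean water: ~{f:.0%}")

    With these illustrative numbers only about a seventh of the ocean would be cometary; because the measured cometary ratio is roughly twice the ocean's, no plausible embryo value can make comets the dominant source of Earth's water.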

    The consequences of these chance variations in the amount of water include the presence or absence of oceans, life, and plate tectonics—the surface motions driven by Earth's internal heat. “Although water is what we would call a minor constituent,” says Caltech's Stevenson, “it seems to play a major role in determining how a planet works.” On Earth, it appears to act as a kind of lubricant. Ocean plates sinking into the mantle carry traces of water that lower the melting temperature of mantle rock, helping to fire overlying volcanoes. The subducted water also seems to soften the layer of mantle rock on which the plates glide. “If there were no water,” says Stevenson, “you might not have plate tectonics” on Earth. Venus lacks plate tectonics, even though it is nearly Earth's twin in size and has similar reserves of internal heat. Its lack of water, he says, may be the crucial difference.

    Sulfur, another possible legacy of chance events in planetary assembly, might also have had an effect out of proportion to its planetary abundance, says Stevenson. A difference of just a few percent in sulfur abundance—which could result from the chance impact of a planetary embryo unusually rich or deficient in this trace element—might determine whether a planet maintains a long-lived magnetic field like Earth's or loses its initial field, as Mars and Venus may have done. The key would be in the planet's iron core, where the flow of heat churns the liquid iron to drive the dynamo that produces the magnetic field. Early on, heat lingering from the planet's formation is enough to drive the dynamo, Stevenson notes, but later another heat source comes into play: the heat that liquid iron gives up as it solidifies to form a solid inner core. Sulfur lowers the melting point of iron; a little too much could inhibit solidification, slowing the dynamo until the magnetic field dies out. An early martian magnetic field does seem to have died eons ago (Science, 30 April, p. 719).

    Chance may not be a very satisfying explanation for so many patterns of nature, but planetary scientists are philosophical about it. Random events played a “very big role” in planetary formation, concedes cosmochemist Tobias Owen of the University of Hawaii, Manoa, but that only makes it more important for planetary scientists to decipher the patterns that remain, looking for the ties that unite even the most dissimilar siblings into a single family.

  21. PLANETARY SYSTEMS

    Expanding the Habitable Zone

    1. Gretchen Vogel

    Once restricted to a relatively narrow slice of the solar system, the possible environments for life in space are multiplying, reaching Pluto and even into interstellar space

    On Star Trek, the best place for Captain Kirk and his crew to “seek out new life and new civilizations” was on what the show's writers called a class M planet: a world with a thick atmosphere of oxygen and nitrogen, orbiting close to a stable star, with fertile soil and a pleasant climate—a place just like Earth. But Kirk and company suffered from a failure of imagination. Whereas scientists, like the Star Trek crew, once defined the “habitable zone” around a sunlike star as a halo no larger than about 1.5 Earth orbits, they are now expanding the list of places in the universe that might nurture living things.

    New finds on Earth, such as colonies of bacteria deep underground, have suggested that organisms can thrive even if sealed off from the sun, by living on chemical rather than solar energy. And discoveries in space, such as a possible subsurface ocean on Jupiter's moon Europa, have opened up any number of odd corners of the universe as possible wellsprings of life. Pluto's moon Charon and even rogue planets in interstellar space are now all contenders. “Life might have a far wider canvas to work on than people had thought,” says planetary scientist David Black of the Lunar and Planetary Institute in Houston.

    Assuming that life elsewhere follows the rules we know on Earth, there are only a few requirements—in particular, water, which provides a solvent for life's essential chemical reactions. “The search for life has been the search for liquid water,” says cosmochemist Christopher Chyba of the SETI Institute in Mountain View, California. “That's the sine qua non.” Also high on the list, he notes, are an energy source and protection from radiation damage—and these too are turning out to be more common than previously thought.

    In spite of its focus on Earth-like planets, the crew of the Starship Enterprise managed to discover unusual organisms nearly every week. Microbiologists seeking new life-forms on Earth have been almost as successful, finding life just about everywhere they look. Take for example the diverse chemosynthetic organisms at hydrothermal vents, which thrive on Earth's internal heat and chemicals. The existence of these organisms, discovered in the 1970s, proved that life can thrive without sunlight, although such organisms still rely on carbon and oxygen produced by photosynthesis near the surface. Other startling discoveries include microbes buried deep under northeastern Virginia; these may have been living independently of the surface by existing on ancient organic matter in nearby rocks for millions of years, says Princeton University geomicrobiologist Tullis Onstott (Science, 2 May 1997, p. 703).

    Onstott and his colleagues have also found life more than 3.5 kilometers down in a South African gold mine, in rocks that may have been sealed off from the surface 2 billion years ago. In the lab, the scientists have cultured bacteria from the site that thrive on iron oxides and hydrogen. And in the Columbia River Basin, researchers found evidence for bacteria that seem to eke out an existence on hydrogen gas and carbon dioxide from dissolved rock thousands of meters belowground (Science, 20 October 1995, p. 377).

    The environment of all these microbes includes water, which gives amino acids and other organic molecules a liquid medium in which to mingle and react. Substances that stay liquid at chillier temperatures, such as ammonia or hydrocarbons, might be possible solvents. But reactions at low temperatures would likely be so slow as to rule out Earth-like life.
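
    A rough Arrhenius estimate shows why chemists are skeptical of those colder solvents. The Python sketch below assumes a typical activation energy of 50 kilojoules per mole, a hypothetical round number; the exact slowdown depends on the reaction, but the exponential form guarantees that it is steep.

        import math

        # Arrhenius scaling: rate ~ exp(-EA / (R * T)).
        R = 8.314        # gas constant, J mol^-1 K^-1
        EA = 50_000.0    # activation energy, J mol^-1 (assumed)

        def rate_ratio(t_cold, t_warm=298.0):
            # Factor by which a reaction slows going from t_warm to t_cold.
            return math.exp(-EA / R * (1.0 / t_cold - 1.0 / t_warm))

        # Ammonia is liquid near 220 K at ambient pressure.
        print(f"reactions at 220 K run ~{1 / rate_ratio(220.0):,.0f}x slower")

    With these assumptions, chemistry in liquid ammonia would crawl along more than a thousand times slower than in room-temperature water, and the penalty grows rapidly at the still lower temperatures of liquid hydrocarbons.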

    Here on Earth, the need for liquid water is not much of a limit. For example, cosmochemist Christopher McKay of NASA Ames Research Center in Mountain View, California, and his colleagues cultured microbes from 3.5-million-year-old permafrost in remote Siberia (see p. 36), where microscopic films of liquid water surround grains of soil even at −20°C. As they reported at a recent meeting, these organisms incorporate radioactively labeled carbon, a sign that they are indeed alive, if barely. Whereas many bacteria double their populations in a matter of hours, these cells divided only once a day at about 5°C, and only about twice a year at −20°C, the team found. Also encouraging is the preliminary evidence that life can survive on scarce water deep underground, says Onstott.

    This is good news to those who suspect underground microbial life exists on Mars, which lost its surface water—and therefore presumably any surface life—some 3.5 billion years ago. If life can flourish in isolated regions deep within Earth, then “the prospect of going to Mars, drilling a couple of kilometers down, and … coming back with organisms becomes exceedingly better,” Onstott says.

    In space, new discoveries keep widening the region researchers believe might harbor water—the so-called habitable zone around a star. Many variables, including the star's mass and age and the presence of a heat-trapping atmosphere on the planet, influence the size of that zone. But for a star like the sun, traditional estimates extend from an orbit as close as Venus's (about 0.7 times Earth's orbit) to one just inside Mars (about 1.4 times Earth's orbit).
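
    Those limits trace back to a back-of-the-envelope blackbody estimate: a planet's equilibrium temperature falls off as the square root of its distance from the star. The Python sketch below assumes an Earth-like albedo and ignores greenhouse warming, which is why its numbers run colder than real surface temperatures; it is an illustration, not any group's published model.

        import math

        SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
        L_SUN = 3.828e26    # solar luminosity, W
        AU = 1.496e11       # astronomical unit, m

        def t_equilibrium(d_au, albedo=0.3):
            # Temperature at which absorbed sunlight balances emitted heat
            # for a rapidly rotating planet; albedo 0.3 is an assumption.
            flux = L_SUN / (4 * math.pi * (d_au * AU) ** 2)
            return (flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

        for d in (0.7, 1.0, 1.4, 5.2, 39.5):   # Venus, Earth, ~Mars, Jupiter, Pluto
            print(f"{d:5.1f} AU -> {t_equilibrium(d):4.0f} K")

    At 1 AU the estimate gives about 255 kelvin; Earth's greenhouse supplies the rest. By Jupiter's distance the temperature has fallen near 110 kelvin, which is why liquid water much beyond Mars requires some energy source other than sunlight.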

    But if some other energy source can keep water liquid, life could flourish without a sun, and such habitable zone estimates would be way off. The amino acids and other organic molecules required for life's origins are plentiful throughout the solar system, as are chemical energy sources such as hydrogen and iron oxides. And as researchers probe the solar system, they are finding a variety of heat sources, from gravitational tugging to internal radioactivity. “The bottom line is that if life could emerge in a liquid water environment, then the main energy source need not be solar radiation,” says planetary scientist Douglas Lin of the University of California, Santa Cruz. “The physical range which would allow habitable environments to evolve can extend over the entire solar system.”

    For example, the pictures of Jupiter's moon Europa, sent back by the Galileo spacecraft, suggest a liquid ocean sloshing under an icy crust (Science, 8 August 1997, p. 764), more than tripling the textbook width of the habitable zone. That's physically possible, scientists say, because the gravitational strain from Jupiter and Europa's sister moons might knead Europa's insides enough to generate heat that would melt a subsurface ocean.

    Even farther afield, Lin and his colleagues suggest that Pluto's moon Charon may be hot enough to have liquid water—in an orbit 40 times more distant from the sun than Earth's is. Like an amusement park Tilt-A-Whirl, Charon's orbit is tipped 110 degrees with respect to Pluto's own path around the sun, and the team's preliminary calculations suggest that the conflicting pulls from Pluto and the sun may be enough to melt Charon's interior. “All of a sudden the so-called habitable zone has extended by an order of magnitude,” says Lin.

    Earth-sized rocky bodies that formed far from their parent sun might not need gravitational interactions to maintain a deep ocean, says Fred Adams of the University of Michigan, Ann Arbor. According to computer models that he and space scientist Greg Laughlin of NASA Ames presented at the American Astronomical Society meeting in June, rocky planets could readily form between Mars and Jupiter, at two to four times Earth's orbit. Planets formed so far from the sun would likely have deeper oceans than Earth's, as the sun's heat would evaporate less water during their formation. And internal heating alone could melt any ice deeper than 14 kilometers below the surface on an Earth-sized planet. Says SETI's Chyba, “If you make the ocean deep enough, you're not going to freeze it all.”
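
    A rough plausibility check on that melting depth, using assumed values rather than Adams and Laughlin's actual model: in steady state, heat conducted through an ice shell sets up a temperature gradient, so ice reaches its melting point at a depth equal to the conductivity times the temperature difference divided by the heat flux.

        # All parameter values below are assumptions chosen for illustration.
        K_ICE = 3.0        # W m^-1 K^-1, conductivity of cold water ice (rough)
        Q = 0.08           # W m^-2, Earth-like radiogenic heat flux
        T_SURFACE = 50.0   # K, surface temperature far from the sun
        T_MELT = 273.0     # K, melting point of water ice

        # Steady-state conduction: temperature climbs by Q / K_ICE per meter.
        z_melt = K_ICE * (T_MELT - T_SURFACE) / Q
        print(f"ice would melt below roughly {z_melt / 1000:.0f} km")

    These inputs put the melt depth near 8 kilometers, the same order as the 14-kilometer figure quoted above; the answer scales directly with the assumed conductivity and inversely with the heat flux.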

    Because deep-ocean planets are common in model solar systems, they “might be the most likely place for life to exist in the galaxy,” Adams says. The recent discovery of liquid water in a meteorite (Science, 27 August, p. 1377) implies that life could conceivably hop from one such water world to another. Water must have been present on the meteorite's parent body—perhaps a large, rocky protoplanet, smaller than a full-grown planet but warmed enough by internal heat to melt ice. If life got started on such a body, it might survive even after impacts broke up the earlier planet, in fragments that could seed larger, more hospitable worlds.

    At least one prominent theorist thinks extraterrestrial life on small bodies is a good bet. At a meeting this summer, theoretical physicist Freeman Dyson of the Institute for Advanced Study in Princeton, New Jersey, offered to bet $100 that the first extraterrestrial life would be found on an asteroid or even in a cloud of space dust, rather than on a planet. Smaller objects vastly outnumber the planets and collectively offer far more surface area, “simply so much more real estate” for life to colonize, both at and below the surface.

    But other researchers think Dyson is likely to lose his wager. Even if life could survive on an asteroid, it would probably find only a temporary home, most astrobiologists say. Asteroids are too small to support an atmosphere or produce much heat, so any Earth-like life-form would be frozen and dormant, with processes such as DNA repair shut down. Exposed to radiation from a distant sun and from internal radioactivity, such organisms would accumulate lethal doses after millions of years, says NASA's McKay.

    An even more far-out possibility is that life might not need a star at all, says planetary scientist David Stevenson of the California Institute of Technology. In July in Nature, he proposed that an Earth-sized planet might sometimes be ejected from an embryonic solar system before the star's increasing heat had a chance to drive off the planet's tenuous hydrogen atmosphere. As the wayward planet cooled, he calculates, the atmosphere would condense enough to cause a sort of greenhouse effect, trapping the heat produced by radioactive decay in the planet's interior. This could melt water at the surface and provide a promising place for simple life to develop while the planet drifted in interstellar space.

    The theory makes sense, says Michigan's Adams, who notes that a planet being ejected from a solar system is more probable than a person winning a lottery on Earth. Indeed, a lone planet might provide a safe long-term refuge for life. In 3.5 billion years, the expanding sun is expected to burn most of Earth's life away, but a rogue planet's internal heat could keep some life alive for at least 30 billion years. The catch, Stevenson acknowledges, is that it would be nearly impossible to detect such dark planets. “It's in the category of things you bring up to stimulate the thinking,” he says. “It's not in the same category as real discoveries.”

    And even if simple life-forms could thrive in such a desolate place, researchers say it would not be a promising habitat for advanced life—the “new civilizations” so often encountered on Star Trek. “Complex life like animals and plants needs a lot of energy,” McKay says, and the energy on such a starless planet would be one-thousandth of that available to us from the sun.

    All this, of course, is completely in the realm of theory. The next real discoveries—less sensational but more concrete—may come when the Mars Polar Lander touches down on 3 December. If all goes well, the lander will deploy two probes to search for water ice under the Red Planet's surface, allowing scientists to add some data to their speculations about liquid water there. And scientists envision the planned Terrestrial Planet Finder as a suite of telescopes working together to see much deeper into the sky—able to spot Earth-like planets around other stars and even collect evidence of their chemistry, including any oxygen or liquid water, based on the wavelengths of light they reflect (Science, 17 September, p. 1864). Finally, depending on the outcome of this fall's budget negotiations, NASA hopes to design a probe to visit Europa. “The million-dollar question is to go to Europa and see what's there” in its deep ocean, says Chyba. For Dyson, at least, it's a $100 question.
