News this Week

Science  02 Jan 2004:
Vol. 303, Issue 5654, pp. 22
  1. ITER

    No Meeting of the Minds on Fusion Megaproject

    Daniel Clery*
    *With reporting by Charles Seife in Washington, D.C., Dennis Normile in Tokyo, and Barbara Casassus in Paris.

    CAMBRIDGE, U.K.—It was supposed to be a banner day for the world's nuclear fusion community. After 18 years of study, experiment, and debate, politicians gathered in Washington, D.C., just before the holidays to give the long-awaited green light to a $5 billion reactor project that would demonstrate fusion's potential to generate almost limitless amounts of power. But on 20 December, there was no joyous announcement to ring in the new year. Half of the partners behind the International Thermonuclear Experimental Reactor (ITER) insisted that the reactor be sited in Japan, and the other half backed a site in France.

    The standoff put the signing ceremony on hold and has thrown the project into an acrimonious and dangerous limbo. Negotiators have returned to their capitals for another month of deliberations amid accusations of political bias and smear campaigns. “If [the indecision] goes on much longer,” says Alex Bradshaw, director of the Max Planck Institute for Plasma Physics in Garching, Germany's biggest fusion center, “it will start harming the project.”

    Last month's debacle was a frustrating climax to years of behind-the-scenes negotiations. Officials from the six partners—China, the European Union (E.U.), Japan, Russia, South Korea, and the United States—had whittled down the list of potential sites, roughly divvied up the cost of building and operating the reactor, and worked out ways to manage an effort involving thousands of researchers.

    But in the end it came down to a staring contest, and neither side blinked. Russia and China supported the E.U.'s candidate site, Cadarache, in southern France, whereas the United States and South Korea favored Rokkasho in northern Japan. “We wanted the meeting to be a technical comparison of the two sites, but there was no real exchange of views,” says Achilleas Mitsos, the E.U.'s director-general of research. Despite Russia's stab at a compromise—it suggested splitting the reactor from other elements so that each candidate site would gain something—the meeting ended in deadlock. A working group is now studying the Russian proposal, and proponents of the two sites will spend January addressing additional technical questions.

    In the wrong kind of flux.

    ITER's partners can't decide where to site the tokamak.

    CREDIT: ITER

    One thing the ITER partners can agree on is the project's potential payoff. Researchers are convinced that with enough heat and pressure, they can fuse deuterium and tritium into helium in a reaction that would shed prodigious energy. But achieving the necessary hundreds of millions of degrees inside ITER's 6.2-meter-wide tokamak, a doughnut-shaped reactor vessel, will take many new technologies, including reliable superconducting magnets and heat- and radiation-tolerant materials. If ITER gets the go-ahead in 2004, it is expected to fire up in 2014 and cost $10 billion over its 30-year lifetime.
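    For reference, the reaction at the heart of the project is standard deuterium-tritium fusion; the energy split below is the textbook value, not a figure from this article:

    $$\mathrm{^{2}H + {}^{3}H \;\rightarrow\; {}^{4}He\,(3.5\ MeV) + n\,(14.1\ MeV)}$$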

    With those stakes, political pressures approach tokamak-like ferocity. Press reports suggest that the United States supports Japan because it does not want to award such a prize to France after its opposition to the Iraq war. U.S. and Japanese officials deny that claim. “That sort of nonsense really gets us angry,” says Satoru Ohtake, head of fusion energy at Japan's education ministry. But doubts linger. “The U.S. says its decision is not political,” says Mitsos. “I'm not convinced, but we have to take their word for it.”

    Russia's attempt to broker a peace deal involved weaving in a planned fusion lab that is not part of ITER. The International Fusion Materials Irradiation Facility is a $600 million particle accelerator designed to produce high-energy neutrons, just the sort of radiation that would bombard the interior walls of a fusion reactor. The idea is to use it to study degradation of materials in future commercial reactors. Other elements that could be sited separately from the tokamak include a computer simulation center and the reactor's control room. Russia's overture is “a positive step psychologically. We want to make this a world project,” says Christopher Llewellyn Smith, head of the U.K.'s Culham Laboratory, home of JET, the world's largest tokamak. But some politicians used the proposal to support their claim to the ITER tokamak. “We recognize the capacity of Japan in supercomputers. Japan [should] recognize our capacity in fusion,” says E.U. research commissioner Philippe Busquin.

    Other officials were even less generous. Shortly before the Washington meeting, an unsigned document was circulated to all the delegations apart from Japan's, describing the merits of Cadarache as well as many claimed shortcomings of Rokkasho, including high costs of labor and electricity, seismic risk, and lack of infrastructure. “This gave us a shock. The way this was done was not fair,” says Nobuhiro Muroya, science attaché at Japan's embassy in Paris. French and E.U. officials who spoke with Science say they know nothing of the document's origins. Smearing their rivals would be “a very bad tactic,” says Jean Jacquinot, head of France's fusion program. “In the end, we must all work together.”

    The partners hope to find common ground and reconvene politicians in February to anoint a home for ITER. “We can wait another month or two,” says Jacquinot. Any longer, though, and political fusion may be harder to achieve than the nuclear variety.

  2. CANCER RESEARCH

    Drug Candidate Bolsters Cell's Tumor Defenses

    Jean Marx

    In an effort to develop more effective drugs for combating cancer, researchers have increasingly set their sights on the molecular defects that cause the disease. One particularly attractive target is the cellular pathway involving the p53 tumor-suppressor gene, which is inactivated in most human cancers. In work published online by Science this week (www.sciencemag.org/cgi/content/abstract/1092472), a research team led by molecular oncologist Lyubomir Vassilev of Hoffmann-La Roche's labs in Nutley, New Jersey, identifies a class of small molecules that can turn p53 activity back on. They may thus be good candidates for drugs that shrink cancerous tumors.

    In about 50% of cancers, the p53 pathway malfunctions because the p53 gene itself has been hit by a debilitating mutation. In the other 50%, however, the defects lie in other genes, such as one encoding a natural p53 inhibitor called MDM2. When the MDM2 gene becomes overactive, its protein product suppresses p53's activity. The compounds identified by the Roche team, called “Nutlins,” work by preventing MDM2 from binding to, and thereby inhibiting, p53.

    Other researchers have identified larger molecules—peptides and proteins—that block p53 binding to MDM2, but their use as drugs is very limited. Small molecules can be taken orally and therefore are much more desirable as drugs. But identifying a small-molecule blocker of the p53-MDM2 interaction proved to be “technically challenging,” Vassilev says. That's because proteins, including p53 and MDM2, typically contact one another over large surfaces, and small molecules don't readily disrupt such contacts.

    “Many people doubted that it would be possible to develop drugs that interfere with protein-protein interactions,” says cancer gene expert Bert Vogelstein of Johns Hopkins University School of Medicine in Baltimore, Maryland. “I think [the new result] will stimulate a lot of investigators to rethink this issue.”

    Nestling in.

    The structure shows a Nutlin compound settling into the p53-binding pocket on the MDM2 protein.

    CREDIT: L. VASSILEV ET AL.

    An early clue that it might be possible for a small molecule to prevent MDM2 from binding p53 came in 1996 when Nikola Pavletich of Memorial Sloan-Kettering Cancer Center in New York City and his colleagues produced a crystal structure showing how the two proteins connect. It revealed a relatively deep and well-defined pocket on the MDM2 surface that makes contact with three of p53's amino acids. “We got excited … about a drug-designing effort to target this interaction,” Vassilev says.

    The Roche team began by using high-throughput screening to identify compounds that keep MDM2 from binding p53. Compounds of the imidazoline class fit the bill. X-ray crystallography confirmed that these Nutlins fit into and fill the p53-binding pocket on MDM2.

    The researchers then tested the effects of Nutlin-1 in cultured tumor cells that retained a functional p53 gene. Levels of the p53 protein went up in the cells, a change that had functional consequences. The protein turns on a variety of genes, and the researchers detected increased expression of several of those genes.

    The main role of p53 is to protect cells from damage by radiation and other insults. It first blocks cell division in order to allow cells to repair the damage, and if they can't do that, p53 triggers a form of cell suicide known as apoptosis. Vassilev and his colleagues found that the Nutlins can inhibit cell division and trigger apoptosis. None of these effects occurred in cancer cells without a functional p53 gene, indicating that the compounds work by the postulated mechanism.

    More importantly, when the researchers administered Nutlin-3 orally to mice bearing human tumor transplants, it inhibited tumor growth by 90%. It was “as good as or better than established drugs,” Vassilev says. What's more, the drug didn't seem to produce harmful side effects in the animals. The work could be “very important for those tumors that have high levels of MDM2,” says Gigi Lozano of the M. D. Anderson Cancer Center in Houston, Texas. These include many sarcomas, which are relatively uncommon but very hard to treat, as well as some lung tumors, she says.

    The work is “still at an early stage,” cautions David Heimbrook, who heads Hoffmann-La Roche's cancer drug discovery effort. “It's very difficult to say when we would have something suitable for clinical testing.” One concern is safety. Although the mice tolerated Nutlin-3 without apparent ill effects, the long-term effects of an MDM2 blocker on normal cells remain to be established.

  3. EUROPEAN SCIENCE

    Momentum Builds for Basic Research Agency

    Gretchen Vogel

    BERLIN—Scientists in Europe got some holiday cheer last month from a report calling for the creation of a $2.5-billion-a-year basic research agency. Issued on 19 December by a group of experts assembled to advise research ministers, the report “is a major push” toward a European Research Council (ERC), says Enric Banda, secretary-general of the European Science Foundation and a longtime supporter of the concept.

    Although national agencies fund basic research, many scientists have complained that a European Union-wide program is needed to complement the E.U.'s $25 billion, 5-year Framework program, which is devoted largely to applied research. ERC's creation “will be the big issue” in 2004, says Kai Simons, a cell biologist at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany. “There is really a window of opportunity now.” The big question, though, is whether politicians will warm to the idea.

    The final report from the expert group, set up in November 2002 by the Danish science ministry, hews closely to an interim report issued last September (Science, 3 October 2003, p. 29). But the latest iteration contains a stronger warning that the ERC keep its ambitions in check by focusing at the outset on funding research excellence. Other responsibilities, such as support for infrastructure or for young scientists, should wait “until the ERC has established its value and proved itself,” says panel vice chair Mogens Flensted-Jensen of the Danish Research Councils.

    Another big boost for ERC should come later this month, when the European Commission, the E.U.'s executive branch, is expected to come out in favor of the idea. It is likely to call for the creation of an independent body, similar to the U.S. National Science Foundation, that would be “run by scientists for scientists,” says commission spokesperson Fabio Fabbi. Such an agency, he says, should “reward outstanding individuals who have good ideas.”

    Despite growing political support, the ERC faces uncertainties. Ten new countries will join the E.U. in May, and the term of the current E.U. research chief, Philippe Busquin, expires in November. Nevertheless, Fabbi says, “we hope to stay on schedule” with political approval for the new agency by the end of 2004.

  4. MICROBIOLOGY

    The Pfiesteria Conundrum: More Study, Less Certainty

    Jocelyn Kaiser

    WOODS HOLE, MASSACHUSETTS—More than a year after controversy erupted over whether a microbe called Pfiesteria has caused massive fish kills on the U.S. East Coast, questions about the research have deepened. The lab that tagged Pfiesteria as a potent fish killer faces ongoing skepticism in part because it has declined to share its “toxic” cultures with critics. And one lab that did receive cultures has failed to confirm that Pfiesteria has an unusually complex life cycle that includes an amoeboid form.

    At a meeting here last month,* researchers heard some good news, however: Two rival groups began discussing plans to figure out together why only one of them has found evidence that Pfiesteria produces a fish-killing toxin.

    Researchers say the collegial tone was a breakthrough in a field that has been riven by disputes about the work of Pfiesteria expert JoAnn Burkholder, who with North Carolina State University colleague Howard Glasgow first linked the dinoflagellate to fish kills. The same meeting 3 years ago was so contentious that some scientists walked out, says organizer Don Anderson. “The atmosphere was much better” this time, says Anderson, an algae expert at Woods Hole Oceanographic Institution in Massachusetts. Burkholder did not attend, however, because of an illness in her family.

    In spite of the collegiality, however, attendees left without resolving a critical public health question: Does Pfiesteria pose a significant risk to wild fish and humans?

    Poisonous dispute.

    Scientists are still debating the toxicity of Pfiesteria, here feeding on fish epidermis.

    CREDIT: VIRGINIA INSTITUTE OF MARINE SCIENCE

    Burkholder and colleagues have blamed Pfiesteria for killing millions of fish since 1991 and sickening fishers and lab workers. But researchers have not yet fully characterized a toxin. Some have even suggested that fish deaths in the lab may not be caused by a toxin. Last year, a team led by fish pathologist Wolfgang Vogelbein of the Virginia Institute of Marine Science (VIMS) in Gloucester Point reported in Nature that one species, Pfiesteria shumwayae, can kill larval fish simply by feeding on them (Science, 11 October 2002, p. 346).

    At the meeting, a Burkholder collaborator, Andrew Gordon, who had obtained from a public repository the culture VIMS used, reported that in his lab it does appear to produce a toxin. Gordon's group at Old Dominion University in Norfolk, Virginia, found that when fish were placed in a tank with the VIMS strain but separated from the Pfiesteria cells by a membrane, one in four fish died within 48 hours in a few trials. Fish died much faster in contact with the cells, however. Hoping to resolve the conflicting results, Gordon is now considering running parallel experiments in collaboration with the VIMS group, whose requests for Burkholder's own cultures have been denied. Meanwhile, Burkholder says that data in press will show that the Pfiesteria toxin is highly potent.

    Uncertainties run deeper about Burkholder and Glasgow's claim that Pfiesteria has a 24-stage life cycle, including amoebalike forms. Last year, National Oceanic and Atmospheric Administration molecular biologist Wayne Litaker published a simpler life cycle for P. piscicida with no amoeba stages.

    At the meeting here, Patrick Gillevet of George Mason University in Fairfax, Virginia, reported that when his team used molecular probes to test sediment samples containing Pfiesteria, all amoebae turned out to be true amoeba species. An outside lab using DNA assays to test Burkholder's cultures has also failed to verify the existence of Pfiesteria amoebae. David Oldach of the University of Maryland Biotechnology Institute in Baltimore detected only true amoeba species and not Pfiesteria in three samples received since November 2002. “I can't confirm that a pure culture of amoeboid Pfiesteria exists,” Oldach told Science. Burkholder notes that Oldach did detect Pfiesteria in mixtures of estuarine amoebae and says she and collaborators are working on isolating more possible Pfiesteria amoebae.

    Another major Pfiesteria study has come to a close without yielding any answers. A 5-year, $9 million study sponsored by the Centers for Disease Control and Prevention (CDC) of over 700 fishers in Virginia, North Carolina, and Maryland has found no evidence of health effects, although no major Pfiesteria-linked fish kills occurred during the study period. The study was worthwhile, however, says CDC's Carol Rubin, because it provided baseline data on fishers. Along with other new Pfiesteria knowledge, it will give researchers the tools they need to analyze the next outbreak.

    * Second Symposium on Harmful Marine Algae in the U.S., 9 to 13 December 2003.

  5. INFECTIOUS DISEASES

    Second Lab Accident Fuels Fears About SARS

    Dennis Normile

    The latest case of severe acute respiratory syndrome (SARS), confirmed 17 December in a Taiwanese medical researcher, has again raised alarms that the lab—and not the presumed animal reservoir—may be the most likely source for reintroduction of the virus. It is the second time a researcher has been infected with the virus through lab contamination since the outbreak was contained last July. “This kind of thing should not happen in a BSL-4 [biosafety level 4] lab,” says Hiroshi Oshitani, head of the SARS Response Team in the Manila regional office of the World Health Organization (WHO). “This kind of accident is avoidable.”

    The most recent case involved a 44-year-old researcher, identified only as Lieutenant Colonel Chan. For several months Chan had been screening antiviral drugs for effectiveness against the SARS coronavirus at the National Defense University in Taipei.

    According to Chang Shang-tsun, chief of infectious diseases at National Taiwan University Hospital, who headed an investigation into the cause of the infection, the SARS virus samples were handled within a closed cabinet using attached gloves, in accordance with WHO recommendations for a BSL-4 lab. A transporting chamber that could be securely attached to the cabinet was used to transfer waste materials to an autoclave for sterilization. On 6 December, Chan noted that some liquid waste had spilled into the chamber. Unable to reach the material through the attached gloves, he sprayed the area with alcohol and waited 10 minutes. Thinking it had been disinfected, Chan then opened the transporting chamber door to finish cleaning up. And that probably exposed him to the virus. This is “the most reasonable explanation for his acquiring the SARS infection,” Chang says.

    Bad news.

    Officials at Taipei Municipal Ho Ping Hospital confirm on 17 December that a patient, a 44-year-old medical researcher, has SARS.

    CREDIT: STR/REUTERS

    Not suspecting that he had been infected, Chan left for a conference in Singapore the following day. He returned to Taiwan on 10 December and fell ill that evening, at first suspecting it was the flu. He was admitted to a hospital on 16 December. Subsequent tests in Taiwan and by a lab in Japan confirmed that he had SARS.

    Although people infected with the SARS virus are at their most contagious when they show symptoms, as a precaution, Taiwan and Singapore quarantined a total of 90 people Chan had come in contact with. As of 23 December, the quarantine was lifted on all but two: the patient's wife and father, who were expected to remain under quarantine until 26 December. Chang says that they appear to be free of the disease, and Chan himself is recovering.

    The case resembles an infection that occurred last August at the Environmental Health Institute in Singapore, where a researcher contracted the disease through working with samples of West Nile virus that had been contaminated with the SARS coronavirus. But there are important differences. In that case, a lower biosafety level lab had been pressed into service during the SARS outbreak. This time, it was a full BSL-4 facility. WHO's Oshitani says, “It's not just a matter of having [a biosafety] facility; all lab workers should be strictly following safe procedures.”

    WHO has once again distributed guidelines and supporting information to governments in the region in hopes of gaining greater compliance with safety procedures.

  6. ASTROPHYSICS

    Are Most Life-Friendly Stars Older Than the Sun?

    Robert Irion

    Our solar system has lived in a hospitable part of the Milky Way for nearly 5 billion years. But most of the galaxy's other inhabited systems—if they exist—would have had even longer to nurture life, according to a study on page 59. The analysis intrigues astronomers who dare ponder the conditions for complex life elsewhere, but others warn that we know too little about those conditions for the research to mean much.

    The study explores the physical requirements for a “galactic habitable zone” (GHZ), a term coined in 2001 by astronomer Guillermo Gonzalez, now at Iowa State University in Ames, and colleagues. They identified the life-friendly zone as a narrow annulus of stars in the middle of our galaxy's disk, the plane within which our sun revolves. Planetary systems closer to the crowded galactic center would face too much danger from exploding supernovas and passing stars that stir up comets, the team reasoned. And stars in the sparse outskirts wouldn't contain enough heavy elements to spawn planets like Earth.

    The logic made sense, says astronomer Charles Lineweaver of the University of New South Wales in Sydney, Australia. In the current study, he and two computational astrophysicists retraced the arguments, taking a more rigorous approach. In particular, they applied a detailed model of how the key elements of terrestrial planets, forged in supernova explosions, have built up in the galaxy since its birth. They also considered what Lineweaver calls “Earth destroyers”: giant planets, formed around stars rich in heavy elements, that migrate inward. Finally, the researchers included the time needed for complex organisms to arise. With no easy way to pin down this slippery factor, they adopted Earth's time scale of 4 billion years as typical. “We don't assume that complex life exists, or that it is common or rare,” Lineweaver notes.
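    The paragraph above describes a bookkeeping exercise: multiply together the chances that a star has enough heavy elements, has escaped “Earth destroyers” and supernovas, and has had roughly 4 billion years to spare. A toy Python sketch of that product-of-factors idea follows; every functional form and constant in it is an invented stand-in, not the model Lineweaver's team actually used.

    ```python
    import numpy as np

    # Toy sketch of a "galactic habitable zone" weight, built as a
    # product of factors as described above. All functional forms and
    # constants are invented illustrations, NOT the authors' model.

    def habitability_weight(radius_kly, star_age_gyr):
        """Relative weight for stars at a galactocentric radius (in
        thousands of light-years) with a given age (in billions of years)."""
        # Heavy elements to build rocky planets thin out toward the outskirts.
        metals = np.exp(-radius_kly / 25.0)
        # Metal-rich systems breed migrating giants ("Earth destroyers")
        # that can sweep baby Earths into their stars.
        survives_giants = np.exp(-2.0 * metals)
        # Supernova danger is worst near the crowded galactic center.
        safe_from_sn = 1.0 - np.exp(-radius_kly / 8.0)
        # Assume complex life needs ~4 billion years, Earth's timescale.
        enough_time = np.where(star_age_gyr >= 4.0, 1.0, 0.0)
        return metals * survives_giants * safe_from_sn * enough_time

    # The sun: about 25,000 light-years out, 4.6 billion years old.
    print(habitability_weight(25.0, 4.6))
    ```

    Weighting every star ever born in the galaxy by something like this is the kind of calculation behind the summary figures quoted below.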

    In the zone.

    A ring spreading within the Milky Way (green zone, bottom to top) embraces the galaxy's life-friendly stars.

    CREDIT: C. LINEWEAVER ET AL./UNIVERSITY OF NEW SOUTH WALES

    When they crunched the numbers, the researchers found that a habitable ring of stars emerged about 8 billion years ago and 25,000 light-years from the galaxy's core—roughly the sun's distance today. This zone has slowly spread toward and away from the galactic center since then, a spread not evident in the study by Gonzalez and his colleagues. All told, the authors conclude that the GHZ has embraced fewer than 10% of the stars ever born in the Milky Way.

    Moreover, about three-quarters of the stars in the zone today are older than Earth—indeed, 1 billion years older, on average. “If you're interested in whether extraterrestrial intelligence has evolved, this should be a sobering result,” Lineweaver says. “A billion years is a long, long time.”

    Gonzalez lauds the work. “Our paper was not as quantitative in terms of the chemical evolution of the galaxy,” he says. Other astronomers, however, think the galaxy's influences on extraterrestrial biology are too myriad for a basic astrophysical analysis to grasp. “We hardly understand the origin of life, let alone the evolution of complex life,” says astronomer Mario Livio of the Space Telescope Science Institute in Baltimore, Maryland. “Until we do, it is extraordinarily difficult to talk about habitable zones.”

    Astronomer Virginia Trimble of the University of California, Irvine, who considered the galaxy's habitability in 1997, agrees. “Quantitative isn't necessarily any better unless you can be sure you assigned the right numbers,” she says. For instance, if complex life typically takes twice as long to arise as it did on Earth, then older stars nearer the galactic center would be the best abodes—despite the supernovas and close neighbors. “I think the authors may have attached too much importance to the dangers of the environment there,” she notes.

    Lineweaver encourages the debate. “When life is mentioned, astronomers have winced and haven't talked about it. It's been a taboo,” he says. “I'd like to convince the astrobiology community that there is credence to this approach.”

  7. NEUROSCIENCE

    Long-Term Memory: A Positive Role for a Prion?

    Ingrid Wickelgren

    Prions are famous evildoers. These proteins, which are thought to be misfolded versions of normal ones, cause deadly neurodegenerative diseases, including “mad cow disease,” in mammals. In yeast, however, prions are largely benign, if nonfunctional (Science, 2 August 2002, p. 758). Now, a team led by neuroscientist Eric Kandel and postdoc Kausik Si at Columbia University College of Physicians and Surgeons in New York City may have discovered the first positive function for a prionlike protein: the formation of long-term memories.

    In two papers in the 26 December issue of Cell, the Columbia researchers, along with Susan Lindquist of the Whitehead Institute in Cambridge, Massachusetts, show that cytoplasmic polyadenylation element binding protein (CPEB) is required for cementing cellular long-term memories in neurons of the sea slug Aplysia. In yeast, the papers show, this same protein acts like a prion, and its prion form appears to be the one active in memory formation.

    The work has led to the radically new notion—which is far from proven—that prionlike changes in protein shape may be a key molecular event in the formation of stable memories. Neuroscientist Solomon Snyder of Johns Hopkins University in Baltimore, Maryland, calls the work “a breath of fresh air.” He adds: “It's the first truly novel concept about a molecular mechanism for learning and memory in perhaps 30 years.” If the work holds up, it will broaden scientists' views of prions, hinting that they could play roles in development and other body functions that require long-lasting proteins, a hallmark of prions.

    Switchover.

    Yeast cells (top) with active CPEB are blue; those with inactive CPEB are white. But inactive CPEB can flip into an active, prionlike form, producing blue cells (right), and the reverse conversion can also occur (left).

    CREDIT: K. SI ET AL., CELL 115, 879 (2003)

    Kandel had no inkling he'd end up with a prion when he and his colleagues began investigating a mystery of memory a few years ago. In Aplysia neurons, they had found that repeatedly spritzing one branch of a sensory neuron with the neurotransmitter serotonin induces cellular changes that underlie long-term memory in that neuronal branch but not in a second branch of the neuron. But the neuron appeared to be sending the messenger RNAs (mRNAs) needed to synthesize the required memory-forming proteins to all its branches. So in some unknown way, the serotonin input apparently marked the affected branches so that only they could use the mRNAs.

    Si set out 3 years ago to determine the nature of this mark. He was interested in CPEB because it was known to activate mRNAs, chemically preparing them to be translated into proteins, and because it springs into action when neurons are stimulated. Indeed, Si soon found support for the idea that CPEB is the mark. Pulses of neural activity caused Aplysia neurons to produce many copies of CPEB in the stimulated nerve terminals. In addition, selectively blocking CPEB production caused the persistent cellular changes that underlie long-term memory to disappear. The work provides a “nice mechanism for localized changes to a single set of nerve terminals,” Kandel says.

    Still, one mystery remained. Because typical proteins degrade within hours, it was unclear how CPEB could maintain changes within the nerve terminal that last many years, as some memories do. But then Si noticed that one end of CPEB carries a sequence that resembles one found in prions. Prions are proteins that exist in two conformational states, one of which is soluble whereas the other is insoluble and long-lasting in cells. The insoluble form is thought to turn the soluble form into its insoluble state when the two forms come in contact. That's the mechanism suspected in mammalian prion diseases.

    In the second paper, Si showed that CPEB acts like a prion—at least in yeast. He linked the gene for CPEB to the gene for an enzyme that produces a blue color whenever CPEB is active in bringing about mRNA's translation into protein. When put into yeast, this hybrid gene turned most of the cells blue, indicating the presence of active protein. But some cells remained white. Further work showed that the active (blue) form of CPEB behaves like a prion. It forms insoluble clumps and also converts the inactive (white) protein into the blue form when blue and white yeast cells mate.

    Although nobody knows whether CPEB behaves the same way in neurons, Kandel, Si, and Lindquist speculate that small amounts of prion CPEB, produced in a stimulated nerve ending, may convert many more inactive proteins into active forms. The active forms would help activate mRNA and stabilize the synapse, forming the memory.

    Now the challenge is to test this idea in Aplysia and then in fruit flies and mice, both of which contain CPEB, as do humans. If the results pan out, they could lead to a new molecular theory of memory and a radically improved reputation for prions.

  8. SCIENTIFIC CONDUCT

    Charges Don't Stick to The Skeptical Environmentalist

    Lone Frank*
    *Lone Frank is a science writer in Copenhagen.

    COPENHAGEN—Danish statistician and environmentalists' bête noire Bjørn Lomborg has won a major victory in his fight to rehabilitate his reputation as a scholar. Last month, Denmark's science ministry repudiated an earlier finding by one of its committees that Lomborg's controversial 2001 bestseller, The Skeptical Environmentalist, is “scientifically dishonest.”

    The Danish Research Agency's Committee on Scientific Dishonesty (DSCD) launched an investigation in mid-2002 into allegations that Lomborg selected sources to fit his views. For example, he was accused of disregarding known extinction rates when estimating species loss and glossing over uncurbed population growth in some regions when discussing the reassuring implications of a global slowdown in population growth. After the DSCD issued its ruling last year (Science, 17 January 2003, p. 326), Lomborg, head of Denmark's Institute for Environmental Assessment, filed a complaint with the ministry.

    Victim?

    Bjørn Lomborg accuses critics of mudslinging.

    CREDIT: http://www.lomborg.com/

    In an 18 December 2003 ruling signed by section chief Thorkild Meedom, the ministry found DSCD's findings flawed on several counts. It held that DSCD's legal mandate is to rule on allegations of fraud, not on accusations of failure to follow “good scientific practice.” It also criticized DSCD's ruling for failing to document its argument that the book is dishonest and for describing Lomborg's research in unduly emotional terms. The ministry did not evaluate the soundness of the science or the claims in the book.

    The ruling leaves DSCD officials chagrined. It's “exceedingly tough and [made] in an unforgiving tone which is unprecedented,” says committee chair Henrik Waaben, a high-court judge. He notes that the ruling does not vindicate The Skeptical Environmentalist and says that DSCD may yet reexamine the original complaints. Ecologist Stuart Pimm of Columbia University, one of three original complainants to DSCD, says he's not surprised by the ruling. But rather than an exoneration, Pimm calls it “a pardon from the political leadership.”

    Not surprisingly, Lomborg, who labels the case against him “infected from the beginning,” sees it differently. The ruling “sends a clear message that sound arguments are necessary. Mudslinging isn't enough,” he told Science.

    The ultimate loser may be DSCD. Jens Morten Hansen, director of the Danish Research Agency, says that standards of good scientific practice should vary by field. “The Lomborg case shows that social scientists should not be judged within the same framework as natural scientists,” he argues. Later this month, the agency is expected to release new rules for investigating complaints against scientists that could call for a shakeup of the dishonesty committee.

  9. CHEMISTRY

    Newcomer Heats Up the Race for Practical Fuel Cells

    Robert F. Service

    Powering cars with fuel cells is an inviting prospect. The devices siphon electricity from fuel efficiently and without pollution. But they still face a bumpy road: The fuel cells most carmakers are pinning their hopes on—called polymer electrolyte membrane (PEM) fuel cells—have considerable drawbacks. They're expensive and operate at low temperatures, which reduces their efficiency and makes their fuel-converting catalyst prone to being poisoned by traces of carbon monoxide (CO) often present in the fuel. Two years ago, a group led by Sossina Haile of the California Institute of Technology in Pasadena offered a potential solution. Haile replaced the polymer electrolyte with one made from a crystalline material called a solid acid, which promised to be cheaper and more tolerant of CO. But solid-acid fuel cells had their own issues. Chief among them: The hydrogen fuel powering the cells reacts with sulfur in the crystals, causing the material to disintegrate.

    Now on page 68, Haile and colleagues report sidestepping this problem with a new material that also boosts the power output of their devices fivefold. That is still orders of magnitude below what state-of-the-art PEM fuel cells put out, so the new cells won't be powering minivans anytime soon. Nevertheless, the work is “quite interesting,” says Robert Savinell, a chemical engineer at Case Western Reserve University in Cleveland, Ohio. Savinell notes that solid-acid cells may need smaller amounts of precious metal catalysts and be simpler to manufacture than PEMs, which together could dramatically lower their cost.

    These days, fuel cells come in nearly as many varieties as the cars they strive to power. However, all work essentially the same way. A catalyst at the fuel electrode, or anode, strips hydrogen or other fuel molecules of electrons. The positive ions left behind drift through an electrolyte toward the opposite electrode, called a cathode. The electrolyte is impermeable to electrons, so they must travel to the cathode through an external wire, where along the way they can power an engine. At the cathode, the electrons and hydrogen ions combine with oxygen from air, creating water.
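    For a hydrogen-fed cell, the two half-reactions behind that description are the textbook ones, reproduced here for reference (the article itself does not spell them out):

    $$\text{anode: } \mathrm{H_2} \rightarrow 2\,\mathrm{H^+} + 2\,\mathrm{e^-} \qquad\qquad \text{cathode: } \tfrac{1}{2}\,\mathrm{O_2} + 2\,\mathrm{H^+} + 2\,\mathrm{e^-} \rightarrow \mathrm{H_2O}$$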

    Check your mirror.

    Solid-acid fuel cells might someday give the polymer-based cells that power this Mercedes a run for their money.

    CREDIT: DAIMLERCHRYSLER

    Carmakers like PEM fuel cells in part because the electrolytes are highly conductive and therefore generate abundant power quickly. But they do so at a cost. The cells work best with hydrogen, which is difficult to store in large quantities. And because they require liquid water to help shuttle hydrogen ions through the electrolyte, they must operate below 100°C. At such low temperatures, precious-metal catalysts work slowly and can become inactivated by binding to carbon monoxide. In principle, solid-acid electrolytes could operate at higher temperatures. But for decades few researchers took them seriously because they dissolve in liquid water.

    Two years ago, Haile's team tried to solve that problem by operating solid-acid fuel cells at about 250°C—hot enough to turn any stray water into harmless vapor. Unfortunately, hydrogen in the fuel reacted with sulfur in the solid acid, tearing apart the electrolyte and generating hydrogen sulfide, another catalyst wrecker. Haile's team considered replacing the sulfur with phosphorus by using cesium dihydrogen phosphate (CsH2PO4) as an electrolyte. But at high temperatures, hydrogen and oxygen escape from the compound as water vapor, a change that causes the remaining solid to crumble. “We did not think you could use the phosphate compound,” Haile says.

    Now the researchers have discovered an elegant solution: fighting water with water. Adding a tiny amount of water vapor to their system with the phosphate electrolyte, they found, suppresses that dehydration, keeping the solid from shedding its own water and crumbling. The new solid-acid fuel cells still don't put out as much power as the PEMs do. But Haile suspects that her team can get much closer by making the electrolyte membrane thinner so that ions can cross more easily. If so, solid-acid fuel cells could become cheaper than their rivals. The cells also work well with alternative fuels such as methanol, which are far easier to transport and store than gaseous hydrogen. Those advantages could make solid-acid fuel cells inviting indeed.

  10. PLANET HUNTING

    The Search for Pale Blue Dots

    Robert Irion

    As European and U.S. astronomers prepare for their most ambitious mission, can they overcome steep technological and political challenges to study other Earths?

    PASADENA, CALIFORNIA—In a famous photo taken by the Voyager 1 spacecraft, a remote planet appears suspended in a shaft of light. It isn't Jupiter or Neptune, but tiny Earth—seen from 6.4 billion kilometers away as Voyager looked back upon its home. Plucked from the sun's brilliance, Earth's reflected light barely registered in Voyager's camera. The image evoked an eloquent turn from the late Carl Sagan in his 1994 book, Pale Blue Dot. “Our planet,” he wrote, “is a lonely speck in the great enveloping cosmic dark.”

    Other lonely specks exist around nearby stars, astronomers believe—perhaps scores of them. But in the years after the Voyager missions, finding worlds like Earth was a distant dream. Calculations showed that light from the parent star would swamp the feeble signal from an orbiting rocky planet by a factor of 10 million to 10 billion. Special space telescopes equipped to counteract that glare seemed far in the future.

    That future, it now seems, is nearly here. Growing teams of astronomers and engineers in Europe and the United States are convinced that they can use optical wizardry to nearly erase a star's light. The methods promise not only to expose Earth-like companions but also to detect the basic ingredients of their atmospheres. If life like that on Earth has arisen on another pale blue dot, such telescopes could see its chemical imprints.

    Terrestrial planet-hunting missions—called Darwin by the European Space Agency (ESA) and the Terrestrial Planet Finder (TPF) by NASA—remain more than a decade from launch. But at a recent meeting here devoted to TPF,* researchers touted their steady technological progress and the landmark science they envision. “This could be the signature mission of NASA's existence,” says astronautical engineer Gary Blackwood of the Jet Propulsion Laboratory (JPL) in Pasadena. “It will completely redefine how we look at the universe and how we look at ourselves.”

    To reach that milestone by 2015, astronomers will soon face a difficult choice among several options for how to build the satellites. Moreover, it appears that ESA and NASA must join forces on a single Earth-finding project to afford the multibillion-dollar price tag. That's a lot of money to find pinpricks of light, but officials in both agencies argue that it's an essential step toward the eventual characterization of habitable worlds elsewhere. Although an early agreement between ESA and NASA is in place, that political fusion could prove touchy because of some transatlantic differences in science and engineering ideas.

    So near, so far.

    Blue Earth dwindled to a speck (top) when seen from 6.4 billion kilometers away.

    CREDITS: NASA

    But the senior scientists working on TPF feel that the program is building a head of steam, especially from an influx of junior colleagues who view hunting for Earths as astronomy's newest frontier. “I can't imagine a more interesting and worthwhile challenge for the next 10 years than to find other planets like ours and to say something about what's on them,” says astronomer Wesley Traub of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. “And to do this just with light—it really is kind of magical.”

    Our sparse stellar suburbs

    Traub's magic must overcome many unknowns. The most basic is our ignorance about the nature of other planetary systems. To date, the discoveries of extrasolar planets are skewed toward massive bodies such as Jupiter. Theorists assume that small planets should also form in such systems, but their abundance is hotly debated. “We simply don't know which stars have Earth-like planets,” says Philippe Crane, theme scientist for the Origins Program at NASA Headquarters in Washington, D.C. “If they are numerous, the mission is simple. If not, the mission is almost impossible.”

    The problem is that our pleasant neighborhood in the Milky Way is sparsely flecked with stars. If a good fraction of those stars have spawned Earths, Crane notes, TPF could search within a comfortable distance of the sun—perhaps 50 light-years—and expect to find some rocky planets. Fewer Earths would require a bigger telescope to scan at least twice as far away. “But it's much more difficult beyond [100 light-years],” Crane says.

    The TPF team assumes that 10% of the long-lived stars similar to our sun harbor Earths in orbit around them. That leap of faith is based on trends in the masses of extrasolar planets found so far, combined with models of how smaller planets might form. But truth be told, no one really knows. “When we start cutting metal [for the telescope], we'll need to demonstrate that 10% is right,” says project scientist Charles Beichman of JPL. “If it turns out to be less than 1%, we might design no mission at all.”

    One worry is that many larger planets may migrate inward toward their stars soon after they form, pushed by interactions with the young disk of gas and dust. If that happens in most planetary systems—as some simulations suggest—baby Earths would be swept by their larger brethren to fiery graves as well. “We have to hope that all the Jupiters have not whizzed through,” says astronomer Neville “Nick” Woolf of the University of Arizona in Tucson.

    Beichman's counterpart on Darwin cites the uncertainty over Jupiter-sized bullies as one reason to be less sanguine about the galaxy's terrestrial abodes. “Earths could exist at the 1% to 3% level,” says project scientist Malcolm Fridlund of ESA's European Space Research and Technology Centre in Noordwijk, the Netherlands. “Before we have that data, the scientific way of doing things is to play it safe. We still feel that a detailed study of a few hundred stars is the least we should do.” And that means a mission on a grander scale than many researchers at NASA are contemplating.

    Insights about the prevalence of Earths should come later this decade. NASA's Kepler satellite, planned for launch in 2007, will monitor 100,000 stars in a small patch of sky for slight dips in brightness caused by gas giants and small rocky planets crossing in front of the stars. A smaller French mission called COROT will watch for planetary eclipses as well. Together, the two missions should help astronomers narrow their estimates of how commonly terrestrial planets arise. However, the scientific preparations for TPF and Darwin suffered a setback in November when ESA canceled Eddington, a versatile satellite that promised to verify and extend Kepler's results (Science, 14 November 2003, p. 1130).

    Spotted.

    A 36-meter interferometer might see excess light from an Earth-like planet (center right) 30 light-years away.

    CREDIT: JPL/NASA

    In 2010, a far more sensitive telescope will start producing the atlas of stars for TPF to scrutinize: NASA's Space Interferometry Mission (SIM). The satellite will survey 2000 relatively nearby stars in all parts of the sky for indirect signatures of medium and large planetary companions, as evidenced by changes in the stars' positions as the planets tug on them. For 200 closer stars, SIM should reveal hints of Earth-sized bodies for TPF to zero in on. “SIM will make the definitive planet census for TPF,” says deputy project scientist Stephen Unwin of JPL.

    An observatory now in orbit will examine another crucial scientific issue for Earth hunters: the effects of dust. Ground-based studies and early images from the Spitzer Space Telescope, which views the heavens at infrared wavelengths, show that dusty disks exist around most young stars. That dust dissipates with age, but some remains. Astronomers haven't known how much dust to expect around middle-aged stars, says planetary scientist Karl Stapelfeldt of JPL.

    “Debris disks are a nuisance for TPF,” Stapelfeldt notes. The light they scatter from their stars might confuse the telescope's sensors and—if the dust ripples or clumps in some way—even mimic a planet. By cataloging dust levels around stars of all ages, the Spitzer Space Telescope will give TPF scientists the data they need to remove the dusty noise sources from their images. Dust isn't all bad, Beichman observes: “When it gets up to centimeter sizes, that's the stuff from which dreams and planets are made.”

    An intramural design competition

    In the next 2 years, a more immediate challenge looms for TPF and Darwin astronomers. The teams must demonstrate that they can build an orbiting telescope capable of seeing the equivalent of a firefly next to a lighthouse beacon from a distance of 1000 kilometers, with some coastal fog thrown in. That's the true barrier in the path of studying other Earths.

    Until a few years ago, NASA seriously considered just one solution: interferometry. In this method, light beams collected by several telescope mirrors combine to form one image. By inducing precise delays of a half-wavelength of light in some beams relative to others, opticians can design an interferometer that cancels almost all of the light from a bright central source. If a pinprick of light enters the mirrors from an angle slightly off to one side—such as a planet lurking a short distance from its star—its light won't cancel out. Instead, it will leave a residual blip in the interferometric wave pattern.
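    To make the cancellation concrete, here is a minimal Python sketch of a two-element nuller. The 36-meter baseline echoes the interferometer pictured above; the wavelength and angles are illustrative assumptions, not TPF design values.

    ```python
    import numpy as np

    # Toy two-mirror nulling interferometer. Baseline echoes the
    # 36-meter instrument pictured above; all numbers are illustrative
    # assumptions, not the TPF design.
    WAVELENGTH = 10e-6  # meters (mid-infrared, where planets glow)
    BASELINE = 36.0     # meters between the two collecting mirrors

    def transmission(theta):
        """Fraction of light transmitted for a source at angle theta (radians).

        One beam is delayed by half a wavelength (a pi phase shift), so an
        on-axis star interferes destructively with itself; a source slightly
        off-axis picks up an extra geometric delay and survives combination.
        """
        phase = np.pi + 2.0 * np.pi * BASELINE * np.sin(theta) / WAVELENGTH
        return np.abs(1.0 + np.exp(1j * phase)) ** 2 / 4.0  # 0 = nulled, 1 = full

    ARCSEC = np.pi / (180.0 * 3600.0)  # one arcsecond in radians
    print(transmission(0.0))           # ~0: the star's light cancels out
    print(transmission(0.1 * ARCSEC))  # ~0.5: an off-axis planet leaks through
    ```

    The deep null at zero angle is what removes the starlight; a planet a fraction of an arcsecond off-axis acquires an extra geometric delay and leaks through the combiner as a residual blip.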

    Astronomers are proving this method, called “nulling interferometry,” on the ground. For instance, a team from the University of Arizona reported in the 1 December Astrophysical Journal Letters that it had used a nulling interferometer at the 6.5-meter Magellan telescopes in Chile to resolve a gap in a dusty disk around a young nearby star—the first such observation from the ground. Nulling systems also are in the works at several of the world's largest telescopes. “Interferometry has a legacy of development on the ground that makes us more confident about adapting it for space,” says Blackwood.

    In unison.

    Four free-flying mirrors combine light in a deep-space interferometer, in one artist's concept of TPF.

    CREDIT: JPL/NASA

    However, a dark horse has complicated the design decision. Some astronomers now believe that TPF will work as a coronagraph: a single elliptical mirror, 3.5 meters wide by 6 meters long, that uses optical tricks to get rid of starlight from the zone where planets might exist. The technique is more complex than simply using an opaque disk to block the light of the star. Rather, researchers have created elaborate “pupil masks” that diffract starlight into tight patterns at the center or around the margins of the focal plane. That leaves the planetary zone nearly pitch-black. Any source of light in that zone will then show up on the detector, no longer washed out by its host star.

    Proponents of this approach think that the resulting images of alien planetary systems will be easier to understand and more appealing to the public. “Coronagraphs have the advantage of being able to get a direct image,” says Charles Lillie, an industrial contractor on TPF from Northrop Grumman in Redondo Beach, California. Interferometers, he notes, force researchers to do a “synthetic reconstruction,” aided by models of the light patterns created by one or more planets.

    However, the visible light regime of coronagraphs has a big disadvantage. Stars blast the mirrors with 1 billion to 10 billion times more photons than the paltry reflected light from planets. To redirect all of that glare, says Traub, “you must have as close to perfect optics as you can imagine.” Indeed, the light waves must be uniformly reflected to within less than a nanometer, requiring extensive corrections in space with an adaptive optics system. Interferometers, conversely, work at longer wavelengths of light in the infrared spectrum. There, planets shine with their own warmth. The extra emission makes planets a mere 10 million times fainter than their host stars. But there is a challenge here as well: The entire system must work cold, within a whisper of absolute zero.

    To make matters yet more complex, NASA is considering not just one but two options for interferometers. The first features a metal lattice, perhaps 20 to 40 meters long, with four mirrors arrayed in fixed positions. This “structurally connected” unit (or, as Beichman calls it, an “interferometer on a stick”) would fold up inside a rocket housing and deploy itself in deep space. The second option consists of four or more “free-flying” mirrors, drifting in a formation 70 to 150 meters wide. These identical units would send data to each other across empty space, in a concept similar to that of the planned Laser Interferometer Space Antenna (LISA) for detecting gravitational waves (Science, 16 August 2002, p. 1113).

    Although the principal goal of TPF is simply to spot other Earths, astronomers think each of the three design options could measure the basic atmospheric contents of the planets as well. Simple spectral features would point to gases such as carbon dioxide, water vapor, methane, oxygen, and ozone. An abundance of the latter three gases might reveal that alien creepy-crawlies have thrown the atmosphere out of a natural chemical equilibrium. This evidence of life elsewhere would spark an even more ambitious follow-up mission—variously called Terrestrial Planet Imager or Lifefinder—to peer more closely at the atmospheres and take resolvable images of the planets.

    Tricks with light.

    In one TPF coronagraph idea, eye-shaped apertures (top) would steer starlight to expose black zones (bottom) that might reveal planetary specks.

    CREDITS: JEREMY KASDIN/PRINCETON UNIVERSITY

    Teams of researchers at JPL, industrial contractors, and universities are now testing elements of each of the potential TPF mission designs, primarily on optical lab benches and with virtual simulations. A design selection—the “architecture down-select,” in NASA-speak—is set for early 2006.

    Beichman has no idea how things will turn out. “I'm a technology and wavelength agnostic. I hate all of them equally,” he says with a wry smile. “I can tell you all of their blemishes. But if we're going to fly by some date, we have to make a choice. In the next 2 years, we'll see which of the thousand flowers will bloom.”

    The European connection

    European astronomers decided a few years ago which flower to pick: the free-flying interferometer. And some TPF scientists fret that ESA's partnership on the project will wilt should NASA choose differently.

    Europe's decision was rooted in several factors, says Sergio Volonté, coordinator of astronomy missions at ESA Headquarters in Paris. First, he says, “our community is extremely well versed in interferometry. It's really one of our strong points in Europe.” For instance, European engineers have led the way in developing Field Emission Electric Propulsion thrusters, which control the positions of spacecraft by emitting minuscule jets of liquid metal ions. The devices will fly on two approved test missions, including a pathfinder mission for LISA in 2007.

    Second, European scientists have long felt that scientific observations of extrasolar planets are more promising in the infrared spectrum. “The [spectral] features are strongest and broadest in the infrared,” says Darwin's Fridlund. Moreover, Fridlund claims, any follow-up mission would have to be a big interferometer, so it makes more sense to pursue that technology now.

    TPF scientists acknowledge that scientific factors may push NASA in the same direction. Arizona's Woolf notes that current projections of the number of Earths put the teams “right on the edge” of being able to study enough stars with a coronagraph or a structurally connected interferometer. “If you need to look at 100 stars or 500 stars, you're probably forced into a free flyer,” he says.

    The Europeans praise the open approach NASA has taken thus far. “It's too early to decide the final architecture of such an ambitious project,” says Volonté. But if NASA selects the coronagraph, he concedes, “the nature of the collaboration would be different. We would have to reconsider our own plans.”

    Both agencies are funding their Earth-finding programs steadily, although NASA's outlay ($40 million for TPF this year and $50 million committed for fiscal years 2005 and 2006) is about five times greater than ESA's. But when it comes time to build and fly the mission, costs could soar to $5 billion or beyond. For that reason, both ESA and NASA officials are eager to see a formal collaboration replace the current agreement, which calls for sharing information and having seats on each other's scientific advisory panels.

    “Neither of us can afford this by ourselves,” says Lia LaPiana, program executive for the Navigator terrestrial planet-hunting program at NASA Headquarters. “But the good news is that the public can understand this very easily, much more than the event horizons of black holes.” Public and congressional enthusiasm will help keep the program on track, she believes.

    U.S. scientists realize they need to convince their peers—specifically, members of the field's next decadal review panel—that TPF deserves top billing as astronomy's most critical project of the 2010 decade. Anything less, they fear, will doom the massive effort. “I think we're ready as a community for a challenge at this level,” says astrophysicist Anthony Hull of JPL. “It's the frontline of astronomy, and it's attracting really good people.”

    Hull and his colleagues are braced for a calendrical challenge as well, because schedule slips are part of the big-project culture at NASA. “It's been 16 years since I started working on devices to look for Earths,” says Woolf. “I'd hate to think it will be another 16, but it could be even longer.” Still, he's in it for the long haul: Pale blue dots are a powerful draw.

    * TPF Science, Technology, and Design Expo, 14 to 16 October 2003.

  11. ARCHAEOLOGY

    A Surprising Survival Story in the Siberian Arctic

    Richard Stone

    Artifacts dated to 30,000 years ago tell of human resilience in an unforgiving environment, and they may provide new clues to the peopling of the Americas

    Primates are simply not primed for Arctic survival. A person lost on the tundra in winter will quickly perish, and even the sturdiest shelter atop the permafrost provides scant refuge without a supply of fuel. Yet somehow, at the height of the last Ice Age, humans endured a similarly unforgiving environment in northern Siberia, in the Yana River valley 500 kilometers above the Arctic Circle. That's the surprising conclusion from a trove of artifacts uncovered at Yana and dated to about 30,000 years ago (using corrected radiocarbon dates). The find, described on page 52, pushes back the earliest known human occupation in the Asian Arctic by some 16,000 years.

    It's impossible to know what forces drove or enticed these Stone Age pioneers so far north or how long they clung to a precarious existence in the High Arctic. That they were there at all, though, is a revelation. It's “an extremely important site,” says Donald Grayson, a paleoanthropologist at the University of Washington, Seattle. Or as paleoanthropologist Frank “Ted” Goebel of the University of Nevada, Reno, sums it up: “Wow.”

    The discovery, by a team led by archaeologist Vladimir Pitulko of the Institute for the History of Material Culture in St. Petersburg, Russia, poses many questions. For instance, where did the hardy individuals come from? And did they adapt to hyperborean life year round, or were they fair-weather hunters chasing big game to the northern edge of the mammoth steppe?

    The biggest question of all, however, is whether this desolate corner of Siberia offers a clue to the peopling of the Americas, one of archaeology's enduring puzzles. Some artifacts from Yana—two beveled foreshafts of spears carved from mammoth ivory and a spectacular specimen fashioned from the horn of a woolly rhinoceros—are strikingly similar to ones left by the Clovis people, long presumed to be the first North Americans, beginning about 13,600 years ago. “This discovery reconfirms the Siberian connection to the early peoples that entered the New World,” argues Michael Waters, an archaeologist at Texas A&M University, College Station. But other artifacts, such as chipped stone cutters or diggers, don't resemble Clovis tools, leaving plenty of room for debate.

    Common heritage?

    This exquisite spear foreshaft, made of rhino horn, resembles those of the Clovis culture.

    CREDIT: V. PITULKO ET AL.

    Yana is only the second site offering evidence that humans penetrated the Arctic before the final deep push of glaciers southward during the last Ice Age, the so-called glacial maximum, between 20,000 and 25,000 years ago. The first to claim this honor is a site near the Arctic Circle in European Russia called Mamontovaya Kurya, where a few years ago Pavel Pavlov of the Komi Scientific Center in Syktyvkar, Russia, and colleagues discovered stone artifacts and a mammoth tusk bearing what appear to be cut marks and dating to roughly 40,000 years ago.

    The initial clue that Yana had similar secrets to reveal came in 1993, when Mikhail Dashtzeren, a Russian geologist, came across the rhino horn foreshaft while prospecting for the bones and ivory of Ice Age animals. Pitulko and his colleagues got wind of the find, and over the summers of 2001 and 2002 they uncovered a wealth of artifacts, including 376 flaked slate pebbles—a third of which the team classified as choppers, scrapers, and other tools—along with the three foreshafts.

    Just how tough a survival test these Arctic denizens would have faced is uncertain. Clearly, the mammoth steppe extended to the region at the time, as Pitulko's group unearthed nearly 800 bones and bone fragments belonging to typical steppe fauna—mammoths, horses, bison, cave lions, and the like—and pollen data point to a cool, dry climate with stands of larch and birch. “Abundant game means lots of food,” says Julie Brigham-Grette, an expert on Beringia at the University of Massachusetts, Amherst. “It was not stark tundra as one might imagine.”

    More speculative is Yana's connection, if any, to the New World. The first traces of humans in Alaska date to roughly 14,000 years ago. But surveys have turned up few sites of similar age on the Asian side of the Bering Land Bridge, which linked Alaska and northeastern Asia when sea levels ebbed during the last Ice Age. “Every ‘pre-Clovis’ archaeological site in Beringia has either been refuted or seriously questioned,” says Goebel, whose team recently reported that one site long considered a way station on the Bering road, Ushki Lake in Kamchatka, was inhabited more recently than had been thought (Science, 25 July 2003, p. 450).

    Yet Yana's antiquity opens up tantalizing possibilities. “To me, the Yana site finally makes it plausible that the first peopling of the Americas occurred prior to the last glacial maximum,” says Daniel Mann of the Institute of Arctic Biology at the University of Alaska, Fairbanks. Waters agrees: If the Yana people were indeed adapted to circumpolar life, he argues, “then there were no environmental barriers that these people could not overcome that would have prevented them from migrating eastward into the Americas.”

    Many experts are wary, though, of assuming that the beveled foreshafts from Yana imply a common heritage with Clovis. Such foreshafts from the last Ice Age have turned up in many places in Europe and western Asia, notes archaeologist David Meltzer of Southern Methodist University in Dallas. They may be similar, he says, “because they were independently developed for similar uses.” And Yana's stone-tool assemblage is so dissimilar to Clovis technology that it undermines a link. “The intriguing mixture of expected and unexpected artifact assemblages seemingly says little, if anything, about the peopling of the Americas,” argues archaeologist Tom Dillehay of the University of Kentucky, Lexington, leader of the team at a 15,000-year-old pre-Clovis site in Monte Verde, Chile.

    Only further sites in Beringia could settle the debate. Yana, at least, has reinvigorated that hunt.

  12. ENVIRONMENT

    Uncertain Science Underlies New Mercury Standards

    1. Erik Stokstad

    There's no doubt mercury is dangerous, but despite new findings, it's still tricky to figure out how much emissions ought to be cut

    The debate has a familiar ring. The Bush Administration, mandated to curb power plant emissions of mercury, last month unveiled two schemes for reducing the potent neurotoxin. Environmentalists countered that the Environmental Protection Agency's (EPA's) proposals go easy on industry and would do too little too late, “needlessly putting another generation of children at risk of mercury exposure,” says Michael Shore of Environmental Defense in New York City.

    Rhetoric aside, much of the underlying science is still uncertain. Recent studies do suggest that in some locations cutting emissions can help wildlife—and thus presumably human health—within years. But how general these results are, or what the exact magnitude of benefit from the new regulations is, remains unclear. “There's a fundamental disagreement about what the overall benefits will be,” says geochemist David Krabbenhoft of the U.S. Geological Survey in Middleton, Wisconsin.

    Mercury can clearly damage the brain, and fetuses are particularly vulnerable. Children who were continually exposed in the womb tend to have developmental delays and learning deficits. The primary route of exposure is through eating fish, which bioaccumulate mercury from their prey. Between 1995 and 1997, EPA ruled that all municipal and medical incinerators—major sources of the toxin entering the food chain—cut their emissions by 90% to 94%.

    The net result is hard to quantify because of a lack of long-term monitoring. But findings released in November are encouraging: a 10-year study of the Florida Everglades showed that mercury levels have declined by as much as 75% in fish and wading birds at half the sample sites. “The system responded more quickly than we would have dared hope,” says project coordinator Thomas Atkeson of the Florida Department of Environmental Protection in Tallahassee. Experts caution, however, that the unique hydrogeology of the Everglades raises questions about the relevance for other regions.

    Up in the air.

    New rules to regulate mercury from power plants may stumble over the question of how far the toxicant travels.

    CREDIT: CORBIS

    Left unregulated were power plants, which now account for some 40% of overall mercury emissions in the United States. As part of a legal settlement in 1994, EPA agreed to study the hazard of these emissions. In December 2000, the agency categorized mercury as “a hazardous air pollutant” and determined that power plants should be regulated. It also agreed to propose ways to do so by December 2003. That is a tricky task. Scientists are uncertain about important details, from the idiosyncratic chemistry of coal combustion to the myriad reactions that determine when mercury falls from the sky and how toxic it becomes.

    Legally, because mercury is categorized as a toxic air pollutant, EPA must propose a rule that requires every power plant to meet a certain emissions standard, as it did with incinerators. Under one of EPA's new proposals, every coal-fired plant would be allowed to emit no more mercury than the cleanest 12% of plants do today. That would reduce mercury emissions by 29% by 2007, the agency calculates.
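    To make the arithmetic of such a percentile-based standard concrete, here is a minimal sketch in Python. The plant emission figures below are invented for illustration; EPA's 29% estimate rests on its own plant-by-plant modeling of emission rates and coal types, not on a toy calculation like this one.

    ```python
    import numpy as np

    # Invented annual mercury emissions (kg) for a hypothetical fleet of
    # coal-fired plants; only the 12% benchmark mirrors the proposal.
    rng = np.random.default_rng(42)
    emissions = rng.lognormal(mean=3.0, sigma=0.7, size=1100)

    # Benchmark: the level the cleanest 12% of plants already achieve.
    benchmark = np.percentile(emissions, 12)

    # Every plant must come down to the benchmark; cleaner plants are unchanged.
    capped = np.minimum(emissions, benchmark)

    reduction = 1 - capped.sum() / emissions.sum()
    print(f"Benchmark: {benchmark:.1f} kg/yr; fleet-wide cut: {reduction:.0%}")
    ```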

    EPA prefers a second option, however, which cuts mercury further but takes longer to do so. This plan is a trading scheme, which sets a two-stage cap on overall emissions and cuts them 70% by 2018. Plants that emit less than their allocated amount of mercury may sell pollution credits to those releasing more. Robert Wayland of EPA's Office of Air Quality Planning and Standards says that the cap level and timeline are intended to maximize environmental benefits while not causing “huge disruptions in the coal industry.” He predicts it will lead to even cleaner emissions because it does not lock industry into using today's technology.
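    The bookkeeping of such a trading scheme is simple to sketch. The allocations and emissions below are hypothetical, and the actual proposal phases its cap in two stages through 2018.

    ```python
    # Toy cap-and-trade ledger. All figures are hypothetical (tons/year).
    plants = {
        "A": {"allocation": 5.0, "emitted": 3.5},  # under its cap: sells credits
        "B": {"allocation": 5.0, "emitted": 6.0},  # over its cap: must buy
        "C": {"allocation": 5.0, "emitted": 5.0},  # exactly compliant
    }

    offered = sum(max(p["allocation"] - p["emitted"], 0) for p in plants.values())
    needed = sum(max(p["emitted"] - p["allocation"], 0) for p in plants.values())

    # Allocations sum to the overall cap, so trades can clear only if the
    # fleet as a whole stays under it. No individual plant is forced to cut,
    # which is why local "hot spots" can persist under a trading scheme.
    assert needed <= offered, "fleet exceeds the overall cap"
    print(f"Credits offered: {offered} tons; credits sought: {needed} tons")
    ```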

    The “cap and trade” rule is modeled on the successful reduction of acid rain (Science, 6 November 1998, p. 1024). But many scientists say that mercury may behave differently from those air pollutants, most crucially in how far it travels from power plants. Models produce a wide range of results, and some predict that up to 50% of mercury emissions are deposited locally. That raises the concern that if particular plants do not reduce emissions, nearby communities will remain polluted. EPA acknowledges that these so-called hot spots could conceivably occur but notes that states can implement tighter restrictions.

    Another worry is that the 15-year deadline would prolong exposure to mercury. Researchers had once assumed that regardless of emissions cuts, fish would remain contaminated for decades because soil and lake sediments contain mercury from 150 years' worth of pollution. Recent research, in addition to the Everglades sampling, suggests more immediate results. An effort called METAALICUS shows that mercury isotopes recently added to an experimental lake in Ontario are much more rapidly converted to a biologically active form—methylated by sulfate-reducing bacteria—than is mercury that's been in the sediment for years. That suggests that cutting emissions could clean up lake waters relatively quickly.

    Even so, it's difficult to establish with any precision what biological benefits will result from a particular cut in mercury emissions. “A whole host of factors can mask relationships between deposition and bioaccumulation in food webs,” says aquatic toxicologist James Wiener of the University of Wisconsin, La Crosse. “It becomes very messy and complicated.”

    Citing such uncertainty, EPA did not calculate benefits to human health from cleaner fish when it proposed its rules. Instead, the agency looked at the better-understood health benefits from reducing sulfur dioxide, nitrogen oxides, and particulate matter, which would also drop along with mercury emissions. EPA will collect comments until at least early March and hold a public meeting before deciding which rule to finalize in December 2004.

  13. CLIMATE CHANGE

    Sea Change in the Atlantic

    1. Richard A. Kerr

    Greenhouse warming may be accelerating the cycling of water through the Atlantic Ocean, with uncertain effects on everything from polar ice to European climate

    SAN FRANCISCO, CALIFORNIA—All eyes are on the global thermometer these days: 2003 was the third-warmest year on record, probably one of the warmest in the past millennium. But there's more to global climate than temperature. At the fall meeting of the American Geophysical Union here last month, a group of physical oceanographers announced that a grand compilation of oceanic observations that nearly spans the Atlantic Ocean from pole to pole reveals a decades-long shift in the cycling of water through the climate system. The hydrologic engine that pumps fresh water from Atlantic tropical seas to high latitudes has been cranked up, possibly by global warming.

    The resulting surge of fresh water into polar latitudes could ultimately disrupt the climate system, with diverse effects such as a counterintuitive chilling of the far North Atlantic region or the melting of Arctic ice. The study “is another piece of evidence that there really are large-scale changes going on,” says physical oceanographer Sydney Levitus of the National Oceanic and Atmospheric Administration in Silver Spring, Maryland. The question now is whether the changes persist and how dire any ramifications will be.

    Oceanographers made no new observations to gauge the speed of the Atlantic's hydrologic cycle. Levitus and his colleagues originally gathered millions of temperature and salinity measurements made during the past century from around the world (Science, 24 March 2000, p. 2126). Physical oceanographer Ruth Curry of the Woods Hole Oceanographic Institution in Massachusetts has further checked the Levitus data from the Atlantic for quality and consistency. “You can slice through the Atlantic from pole to pole to see how things have changed,” she says. That's what she and physical oceanographers Robert Dickson of the Centre for Environment, Fisheries, and Aquaculture Science in Lowestoft, U.K., and Igor Yashayaev of the Bedford Institute of Oceanography in Dartmouth, Nova Scotia, have done. As they reported in the 18/25 December 2003 issue of Nature, they compared the change in Atlantic conditions between two 14-year periods centered in 1962 and 1992.
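    The differencing behind that comparison is conceptually simple. A sketch of the windowed averaging, with a made-up salinity series standing in for the quality-controlled compilation:

    ```python
    import numpy as np

    # Made-up annual-mean salinity for one Atlantic grid box, 1950-2000;
    # the real analysis draws on millions of quality-checked measurements.
    years = np.arange(1950, 2001)
    rng = np.random.default_rng(1)
    salinity = 35.0 + 0.002 * (years - 1950) + rng.normal(0, 0.01, years.size)

    def window_mean(center, half_width=7):
        """Mean salinity over a 14-year window centered on a given year."""
        mask = (years >= center - half_width) & (years < center + half_width)
        return salinity[mask].mean()

    # Change between the two 14-year periods centered in 1962 and 1992.
    change = window_mean(1992) - window_mean(1962)
    print(f"Salinity change: {change:+.3f} psu")
    ```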

    A changing ocean.

    The Atlantic's shallow, tropical waters have become more saline (reds and yellows), while its high-latitude waters have become fresher (blues and greens).

    CREDITS: (TOP TO BOTTOM) CRAIG DICKSON/WOODS HOLE OCEANOGRAPHIC INSTITUTION; R. CURRY ET AL., NATURE 426, 826 (2003)

    Curry and her colleagues found changes of “remarkable amplitude” over the 30-year interval. The tropical ocean shallower than about 1000 meters had become warmer and saltier, while waters at high latitudes to the north and south had become fresher, right down to the bottom. That's a pattern that makes physical sense, they say. The tropics warmed enough to account roughly for the additional evaporation—about an extra 2 meters of water evaporated during the past 40 years—needed to increase the salinity of tropical waters as observed. That extra evaporated water would have eventually traveled poleward and precipitated into the sea or fallen onto the land and run into the sea, helping explain the polar freshening. Based on more limited surveys, a similar pattern of increasing salinity in the tropics and freshening of higher latitudes has been reported in the Pacific.
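    The consistency check is a salt-conservation argument. As a rough illustration with generic numbers (a 1000-meter layer at a typical open-ocean salinity of about 35 practical salinity units; neither value is a figure from the study): evaporating a depth $h_e$ of fresh water from a column of depth $H$ and salinity $S$ leaves the salt behind, so

    $$\Delta S \;\approx\; S\,\frac{h_e}{H - h_e} \;\approx\; 35\ \text{psu} \times \frac{2\ \text{m}}{1000\ \text{m} - 2\ \text{m}} \;\approx\; 0.07\ \text{psu},$$

    a shift of a few hundredths of a salinity unit over four decades: small, but the kind of change a well-sampled basin can resolve.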

    To Curry and her colleagues, it's looking as if something has accelerated the world's cycle of evaporation and precipitation by 5% or 10%, and that something may well be global warming. Climate models of the intensifying greenhouse predict such an acceleration, the expected pattern is showing up in both the Pacific and Atlantic, and it's been going on as greenhouse warming has been kicking in, they note. If so, the consequences could be as broad and enduring as those of greenhouse warming. For example, add too much fresh water too quickly to the surface of the far northern North Atlantic, and the fresh water could slow or even stop the Gulf Stream's delivery of warm water to the far north Atlantic that helps moderate climate there (Science, 27 September 2002, p. 2202). Such shutdowns during the last ice age were nearly catastrophic, with Greenland chilling 10°C in a decade, although the slow freshening that Curry and her colleagues have observed would likely have less dramatic effects. And the extra evaporated water leaving the tropics carries large amounts of heat that could eventually help melt polar ice or shift storm tracks.

    Oceanographers are welcoming the new analysis. “It's a very appealing story,” says physical oceanographer Harry Bryden of the Southampton Oceanography Centre in the United Kingdom. “It clearly shows there is this consistent pattern” of changing salinity, but “whether it's a long-term trend or not remains to be seen.” Experience has made him leery of snap decisions about the ocean. Climate modeling had suggested that a freshening in the southern Indian Ocean between the 1960s and 1987 had been induced by anthropogenic climate change. But a new survey by Bryden showed that the freshening had since reversed and the southern Indian Ocean had gotten saltier since the 1980s (Science, 27 June 2003, p. 2086). As a result, Bryden is “skeptical these days about ‘trends,’” he says.

    Given the limited observations, oceanographers are answering the oft-heard call for more data. Curry is extending her database to the Pacific. Levitus is continuing to forage around the world for neglected data sets, in hopes of extending the useful record back in time. And ocean surveys continue, by both oceanographers and robots. So perhaps more attention to the water cycle will help us understand what a warmer world will be like.
