News this Week

Science  26 Jan 2001:
Vol. 291, Issue 5504, pp. 566
  1. CLIMATE CHANGE

    It's Official: Humans Are Behind Most of Global Warming

    1. Richard A. Kerr

    Expert opinion just got much more certain that humans are driving the planetary fever of recent decades. Eschewing its vagueness of 5 years earlier about glimpsing “a discernible human influence on global climate,” the United Nations-sponsored Intergovernmental Panel on Climate Change (IPCC) officially declared early this week that “most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations.” Not the sun, not natural climate fluctuations, not some bug in a computer model, but carbon dioxide and other heat-trapping gases that humans are pumping into the atmosphere. The panel—whose report represents the consensus of hundreds of participating scientists and was just approved by 100 participating governments in Shanghai—was vaguer than ever, though, about how bad things could get by the end of the century. At a minimum, the world will warm more than twice as much in the coming century as it did in the past one, the panel concluded, but it could warm 10 times as much.

    The warming outlook is founded on the improved scientific understanding since the last IPCC report in 1995. “There have been substantial increases in insight over the past decade,” says atmospheric scientist and U.S. IPCC delegate Daniel Albritton of the National Oceanic and Atmospheric Administration in Boulder, Colorado. “I found it pretty impressive.” Computer models now do a better job at calculating how much of past warming might be due to natural climate fluctuation and how much warming there might be in the future. Albritton and others are particularly impressed with the millennium-long temperature records extracted from tree rings and other climate proxies. With this long perspective, the Northern Hemisphere warming of the 20th century “is likely to have been the largest of any century during the past 1000 years,” the report finds, and “is unlikely to be entirely natural in origin.”

    While uncertainties have narrowed about what's causing the warming, projecting it into the future seems more uncertain than ever. In the 1995 report, researchers combined projections of how much greenhouse gas humanity might produce with model estimates of climate sensitivity—that is, how much various increases in greenhouse gases should warm the climate. The range of possible warming by 2100 ran from 1.0°C to a hefty 3.5°C in 1995.

    In the new report, the projected range of warming starts from a still modest 1.4°C but rises to a staggering 5.8°C. While estimates of climate sensitivity haven't changed much, projections of possible global pollution levels in 2100 have. In this go-round, IPCC members considered scenarios in which countries drastically cut emissions of sulfurous pollution, which forms a cooling haze over large parts of the world. Without a protective umbrella, the greenhouse would sizzle.
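
    The panel's comparisons are easy to check with back-of-the-envelope arithmetic. Taking the roughly 0.6°C of warming the IPCC attributes to the 20th century as the baseline (a figure assumed here for illustration):

    \[ \frac{1.4\,^{\circ}\mathrm{C}}{0.6\,^{\circ}\mathrm{C}} \approx 2.3, \qquad \frac{5.8\,^{\circ}\mathrm{C}}{0.6\,^{\circ}\mathrm{C}} \approx 9.7 \]

    That is the source of the “more than twice as much” and “10 times as much” comparisons above.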

    Although only the upper end of a range of possibilities, the 5.8°C number is prompting headlines proclaiming seemingly inevitable climate disaster. Albritton views the broadened range of possibilities as a recognition of the obvious. “You can't forecast what technology or the human race is going to do 100 years from now,” he notes. This socioeconomic uncertainty is currently as large as the uncertainty still inherent in climate models, the report notes.

    Negotiators at last November's climate talks at The Hague (Science, 1 December 2000, p. 1663) were aware of the gist of the IPCC report—drafts of which were widely leaked last year—but negotiations on reining in greenhouse gases broke down anyway. Many problems remain to be taken up again when talks reconvene in May in Bonn, but a looming obstacle is the stance, which has yet to be spelled out, of President Bush and his Administration. Atmospheric physicist Michael Oppenheimer of Environmental Defense in New York City thinks the report could make a difference. It shows that “there's been a climate change, and there are going to be bigger changes in the future,” he says. “It's hard to see how the new Administration could fail to take it seriously.”

  2. QUANTUM OPTICS

    Atomic Squeeze Play Stops Light Cold

    1. David Voss

    Last year, physicists made headlines by slowing light down to the speed of a leisurely bicycle ride. Now, pushing the experiment to its logical conclusion, they have slammed on the brakes. In papers in Physical Review Letters and Nature, scientists report that they have used atomic gases to grab light pulses, squeeze them into a smaller space, imprint them on atoms, and read them out again after a delay. The researchers speculate that such sleight-of-light tricks might one day be useful in the still theoretical field of quantum information processing.

    “This is very brilliant stuff,” says Steve Harris of Stanford University, a physicist who first measured the optical slowing effect 5 years ago. “You can compress this kilometers-long pulse down and store it completely, and all of the information is preserved.”

    Light travels at about 300 million meters per second, a fact hammered into the heads of beginning science students. But that's in a vacuum, and it is only half the story. In fact, light has two different velocities—the phase velocity and the group velocity—and they can be very different. Phase velocity is the speed of a theoretical, pure light wave of a single frequency. Group velocity, by contrast, measures how fast a real signal moves through real matter.
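
    In standard textbook notation (added here for context), a wave of angular frequency ω and wave number k has

    \[ v_{\mathrm{phase}} = \frac{\omega}{k}, \qquad v_{\mathrm{group}} = \frac{d\omega}{dk} \]

    In a sharply dispersive medium—one where ω barely changes as k varies—the group velocity can be made almost arbitrarily small, which is the loophole the experiments below exploit.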

    Last year, researchers led by Lene Hau of Harvard University slowed light to a crawl by exploiting the way special kinds of atomic matter can play around with the group velocity of a laser pulse. They started with a gas of sodium atoms, chilled down to nanokelvin temperatures, of the sort used to study Bose-Einstein condensates (Science, 22 December 1995, p. 1902). Normally the sodium vapor is opaque to laser pulses, but the researchers canceled out the absorption by tweaking the atomic energy levels with another laser. Such “electromagnetically induced transparency” (EIT), which Harris and colleagues discovered nearly a decade ago, suddenly makes the gas transparent to the laser light. It also causes the light pulses to slow down by factors of millions and shrinks them by seven orders of magnitude, like a stretched-out Slinky toy dropped into a tank of molasses. Last year, Mikhail Lukin, Susan Yelin, and Michael Fleischhauer of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts, worked out a theory for how this EIT effect could be used to trap, store, and release light.
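
    The scale of that compression follows directly from the slowdown: a pulse's spatial length inside the medium shrinks in proportion to its group velocity. Using the group velocity of about 17 meters per second that Hau's team reported in its earlier slow-light work (a figure assumed here for illustration):

    \[ \frac{L_{\mathrm{medium}}}{L_{\mathrm{vacuum}}} = \frac{v_{\mathrm{group}}}{c} \approx \frac{17\ \mathrm{m/s}}{3\times10^{8}\ \mathrm{m/s}} \approx 6\times10^{-8} \]

    Seven orders of magnitude, just as Harris describes: a kilometers-long pulse squeezes into a fraction of a millimeter of gas.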

    In their new experiments, Hau's group prepped the sodium for EIT by hitting the atoms with a “coupling laser.” Then they fired in another laser pulse, which slowed and scrunched down inside the vapor. When the pulse had fully entered the atomic soup, they turned off the coupling beam. “The light pulse comes to a grinding halt,” Hau says. “All the information in the pulse gets stored in the atoms, and we can park it there for a while.” When they switched the coupling laser on again, the vapor became transparent again, and the atomic spins regenerated a perfect copy of the original laser pulse. With this atomic Silly Putty, Hau's team could store light pulses for up to 1 millisecond and spit them out again.

    Meanwhile, another group at CfA has stored and reemitted light by very different methods. Rather than supercold sodium atoms, Ron Walsworth, Mikhail Lukin, and their colleagues used a warm rubidium vapor to catch a laser pulse and then read it out again. In contrast to Hau's tailor-made equipment, the CfA group cobbled theirs together from components they were using in other experiments. “We basically did this with atomic-clock technology,” Lukin says. Their simpler apparatus stored light for up to half a millisecond before releasing it.

    Both sets of researchers stress that they haven't actually trapped photons like butterflies in a jar. The information contained in the laser pulse, they point out, is converted into atomic states that sit around until the control beam tells the light to emerge. Then energy from the control beam is converted into an outgoing pulse identical to the input pulse.

    Many researchers are excited by the prospect of using the technique as a kind of coherent optical storage device—a sort of quantum hologram. Or it might lead to a quantum Internet, with light beams coherently ferrying information from atom cloud to atom cloud. But there is a long way to go before anyone will be saving e-mail in frozen light.

  3. NOBEL PRIZE

    Researcher Overlooked for 2000 Nobel

    1. Laura Helmuth

    When the Nobel Foundation announces its list of prize winners, there are often grumbles that somebody's seminal work was overlooked. Last year's award of the medicine prize has provoked something more: an open letter to the award committee signed by more than 250 neuroscientists.

    The award went to three researchers for their work on how nerve cells exchange signals, and the Nobel Foundation's announcement pointed out the relevance of such work for treating Parkinson's disease and other neurological disorders. The problem, say those who signed the letter, is that the person who discovered the underlying neurotransmitter deficit in Parkinson's disease—and designed the treatment still in use today—wasn't included in the award or even mentioned in the accompanying announcement.

    “Everyone was surprised” that neurologist Oleh Hornykiewicz didn't receive the Nobel Prize last year when the committee recognized contributions to the study of neurotransmitters, says neuroscientist John Hardy of the Mayo Clinic in Jacksonville, Florida. Hornykiewicz's work “fundamentally changed how neuropharmacology is practiced,” he asserts. His colleagues who signed the letter—which will appear in an upcoming issue of Parkinsonism & Related Disorders—apparently agree: The letter congratulates the year 2000 award winners but states “that one prominent scientist … should have been included in this Award.”

    “We want to set the record straight,” says neurologist Donald Calne of the University of British Columbia in Vancouver. Calne and others emphasize that the open letter is “not intended to slight” any of this year's winners, and they acknowledge that the committee that picked the awardees could only honor three people. (A spokesperson for the foundation said that they receive complaints every few years but are barred from discussing the selection committees' deliberations until 50 years have passed.)

    Hornykiewicz, a still-active professor emeritus at the Brain Research Institute of Vienna University Medical School, reported in 1960 that the neurotransmitter dopamine is depleted in the brains of people with Parkinson's disease. He analyzed post-mortem brains of people with a variety of neurological disorders and discovered that only Parkinson's correlated with low dopamine levels. “More or less immediately” after that finding, Hornykiewicz says, he proposed that replenishing dopamine could benefit Parkinson's patients. He convinced a neurosurgeon colleague to administer a dopamine precursor to Parkinson's patients “and saw spectacular results,” he says—results they reported in a 1961 article.

    Giving Parkinson's patients dopamine precursors is “still the best medication we have,” says one drafter of the open letter, neurologist Ali Rajput of the University of Saskatchewan in Saskatoon. What's more, says Hardy, Hornykiewicz's approach to Parkinson's inspired similar neurotransmitter-based therapies for depression, schizophrenia, epilepsy, and other disorders.

    Many of the letter writers and signatories are concerned that the Nobel announcement seems to attribute Hornykiewicz's insights to Arvid Carlsson of Göteborg University in Sweden. Carlsson's work, starting in the late 1950s, set the stage. He discovered that dopamine is a neurotransmitter and found that animals with movement disorders improved when treated with the dopamine precursor L-dopa. But in describing Carlsson's accomplishments, the Nobel Foundation also states: “His research has led to the realization that Parkinson's disease is caused by a lack of dopamine in certain parts of the brain and that an efficient remedy (L-dopa) for this disease could be developed.”

    While expressing the “utmost respect and admiration” for Carlsson, Calne and others contend that the statement is “not absolutely wrong, but easy to misunderstand.” Although the open letter doesn't make it explicit, some signatories suggest that the 2000 Nobel Prize should have focused more narrowly on the impact of neurotransmission research on treatments for neurological disease, and Carlsson and Hornykiewicz should have shared the prize.

    Hornykiewicz says he's “very grateful” for the support. “That is one of the things that really count in the life of a researcher—the recognition of colleagues,” he says.

  4. NEUROSCIENCE

    Glia Tell Neurons to Build Synapses

    1. Laura Helmuth

    After decades of neglect by researchers more interested in know-it-all neurons, brain cells classified as “glia” are getting some respect. Although glia account for 90% of the cells in the adult human brain, they've been written off as simple scaffolding that supports neurons, as sources of nutrition, or as a waste-disposal mechanism for sopping up extra ions and neurotransmitter molecules. But a new study on page 657 shows that glia play a more important role in neuron-to-neuron communication: They tell neurons to start talking to one another.

    [Figure] Talk to me. Neurons grown near glia build more synapses (aglow). CREDIT: E. ULLIAN

    Before neurons can send and receive messages, they have to establish connections called synapses, points of near-contact where neurons swap chemical signals. How these synapses form is “a major question in neurobiology,” says Robert Malenka, a neuroscientist at Stanford University who was not involved in the research. In the new work, Stanford's Ben Barres and his colleagues report that neurons can't build synapses very efficiently on their own. Young neurons contain all the raw materials necessary to do the job, but the neurons don't start construction until getting the go-ahead from nearby glial cells known as astrocytes.

    The first indication that glia boost synaptic communication came in 1997, when Barres and then-postdoc Frank Pfrieger reported in Science that the synapses of neurons grown with astrocytes in cell culture were 10 times as active as those of neurons grown alone. They suspected that the glia somehow either turned up the volume on the neuron transmitting the signal or increased the sensitivity of the neuron receiving it. No one expected to find that glia controlled the number of synapses, says Barres—although, as neurobiologist Charles Stevens of the Salk Institute for Biological Studies in La Jolla, California, points out, the result makes a lot of sense. “Once you know it's true,” Stevens says, “you believe it instantly.”

    Postdoc Erik Ullian and others in Barres's lab set out to pin down astrocytes' power over synapses by performing dozens of experiments on retinal ganglion cells. Unlike other types of neurons, these cells can be raised on a diet of growth factors and other nutrients and don't require glia for survival. In study after study, neurons grown with glia nearby—even if the glia didn't touch the neurons—were seven times more responsive to various kinds of stimulation than were those unexposed to glia.

    The reason for this improved communication became apparent when the researchers stained various proteins that neurons use to build synapses. Although both types of neurons contained plenty of these proteins, the glia-exposed neurons clustered the proteins into seven times as many synapses as did the isolated neurons. The team confirmed this count by tallying synapses under the microscope. Synapses looked just the same on both types of neurons—their sizes, shapes, and numbers of neurotransmitter-containing vesicles didn't differ—but the synapses were seven times as numerous in the presence of glia.

    “Every time we used a new technique, we saw the same sevenfold increase,” says Barres. “Eventually, we had to believe it.” And the glia aren't just telling neurons to build synapses, the researchers report. They're also helping neurons maintain the synapses they've established. If glia are removed, the study shows, neurons start to shed synapses within a few days.

    These cell culture findings fit nicely into the picture of how neurons develop in vivo, says Ullian. Neurons send out dendrites and axons to the appropriate parts of the brain very early in development, but they don't form most of their synapses with other neurons until many days later—at the same time that astrocytes mature.

    The team hasn't identified the signal astrocytes send that triggers synapse formation. Once they do, says Stevens, that signal might help decode how neurons register memory. Researchers are convinced that learning forges stronger connections between neurons. Two main theories of how this happens are now being debated, Stevens says: When neurons form a new memory, they might build more synapses with their companions, or they might strengthen existing synapses. Possibly, Stevens says, whatever glial message is at work in this study might also signal neurons to solidify memories.

  5. GERMANY

    National Centers Urged to Team Up, Compete

    1. Robert Koenig

    FRANKFURT—Germany's 16 national research centers—a sprawling, $2-billion-a-year array of labs ranging from the DESY synchrotron in Hamburg to the Max Delbrück Center for Molecular Medicine in Berlin—are too insular, according to a new report from the nation's top scientific evaluative body. It urges the government to foster cooperation—as well as healthy competition—among the centers and between them and outside labs by following a U.S.-style funding model that emphasizes research programs that cut across many institutions rather than block grants to individual facilities.

    The latest evaluation, released Monday by the Science Council, adds the final piece of a comprehensive review of the entire German research system. Done by a 14-member panel of German and international experts, the new study finds that the research centers and their governing organization—the Helmholtz Association—suffer from inadequate networking and “too few incentives for competition.” It also recommends that the centers broaden their research agendas and bolster ties with university researchers.

    Helmholtz's 9300 scientists constitute Germany's biggest scientific workforce outside the university system. The nation's federal and state research ministries spend about $1.5 billion a year on the centers, with grants from other German and European research agencies and industry adding another $500 million. Helmholtz's president—Detlev Ganten, who heads the Max Delbrück Center—says the association has already been moving in the direction of more center-spanning research and is conducting “intensive negotiations” with federal and state research ministries over how to manage a revised funding system. He says that the centers already pool 5% of their budgets for competitive grants in six strategic areas.

    Ganten warns that the suggested changes would require more predictable budgeting. (The centers have received relatively small increases in recent years and are still awaiting their budget for this year.) “For long-term research programs, we need longer term budgeting and more flexibility,” he says. “We don't want politically ‘guided’ research. We want absolute academic freedom within the categories of research that are agreed upon.”

    One significant reform already under way involves the GBF biotechnology center in Braunschweig. Its new director, mouse-mutant researcher Rudi Balling, is shifting the center's focus from 1980s-era biotech projects such as bacterial fermentation to studies on the genetic basis of infectious diseases. GBF is also planning to work more closely with other biomedical research centers, including Max Delbrück, Heidelberg's DKFZ cancer research center, and Munich's GSF environmental health research center. “National research institutes can't be islands or ivory towers,” says Balling. “They have to become more competitive and more useful for other German researchers.”

    National research centers in the physical sciences—such as the GSI heavy-ion research group in Darmstadt and the DESY particle physics center in Hamburg—say their equipment is already being used extensively by scientists at universities and other German research institutions. GSI's scientific director, physicist Walter F. Henning, says that about 900 of the 1000 users of the heavy-ion accelerator come from outside the center, mainly from European universities.

    Although Henning thinks that program-oriented financing is a good idea, he worries that Germany may not have enough experts on the federal payroll to make the system work. “Program-oriented research is the way to go,” Henning says, “but administering it effectively requires a structure that doesn't yet exist in Germany.”

  6. PLANT BIOLOGY

    Xylem May Direct Water Where It's Needed

    1. Kathryn Brown*
    * Kathryn Brown is a writer in Alexandria, Virginia.

    If plants are nature's architecture, the xylem is a lowly piece of plumbing. For decades, researchers have seen the xylem as a column of dead tissue, like a worn pipe, that sits inside plant stems passively supplying water to thirsty leaves. But a surprising new study suggests that the xylem is far more active than scientists have suspected.

    In a paper published online today by Science (www.sciencexpress.org), Harvard University plant biologist N. Michele Holbrook, with postdocs Maciej Zwieniecki and Peter Melcher, reports that gels in key xylem membranes constantly shrink and swell. With this motion, the xylem actually adjusts the flow of mineral-rich water coursing toward leaves. “Researchers assume the xylem is a bunch of inert tubes, but it's not,” remarks Holbrook. “It's actually a very sophisticated system for solving a plant's water-flow problems.”

    “This is the first good evidence I know of that the xylem regulates water transport in plants,” says John Boyer, a plant biologist at the University of Delaware, Lewes. Bioengineer William Pickard of Washington University in St. Louis adds: “This idea just came out of nowhere, and it's an excellent paper.”

    The xylem evolved millions of years ago, helping some primitive plants develop into higher vascular varieties—including angiosperms and conifers—that can survive on dry land. For much of the 20th century, researchers assumed the xylem had just two modes: on and off, or working and broken. More recently, they realized that the xylem frequently repairs breaks in its water column.

    Last year, while researching xylem repair, the Holbrook team stumbled across the inspiration for their new study: a 1978 paper by Harvard biologist Martin Zimmerman. In that paper, Zimmerman's team noted that when they pumped tap water—full of everyday salts—into the xylem, it flowed much faster than deionized water. But why? The paper left the question open.

    [Figure] It's the pits. Water flowing up the xylem must cross ever-changing pit membranes like this one to reach thirsty leaves. CREDIT: M. ZWIENIECKI ET AL.

    Maybe, Holbrook's group reasoned, the added salts somehow alter the xylem. To test that idea, the team cut segments of stems from laurel (Laurus nobilis). They tied a stem's downstream end to a tube that emptied into a cylinder on a balance, so that outflow could be measured over time. Then they fixed the upstream end to a small, pressurized tank. Using this setup, the researchers pumped water through the stem, steadily increasing the amount of potassium chloride (KCl). Sure enough, by the time the KCl concentration had increased from 0 to 50 mM, the water was flowing up to 2.5 times more quickly. Repeating the experiment with the salts NaCl, KNO3, and CaCl2, they found similar jumps in water flow.

    What's more, the fast flow rates held when the researchers tested these solutions on 18 other angiosperms, five conifers, and three ferns. By contrast, when the team tried deionized water, the xylem's flow rate dropped considerably. Finally, they documented the same phenomenon in vivo, monitoring xylem uptake of salty versus deionized water in the split stem of a tobacco plant. “These changes were clearly due to some mechanical property of the xylem and the way it conducts water,” Zwieniecki says.

    Here they had a hint from industry: Engineers have shown that hydrogels, jellylike substances that can shrink and expand, influence the flow of water through a material. And plants are full of hydrogels, in the form of pectins that glue cell walls together. Could pectins regulate the xylem's water flow? They injected the xylem with solutions of varying pH and polarity, factors known to activate hydrogels. Low pH and nonpolar solvents did, indeed, spur immediate increases in xylem flow rate—a similar effect, the researchers say, to the xylem's uptake of salty water from soil.

    Further experiments localized this activity to the xylem's “pit” membranes—a sievelike mesh of cellulose fibers and pectins. Water flowing up the xylem must pass through these membranes. As a plant soaks up soil minerals, the researchers suggest, the pectins can either swell or shrink. When pectins swell, pores in the membranes are squeezed, slowing water flow to a trickle. But when pectins shrink, the pores can open wide, and water flushes across the xylem membrane toward thirsty leaves above.
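
    One way to see why modest swelling of the pectins could throttle flow so effectively is that flow through a narrow channel depends very steeply on its radius. For idealized laminar flow through a cylindrical pore of radius r and length L under a pressure difference Δp (the Hagen-Poiseuille relation, used here purely as an illustration, not the authors' analysis):

    \[ Q = \frac{\pi r^{4}\,\Delta p}{8\,\mu L} \]

    Because flow scales as the fourth power of the radius, shrinking a pore by just 20% would cut its throughput by roughly 60%—small gel movements, big hydraulic consequences.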

    Now Holbrook's team wants to figure out how, exactly, plants put the xylem's water-control system to work. Zwieniecki suggests that the xylem preferentially waters branches or leaves most in need of a drink. The membrane mechanics may also help the xylem deal with drought. But the scientists are ready to be surprised—again. “It had never occurred to me that the xylem could have these inner controls,” remarks Pickard. “There must be a lot more to learn here.”

  7. NATIONAL SCIENCE FOUNDATION

    Transition Rumor Targets Colwell

    1. Jeffrey Mervis

    It was a classic Washington rumor. The incoming Bush Administration had told Rita Colwell, the director of the National Science Foundation (NSF), to hit the road. The supposed evidence? The head of the transition team for NSF, Richard Russell, had held a brief, get-acquainted meeting with Colwell that, according to some sources, “was a disaster.” Russell, it was noted, had been a staffer on the House Science Committee, whose chair, Representative James Sensenbrenner (R-WI), had sparred publicly with Colwell and last year drafted a reauthorization bill with language intended to curb some of her powers. The message allegedly was being conveyed by former Energy Secretary James Watkins, who has been advising the new Administration on science and technology issues.

    With the scientific community already nervous about the new president's commitment to basic research, the rumor spread last week like wildfire. No matter that Colwell is in the midst of a 6-year term that runs until 2004, that she had told colleagues the meeting went well, and that transition officials deny that any mention of Colwell's tenure was ever raised. Nor did it matter that the outgoing Clinton Administration had explicitly exempted Colwell and other presidentially chosen agency heads with “term appointments” from submitting their resignations—a step that would have made it easier for the new president to clean house—or that Colwell has said repeatedly that she hopes to complete her term. In addition, there is little evidence that the new Administration so far has focused on science policy at all, much less on who should lead a low-profile agency like NSF.

    Indeed, it may have been the absence of real news that caused things to snowball in the 48 hours preceding last weekend's inauguration ceremonies. Members of the National Science Board, NSF's presidentially appointed oversight body, contacted friends in high places to trumpet the danger of “politicizing” NSF by replacing its director in midterm. Although the board issued no public statement, Watkins, who sources say was “extremely upset” by rumors of his involvement, sent its 24 members an e-mail applauding them “for taking such a strong, timely position.” Scientific societies began collecting signatures on a letter that urges the new president to maintain the “independence of the director's office” as the best way to protect the “integrity of basic research.”

    By Monday, the fire seemed to be subsiding. “Dr. Colwell is enthusiastically looking forward to completing her term,” says her spokesperson, Curt Suplee. However, because it's impossible to disprove, and because nobody has stepped forward to claim responsibility for starting it, the rumor may continue to smolder at least until the new Administration signals its intentions toward NSF.

  8. CLINICAL MEDICINE

    FDA to Release Data on Gene Therapy Trials

    1. Jocelyn Kaiser

    Moving to allay public concerns over the risks of gene therapy experiments, the U.S. Food and Drug Administration (FDA) last week proposed publicly releasing much of the safety and protocol data from clinical trials that it now keeps confidential. The agency wants to apply the same policy to animal-to-human transplants, another controversial experimental procedure.

    Several gene therapy researchers praised the decision. “We think public fears should be assuaged, and one way to do it is to make the information available,” says Inder Verma of the Salk Institute for Biological Studies in La Jolla, California, president of the American Society of Gene Therapy. Phil Noguchi, director of the cellular and gene therapy division at FDA's Center for Biologics Evaluation and Research, agrees that the proposed rules are important symbols: “It's the perception of something being hidden that's the scary part.” Biotech industry officials, however, are not pleased; they worry that releasing clinical data could stifle drug development and that the public may misinterpret the safety reports.

    The changes come in response to the 1999 death of 18-year-old Jesse Gelsinger in a gene therapy trial at the University of Pennsylvania in Philadelphia. The incident triggered a flurry of reports and congressional hearings on whether safety problems from this and other trials were being fully disclosed by sponsors, whether academic or commercial (Science, 12 May 2000, p. 951). It also revealed the confusion over current government reporting requirements.

    Under the proposed rule, FDA would make public much of the information that sponsors now submit in confidence to the agency on their gene therapy clinical trials, including preclinical toxicity data, protocols, informed consent forms, ongoing reports of adverse events, and records of any FDA investigations. Under FDA rules, for example, companies must report serious events that are unexpected and possibly related to the therapy within 7 to 15 days. Companies themselves would remove personal and confidential business information from these documents, which FDA would then post on the Internet.

    The Biotechnology Industry Organization (BIO) says the FDA proposal sets a “troubling precedent” by carving out an exception to its confidentiality policy. BIO officials are concerned that confidential patient and business data may inadvertently be released. What's more, says BIO bioethics counsel Michael Werner, releasing data on all adverse events before they can be investigated could “be misleading or misunderstood or taken out of context” by patients and the public, as many of these problems are related to a patient's underlying disease and not the therapy. But Noguchi disagrees, noting that the only events that sponsors have to report immediately are those possibly related to treatment; the rest are summarized in an annual report.

    Federal officials say this new body of information will “complement” the work of the Recombinant DNA Advisory Committee (RAC), which advises the director of the National Institutes of Health (NIH) on the ethics and safety of gene therapy trials. The RAC already releases protocols and safety reports on NIH-funded trials (www4.od.nih.gov/oba/rdna.htm). (Those few investigators with no direct or indirect NIH funding can submit information voluntarily to the RAC but would be obliged to follow the FDA rule.) The RAC now wants to analyze those adverse event reports for trends and recently proposed establishing a new working group to do so. Amy Patterson of the NIH Office of Biotechnology Activities, which runs the RAC, explains that the FDA proposal will satisfy the public's desire for access to safety information right away, while the RAC will continue to provide an open forum for analyzing the reports.

    The rules also cover the rapidly evolving field of animal organ and tissue transplants. FDA plans to release data from such xenotransplantation trials, while the Department of Health and Human Services (HHS) is finalizing more stringent guidelines for trials as part of a broader effort to reduce the risk of introducing new viruses into the population (Science, 30 January 1998, p. 648). A new HHS xenotransplant advisory committee—similar to the RAC—will hold its first meeting in late February.

  9. HIGH-ENERGY PHYSICS

    New Collider Sees Hints of Quark-Gluon Plasma

    1. Charles Seife

    STONY BROOK, NEW YORK—Ever since the Relativistic Heavy Ion Collider (RHIC) was turned on last June, physicists have been eagerly awaiting news from the newest, biggest particle accelerator on the block. The wait is over. The first results, presented at an international particle physics conference here last week,* hinted that scientists have finally managed to coax atomic nuclei to melt—creating a state of matter that hasn't existed since the big bang.

    [Figure] Nuclear shrapnel. In RHIC's STAR experiment, particles spray away from colliding gold nuclei, viewed face-on. Asymmetric explosions may bear witness to quark-gluon plasmas. CREDIT: BROOKHAVEN NATIONAL LABORATORY

    Inside RHIC's tunnels at Brookhaven National Laboratory in Upton, New York, gold nuclei accelerate to more than 99.99% of the speed of light and smash into each other head on. By analyzing the showers of particles that fly off the colliding nuclei, physicists are attempting to figure out how matter behaves when so much energy is poured into so small a space. Last year, scientists at CERN in Geneva reported evidence that their collider had slammed nuclei together so hard that the individual particles that make up the nucleus melted into a liquid melange of the particles' components (Science, 11 February 2000, p. 949). When RHIC started up, physicists hoped that its data would show evidence of such a quark-gluon plasma. So far, the most tantalizing hints have come from what scientists don't see.

    At low energies, a nucleus behaves something like a clump of hard wax pellets. Slam two into each other, and particles shoot in all directions, caroming off one another like hard billiard balls. By studying jets of particles spraying from the sides of these collisions, physicists can figure out what took place during the collision. At RHIC's higher energies, something different is happening. “The distribution of fast-moving particles is lower than one would predict,” says Yale physicist John Harris, spokesperson for STAR, one of RHIC's four experiments. There seem to be fewer high-energy particles coming off the sides of the collisions than expected.

    Just as someone counting wax pellets might explain such a deficit by saying that the wax had melted at high energies, particle physicists suspect that the particles in the nuclei might be melting into a sticky quark-gluon plasma that slows down particles shooting out the sides—quenching the jets. “It's a very exciting observation. It hasn't been seen before,” says Tim Hallman, a physicist at Brookhaven working on STAR. “It's early enough that people are guarded, but it matches predictions pretty well of when you make a transition to the quark-gluon plasma.”

    Another line of evidence for a quark-gluon plasma has to do with how the wreckage of the collisions sprays away. Most often, the two colliding gold nuclei don't slam directly head on. Instead, the nuclei—flattened to pancakes by the extreme relativistic speeds at which they are traveling—strike each other off center, colliding only in an almond-shaped region where the disks overlap. To scientists' surprise, particles scattered off in an almond-shaped distribution, rather than evenly. Calculations showed that it would be very hard to preserve the almond shape if the subatomic particles were intact, but easier if the particles had broken down into a soup of components. “It seems to imply that something weird is happening,” says Jim Thomas, a physicist at Lawrence Berkeley National Laboratory in California who is working on the STAR experiment at RHIC. “But more than that wouldn't be prudent to say.”
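
    Heavy-ion physicists have a standard way to quantify that almond-shaped asymmetry (sketched here for context): expand the azimuthal distribution of emitted particles in a Fourier series,

    \[ \frac{dN}{d\phi} \propto 1 + 2v_{2}\cos\!\left(2(\phi-\Psi)\right) + \cdots \]

    where Ψ is the orientation of the overlap region and the “elliptic flow” coefficient v₂ measures how strongly the outgoing particles remember the collision's initial geometry—a memory much easier to sustain in a strongly interacting soup than in a dilute spray of intact particles.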

    Although the evidence is suggestive, nobody is willing to claim that RHIC has actually spotted a quark-gluon plasma. “It's a consistent picture if the quark-gluon plasma is being formed,” says CERN physicist Carlos Lourenco. But Lourenco warns that the RHIC measurements don't show a sharp, well-defined transition between ordinary matter and a quark-gluon plasma: “What we're looking for is big—to see a phase transition.”

    That might happen during RHIC's next run, scheduled to begin in May, which will last longer, reach higher energies, and employ more sophisticated detectors. In the meantime, particle physicists are simply saying that interesting things are happening in RHIC's tunnels—not bad for a first run. “Something is going on that we don't understand,” says Columbia University's Bill Zajc, spokesperson for RHIC's PHENIX experiment. “We expected to open up a new frontier, but this is too easy,” he adds—and that “has some people a little concerned.”

    * Quark Matter 2001, 15–20 January.

  10. CLINICAL RESEARCH

    Company Plans to Bank Human DNA Profiles

    1. Eliot Marshall

    For years biotech companies have promised a new type of medicine with drugs tailor-made to a person's genetic profile. But proponents of this concept, called pharmacogenomics, know it can't move forward until several obstacles, both scientific and ethical, are cleared. Key among those is people's fear that their genetic data won't be kept confidential. Last week, a for-profit group—the First Genetic Trust Inc. of Deerfield Park, Illinois—announced a plan intended to alleviate those concerns.

    The company hopes to act as an intermediary between patients and researchers. Individuals would let the company store their genetic information in its confidential database for use in clinical research, and the company would communicate with them over the Internet to ensure that informed consent is given for any use of the data. First Genetic Trust has teamed up with the Memorial Sloan-Kettering Cancer Center in New York City to test the scheme.

    CEO Arthur Holden says his idea is to create “a structure with Swiss bank-grade security” to hold confidential deposits of genetic information about research subjects and other patients. Holden is also chair of The SNP Consortium, a nonprofit outfit financed in part by pharmaceutical companies to collect data on variations in the human genome that could be used to track down disease genes (Science, 16 April 1999, p. 406). The new project, Holden says, raised about $14 million last year from two venture capital firms—Venrock Associates, a Rockefeller family group in New York City, and Arch Venture Partners of Chicago. IBM is also a “strategic partner.”

    Initially, the trust would retain just DNA information, says a spokesperson, but later it might also keep records used in clinical care. The trust would give patients detailed information about the risks and benefits of the research projects they're being asked to join. It would also enable researchers to contact patients much later and obtain consent for follow-up research studies or arrange for follow-up medical treatment.

    Holden insists that the patients would control access to their own data. The company would use the Internet to stay in touch, he says—updating patients on research findings, seeking specific consent for new uses of the data, possibly contacting patients with new requests, and distributing coded files to scientists. Initially, the research sponsors—mainly pharmaceutical companies—would pay for the cost of this record keeping, says company spokesperson Mary Prescott. She argues that sponsors of clinical trials would be glad to turn over the responsibility for managing patient consent and confidentiality to a third party.

    The concept will get its first test at Sloan-Kettering, where Kenneth Offit, chief of clinical genetics, is drawing up a research protocol. Offit was not available for comment, but according to one company executive, the protocol involves genetic counseling for about 50 women who carry BRCA1 or BRCA2 genetic mutations and are at high risk for breast cancer. First Genetic Trust is also negotiating to be part of several other large research projects elsewhere.

    Creating a private institution to act as a third-party broker of genetic information is “an interesting concept,” says Mark Sobel, past president of the American Society for Investigative Pathology. Sobel, a pathologist at the National Cancer Institute who has been monitoring federal genetic privacy policies for several years, says he likes the third-party data trust if it relieves researchers of paperwork. But Sobel warns that the scheme must pass muster before local ethics panels, known as Institutional Review Boards, and must offer patients “more than just a check-off box” in seeking their consent.

    Robert Gellman, a Washington, D.C., consultant on privacy issues, is more skeptical of the plan, saying he sees “nothing but problems.” Noting that the trust's policies are undefined, Gellman worries that, once they are spelled out, they could turn out to be more complex than the present system of controlling access to medical data, which does not require researchers who are using anonymous data to seek the consent of each subject.

  11. CANCER RESEARCH

    U.K. Cancer Funders May Unite

    1. John Pickrell

    CAMBRIDGE, U.K.—A giant funding agency akin to the U.S. National Cancer Institute may be in store for British cancer research. Trustees of the two largest private cancer charities in Britain—the Imperial Cancer Research Fund (ICRF) and the Cancer Research Campaign (CRC)—met last week to discuss future collaboration plans, including a full merger.

    The organizations stress that the talks are at an “exploratory” stage, and that several levels of collaboration short of a full merger are also being considered—including launching joint-venture initiatives and avoiding duplication of effort. A merger would improve efficiency, says ICRF director-general Paul Nurse: “Facilities are very expensive, and the best solution may be to pool our resources.” In addition, one national organization would be better able to train young researchers and would give cancer research a stronger voice, Nurse says.

    Currently, there's little overlap in what the charities do: The ICRF, with an annual research expenditure of $96 million, conducts research at its main institute in London and at its own clinical research units throughout the U.K., while the CRC acts mainly as a granting agency that spends about $94 million a year on research projects across the country.

    Scientists welcomed the possible merger. “There has always been a balance between competition and duplication, and a major effect of any merger will be increased coordination,” says David Lane, head of a research unit on the molecular basis of human cancer at the University of Dundee, who gets funding from the CRC. But Lane worries that a combined charity may not rake in as many donations as two separate ones. “So long as the science itself is done in a collaborative manner, it doesn't matter if there are one or two or however many organizations. … The aim must be to make a merged body that is more than the sum of its parts,” says Sir Walter Bodmer, an oncologist at the University of Oxford and former director-general of the ICRF.

    A preliminary report about the road ahead will be presented to the organizations' councils later this month. A final decision is not expected until later this year.

  12. ECOLOGY

    A Roaring Debate Over Ocean Noise

    1. David Malakoff

    An unusual whale stranding and controversy over a new U.S. Navy sonar system have increased interest in studying how noise affects marine life

    Marine mammal researchers Ken Balcomb and Diane Claridge awoke to a disturbing sight one morning last March: a beached whale lying in the shallows off their seaside home on Abaco Island in the Bahamas. The stunned scientists quickly recognized the 6-meter-long cetacean as one of their study subjects, a rare Cuvier's beaked whale that they had tracked just weeks earlier as it cruised the cobalt canyons offshore. But now, the whale “was very confused,” recalls Balcomb. “It kept making big left turns as we tried to move it to deeper water.”

    Within days, the scientists had documented 16 nearly simultaneous groundings—including eight beaked whales that died—along a 100-kilometer arc in the region. “We realized that something highly unusual was happening,” says Balcomb, who has spent the past decade working in the Bahamas, where typically just a few whales strand each year. Other researchers soon arrived to help inspect the carcasses and carve off tissue samples—including whole heads—that might offer clues to the mass stranding.

    Those remains are now at the center of an increasingly raucous debate over the threat that humanmade noise poses to whales and other sea life. Many researchers believe that the samples—especially ear tissues taken from the dead whales—will provide the first solid proof of what they have long suspected: that the pinging noises produced by some sonars can deafen and daze some kinds of whales, leaving them vulnerable to stranding and shark attack. If the researchers are right, the finding could disrupt routine naval operations and complicate the U.S. military's plans to field a powerful sonar system that is the subject of a major new study. It could also put pressure on shipping firms and oil and gas drillers, whose activities produce different kinds of potentially problematic noises.

    Even if the evidence is inconclusive, the Bahamas incident has helped turn up the volume on marine noise research. Funding for the field is rising, with an international group of researchers launching bids to understand how marine mammals respond to sounds of different frequencies and durations. They also aim to tally up the many noise sources—from volcanoes to jet skis—responsible for the growing racket beneath the waves. “Interest is up, and there are a lot of unanswered research questions,” says biologist Roger Gentry, who tracks acoustic research for the National Marine Fisheries Service (NMFS) in Washington, D.C.

    Echoing the past

    The Bahamas stranding is not the first event that has focused attention on marine noise. In the 1970s, researchers discovered that seismic surveys—which use pulses of reflected sound to map oil and gas deposits—were disturbing ringed seals and bowhead whales in Alaska. Noting that the animals were protected from harassment under the Marine Mammal Protection Act (MMPA), some environmentalists asked the U.S. government to intercede. Although noise was not an issue when the MMPA was passed in 1972, U.S. officials have since struggled to determine which kinds of sound, which can range from the steady whine of an outboard motor to explosive blasts used to battle-test new warships, require a government permit to “take”—harass or kill—whales, dolphins, and seals.

    The debate became particularly heated in the early 1990s, when some marine biologists opposed plans to use seabed transmitters off California and Hawaii to produce rumbling, low-frequency pulses that would help geoscientists measure global ocean temperatures. The bass beat produced by the Acoustic Thermometry of Ocean Climate (ATOC) project, they worried, might drown out the love songs of breeding humpback whales and disturb other leviathans. Although the conflict caused headaches for earth scientists, it proved an opportunity for whale researchers to undertake some of the first major studies of whale responses to low-frequency noise. The ATOC project moved forward in 1996 after studies suggested that the sounds had no significant short-term impacts on large whales, and last month NMFS signaled its readiness to give researchers a new 5-year permit to operate the sound source. Still, “ATOC became a lightning rod” that energized the field, says biologist Bob Hoffman, who until recently served as the Marine Mammal Commission's (MMC's) chief scientist; among other things, the controversy helped prompt a 1994 National Academy of Sciences report that stressed the uncertainty about the long-term environmental impacts of low-frequency marine noise.

    The next year a new controversy sprang up around plans by the U.S. Navy to build a powerful new ship-towed sonar system to track increasingly quiet submarines. The multibillion-dollar system, known as SURTASS LFA, would also fire far-traveling low-frequency sounds into the ocean, then listen for the echoes off distant submarines. Environmentalists, led by the Natural Resources Defense Council and the Humane Society of the United States, have opposed deployment, saying the Navy knows too little about its potential impacts on sea life. They also attacked another Navy proposal, known as LWAD, that aims to use a combination of sound-producing technologies in near-shore combat.

    Navy officials say they are aware of the issues, noting an increase in research funding and agreements to alter some practices to protect wildlife, such as delaying “ship shock” explosions if wildlife is spotted in the vicinity. “We know we are a major source of noise … and are taking steps to mitigate potential problems,” Elsie Munsell, then a top Navy environmental official, told the MMC last year.

    Once again, the debate has proved to be a boon for cetacean researchers. The Navy in recent years has provided a group of scientists with ample funds, ships, and state-of-the-art equipment to conduct what team member Peter Tyack of the Woods Hole Oceanographic Institution (WHOI) in Massachusetts calls “some of the first large-scale, meaningful, controlled experiments.” Acoustics engineers and whale biologists, for instance, bathed blue, fin, gray, and humpback whales in different parts of the Pacific with SURTASS LFA signals, then observed their reactions.

    As with ATOC, however, the results were rarely clear-cut or widely applicable to many species or ocean environments. Some kinds of whales shifted course to avoid the signals, for instance, while others seemed to ignore them. And none of the observed behavioral effects was permanent. Tyack and three WHOI colleagues reported last year, for instance, that male humpbacks on their Hawaiian breeding grounds sang, on average, 29% longer when the sonar was operating, but returned to normal solos shortly after the signals ceased. The longer songs probably compensated for interference from the sonar, they speculated.

    But the controversy continues. This week the Navy was expected to issue an environmental impact statement that concludes SURTASS LFA will pose little threat to marine mammals if operated, as planned, away from important cetacean habitats. Critics, however, say the analysis is incomplete and misleading, and they urge the government to withhold the “takings” permit needed to proceed. The major problem, say environmentalists, is that researchers still don't know enough about the system's long-term impacts on animal feeding and breeding habits.

    Bahamas mystery

    The Bahamas strandings have put the Navy under even more scrutiny. A small Navy fleet in nearby waters had conducted antisubmarine exercises using standard tactical sonars, which operate at higher frequencies than SURTASS LFA, just before the strandings began. The timing raised suspicions that the whales had been disoriented by “barotrauma,” or pressure injuries to sensitive ear and brain tissues caused by sound waves. Since the 1970s there have been at least three reports of unusual strandings of beaked whales in close proximity to military exercises, but researchers collected little physical evidence that might confirm a link. In the Bahamas, however, researchers noted that many of the stranded whales were bleeding from their ears. And dissections and computerized tomography scans of the salvaged tissues by WHOI's Darlene Ketten, an expert on marine mammal hearing, found hemorrhaging and other telltale signs of barotrauma.

    Ketten's final report on the injuries won't be ready until later this year, in part because beaked whale ear bones are so dense that they require months to decalcify for histological analysis, which can reveal the microscopic shearing and compression injuries indicative of barotrauma. But last November Navy acoustics experts assembled a computer model that lent more credence to the theory. It showed that a “surface duct,” or layer of water with salinity and temperature properties that transmit sound especially well, may have spread or concentrated the sonar pings in unusual ways. Whales within the sound field, researchers speculate, may have been injured as the sound energy reverberated and reflected off the ocean floor. The question now “is whether this was an unusual event created by a unique set of environmental conditions, or something more general,” says Hoffman. Other researchers wonder whether beaked whales, which routinely dive to extreme depths when feeding, might be particularly sensitive to sound injuries due to their anatomy or behavior.

    The Navy—already the world's leading supporter of marine noise research at about $10 million per year—is funding an array of new studies to find answers. The Office of Naval Research (ONR), for instance, has nearly doubled its spending in the field over the last 5 years to about $6.5 million annually. This year, its agenda includes research to pinpoint the level at which noises begin to temporarily degrade the hearing of some marine mammals. It is also beginning to build a three-dimensional computer model that can predict how some kinds of whales will react to sounds. “We want to be able to swim a simulated whale through a sound field and have it behave realistically,” says ONR's Bob Gisiner.

    In a related effort, the Pentagon's environmental research program wants WHOI's Tyack to help it improve efforts to track whales by the sounds that they make, partly so they can move potentially harmful activities away from the animals. In research directly related to the Bahamas stranding, Tyack also hopes to place on the back of a beaked whale a new sensor that gives scientists detailed information about a whale's underwater behavior (see sidebar on p. 577). Other Navy agencies hope to equip ships with computer software that will warn captains when they are operating in whale-rich waters.

    Air-gun assault

    Other government agencies are also pursuing sound studies. At the National Oceanic and Atmospheric Administration (NOAA), researchers plan to work with the shipping industry on building quieter vessels; they would also like to compile an accurate marine “sound budget” of all the noise in the sea. Chris Fox, a marine geophysicist at NOAA's Pacific Marine Environmental Laboratory in Newport, Oregon, is hoping the agency will fund an examination of old Navy records that might show how ocean noise has changed over decades and that it will build a multimillion-dollar listening network to put such data in a global context. A National Academy of Sciences panel will also start assembling an “ambient noise” research agenda, with recommendations expected next year.

    Preliminary NOAA studies using listening posts moored along the mid-Atlantic ridge have already produced some surprising results, Fox notes. The hydrophone arrays, designed to listen for underwater earthquakes and volcanoes, have sometimes been overwhelmed by low-frequency pulses produced thousands of kilometers away by oil exploration ships using pressurized air-gun arrays. “A single seismic survey vessel can sonify the entire North Atlantic,” he says.

    Such seismic studies also interest the Interior Department's Minerals Management Service (MMS). It has a small research program aimed at understanding how the spread of oil drilling into the deep waters of the Gulf of Mexico might affect sperm and other whales living in the area. In part, MMS is reacting to past studies, done off Europe and Australia, that have shown that some whales avoid exploration activities.

    There is also a move to make companies a bigger part of future studies. Jack Caldwell, an oil industry researcher at Core Laboratories in Houston, Texas, helped organize a scientific panel that later this year will offer recommendations on possible areas of support. The goal, Caldwell said at an MMC meeting last year, is to produce “nuts-and-bolts, practical information,” such as exactly which species of marine mammals are likely to be sensitive to air guns, and where they are likely to be found.

    In the meantime, Balcomb continues to document the aftermath of the Bahamas stranding. Before the beachings, he notes, his team had spotted about 50 of the unusual Cuvier's beaked whales in the study area. But since “that unforgettable morning” last March, he says, researchers have seen just one.

  13. ECOLOGY

    New Sensors Provide a Chance to Listen to the Leviathan

    1. David Malakoff

    One of the biggest challenges in studying the impact of noise on marine mammals is observing how the animals behave when they are under water and out of sight. A new sensor, designed by researchers at the Woods Hole Oceanographic Institution (WHOI) in Massachusetts, allows scientists to tag along as whales and other sea-dwelling mammals forage in the deep sea. The sensor, about the size of a computer mouse, lets scientists record “what an animal hears and what it says, coupled with exactly how it is moving,” says WHOI's Peter Tyack, who developed the device with engineer Mark Johnson.

    A preliminary test last year aboard a NATO vessel in the Ligurian Sea offered a provocative preview of what information the sensor can provide. Tyack's team used a 5-meter-long carbon-fiber pole to attach the suction-cupped device to a surfacing sperm whale. Then the researchers activated a low-powered sonar during two of the animal's long feeding dives. After recovering the sensor, which detaches and floats to the surface after a few hours, the researchers got an “unusually information-rich look” at the whale's behavior, says Tyack.

    According to data from one dive, the whale was about 150 meters down when it moved into the sonar's sound field. For the next minute, it suspended its dive, rolled over onto its back, turned toward the sonar, and then issued a “trumpet” call.

    Tyack and other researchers don't know yet what such behavior means. But they are excited by the chance, for the first time, to observe behavior long hidden from a scientist's prying eyes. “We went into this wondering if we could even get the [sensor] on a sperm whale and ended up with some real surprising results,” says Tyack.

  14. AMERICAN ASTRONOMICAL SOCIETY

    Celestial Zoo Gains Some Exotic Specimens

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    SAN DIEGO—In the second week of the new millennium, nearly 3000 scientists and educators gathered here for the 197th biannual meeting of the American Astronomical Society, held in conjunction with the American Association of Physics Teachers. Just a few kilometers from the city's world-famous zoo, speakers added bizarre new members to the cosmic bestiary. (For an earlier report from the meeting, see Science, 19 January, p. 409.)

    Staring Into a Black Hole's Abyss

    According to astrophysical theory, a black hole has no surface—just an immaterial “event horizon” that acts as a one-way passage from our universe to oblivion. Matter and radiation crossing this point of no return are lost forever. Now, two major space observatories may have found the first evidence that event horizons indeed exist. According to Ramesh Narayan of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts, the results confirm one of the most remarkable predictions of general relativity.

    The first glimpse of the abyss comes from the Hubble Space Telescope. Joseph Dolan of NASA's Goddard Space Flight Center in Greenbelt, Maryland, spent 8 years analyzing old data from the High Speed Photometer on board Hubble. In far-ultraviolet observations of Cygnus X-1, a binary system consisting of a normal star and a black hole, Dolan noticed millisecond flickerings that grew quicker and fainter with time. These “dying pulse trains” behave just as you would expect from a blob of gas spiraling into a black hole, Dolan says: brightening and dimming faster and faster as the gas's ever-tightening orbit swings it toward and away from the observer many times a second, and finally fading out as the gas disappears beyond the black hole's event horizon. If instead the gas blob were slamming into the surface of another type of compact object, such as a neutron star or a white dwarf, the last radiation pulse would be the brightest.
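    Dolan's inspiral signature lends itself to a back-of-envelope illustration. The Python sketch below simulates such a light curve for a blob on a shrinking circular orbit, assuming Keplerian orbital frequencies and a crude fading law near the horizon; every parameter is illustrative, not fitted to the Cygnus X-1 data.

        import numpy as np

        # Toy "dying pulse train": a gas blob on a shrinking orbit brightens
        # and dims once per revolution, pulsing faster and fainter until it
        # crosses the event horizon and vanishes. All numbers are arbitrary.
        def dying_pulse_train(t, r0=6.0, decay=0.5):
            dt = t[1] - t[0]
            r = r0 * np.exp(-decay * t)       # orbital radius shrinks with time
            inside = r < 1.0                  # r = 1 marks the event horizon
            r = np.maximum(r, 1.0)
            freq = r ** -1.5                  # Keplerian: frequency ~ r^(-3/2)
            phase = np.cumsum(freq) * dt      # accumulated orbital phase
            amp = np.where(inside, 0.0, (r - 1.0) / (r0 - 1.0))  # crude fading
            return amp * (1.0 + np.cos(2.0 * np.pi * phase)) / 2.0

        t = np.linspace(0.0, 8.0, 4000)
        flux = dying_pulse_train(t)           # pulses quicken, fade, then stop

    For a neutron star or white dwarf, one would instead tack a bright final flash onto the series where the blob strikes the surface; the absence of that flash is the discriminating feature Dolan describes.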

    Dolan admits that the dying pulse trains—if that's what they are—are almost completely masked by random brightness fluctuations in the steady emission of Cygnus X-1. “They are needles in a haystack,” he says. “We need to follow up on these observations with ultraviolet or x-ray instruments. If we find more of them, that would be a smoking gun.”

    Meanwhile, what looks like stronger evidence comes from NASA's Chandra X-ray Observatory. Narayan, together with his CfA colleagues Michael Garcia, Stephen Murray, and Jeffrey McClintock, used the observatory to study binary star systems containing normal stars paired with massive compact objects. In such a system, gas flowing from the normal star onto its compact companion heats up enough to emit x-rays. Sometimes, enough gas goes down the drain to create bright x-ray outbursts, but most of the time it trickles in, producing x-rays at a steady, low level. From the orbital speed of the normal star, the astronomers can calculate how massive the companion is—and thus whether it is a neutron star or a black hole.
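    The last step of that argument is a textbook calculation. The sketch below evaluates the standard binary “mass function,” a hard lower bound on the companion's mass, from an orbital period and velocity amplitude; the input values are hypothetical, not measurements from the Chandra study.

        import math

        G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
        M_SUN = 1.989e30   # solar mass, kg

        # Mass function: f(M) = P K^3 / (2 pi G) = M_c^3 sin^3(i) / (M_c + M_*)^2,
        # where P is the orbital period and K the normal star's line-of-sight
        # velocity amplitude. f(M) is a strict lower bound on the companion mass.
        P = 6.5 * 86400    # assumed period: 6.5 days, in seconds
        K = 220e3          # assumed velocity amplitude: 220 km/s, in m/s

        f = P * K ** 3 / (2.0 * math.pi * G)
        print(f"mass function: {f / M_SUN:.1f} solar masses")

    If the bound comfortably exceeds roughly 3 solar masses, the maximum that theory allows a neutron star, the compact companion must be a black hole.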

    Chandra showed that, while in their quiescent state of x-ray emission, binaries with black holes are only about 1% as luminous as binaries containing neutron stars. Apparently, the gas disappears into a black hole leaving hardly a trace. In the case of a neutron star, on the other hand, it slams onto the surface, releasing large amounts of additional energy. “It's like watching a river that either just disappears over the edge of a waterfall, where it can't be seen anymore, or runs into a dam, creating a lot of turbulence,” Narayan says.

    “This is the only serious claim so far for the observation of an event horizon,” says Roeland van der Marel of the Space Telescope Science Institute in Baltimore, Maryland. “It's certainly an interesting piece of the [black hole] puzzle.” But Craig Wheeler of the University of Texas, Austin, warns that the interpretation of the data depends on assumptions about the nature of the gas flow that may not be correct. Of the two main models for accretion flows, he says, Garcia and colleagues based their work on the less popular one. “It seems this would change something, given that the underlying nature of the flow is so different,” Wheeler says.

    Telescope Pair Spots Hefty Stellar Cradle

    By linking two sets of radio telescopes, astronomers have gotten their first look at the accretion disk that surrounds a massive protostar—a place where stars and planets are being born. Some 150 billion kilometers across, the new disk is only slightly larger than disks previously found around less massive protostars, but it is tens to hundreds of times more massive, says Debra Shepherd of the National Radio Astronomy Observatory (NRAO) in Socorro, New Mexico. Comparing these bulky baby stars with normal-sized ones, Shepherd says, will help theorists predict how a disk feeds its growing star and whether the leftovers congeal into planets and other orbiting bodies. The observations are “a boon to theorists,” agrees Alain Lecavelier des Etangs of the Institute of Astrophysics in Paris.

    Astronomers have long been eager to peer into the inner sanctums of rare massive protostars. Although many smaller protostars lie within viewing range, the nearest massive protostar is thousands of light-years away—too far for ordinary telescopes to resolve its disk. But last year, NRAO linked the 27 dishes of its Very Large Array (VLA) radio telescope to another 25-meter antenna in Pie Town, 50 kilometers away. In effect, the link created a much larger instrument able to “see” in extremely sharp detail. Right now, says Marc Claussen of the NRAO, a collaborator on the project, “in terms of the combination of sensitivity and [angular] resolution, the Pie Town link is unsurpassed.”

    Using the new setup, Shepherd examined a massive protostar known as G192.16-3.82. This giant, some 6000 light-years away in the constellation Orion, is probably just 200,000 years old and weighs 8 to 10 times as much as the sun. The mass of the protostar's accretion disk is about 20 solar masses. The new observations also hint that the blob of radio emission hides a nearby companion protostar. “In our models, we had to add this companion to match the observations,” Shepherd says. The presence of the neighboring star may be the cause of a tilt that the astronomers observe in the disk around the massive star.

    The researchers expect to gather many more high-resolution glimpses of nascent solar systems in the future, after the VLA is linked to nine new antennas in what will be known as the Expanded Very Large Array.

    Radical Theory Takes a Test

    In one of the meeting's most provocative talks, Margaret Burbidge, an astronomer at the University of California, San Diego, presented evidence supporting a theory that, if correct, would turn cosmology inside out. Although the theory, a rival to the big bang model of the origin of matter, still has few friends among mainstream astronomers, one mainstream researcher is now putting it to the test.

    Burbidge's evidence comes from quasars—pointlike sources of radiation that may include optical light, radio waves, and x-rays. The wavelength of their radiation has generally been stretched by more than 100%. If, as most astronomers believe, these redshifts are a result of the quasars' rapid outward motion as the universe expands, they indicate that quasars are billions of light-years away from Earth. The widely accepted view is that quasars are the bright nuclei of remote active galaxies, probably powered by supermassive black holes.
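    Turning a measured redshift into a distance, under that standard interpretation, is a one-integral calculation. The sketch below computes the lookback time for a given redshift, assuming illustrative flat-cosmology parameters (a Hubble constant of 70 and a 0.3/0.7 matter/dark-energy split) rather than any values used by the researchers.

        import math

        H0 = 70.0                     # Hubble constant, km/s/Mpc (assumed)
        OMEGA_M, OMEGA_L = 0.3, 0.7   # matter, dark-energy densities (assumed)
        HUBBLE_TIME_GYR = 977.8 / H0  # 1/H0 expressed in gigayears

        def lookback_time_gyr(z, steps=10000):
            """Integrate dz' / [(1 + z') E(z')] from 0 to z (midpoint rule)."""
            total, dz = 0.0, z / steps
            for i in range(steps):
                zp = (i + 0.5) * dz
                e = math.sqrt(OMEGA_M * (1.0 + zp) ** 3 + OMEGA_L)
                total += dz / ((1.0 + zp) * e)
            return HUBBLE_TIME_GYR * total

        # A wavelength stretched by 100% is a redshift of z = 1: light emitted
        # roughly 8 billion years ago in this cosmology.
        print(f"{lookback_time_gyr(1.0):.1f} Gyr")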

    Burbidge is not so sure. At the meeting, she described how she and two collaborators—Halton Arp of the Max Planck Institute of Astrophysics in Garching, Germany, and Yaoquan Chu of the Beijing Astronomical Observatory—had found a pair of quasars flanking a galaxy known as Arp 220. The galaxy is only 250 million light-years away; the redshifts of the quasars, on the other hand, indicate that they are about 6 billion light-years away.

    A chance alignment? Burbidge doesn't think so. So far, she says, 11 nearby active galaxies with high-redshift “paired quasars” have turned up since she discovered the first one 4 years ago (Science, 22 November 1996, p. 1305). Burbidge thinks that in each case, the paired quasars and the associated galaxy might be equally close to observers on Earth. “The evidence is accumulating,” she says, that redshift is a shaky measuring rod.

    Arp goes further. He thinks the quasars originated inside the galaxies, as clumps of new matter created billions of years after the big bang. Arp and a handful of other cosmological dissidents believe that matter is still coming into being in some parts of the universe, including the cores of active galaxies. The newly created matter is flung out in two opposite directions, just like the radio-emitting jets of high-energy particles that stream out of many active galaxies. The high redshift of the ejected matter, they say, may be due to its youth (an idea developed by the Indian astrophysicist Jayant Narlikar) or to relativistic effects.

    James Moran of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, is putting Arp's iconoclastic ideas to a simple test. If the ejection theory is correct, he points out, the paired quasars near a galaxy should move across the sky. Using 1998 data from the National Radio Astronomy Observatory's Very Long Baseline Array, Moran is trying to detect the proper motion of one of the quasars apparently associated with a galaxy known as M106 or NGC 4258, located near the Big Dipper, 25 million light-years from Earth. “We should have results later this year,” Moran says, although he reckons that “a null result will probably not satisfy” the true believers.

    At 78, Burbidge—a former director of the Royal Greenwich Observatory in England and former president of the American Association for the Advancement of Science—can afford to do unfashionable research. Still, few of her colleagues are willing to follow her down a path that would throw measurements of cosmic distances into turmoil. “She has earned the right to do whatever she thinks best,” says an influential U.S. astronomer who asked not to be named. “But 99% of the astronomical community is pretty sure that quasar redshifts are due to the expansion of the universe and tell us distances.”

    Largest Structure in the Universe

    Astronomers at the Cerro Tololo Inter-American Observatory in Chile have spotted the largest coherent structure yet seen in the universe: a supercluster of galaxies measuring almost 600 million light-years across. The real surprise, though, is the supercluster's enormous distance—some 6.5 billion light-years away, where the universe appears as it was 6.5 billion years ago. Theorists have trouble explaining how such huge structures could have formed so early in cosmic history.

    Astronomers began piecing together the galactic aggregation about 10 years ago, says Gerard Williger of the National Optical Astronomy Observatories and NASA's Goddard Space Flight Center in Greenbelt, Maryland. Luis Campusano of the University of Chile and Roger Clowes of the University of Central Lancashire, England, had discovered a large group of quasars in that part of the sky. Most astrophysicists believe that quasars are the bright cores of very distant active galaxies. The new observations indicate that the region occupied by the quasar group contains about three times as many distant galaxies as anyone expected to find.

    “In fact, we've put some galaxy flesh on the quasar skeleton,” Williger says. Most of the galaxies are too distant and dim to be seen directly, but they leave a telltale absorption fingerprint in the light from still more distant quasars that passes through the galaxies' extended gaseous halos on its way to Earth.

    Just how big a headache the supercluster will give theorists depends on how much matter is locked up in it, says Piero Rosati of the European Southern Observatory in Germany. “You have to prove there's mass collapsed here; otherwise it wouldn't be really relevant,” Rosati says. Rosati and his colleagues have detected other distant, large superclusters by their x-ray emission, a method that gives a direct estimate for the mass of the cluster, he says. Similar techniques might show that the new supercluster is less fearsome than it appears. “If this is just a bunch of galaxies, it may not be that important,” Rosati says. “Sheets and superstructures of galaxies are everywhere in the universe.”

  15. CANCER RESEARCH

    Anti-Inflammatories Inhibit Cancer Growth—But How?

    1. Jean Marx

    A debate has emerged about how NSAIDs—nonsteroidal anti-inflammatory drugs—protect against colon and other cancers

    “Prescribing aspirin for cancer” sounds like an exercise in medical futility. But such a treatment may not be pointless after all: Beginning about 15 years ago, evidence began accumulating from both animal work and epidemiological studies on humans indicating that aspirin and related drugs, known as NSAIDs, hinder the development of colon cancer and perhaps other cancers as well.

    “NSAIDs protect against cancer—no ifs, ands, or buts,” says gastroenterologist Andrew Dannenberg of the Weill Medical College of Cornell University. But, he adds, because aspirin and older NSAIDs can cause potentially dangerous gastrointestinal bleeding, there was “a reluctance to push forward with this [idea] for people with low to moderate [colon cancer] risk.” Within the past 2 to 3 years, however, the availability of a new generation of more specific—and therefore safer—NSAIDs has touched off a spate of clinical studies aimed at determining whether these drugs can be used to prevent or treat cancer. Early results suggest they can.

    But researchers are deeply divided on whether they exert this potentially beneficial effect by blocking a single enzyme or by stimulating programmed cell death by other routes. The issue is important, because figuring out the mechanism could aid the design of better chemopreventive drugs. Consequently, on 8 January, the National Cancer Institute (NCI) brought many of the leading researchers in the field together in Rockville, Maryland, for a workshop aimed at examining the evidence on both sides.

    The COX-2 explanation

    The surge of interest in using NSAIDs to combat cancer is the outgrowth of the decade-old discovery that the body has two versions of an enzyme called cyclooxygenase. Aspirin and all the other NSAIDs then in use inhibit both versions of the enzyme, but it soon became clear that only one of them, COX-2, is important for inflammation. This enzyme converts a long-chain fatty acid called arachidonic acid to prostaglandins, which in turn trigger inflammatory reactions in the body. The other cyclooxygenase, COX-1, also makes prostaglandins, but they are needed to maintain the stomach lining and normal kidney function. Thus, researchers surmised, inhibition of COX-1 likely accounts for such NSAID side effects as gastrointestinal bleeding. This conclusion led to a new generation of drugs that inhibit only COX-2 (Science, 22 May 1998, p. 1191) and that are now the focus of several cancer trials.

    Last spring, the results of the first of these trials were published in The New England Journal of Medicine. In this study, a team including researchers at the M. D. Anderson Cancer Center in Houston, St. Mark's Hospital in London, NCI, and G. D. Searle in Skokie, Illinois, gave a new NSAID called celecoxib to patients with a rare hereditary condition, familial adenomatous polyposis (FAP). FAP is caused by loss or inactivation of the APC tumor suppressor gene, which leads to the development of numerous precancerous growths, or polyps. Some of these inevitably progress to full-fledged colon cancers. Celecoxib treatment reduced the number of polyps by nearly 30%. NCI's Ernest Hawk says that several more trials of similar new NSAIDs are getting under way for both colon and other cancers.

    As described at the NCI workshop, work in several labs has suggested that the drugs' antitumor effects stem directly from their inhibition of COX-2. For example, Raymond DuBois of Vanderbilt University Medical Center in Nashville, Tennessee, and his colleagues found that more than 80% of human colon cancers show much higher expression of COX-2 than the normal cells of the colon lining, suggesting that this overexpression contributes to tumor growth. Work with knockout mice also fingered COX-2. When researchers inactivate the APC tumor suppressor genes of mice, the animals develop numerous polyps in their intestines, as do humans with FAP. By knocking out the COX-2 gene in the FAP mice, a team of Merck researchers significantly reduced polyp formation. Treating the animals with a selective COX-2 inhibitor had the same effect.

    Several other teams have found that COX-2 overexpression may contribute to the development of other cancers as well, including those of the lung, breast, skin, and esophagus. Indeed, new work presented at the NCI workshop by Timothy Hla of the University of Connecticut Health Center in Farmington suggests that COX-2 activity may be all that it takes to induce breast cancer.

    Hla and his colleagues attached the COX-2 gene to a regulatory sequence that turns on the gene specifically in breast tissue and then introduced this hybrid gene into mice. The researchers found that the animals, particularly those that had had several litters, began developing metastatic breast cancer. “The data are consistent with the conclusion that COX-2 expression alone is sufficient to induce mammary carcinogenesis,” Hla told the workshop participants.

    Other findings point to two mechanisms by which COX-2 may foster cancer development. DuBois's team found that COX-2 appears to inhibit the form of cell suicide known as apoptosis, or programmed cell death, which should otherwise remove damaged or mutated cells. In addition, DuBois's group and a team led by Jaime Masferrer at Pharmacia in St. Louis have evidence that COX-2 activity leads to angiogenesis—the formation of new blood vessels needed to nourish a growing tumor. They also showed that COX-2 inhibitors block this effect.

    Alternative routes

    Not so fast, said some other workshop participants, who described ways that the inhibitors might stimulate apoptosis without involving COX-2 at all. For example, I. Bernard Weinstein of Columbia University College of Physicians and Surgeons in New York City has been working with a derivative of the NSAID sulindac produced by Cell Pathways Inc. of Horsham, Pennsylvania. The derivative, called Exisulind, does not inhibit COX-2 yet still has antitumor activity, Weinstein says. His team has shown that it inhibits the growth of human prostate cancer cells in lab cultures and also when transplanted into mice. The data suggest that Exisulind does this by inhibiting an enzyme that breaks down an internal cellular messenger called cyclic GMP, Weinstein said at the workshop. As a result, cyclic GMP builds up in the cell, touching off a series of reactions culminating in cell death.

    Work by Kenneth Kinzler, Bert Vogelstein, and their colleagues at the Johns Hopkins University School of Medicine suggests that sulindac itself may stimulate apoptosis through two pathways that don't involve COX-2. In one pathway, they reported, sulindac and other NSAIDs decrease the concentration of an antiapoptosis protein called Bcl-XL. In the other, the drugs counteract the activity of a protein called PPARδ, which protects against apoptosis. Here the NSAIDs may be directly countering the effect of the known cancer-causing mutation, the loss of the APC gene, because PPARδ concentrations go up when that happens.

    Finally, Richard Gaynor of the University of Texas Southwestern Medical Center in Dallas reported that NSAIDs interfere with still another cell signaling pathway. This one involves a protein called NF-κB, a so-called transcription factor that regulates the expression of a number of genes, including some that protect against cell death. NF-κB is held in the cytoplasm by another protein called I-κB until an appropriate signal calls for its release. This occurs when an enzyme adds a phosphate group to I-κB, causing it to break down and release NF-κB. Gaynor's team found that sulindac and aspirin both block the activity of the enzyme that phosphorylates I-κB, preventing NF-κB from being set free to exert its antiapoptotic and other effects. “I think this is going to be very complicated and that there will be multiple targets for these [NSAID] agents,” Gaynor says.

    The COX-2 proponents aren't convinced. They note that most of the experiments pointing to alternate targets involved treatment of cultured cells with what they describe as “industrial strength” NSAID concentrations—tens or even hundreds of times higher than a patient would be exposed to. “A human being never sees anything at that level. It would be incompatible with life,” DuBois says. Dannenberg, who organized the workshop with NCI's Victor Fung and Daniel Hwang of Louisiana State University in Baton Rouge, shares that concern. Dosage is a “key issue,” he says. “You see all these interesting COX-2-independent effects in [cell culture]. But are they relevant for understanding effects in a human or animal?”

    Kinzler concedes that the concentration issue needs to be addressed, although he suspects that because the drugs are given orally, their concentrations may well be higher in the intestine than in the bloodstream, where their concentrations have typically been measured. He also points out that, so far at least, the best evidence that NSAIDs are chemoprotective comes from studies of their effects on colon cancer. “It's a tough question as to what [NSAIDs] do biologically,” Kinzler says. But, he adds, the drugs could be working through both COX-2-dependent and -independent pathways, as the two are “not mutually exclusive.”

    However this debate turns out, say Dannenberg and others, at the very least, the studies finding non-COX-2-related effects of NSAIDs are identifying potential new targets for drugs that can be used to prevent or treat cancer. “A good interpretation of the day is that there are some other legitimate targets [in addition to COX-2] that need to be developed further,” says COX-2 camp member DuBois.

  16. AMERICAN GEOPHYSICAL UNION

    Predicting Icelandic Fire and Shakes

    1. Richard A. Kerr

    SAN FRANCISCO—When 9000 geophysicists gathered here last month for the annual fall meeting of the American Geophysical Union, they heard the usual potpourri of earth and planetary sciences. Subjects included predicting eruptions and earthquakes in Iceland, piggyback monitoring of atmospheric electrical circuits, and odd annual pulsations of the solid earth. (For earlier reports from the meeting, see Science, 19 January, p. 422.)

    Geophysicists have had a decidedly mixed record predicting the future. Successful warnings of volcanic eruptions, such as the 1991 Mount Pinatubo catastrophe, did not save 14 volcanologists from sudden death in five incidents over 10 years. Most seismologists have given up predicting earthquakes after failing to find any reliable warning signs preceding quakes. But the news out of Iceland is decidedly upbeat. Last year, Icelandic geophysicists issued successful and surprisingly specific predictions for the eruption of the Hekla volcano in February and a magnitude 6.6 earthquake in June. The successes, although not unqualified, point up the virtues of patient monitoring.

    Icelandic researchers had been waiting 9 years since the last small eruption of Hekla, a volcano 120 kilometers east of Reykjavík, geophysicist Kristjan Agustsson of the Icelandic Meteorological Office (IMO) in Reykjavík told those attending the meeting. Although the volcano had been erupting about every 10 years, “we almost believed Hekla was unpredictable” in the short term, says seismologist Páll Einarsson of the University of Iceland. But seismometers had caught a burst of small earthquakes in the hour before the previous eruption in 1991, so when tiny quakes were observed beneath the normally quiescent volcano shortly after 5:00 p.m. on 26 February, workers at IMO and the university took note. By 5:29 p.m., the intensifying swarm prompted the IMO to warn the Civil Defense of Iceland of a “possible imminent” eruption.

    IMO predictors weren't finished. About half an hour before the 1991 eruption, a strainmeter in a borehole about 15 kilometers from the volcano recorded an increasing crustal squeeze as magma pushed its way upward from a chamber 4 kilometers beneath the volcano. After nearly identical compression began on the same instrument at about 5:45 p.m., IMO issued a prediction at 5:53 that “an eruption was certainly to be expected within 20 minutes.” The eruption began at 6:17.
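    The alarm logic implied here (recognize the shape of the 1991 precursor in the incoming strain record) can be sketched as simple template matching. The toy Python below flags moments when a strain stream closely resembles a stored precursor template, using normalized cross-correlation; the synthetic data and threshold are illustrations, not IMO's actual procedure.

        import numpy as np

        # Toy alarm: compare the trailing window of a strain record against a
        # stored precursor template; trip the alarm when they closely match.
        def normalized_xcorr(window, template):
            w = (window - window.mean()) / (window.std() + 1e-12)
            u = (template - template.mean()) / (template.std() + 1e-12)
            return float(np.dot(w, u) / len(u))

        def strain_alarm(stream, template, threshold=0.9):
            n = len(template)
            for i in range(n, len(stream) + 1):
                if normalized_xcorr(stream[i - n:i], template) > threshold:
                    yield i - 1          # sample index that trips the alarm

        # Synthetic demo: an accelerating squeeze (quadratic ramp, 30 minutes
        # of 10-second samples) appended to background noise.
        template = np.linspace(0.0, 1.0, 180) ** 2
        stream = np.concatenate([np.random.normal(0.0, 0.05, 500),
                                 template + np.random.normal(0.0, 0.05, 180)])
        alerts = list(strain_alarm(stream, template))
        print("first alert at sample", alerts[0] if alerts else None)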

    “They really nailed it,” says volcano seismologist David Hill of the U.S. Geological Survey in Menlo Park, California. The warning from the rising magma dike should be broadly applicable, says geophysicist Alan Linde of the Carnegie Institution of Washington's Department of Terrestrial Magnetism, who helped operate the strainmeter. “Will we see something like this before eruptions at other volcanoes?” he asks. “I'm optimistic we will. You can't escape physics.”

    Although geophysicists don't yet understand the far more complex physics of a quake, Icelandic researchers have made some progress with the South Iceland Seismic Zone. A broad, 70-kilometer-long region on the southwest coast between Hekla and Reykjavík, the zone contains about 10 parallel, north-south-trending faults. They tend to fail in a sequence of magnitude 6 to 7 quakes, starting in the east and progressing westward as the rupture of one fault transfers stress to its neighbor (Science, 22 October 1999, p. 656). From historical records, researchers knew that two narrow north-south segments of the zone had not failed in 300 years and that the last sequence swept through the zone almost 100 years ago. Since the 1980s, Icelandic researchers have agreed that the zone in general and those two gaps in particular were due for renewed activity.

    That long-term forecast panned out on 17 June when a magnitude 6.6 quake struck the easternmost “seismic gap.” But hopes for a short-term prediction, based on extensive monitoring systems installed in the zone, were frustrated. “We didn't recognize any precursors before that earthquake,” seismologist Ragnar Stefánsson of the IMO told the meeting attendees. However, all eyes did turn to the west, where the next gap lay. By 20 June, it was obvious that the small earthquakes triggered in that gap by the mainshock 15 kilometers to the east were continuing rather than petering out as normal aftershocks do. IMO notified national and local civil defense authorities that another earthquake of similar size would probably strike soon, most likely in the nearby gap. Twenty-four hours later, the predicted quake hit. There were no deaths or injuries, and “people have been very grateful,” says Stefánsson; they had been alerted in time to secure their breakables.

    In hindsight, says Stefánsson, “there are some signs that increase our optimism we will be able to detect precursors before even the first earthquake in a sequence occurs.” In the months to hours before the first quake, shifting patterns and orientations of microearthquakes, a pulse of radon gas in the ground, and changing water levels in a borehole all pointed toward an impending quake, he says, conceding that “this is always easier to see afterwards.” Although many seismologists have been repeatedly disappointed by similarly promising observations, geodesist Roger Bilham of the University of Colorado, Boulder, is encouraged. “If you persist in looking where you expect an earthquake,” he says, “you'll see precursors. This [event] is what we need to renew interest in short-term prediction.”

  17. AMERICAN GEOPHYSICAL UNION

    Atmosphere's Power Grid Exposed

    1. Robert Irion


    Storms from the sun spark an electromagnetic frenzy in Earth's upper atmosphere. Some of that intense activity appears as curtains of shimmering auroras, but the rest of the electrical currents are invisible. Now, thanks to a surprising use of an array of costly communication satellites, researchers have devised a way to map the electrical power flowing through the planet's ionosphere. The new technique could lead to better warnings of surges that can zap utility lines. “This is a great piece of work,” says space physicist William Lotko of Dartmouth College in Hanover, New Hampshire. “It's at the intersection of science, technology, business, and society.”

    Colorful current.

    This aurora (bottom) traces invisible currents that pulse above Earth's poles. Scientists have now mapped electric power flowing into the atmosphere (top).

    CREDITS: (AURORA) NASA; (MAP) BRIAN ANDERSON AND COLIN WATERS, JHU/APL

    Earth's magnetic field carves a comet-shaped cavity in the solar wind, a constant stream of charged particles from the sun. When the sun acts up, changes in the wind's intensity can ripple this vast bubble “like a wind sock,” says atmospheric physicist Brian Anderson of the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland. Those buffetings drive millions of amperes of current over Earth; most of that energy funnels toward the poles.

    To calculate the strength and direction of those currents, researchers must measure both the electric and magnetic fields far overhead. Combining the two fields yields a map of the electric power, which usually streams down toward the planet along narrow, curved bands. A network of a dozen auroral radars, called SuperDARN, tracks the electric fields near the North Pole. Magnetic fields at high latitudes, however, have been elusive, because few scientific satellites travel in polar orbits that carry them through those regions.

    The solution came from an unexpected source: the Iridium network, a $5 billion constellation of 72 satellites that provided global telephone service to customers of Motorola. The operating company declared bankruptcy last year, when the pricey service failed to attract enough users. In December, however, a new company revived Iridium with a 2-year, $72 million contract from the Defense Department. That's good news for Anderson and his colleague Colin Waters of the University of Newcastle in New South Wales, Australia, who figured out how to use Iridium data to chart the polar magnetic fields.

    Each satellite orbits from pole to pole at an altitude of 780 kilometers and carries a magnetometer to help engineers monitor conditions in orbit. By combining the hourly magnetic field readings with the SuperDARN data, Anderson and Waters derived the first dynamic maps of the electrical power flowing into and out of the ionosphere. Typical power levels are 40 billion watts for the region northward of 60 degrees latitude, although they flicker wildly depending on the sun's mood. Much of the power focuses within “hot spots” that ebb and flow from hour to hour. “We can see the currents generated throughout an entire solar event, and we can handle the peak intensities,” Anderson says. “Before, it was like trying to measure a hurricane with an anemometer that only goes to 10 miles [16 km] per hour.”
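    The underlying physics of the map is the Poynting flux: crossing the electric and magnetic fields gives the electromagnetic power flowing through each patch of sky, S = (E × B)/μ0. The sketch below shows that pointwise calculation on a grid; the field arrays are random stand-ins for SuperDARN and Iridium measurements, with only the rough field magnitudes chosen to be realistic.

        import numpy as np

        MU_0 = 4e-7 * np.pi   # permeability of free space, SI units

        # Hypothetical gridded fields over the polar cap: an electric-field map
        # (as SuperDARN provides) and a magnetic-perturbation map (as the
        # Iridium magnetometers provide), each with x, y, z components.
        lat_cells, lon_cells = 40, 180
        E = np.random.normal(0.0, 20e-3, (lat_cells, lon_cells, 3))   # V/m
        B = np.random.normal(0.0, 200e-9, (lat_cells, lon_cells, 3))  # tesla

        S = np.cross(E, B) / MU_0        # Poynting flux, W/m^2, at each cell
        downward = np.clip(-S[..., 2], 0.0, None)   # power into the atmosphere

        CELL_AREA = 1.0e9                # m^2 per cell, a crude uniform estimate
        print(f"total downward power: {np.sum(downward) * CELL_AREA:.2e} W")

    With the illustrative field strengths above (tens of millivolts per meter, hundreds of nanotesla), the total lands in the tens of billions of watts, the same order as the 40-billion-watt figure Anderson quotes.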

    Anderson and Waters are pushing for even faster data transfer to create new power maps every 10 minutes or so. That capacity could lead to a system to warn electric utilities about the movements of intense hot spots. Severe electrical fluxes in the atmosphere induce currents in the ground and along high-power transmission lines, Lotko notes. With just a short warning, he says, utility companies could prevent dangerous surges.

    Other scientists hope to gain new insights into the interactions between the sun and Earth's protective magnetic sheath. “This shows that data from commercial and military satellites can give the space science community a huge boost,” says space physicist Lie Zhu of Utah State University in Logan. “We should push the government hard in that direction.”

  18. AMERICAN GEOPHYSICAL UNION

    Earth's Breathing Lessons

    1. Richard A. Kerr


    Earth is never still. Any large earthquake sets it ringing like a bell for hours and days. The wind, apparently, causes it to “hum” continually at a high pitch. Now, some suspect that Earth is also “breathing,” compressing its crust and extending it once each year. This cycle is most evident in Japan, geophysicists told the meeting, where it may be responsible for that country's “earthquake season.” Elsewhere, it may lead some volcanoes to erupt almost solely between September and December.

    Detecting heavy breathing takes some extremely sensitive instrumentation. Geodesist Makoto Murakami of the Geographical Survey Institute in Tsukuba, Japan, and volcano seismologist Stephen McNutt of the University of Alaska, Fairbanks, reported that a network of 50 Global Positioning System (GPS) stations across northeast Japan is detecting 30 millimeters of east-west crustal shortening a year as the diving Pacific tectonic plate snags on the bottom of the island nation. But the compression isn't steady: The squeeze builds 15% faster than average in the fall and 15% slower in the spring. Although GPS instruments must weather all sorts of environmental insults, such as heating by the sun, Murakami and McNutt believe they have eliminated all the likely culprits besides real breathing of Earth.
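    The reported signal reduces to a steady rate with a small annual wiggle: shortening at 30 millimeters per year whose rate runs 15% high in the fall and 15% low in the spring. The sketch below builds that series and recovers the modulation by least squares; only the 30 mm/yr and 15% figures come from the text, and the phase (rate peaking around 1 October) is an assumption.

        import numpy as np

        RATE_MEAN = 30.0   # mm/yr of east-west shortening (from the text)
        MOD = 0.15         # 15% seasonal swing in the rate (from the text)
        PEAK = 0.75        # fraction of the year when the rate peaks (assumed)

        t = np.linspace(0.0, 3.0, 3 * 365)   # three years, daily samples
        rate = RATE_MEAN * (1.0 + MOD * np.cos(2.0 * np.pi * (t - PEAK)))
        shortening = np.cumsum(rate) * (t[1] - t[0])   # displacement, in mm

        # Recover the seasonal term from noisy rate estimates by fitting
        # rate(t) = a + b cos(2 pi t) + c sin(2 pi t) with least squares.
        obs = rate + np.random.normal(0.0, 1.0, t.size)
        A = np.column_stack([np.ones_like(t),
                             np.cos(2.0 * np.pi * t),
                             np.sin(2.0 * np.pi * t)])
        a, b, c = np.linalg.lstsq(A, obs, rcond=None)[0]
        print(f"mean rate {a:.1f} mm/yr, modulation {np.hypot(b, c) / a:.0%}")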

    Geodesist Kosuke Heki of Japan's National Astronomical Observatory in Mizusawa told the meeting attendees that he has independent data showing similar seasonal changes in northern Japan. He analyzed observations of a strainmeter installed in a 150-meter-long tunnel dug into granite bedrock in northeast Japan and found much the same rhythmic squeezing and release as seen by GPS.

    GPS and strainmeters aren't the only things that seem to pick up planetary breathing. McNutt reported identifying four volcanoes—Pavlof in Alaska, Oshima and Miyake-jima in Japan, and Villarrica in Chile—that have a strong preference for erupting between September and December, a preference significant at better than the 99% level in the case of Pavlof. At the well-studied Pavlof, an autumnal squeeze is presumably forcing magma out of the volcano's magma chamber late in the year, says McNutt, like toothpaste squeezed out of a tube, and easing the pressure in the spring to discourage eruptions then. And in 1999, seismologists Masakazu Ohtake and Hisashi Nakahara of Tohoku University in Sendai, Japan, confirmed that since 684 A.D. the great earthquakes of central Japan have without exception struck from August to February. The right kind of squeeze at the right time might alternately encourage or discourage quakes, notes Heki.
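    McNutt's "better than the 99% level" claim can be checked with back-of-envelope statistics. If eruptions were spread uniformly through the year, each would land in a fixed September-to-December window with probability 4/12, so the chance that k or more of n eruptions do so by luck is a binomial tail, as in the sketch below. The eruption counts are made up for illustration, since the text quotes only the significance level.

        from math import comb

        def binom_tail(n, k, p=1.0 / 3.0):
            """P(X >= k) for X ~ Binomial(n, p): chance of k or more hits."""
            return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i)
                       for i in range(k, n + 1))

        # Hypothetical record: 12 of 15 eruptions in the four-month window.
        print(f"chance probability: {binom_tail(15, 12):.4f}")
        # Anything below 0.01 would count as "significant at better than
        # the 99% level."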

    Geophysicists have traditionally shied away from making such connections. “In the past, we've tended to dismiss things without an obvious physical mechanism,” says volcano seismologist David Hill of the U.S. Geological Survey in Menlo Park, California. But after California's 1992 Landers earthquake reached out hundreds of kilometers to trigger quakes by some still mysterious means, Hill, for one, became more receptive. At the meeting, he says, “I was struck that there may be something” to Earth breathing and eruptions or earthquakes.

    Geophysicists will first have to confirm the reality of breathing in Japan and search elsewhere, then pursue the matter of mechanism. McNutt had originally suggested that an added load of wind-blown coastal water might be forcing magma out of Pavlof. In Japan, Heki sees signs that the weight of a winter's snowfall on the western mountains of northern Japan may suffice. McNutt allows that the snow load may play a role, but he wonders if that would be enough. Perhaps more than one mechanism, he says, is needed to unsettle Earth.

  19. AUSTRALIA

    Engineered Mouse Virus Spurs Bioweapon Fears

    1. Elizabeth Finkel*
    1. Elizabeth Finkel writes from Melbourne.

    Efforts to find a better pest-control agent lead to a potentially deadly surprise for a team of Australian researchers

    MELBOURNE, AUSTRALIA—The surprising virulence of a virus genetically altered to reduce rodent infestations in Australia has raised alarm over whether such research could be hijacked to produce biological weapons. In an unusual twist, those sounding the alarm are not environmental activists but the scientists themselves. Despite their warning, released with the research results this month in an electronic version of the Journal of Virology, it's not clear whether the unexpected result, which turned a vector into a potent killer, could be duplicated in viruses that affect humans. But scientists say it should serve as a warning to the community to be more aware of the potentially harmful consequences of their work.

    The goal of the research was certainly benign. The scientists were attempting to sterilize rodents by using a virus to trigger an antibody attack against mouse egg proteins. But when the researchers attempted to beef up this virus by incorporating the immune system hormone interleukin-4 (IL-4) into its genetic payload, the virus turned into a killer, wiping out all the animals. Even vaccination offered little protection.

    Handle with care.

    CRC's Ron Jackson would like to know if his findings are peculiar to the mousepox virus.

    CREDIT: KAREN MOBBS/CRC BIOLOGICAL CONTROL OF PEST ANIMALS

    The potency of this modified virus startled officials at the Co-operative Research Center (CRC) for the Biological Control of Pest Animals in Canberra, whose scientists had teamed up with the John Curtin School of Medical Research at the Australian National University in Canberra. So on 16 January the CRC issued a press release, timed to accompany the article, pleading for stronger measures to combat the threat of biowarfare arising from such good intentions. Not surprisingly, the press release triggered sensational warnings in the Australian media and elsewhere.

    “This is the public's worst fears about GMOs [genetically modified organisms] come true,” says CRC director Bob Seamark, who led the research. Adds Annabelle Duncan, a former deputy head of the United Nations team of inspectors in Iraq and now chief of molecular science at the Commonwealth Scientific and Industrial Research Organization (CSIRO), “This shows that something we had thought was hard—increasing the pathogenicity of a virus—is easy.”

    The focus of all this attention is a mousepox virus engineered to carry the mouse egg-coat protein ZP3, or zona pellucida 3, as a mouse contraceptive. The foreign protein triggers an antibody response that, within several months, destroys eggs in female mice, at least in one laboratory strain. But when Seamark's group tried the technique on a second strain of mice, known as Black 6, it was ineffective.

    To boost the antibody response, Seamark called on the immunological expertise of Ian Ramshaw's group at the Australian National University. In previous work, they had shown that adding IL-4 to the genetic payload of the vaccinia virus increased the antibody-producing response in mice and toned down the effectiveness of virus-clearing killer T cells.

    Hoping to take advantage of this shift, Seamark's team inserted the IL-4 gene into the mousepox virus. They expected only to strengthen the antibody response in the resistant Black 6 strain, but instead the virus overwhelmed the mice, proliferating out of control and destroying their livers. Even mice that had been vaccinated against the virus fared poorly, with half dying immediately and the remainder developing a chronic abscess at the site of infection. “This was a shock to us,” says Seamark, who worries about the close relationship between mousepox and smallpox, once a scourge to humans. “We [also] had shown that a commonly used technology could overwhelm resistance and render vaccination useless.”

    The team spent the next 18 months confirming the data and debating whether to go public with them. In the end, disclosure won out over concerns about educating future bioterrorists and alarming the public. “We need the public to trust us if we are going to seek their approval to release pest-control viruses down the track,” says Seamark, the driving force behind a consensus conference on GMOs held 2 years ago in Canberra (Science, 5 March 1999, p. 1427). But any intentional release, he hastens to add, won't involve viruses carrying the IL-4 gene. “These are confined to the high-security lab,” he says.

    Such fears may be overrated, says Ron Jackson, the CRC virologist who carried out much of the work. Jackson suspects that the findings may be peculiar to the mousepox virus, which naturally carries other proteins, such as interferon γ binding protein, that weaken the antiviral response. He notes that the result did not occur with the vaccinia virus in Ramshaw's lab.

    At the same time, he and other scientists point out that many viruses employ immune system modifiers as part of their arsenal, including the human Epstein-Barr virus and primate cytomegaloviruses. “I wouldn't say that mousepox is an exception,” says immunologist Chris Burrell of the University of Adelaide in South Australia. “The more we look, the more we find viruses that carry these types of genes.”

    The implications of this finding are of intense interest to organizations such as the Federation of American Scientists, which has formed a working group to develop a protocol that would add verification powers to the currently toothless international convention on biological weapons. “Until now, we considered genetically engineered organisms little threat compared to the naturally occurring ones,” says microbiologist Mark Wheelis of the University of California, Davis, a member of the working group. “But with genomics and proteomics, we're going to see a lot more of this sort of thing.”
