News this Week

Science  26 Sep 1997:
Vol. 277, Issue 5334, pp. 1916

    The Right Climate for Assessment

    Richard A. Kerr

    As nations debate limits on greenhouse emissions, Robert Watson takes charge of the world's leading scientific program of climate change assessment. Can he make the IPCC work better?

    Robert Watson's energetic approach to science is legendary. Once he flew from Washington, D.C., to Jakarta for lunch—just lunch—to release a report on biodiversity. Then there was his trip to Amsterdam. Arriving too late for breakfast, he gave a half-hour talk to an international meeting on changes in land use and left before lunch was served. For almost 20 years, Watson has traveled the United States and abroad nagging, cajoling, and prodding scientists into refining the way they reach consensus on a range of environmental issues—among them stratospheric ozone depletion, greenhouse warming, biodiversity, and sustainable development.

    The aim of all this frenetic activity is to help policy-makers sort through the babel of conflicting scientific opinion on important policy issues. Working at NASA, the White House, and now the World Bank, Watson has helped convert the subtle global threat of ozone depletion into a model of scientific assessment and political response. He's injected an integrative approach to environmental problems at the White House, and he's begun to heighten environmental awareness at the World Bank, where he is now director of the environmental department. Next week, he also takes the helm of the biggest scientific assessment of them all, the Intergovernmental Panel on Climate Change (IPCC), as it embarks on its next major report.

    The post puts Watson smack in the middle of one of the hottest scientific debates of all—a debate in which IPCC has already played an influential role. Its 1995 report, which pointed to signs of a human-induced “discernible change” in global climate, was a key input into a political process that will culminate in December, when representatives of more than 160 nations will meet in Kyoto, Japan, to adopt a binding agreement on greenhouse-gas emissions. IPCC's next report, due in 2000, will try to refine the scientific consensus further, pushing assessments toward the edge of—but not into—policy-making. It's an enormous job, but one that plays to his strengths as a hard-driving, consensus-building science manager.

    The IPCC has its roots in the early days of assessing the cause and likely extent of ozone depletion. Nine years after its creation by the United Nations Environment Program (UNEP) and the World Meteorological Organization (WMO), it has evolved into an exercise involving 2500 scientists around the globe. Every 5 years, a 3-year effort culminates in three major reports—on the science of climate change, its impacts, and possible responses. Each report consists of about a dozen chapters put together by lead authors from around the world. The authors draw on scores of equally international contributors and incorporate comments from hundreds of reviewer scientists. The first two reports came out in 1990 and 1995, and the next cycle is just beginning. Watson assumes the chair from Bert Bolin, professor emeritus in meteorology at Stockholm University in Sweden.

    Watson faces the job of defending and fine-tuning a process that some scientists view as already near perfection—but a few regard as fatally flawed. “The IPCC is a grand experiment,” says atmospheric scientist Michael Oppenheimer of the Environmental Defense Fund in New York City. “It's a tough question: How do you provide a process that draws technical conclusions and is completely faithful to the science, but is still usable by governments? There is no simple answer.”

    The exhaustive and exhausting process has had its critics lately, prompting Watson to offer that “if there's any perception it's not the best, we're going to change it.” His goal for the climate assessment project, on which many trillions of dollars in economic activity may turn, is unassailable “transparency and credibility.”

    Who do you trust?

    Like the IPCC, Watson got his start when stratospheric ozone depletion was still a theoretical threat. His Ph.D. from London University in 1973 was on the gas-phase kinetics of chlorine, bromine, and fluorine, the very chemistry that within a couple of years would become central to the ozone controversy. But Watson admits he “did all this work without a clue it had any relevance to the atmosphere.” A series of postdoc positions soon introduced him to the social relevance of gas-phase chemistry, and 5 years at the Jet Propulsion Laboratory in Pasadena, California, involved him in the evaluation of laboratory kinetics data for modeling atmospheric chemical processes. The experience, he says wryly, taught him the process of figuring out “which of these 23 papers do you trust?”

    Arriving at NASA headquarters in 1980 as program scientist for the Upper Atmosphere Research Program, Watson found the overall assessment of ozone science to be just as fragmented. In a 2-year period, at least five major assessments were carried out—by the U.S. National Academy of Sciences, NASA, the European Union, UNEP, and the British government. Watson was bewildered: “I said, ‘This is ridiculous.’ Both the scientists and the policy-makers were looking at the differences between the reports instead of the similarities. Policy-making was already complicated enough; we realized we needed an international umbrella.” By 1981, Watson had brought in the WMO, and eventually the series of international ozone reports—which continues today—was co-sponsored by the two major international environmental organizations—UNEP and WMO.

    Watson's take-home lesson from his ozone days is simple: “I believe the [international] IPCC process is much, much more powerful than the single-agency approach. The most important thing when you have an assessment process is that it has to be credible to all stakeholders. They may not all agree with the outcome, but if they're all part of designing the process in the beginning, they'll be more willing to let the chips fall where they may.”

    IPCC had already targeted developing countries for greater inclusion, but Watson plans to go further. “I believe industry and business and environmental NGOs [nongovernmental organizations] have to be absolutely and fully mainstream and integrated into IPCC,” he says. “It has to be a very inclusive process. [An assessment] shouldn't be one view of the world.”

    The central—and most contentious—issue that has faced this diverse group from the start is credibility. In the first phase of an environmental problem, says Daniel Albritton of the National Oceanic and Atmospheric Administration's (NOAA's) Aeronomy Laboratory in Boulder, Colorado, who with Watson has been a science adviser to U.S. ozone and climate negotiators, scientists must focus on the question: “Is this issue for real?” Ozone researchers could answer the question authoritatively in the mid-1980s after they detected a global downward trend in ozone and a clear link between the Antarctic ozone hole and pollutant chlorofluorocarbons (CFCs). The discussion then shifted to weighing options for ameliorating the problem, and the result was a series of protocols that have led to a stabilization and then to a detectable decline in CFC concentrations (see graph).

    Cause and effect. Watson has helped with international assessments of ozone science, fueling agreements that have already halted the rise of many ozone-destroying chemicals.


    For global warming, the reality question is still under some debate. IPCC has maintained that the best estimate for greenhouse warming due to a doubling of carbon dioxide is about 2°C, enough to cause significant warming in the next century. But after 20 years of study, the possible range of warming for a carbon dioxide doubling still runs from a modest 1.5°C to a catastrophic 4.5°C. In its 1995 assessment, IPCC for the first time reported signs of a detectable human influence on climate, but that link between this century's warming and the greenhouse effect will remain tentative until the 2000 report and perhaps beyond.

    In spite of IPCC's efforts to include diverse opinions in developing this consensus, some critics have complained that it has given short shrift to minority opinions and fostered an exclusiveness that tends to enshrine conventional thinking. One of the most visible critics of the IPCC process and product is climatologist Patrick Michaels of the University of Virginia in Charlottesville, who sees no prospect of disastrous climate change.

    One of the IPCC's biggest problems, says Michaels, is the disproportionate number of government laboratory scientists with the travel funds and the time away from academic responsibilities to participate in IPCC's intensive and often far-flung gatherings. Participants also tend to be from organizations “whose budgets are predicated on global climate change” being a major threat, he says. “I don't know how they're going to get around this.” At the same time, Michaels gives Watson points for trying to be fair. “Watson is a very straight shooter. I think he feels IPCC should have paid more attention in its early iterations to people who had, in fact, been specifically excluded.”

    While defending the IPCC participants as a fair sampling of the best and the brightest, Watson plans several changes to increase the openness of the IPCC process and address complaints by greenhouse skeptics that the IPCC altered its 1995 report after its final review by colleagues and government representatives. Critics like S. Fred Singer of the Washington, D.C.-based Science and Environmental Policy Project argue, for example, that the lead author of one chapter dropped some discussions of the uncertainties involved in recognizing greenhouse warming. Watson admits that IPCC erred in that case, but he says “there was a lot of smoke and very little fire” and “nothing was done wrongly.” Rather, he says, the typical 6-week interval between a working group meeting and a plenary meeting was telescoped into 2 weeks to meet a tight deadline, and what was sacrificed was a review of changes mandated by those at the plenary. However, Watson says the process left a perception of impropriety that “I want to avoid in the future. We must follow our rules of procedure.”

    Watson hopes another change—the insertion of independent “editors” into the review process—will fend off charges of clubbish behavior. Although lead authors were required to document why they incorporated or rejected each comment from reviewers, theirs was the sole authority. “That oversight process probably was not adequate,” says Kevin Trenberth of the National Center for Atmospheric Research in Boulder, himself a lead author in the last report. In the next round, Watson plans to use editorial boards whose members would intervene between chapter lead authors and reviewers “so no one can say the authors can easily blow off comments they don't like.”

    Although IPCC itself has reached no consensus on the exact magnitude of future greenhouse warming, negotiators at Kyoto will consider an agreement that would commit industrialized nations to binding limitations on greenhouse-gas emissions. While meeting such commitments has been the focus of IPCC's working group III, so far its work has not been well received. Some want more emphasis on adapting to climate change rather than minimizing it, and even those who favor emission limits suspect that Watson may take too narrow a view. “There's a whole lot of things that could be done,” says a scientist involved in IPCC, “but I don't think people are thinking about that, and I'm not sure Bob Watson thinks about it, either. He seems to be hung up on setting binding [emissions] targets.” Watson thinks he is more balanced than that, but he admits that the working group on options needs some restructuring, in particular to integrate the technology, economics, and social science of responding to climate change.

    Right man for the job

    Integrating the diverse scientific viewpoints involved in discussing climate change should come naturally to Watson: It's been central to his whole career. He worked every angle of the ozone problem as a program manager at NASA, a leader in international assessment, and a scientific adviser to U.S. negotiators, says policy analyst Henry Lambright of Syracuse University in New York. Lambright calls Watson a “bureaucratic entrepreneur,” someone who manages his part of the bureaucracy aggressively for a clear goal, in Watson's case to produce science that could create a consensus on ozone depletion. “He seized the moment,” says Lambright. “He was a network builder. He traveled all over the place; he's a tremendously energetic guy, works hard, and is quite savvy. He pushed his position to the utmost; he was a catalyst for a much bigger process.”

    While pursuing the ozone problem at NASA, Watson “had the right issue and the right organization at the right time,” says Lambright, who has studied the science-policy process of the time. But that may not have been the case when Watson went to the White House's Office of Science and Technology Policy (OSTP) as associate director for environment. There his efforts to coordinate federal agencies' response to the greenhouse problem, global change, and other environmental problems met with far less success.

    With Watson's arrival, OSTP informed Congress that the new Clinton Administration “would go beyond basic research on the physical science issues [of global change] to societal and health issues,” says Lambright. Watson “tried to move science into policy quickly, which was very entrepreneurial … but people got upset with a White House figure telling agencies what to do.”

    Watson admits that his approach at OSTP didn't fly all that well, but “if I had to do it again, I would,” he says. Watson feels that his assessment efforts at OSTP “went quite well.” In areas as diverse as ozone depletion and oxygenated motor vehicle fuels, “the science input was welcome at the table” where consensus was hammered out.

    Coordinating the science at federal agencies was another matter, however. “People may feel I pushed too far in trying to be holistic,” says Watson. His holistic viewpoint on natural resources—integrating global warming, biodiversity, and toxins, for example—seemed to some researchers to be more like benign neglect of global warming (Science, 22 September 1995, p. 1665). And agencies were reluctant to spend the time needed to coordinate their activities if their budgets weren't rising, too.

    As Watson takes on leadership of the biggest scientific assessment of all time, his former boss at NASA, Shelby Tilford of Orbital Sciences Corp. in Dulles, Virginia, hopes he will remain focused and energetic. “I think it's an enormous task,” he cautions. “This is a much, much more complicated issue than CFCs. The whole process is going to be much more difficult. The main thing is for Bob to keep the scientific integrity without [the debate] becoming politicized too early. I wish him a lot of luck.”


    South Wants Place at Table in New Collaborative Effort

    Pallava Bagla
    Pallava Bagla is a science writer in New Delhi.

    HYDERABAD, INDIA—A new international initiative to combat malaria in Africa has triggered a mixed reaction from researchers in developing countries. While scientists on the front lines in the battle against malaria hail the prospect of additional funding for a disease that claims 3 million lives a year, many wonder if the Multilateral Initiative on Malaria (MIM) can deliver on its promise. At a global meeting on malaria research held here last month, leading researchers from the developing world warned that MIM can succeed only if it treats them as equal partners and tries to build up African science as well as promoting high-quality research.

    The idea of a multilateral initiative was first publicly discussed at an international meeting of malaria scientists and public health experts held in January in Dakar, Senegal (Science, 17 January, p. 299). At last month's meeting here, officials from the U.S. National Institutes of Health (NIH), the World Health Organization (WHO), the World Bank, and the Wellcome Trust pledged $2 million to get the project off the ground. In addition, NIH has committed another $2 million for new initiatives in 1997, including creating a repository of malaria reagents for use by the global research community (Science, 29 August, p. 1207). “For the first time, officials from MIM have actually conveyed to the larger scientific community their enthusiasm and their commitment to a genuine partnership in malaria,” noted Barend Mons, a senior adviser on international health research for the Medical Research Council of the Netherlands, at the Hyderabad meeting.

    But getting this initial commitment may turn out to be the easy part. The $2 million raised so far is paltry compared with the requests for funds that have poured in since MIM was announced (see sidebar). And even if MIM's sponsors do succeed in raising additional millions, forging the kinds of partnerships that will be needed to carry out the work will be tough. One major obstacle is the imbalance in resources between North and South. That, in turn, can lead to a phenomenon that Win Kilama, director-general of the National Institute for Medical Research in Dar es Salaam, Tanzania, disparagingly calls “parachute science,” in which Western scientists drop in to skim off results from local trials. Then there are the stumbling blocks within the developing world—excessive bureaucracy, meager funding, a poor infrastructure, and a shortage of trained local scientists—that dilute the effectiveness of outside efforts.

    Crossing borders, saving lives. One successful, long-term collaboration between malaria researchers in the North and South involves Richard Carter of the University of Edinburgh, Peter David of the Pasteur Institute, and Kamini Mendis of the University of Colombo, Sri Lanka. At the same time, Tanzania's Andrew Kitua works on an African test of a synthetic vaccine.


    Next month MIM will take the first step toward overcoming these problems when it convenes a panel to sort through “letters of interest” from researchers. The goal, says Tore Godal, director of WHO's special program for research and training in tropical diseases (TDR), is “to promote, coordinate, and fund collaborative research in Africa” that will lead to “sustainable development of malaria research and control.” Organizers say that it could be extended to other regions if the resources materialize.

    Godal's language is intended to address two major concerns. The first, raised by European funding partners, was that NIH might end up calling the shots by controlling the funding decisions. “This coordinated collaboration from a single pot of money will just not do,” says Mons. NIH director Harold Varmus says no such approach was ever planned. “We are receptive to any funding arrangement that will advance science and health in Africa,” he told Science.

    The reference to “sustainable development” is meant to reassure researchers in developing countries who recognize that much of their science lags behind that of the West but who want to avoid being pawns in another top-down fight against malaria. “There is a great danger in the homogeneity that may develop as a consequence of this MIM,” says Kamini Mendis, a professor of parasitology and chief of the Malaria Research Unit at the University of Colombo, Sri Lanka. “Any loss in diversity of thinking and ideas could be big. In the long run, the greatest challenge for the MIM is building research capacity in Africa. One must find ways of encouraging original thinking within the proposed structure for increased scientific activity there.”

    Mendis knows what it takes to succeed in the world of big-time collaborations. For 15 years she has teamed with Richard Carter, a geneticist at the University of Edinburgh in the United Kingdom, and Peter David, a molecular biologist at the Pasteur Institute in Paris, to study natural simian malaria models for testing vaccine candidates. They have paid special attention to the parasite most common in Sri Lanka, Plasmodium vivax. Kick-started with a grant from the TDR, the collaboration has produced some 60 papers, including those showing the advantages of natural host-parasite pairs over artificial hosts, such as rodents, in studying the disease. The collaboration has also identified the only candidate vaccine for blocking the transmission of P. vivax.

    The secret to success, says Mendis, is “a common vision of the scientific questions to be answered” and “very high scientific standards … not an easy task given that there is hardly any peer group in Colombo.” For Carter, the key ingredient is less tangible: “At both the personal and the intellectual level, everything just clicked.”

    However, such fertile ground is rare in North-South malaria collaborations. Vector biologist Vinod Prakash Sharma, who is also director of the Delhi-based Malaria Research Centre of India, says that the norm is government red tape so formidable that he has “simply stopped writing collaborative projects anymore.” A 3-year wait for clearance from the Indian Council of Medical Research and the Ministry of Health, he says, is “time enough to kill the enthusiasm of any partner from the developed world.”

    Even if Western scientists retain their enthusiasm, it isn't easy for them to find suitable collaborators. In all of Africa, estimates Andrew Kitua, scientific director of the Ifakara Center in Tanzania, “there may not be more than 10 malaria researchers who can collaborate as co-equals” with Northern scientists. Even a well-funded lab is hard pressed to make a difference in local capacity building.

    With an annual budget of about $6 million and a staff of 500, the tropical disease research laboratories and hospital operated by Britain's Medical Research Council in The Gambia are supposed to be the most sophisticated of their kind in Africa. But only about a quarter of their scientific roster is filled by native Africans, admits Brian Greenwood, a professor of communicable diseases at the London School of Hygiene and Tropical Medicine, who stepped down in 1995 after heading the operation for 15 years. The labs, he notes, have produced only one or two local Ph.D.s a year over the last decade. “We had certain high academic standards to maintain,” says Greenwood.

    Equally formidable are the barriers to South-South cooperation. “There is a certain gap and lack of effective communication between the Anglophones and Francophones, even though they may be neighboring countries in Africa,” says John LaMontagne of the U.S. National Institute of Allergy and Infectious Diseases. “Spoken language complicates the issue even further.”

    For MIM to flourish, say non-Western researchers, it must shore up the inadequate scientific infrastructure at the same time it wages war on the disease itself. “If malaria is to be really tackled,” says Kitua, “then both donor agencies and local African governments have to invest heavily in real capacity-building exercises that take place in the local African setting and not in America or France.”

    Of course, training programs need participants to work. “I have been having problems in recruiting local Tanzanians for the Ifakara Center,” Kitua confesses. “Communication systems are rather unreliable, and getting access to scientific literature is very difficult,” adds Kitua, whose center recently acquired a communications satellite terminal from WHO.

    The importance of such links is the impetus for two new regional networks, not currently part of MIM, created to strengthen South-South ties. One, called the African Malaria Testing Network, hopes to monitor antimalarial drug resistance in Tanzania, Kenya, and Uganda. Another tripartite arrangement, between Thailand, Burma, and Sri Lanka, hopes to train young malaria researchers through a mutual exchange program. “This would strengthen regional dialogues,” says Mendis.

    The challenge for MIM is to combine all these elements into a comprehensive package that advances malaria research and improves science in the developing world. And nobody expects overnight success. “Traditionally, these funding agencies have never worked together,” says one Australian researcher who requested anonymity. “So you cannot expect them to start working in close concordance after only two meetings.” Kilama is also cautious about the pending flow of dollars. “The MIM was long overdue and it is very welcome,” he says. “But there might be strings attached to the money. I am enthusiastic, but let's wait and see.”


    MIM Gets Down to Business

    Pallava Bagla

    NEW DELHI, INDIA—Despite concerns about how the new Multilateral Initiative on Malaria (MIM) will be managed and what impact it will have on the developing world (see main text), researchers haven't been shy about asking for their share of the money. That leaves MIM organizers with the familiar problem of separating the wheat from the chaff.

    Last spring, an invitation from the U.S. National Institutes of Health and the European Commission for “letters of interest” attracted 134 responses seeking a total of $130 million. With only $2 million pledged, organizers have formed a task force that will meet next month in Geneva to begin the process of elimination.

    The panel's primary job will be to identify ideas that can be turned into first-rate proposals to build research capacity, and to encourage labs with similar ideas to join forces on a common application. “It is clear to anyone who has read the letters that they are simply statements of ideas and are not really comprehensive proposals,” says Anthony Fauci, director of the U.S. National Institute of Allergy and Infectious Diseases. “Therefore, I think it is very likely that the review committee will request formal proposals from some of the applicants. It may also recommend that some investigators might benefit from submitting a joint proposal.”

    This new round of proposals will likely be reviewed in February, says Tore Godal, director of the special program for research and training in tropical diseases of the World Health Organization, which has assembled the task force. By that time, he says, additional funding may be available from both current and new partners.


    NIH Case Ends With Mysteries Unsolved

    Jocelyn Kaiser

    When they surfaced 2 years ago, the allegations were explosive: A pregnant scientist in a lab at the National Institutes of Health (NIH) said she had been poisoned by a radioactive isotope, and 26 of her co-workers were subsequently found to have been contaminated as well. Last week, an investigation into this bizarre affair drew to a close, leaving many questions unanswered. In a decision issued on 17 September, the Nuclear Regulatory Commission (NRC) concluded that the radiation exposure of the scientist, Maryann Wenli Ma, and others was “deliberate.” But it could not identify the perpetrator and offered no motive.

    NRC did, however, absolve NIH of blame and denied Ma's and her husband's request that NIH be stripped of its license to use radioactive materials. Ma's attorneys say they may ask the NRC or Congress to review the decision. “Obviously, we're very distressed by this decision,” says Debra Katz of the Washington law firm Bernabei and Katz, which represented Ma and her husband, Bill Wenling Zheng, who worked in the same lab as Ma.

    It was Zheng who first discovered, during a routine check on 29 June 1995, that Ma had been exposed to phosphorus-32 (P-32)—a tracer widely used in biomedical labs. She was eventually found to have been exposed to between 8 and 12.7 rems—well above the NRC annual limit of 5 rems—and her 17-week-old fetus to 5.1 to 8.1 rems. Ma said she believed the source was a lunch of Chinese food leftovers stored in a lab refrigerator. A subsequent investigation found that 26 others, including Zheng, had received much smaller exposures when they drank from a water cooler apparently spiked with P-32 (Science, 28 July 1995, p. 483).

    In October 1995, Ma and Zheng filed a petition charging NIH with lax safety procedures, claiming that Ma had been given inadequate medical care, and calling for the suspension or revocation of NIH's license to use radioisotopes. The petition also claimed that before the contamination occurred, the couple's lab chief, molecular pharmacologist John Weinstein, had wanted Ma to abort her fetus so that having a child wouldn't interfere with her research.

    The NRC investigation, conducted jointly with the Federal Bureau of Investigation, the NIH police, and the inspector general of the Department of Health and Human Services, concludes that there were two “very significant” violations, according to a cover letter to NIH: Ma's contamination and that of one other employee who received up to 2.5 times the recommended dose for the public. The NRC director's decision also found that Ma and the 26 other employees were “deliberately contaminated with P-32,” and it “presumes” the poisoning was done by an NIH employee with NIH materials. The investigation could not determine how Ma was poisoned, however; tests indicated that the Chinese leftovers were not the source. The investigators also found that the evidence did not support, and in many instances contradicted, Ma's and Zheng's allegations against Weinstein.

    As for NIH, the decision says that although it broke several rules—such as failing to report Ma's exposure within 30 days—its actions did not contribute to the poisoning and could not have prevented it. NIH did get its wrist slapped last year when NRC fined it $2500 for inadequate radiation security, but the lapses weren't connected to the Ma case. The letter to NIH says the agency has since “made significant efforts” to improve its safety procedures, so no further sanctions are needed.

    NASA

    Station Costs Pinch Other Programs

    Andrew Lawler

    Last week, in a room jammed with reporters, lights, and cameras, House lawmakers argued over the safety of U.S. astronauts working aboard the cramped and ailing Mir space station. A few hours later, on the other side of the Capitol, a drama involving another space station—the international platform slated for its first launch next June—held the attention of a different set of legislators. The second hearing received little media coverage, but its subject matter could be much more significant for researchers than the fate of Mir.

    The hearing focused on a growing overrun in the cost of building the space station, combined with a continued threat that Russia may not be able to meet its commitment to provide substantial hardware for the new station. The overruns, encountered by Boeing Co. in its role as general contractor, mean that NASA will need $330 million more in next year's budget to keep the program on track, NASA Administrator Dan Goldin told the science, technology, and space panel of the Senate Commerce Committee on 18 September. That's not counting $100 million extra the agency must spend to be ready in case Russia were to drop out. But Congress is unlikely to raise NASA's current budget of $13.7 billion when it finalizes the agency's 1998 budget this month.

    That political fact of life, says one NASA official, means that “we will have to absorb the vast majority” of the overrun. The only alternative, says Goldin, is to delay space station construction. But any extension would mean retaining the workforce for a longer period of time, an even more expensive proposition.

    Agency officials are working on a plan that would spread the pain and keep Goldin's promise to Senator Jay Rockefeller (D-WV) to avoid “serious damage” to science. Most of the money, which would supplement the station's annual $2.1 billion budget, would come from civil service salaries, construction of facilities, travel, and safety and reliability accounts that benefit all NASA programs, including science. The $2.5 billion account—called mission support—can be pared down, Goldin says, without damaging the quality of agency programs. “It's a place where I love to turn the screws tighter and tighter,” he adds. And he promised a skeptical panel that Boeing has the overruns under control.

    NASA officials say other strategies include altering the agency's grant cycle to push some payments into the next fiscal year. They say the administrative changes—which would be for 1 year only—should not affect space and earth science research already under way, although they might impose some additional paperwork burdens. The blow likely will fall harder on life and microgravity scientists, who depend directly on the station budget for construction of the facilities they need to do research.

    The details of the plan will not be made public until late next month, after Congress has completed work on the 1998 budget. But the $430 million increase puts further pressure on the agency's strained budget, which already is on the decline. In February, Goldin warned that the budget could drop to $13.4 billion in 1999—$300 million less than the 1997 level—and $13.2 billion for 2000 and beyond, in keeping with the Administration's pledge to erase the deficit by 2002. That means the fiscal pressure isn't likely to ease until long after Mir is a distant memory.


    Ocean Floor Is Laid Bare by New Satellite Data

    1. Dana Mackenzie
    1. Dana Mackenzie is a writer in Santa Cruz, CA.

    Just a few kilometers of water hide the ocean floor from view, yet its features are less familiar than those of the moon. Ships mapping the depths miss huge tracts of ocean floor; satellite measurements of gravity variations give only indirect clues to bottom topography. Now, a team of geophysicists has tried to remedy the shortcomings of both approaches by combining them. Using ship soundings to correct new and recently declassified satellite data, they have produced the most detailed global map of the ocean floor so far: a 68-million-pixel panorama described on page 1956 of this issue.

    Some researchers are hailing the new database as our best view yet of this remote landscape, offering more than twice the resolution of the best previous global map. Others fault it for still failing to meet a cartographer's standard of literal accuracy. Either way, it won't be ignored, predicts one of the developers of the map, geophysicist Walter Smith of the National Oceanic and Atmospheric Administration (NOAA) in Silver Spring, Maryland.

    With the new map, commercial fishers may find new locations to hunt for such fish as orange roughy, which congregate around underwater volcanoes (seamounts); oceanographers may be able to improve their models of the ocean circulation; and geophysicists may have to refine their views of the sea-floor spreading process that takes place along midocean ridges. The map may even have geopolitical uses: The view of the continental shelves it offers may allow some countries to define larger claims of territorial waters. “For a user who is aware of the drawbacks, it's a very useful database,” says Steven Cande, a marine geophysicist at the Scripps Institution of Oceanography in La Jolla, California.

    Nearly all experts in ocean-floor topography warn, however, that because the new map relies so heavily on satellite data, it should not be interpreted too literally. “The map looks stunning and does a great job of pinpointing the location and trends of underwater features—[but not] their amplitudes,” says Andrew Goodwillie, a geophysicist at Scripps. “It can't be used for shallow-water navigation, because it is an estimate,” adds Bill Haxby of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, who produced similar but lower resolution gravity maps of the ocean in the mid-1980s with Smith's NOAA collaborator, David Sandwell.

    The only way to obtain precise depths in the open ocean is with traditional bathymetry, in which a ship measures the distance to the ocean floor by bouncing sound waves off the bottom. Unfortunately, a ship can take soundings only in a narrow strip. “There are places as large as the state of Oklahoma where no sounding data are available,” says Smith. Here the contours of the ocean floor have to be drawn based on geological guesswork. “It's like drawing a map of a city, where you know a lot about one street, then nothing for 10 miles, then a lot about another street,” says David Monahan, a geophysicist at the Canadian Hydrographic Service. “If there are buildings on one street and buildings on the other, you assume there are buildings in between.”

    A better likeness.

    Compared to a traditional chart, “estimated topography” is a closer match to actual soundings along a track near Tahiti (top).


    A satellite, by contrast, covers a wide swath, but cannot sense the bottom of the ocean at all. Instead, it bounces microwaves off the surface to measure its shape, and the resulting pattern of lumps and bulges reflects what is underneath. A seamount, for example, exerts a small but measurable gravitational pull on the water around it, creating a bump 2 or 3 meters high that is easily detectable by a satellite. Smith and Sandwell were also able to take advantage of what Haxby calls a “quantum leap” in gravity mapping: new data from the European Space Agency's ERS-1 satellite and from Geosat, a U.S. Navy satellite. The Geosat data were collected from 1985 to 1986 but not fully declassified until 1995, after the ERS-1 data were released.

    Unfortunately, “gravity is not bathymetry,” as Goodwillie puts it. The satellite data can't reveal features smaller than about 12 kilometers across; converting the gravity data to depth runs into nonlinear complications in shallow water; and local variations in the density of the ocean floor can produce gravity anomalies mimicking those produced by seamounts. Sediments pose a special problem for satellite bathymetry. Because the basalt of oceanic crust is denser than sediment, a buried seamount or fracture zone will still show up in the gravitational field. So a geophysicist who relies on satellite data alone runs the risk of predicting a mountain where there isn't even a molehill.

    To address these problems, Smith and Sandwell calibrated satellite measurements against ship measurements wherever possible. “We twisted arms all over the international community to get data,” Smith says. When they knew both the ship depth soundings and the satellite gravity measurements at a certain place, they constructed a mathematical “transfer function” to convert gravity data into topography. They could then apply the same transfer function to the satellite gravity data over nearby regions that had not been covered by ship. The result was a “predicted bathymetry” for the whole region, with a resolution as fine as 1.1 kilometers at high latitudes.
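    The calibration step can be sketched in code. The following is a minimal, purely illustrative Python sketch of the idea—fit a "transfer function" where ship soundings and satellite gravity overlap, then apply it where only gravity is available. All numbers are synthetic, and the fit here is a simple linear gain and offset; Smith and Sandwell's actual method is considerably more sophisticated (it operates band by band in wavenumber space).

    ```python
    import numpy as np

    # Minimal sketch of the calibration idea (all data synthetic):
    # where ship soundings and satellite gravity overlap, fit a
    # "transfer function" (here just a linear gain + offset) that
    # maps gravity anomaly (mGal) to depth (m), then apply it to
    # gravity-only points to get "predicted bathymetry."

    rng = np.random.default_rng(0)

    # Synthetic "true" relation for the region (illustrative numbers).
    gravity = rng.uniform(-60, 60, 500)        # mGal, satellite-derived
    true_depth = -4000 + 15.0 * gravity        # m (negative = below sea level)

    ship_tracks = slice(0, 100)                # only 20% covered by ship
    soundings = true_depth[ship_tracks] + rng.normal(0, 50, 100)  # noisy echoes

    # Fit the transfer function on the overlap (least squares).
    A = np.vstack([gravity[ship_tracks], np.ones(100)]).T
    gain, offset = np.linalg.lstsq(A, soundings, rcond=None)[0]

    # Predict depth everywhere the satellite measured gravity.
    predicted = gain * gravity + offset
    rms_error = np.sqrt(np.mean((predicted - true_depth) ** 2))
    print(f"fitted gain {gain:.1f} m/mGal, RMS error {rms_error:.0f} m")
    ```

    The same caveat the researchers stress applies to the sketch: the fitted relation is only as good as the assumption that gravity and depth stay linked the same way away from the ship tracks.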

    In areas like the midocean ridges, where there is little sediment, Cande calls the map “spectacular.” Among other things, it reveals discontinuities that apparently migrate along the ridges—a process not expected in traditional theory. Moreover, when Smith and Sandwell compared their predicted bathymetry near the Foundation Seamounts, southeast of Tahiti, to a hand-drawn bathymetric chart, they found that their map is better at capturing the texture of the ocean floor. A comparison with actual soundings from a ship survey this year showed, however, that some of the predicted depths were off by hundreds of meters.

    Still, Smith points out that satellite mapping has two great advantages: speed and uniformity of coverage. A committee convened by the U.S. Navy, he says, estimated that it would take more than a century of survey time by a state-of-the-art ship, at a cost approaching $1 billion, to fully map the oceans. “There is some value in covering the world in 1 year for [Geosat's cost of] $60 million,” says Smith.


    Gene Mutation Provides More Meat on the Hoof

    1. Steven Dickman
    1. Steven Dickman is a writer in Cambridge, Massachusetts.

    Belgium is not exactly known for wide-open spaces and sprawling ranchlands. So farmers there had to learn to do more with less. Over the last 30 years, they have bred a strain of cattle—the mighty Belgian Blue—that gives 20% more meat per animal on roughly the same food intake as ordinary animals. Indeed, the cattle develop such bulging muscles that in extreme cases they have trouble walking and the calves are so big they have to be delivered by cesarean section. Now, three research groups have independently uncovered the genetic cause of this “double-muscling” trait, a discovery that may lead to meatier strains, not just of cattle, but of other agriculturally important animals as well.

    In the September issue of Nature Genetics, a pan-European team led by Michel Georges of the University of Liège in Belgium reports that double muscling is caused by a mutation in the bovine version of a recently discovered gene that makes a protein called myostatin. The other two groups, one co-led by Tim Smith of the U.S. Department of Agriculture (USDA) lab in Clay Center, Nebraska, and the other by Sejin Lee of Johns Hopkins University, also found that the myostatin gene is mutated in Belgian Blues and have linked mutations in the gene to double muscling in a second breed of cattle, the Piedmontese, as well. [The Smith team's results are in the September issue of Genome Research, and Lee's are in press in the Proceedings of the National Academy of Sciences (PNAS).]

    Schwarzenegger gene?

    A mutated myostatin gene causes the heavy muscling of this Belgian Blue bull. The left-hand mouse of this pair also shows the effects of inactivating the gene.


    Discovered just 4 months ago in mice by Lee and his graduate student Alexandra McPherron, myostatin normally serves to limit skeletal muscle growth. Apparently, the mutations block its activity and the animal's muscles grow larger—but without harming meat quality. While some other cattle breeds are also abnormally well muscled, presumably because of as-yet-undiscovered mutations, the muscle fibers in those animals are thicker than normal, toughening the meat. In contrast, the muscles of animals with myostatin mutations have larger numbers of normal-size fibers. Indeed, says Smith, meat from the Belgian Blue is “so tender even round steaks fall apart on the grill.” What's more, the meat is lower in fat than that from ordinary breeds.

    Given those effects of myostatin mutations, it's not surprising the gene is attracting attention from agricultural scientists. “This is the first gene identified in cattle that controls a combination of muscle size and tenderness,” says molecular geneticist Mike Bishop of ABS Global, a biotech firm in Madison, Wisconsin. He notes that beef palatability, as well as yield, might be improved by introducing myostatin gene mutations into cattle or by finding drugs that turn down the gene's activity. Such strategies might also lead to meatier pigs, chickens, and turkeys, as the Lee team found that the myostatin gene has relatives in these and other farm animals.

    The meandering cow path to this discovery started in Belgium in the 1950s, Georges says. Cow breeders there, who were under economic pressure from cheaper imports and high production costs, wanted to increase their yields and began to select for the double-muscling trait, which had been reported as early as 1807. Before long, nearly every beef cow in Belgium was a purebred double-muscled animal.

    Beginning in the late 1980s, Georges's team spearheaded an effort to isolate the cause of double muscling. “We were so convinced that any gene … that had such a spectacular effect on muscular development had to be a very important gene for animal agriculture,” he recalls. By 1995, Georges and his colleagues had mapped the gene to a region of cow chromosome 2, but then the effort stalled because they still had a lot of DNA to search through.

    A break came in May of this year, however, when Lee and McPherron described the myostatin gene and showed that when it is missing in mice, the animals grow into muscle-bound hulks two to three times the size of normal animals. That publication launched a race to find the equivalent bovine gene, as the implications for cattle—if such a gene existed—were obvious.

    The group led by Georges—which included researchers from Germany, Spain, and France—used a neat trick involving a third species, humans. The full human gene has not yet been published—it will be in Lee's PNAS paper—but the researchers found that a database of human ESTs (expressed sequence tags) contained sequences similar to those of the mouse gene. Although ESTs are short—100 or 200 bases long—Georges and his colleagues found enough overlapping ones to piece together most of the human gene. After cloning it and mapping its chromosomal location, they played a hunch and compared the site of the human gene with a map of bovine chromosome 2.

    The effort paid off. “We then realized,” says Georges, “that the position of the myostatin gene on the human map coincided exactly with the position of the double-muscling gene on our bovine genome map.” From there, they cloned and sequenced the bovine myostatin genes from both double-muscled and normal cattle.

    The sequences revealed that the gene from the double-muscled animals carries an inactivating mutation—an 11-base pair deletion—that results in “virtually complete truncation” of the active region of the protein, Georges says. That lifts the normal repression of muscle growth by myostatin and opens the way for extra brawn.
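    The logic of the truncation is easy to demonstrate. The toy Python sketch below uses an invented miniature coding sequence (not the real bovine myostatin gene) to show why an 11-base deletion is so destructive: 11 is not a multiple of 3, so every codon downstream of the deletion is read in the wrong frame, and translation typically runs into a premature stop codon.

    ```python
    # Illustrative sketch with a toy sequence (not real myostatin DNA):
    # an 11-base deletion is not a multiple of 3, so it shifts the
    # reading frame downstream and translation hits a premature stop,
    # truncating the protein.

    CODON = {  # minimal codon table covering the toy sequence
        "ATG": "M", "GCT": "A", "GGT": "G", "TGC": "C", "AAA": "K",
        "GAA": "E", "TTT": "F", "TAA": "*", "TAG": "*", "TGA": "*",
    }

    def translate(dna: str) -> str:
        protein = []
        for i in range(0, len(dna) - 2, 3):
            aa = CODON.get(dna[i:i + 3], "X")  # X = codon not in toy table
            if aa == "*":                      # stop codon ends translation
                break
            protein.append(aa)
        return "".join(protein)

    normal = "ATGGCTGGTTGC" + "AAAGAATTTGC" + "TGAAGGTTAA"
    # Delete 11 bases from the middle of the coding sequence.
    mutant = normal[:12] + normal[23:]

    print(translate(normal))  # prints "MAGCKEFAEG" (full-length toy protein)
    print(translate(mutant))  # prints "MAGC" (frameshifted, truncated early)
    ```

    In the real gene the deletion has the same qualitative effect Georges describes: the active region of the protein is almost entirely lost.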

    The Lee team used a similar approach to come up with the Belgian Blue gene. The researchers then guessed that double-muscled Piedmontese cattle would also have a mutated myostatin gene, and when they cloned it, that's what they found. Smith, working with John Bass's team at AgResearch in Ruakura, New Zealand, took a somewhat different tack, using the mouse gene to first find the gene in normal bovine DNA and then in Belgian Blues and Piedmontese, where they, too, found mutations. Similar mutations could also add bulk to other farm animals, for the Lee team has found the gene in all nine species they examined, including mammals, such as the pig, and birds, including chickens and turkeys.

    The myostatin work may help to identify other genes that influence muscle growth. Piedmontese cattle don't develop the extreme double muscling of Belgian Blues, even though the mutation that the Smith and Lee teams found in their gene is probably sufficient to inactivate the protein. That suggests that the lesser amount of double muscling in Piedmontese cattle is due to other genes that make up for the loss of myostatin.

    Despite the interest in using the myostatin gene to improve beef production, researchers warn that it may be a difficult task. One possibility is to use either conventional breeding or genetic engineering to introduce the Belgian Blue mutation into other breeds. So far, however, U.S. breeders have only rarely attempted to do this, even by conventional breeding. This is partly for practical reasons. The need to deliver calves by cesarean section is a serious handicap in the United States, where cattle herds are larger and roam over much wider areas than they do in Belgium.

    That problem might be overcome if researchers can find a less extreme myostatin mutation or identify another gene with a less drastic influence on muscle mass, allowing the calves to be delivered naturally. But there are also worries about whether the public would accept genetically engineered beef. The cattle industry has until now shied away from funding research into transgenic animals for human consumption. “They perceive it as too sensitive and risky an area,” Smith says.

    Another possibility would be to find some drug that can turn down myostatin activity in animals with the normal gene. And then there may be other genes that can be manipulated. Researchers in at least four countries are mapping the cattle genome, and reproductive physiologist Vernon Pursel of the USDA research labs in Beltsville, Maryland, says “we are getting to the point where there will be a number of genes” like myostatin identified in the near future. Extra helpings of tasty meat at essentially no cost could prove hard to resist.


    Did Satellites Spot a Brightening Sun?

    1. Richard A. Kerr

    In the debate over whether greenhouse warming has arrived and just how bad it will get, the sun has been a relatively minor player. But even a tiny dimming of the sun—the climate system's sole energy source—could greatly slow any warming due to greenhouse gases, while a slight brightening could worsen what might already be a bad situation. Unfortunately, the longest running direct observations of the sun have been too short to say whether its brightness actually varies over the decades needed to influence climate. But by splicing together separate satellite records, an atmospheric physicist has constructed a record long enough to suggest a striking trend: a strong recent brightening.

    On page 1963 of this issue of Science, Richard Willson presents his analysis of observations by three satellite-borne sensors that together have monitored solar brightness since 1978. Willson, from the Altadena (California) branch of Columbia University's Center for Climate Systems Research, finds enough brightening to make the sun a major player in climate change, if the change signals a long-term trend. But his finding is controversial. While some analyses of the same data being prepared for publication support Willson's finding, others do not.

    The central question is the reliability of one of the three records. Willson's analysis, which used a less sophisticated sensor to tie together an interrupted record, “seems quite reasonable,” says Lee Kyle of NASA's Goddard Space Flight Center in Greenbelt, Maryland, whose instrument produced the linking data set. “I think Willson is correct in saying the best evidence shows an increase. How strong that evidence is, is another matter.” Some say it is not strong at all. “I think we are not able to do it at this point,” says Claus Frohlich of the World Radiation Center in Davos, Switzerland. “We just don't know.”

    To identify a long-term trend in solar brightness, or total solar irradiance (TSI), researchers need a record that spans at least one solar cycle—the 11-year cycle over which sunspots spread across the face of the sun and then vanish, with a corresponding rise and fall in the sun's brightness. The orbiting Active Cavity Radiometer Irradiance Monitor (ACRIM I) provided part of the necessary record from 1980 to 1989, showing that TSI fell 0.08% during the declining solar activity of an 11-year sunspot cycle. That's a sizable change, but too brief to overcome the climate system's inertia.

    The space shuttle was supposed to launch a second instrument while the first was still operating, so that researchers could compare the readings from the two identical instruments and correct them to construct a seamless record. But the Challenger accident delayed the Upper Atmosphere Research Satellite launch by several years and opened a 2-year gap between ACRIM I, whose satellite failed in 1989, and the arrival of ACRIM II, which was finally launched in 1991.

    Solar ups and downs.

    The long-running ERB sensor on Nimbus 7 bridges the gap between the two ACRIM sensors.


    Lacking such a comparison, Willson and other researchers have bridged the gap with a less capable instrument, the Earth Radiation Budget (ERB) experiment on the Nimbus 7 spacecraft. When Willson combined the records, he found a brightening of 0.036% per decade from 1986 to 1996. That brightening, if sustained for many decades, would lead to solar warming in a league with greenhouse warming in the next century. The current best estimate for greenhouse warming at the end of the next century is 2.0 degrees Celsius, while such a solar brightening sustained for 100 years might produce a warming of about 0.4°C, says Willson.
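    Willson's ~0.4°C figure can be roughly reproduced with a back-of-the-envelope calculation. The Python sketch below is illustrative only; the solar constant, albedo, and climate-sensitivity values are standard round-number assumptions, not figures taken from Willson's analysis.

    ```python
    # Back-of-the-envelope check of the ~0.4 deg C figure. All parameter
    # values below are rough assumptions (not from Willson's paper):
    # convert the observed TSI trend into a radiative forcing, then into
    # an equilibrium warming.

    TSI = 1366.0         # W/m^2, approximate total solar irradiance
    ALBEDO = 0.3         # fraction of sunlight Earth reflects to space
    SENSITIVITY = 0.5    # deg C of warming per W/m^2 of forcing (assumed)

    trend_per_decade = 0.036 / 100           # Willson's 0.036% per decade
    delta_tsi = TSI * trend_per_decade * 10  # sustained for 100 years

    # Spherical averaging: intercepted sunlight is spread over 4x Earth's
    # cross-sectional area, and ~30% is reflected straight back to space.
    forcing = delta_tsi * (1 - ALBEDO) / 4.0
    warming = SENSITIVITY * forcing

    print(f"delta TSI = {delta_tsi:.2f} W/m^2, forcing = {forcing:.2f} W/m^2")
    print(f"implied warming ~ {warming:.1f} deg C")  # ~0.4 deg C
    ```

    With these assumed numbers the trend works out to a forcing near 0.9 W/m² after a century—comparable in magnitude to greenhouse forcings, which is why the result matters if, and only if, the trend is real and sustained.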

    But the strategy of relying on ERB to bridge the two ACRIM records leaves room for doubt. ERB cannot monitor how much its collecting surface has been degraded by the harsh solar glare, as ACRIMs can. However, ERB made less frequent measurements, probably minimizing its degradation during the 2-year gap, say its operators, Kyle and Douglas Hoyt, who is now at Hughes STX in Greenbelt. They did correct the ERB record after finding a jump in measured TSI that they attributed to a one-time shift in the sensitivity of the instrument.

    Yet the results conflict with some other studies. Solar physicist Judith Lean of the Naval Research Laboratory in Washington, D.C., has made indirect estimates of long-term brightness changes based on the shifting balance between dark sunspots and relatively bright areas on the sun, called faculae and network. That is the process that explains much of the brightness variation within a solar cycle. Based on past sunspot records, she estimated that TSI has increased over the past 300 years at an average rate of only 0.008% per decade, less than one-quarter of Willson's observed rate (Science, 8 March 1996, p. 1360). In fact, Lean inferred that the brightening of the sun over the past few decades has been negligible.

    “That doesn't mean [Willson's finding] is wrong,” Lean says, but it does mean that something other than changes in sunspots and faculae would be needed to produce the greater variability. But Willson is not worried about being forced to think about new mechanisms. Lean's dark-bright mechanism “works in a part of the solar cycle,” says Willson, “but it doesn't work as well during solar maximum or minimum.” To him, that suggests something apart from Lean's mechanism is also varying brightness within a solar cycle.

    An imperfect sun

    Sunspots help modulate the sun's brightness.


    More worrying to some researchers are two studies, each of which claims to have found additional jumps in the ERB record that Kyle and Hoyt missed. The two studies—published within the past 2 years by Robert Lee of NASA's Langley Research Center in Hampton, Virginia, and his colleagues and by Gary Chapman of California State University in Northridge and his colleagues—use multiple proxies for TSI, such as solar radio emissions and the area encompassed by sunspots, faculae, and network. They then searched the ERB record for jumps that did not appear in the proxy records. Each group, although using a different mix of proxies and different data sets, identified similar spurious discontinuities in the ERB record at about the same times during the gap. If these two discontinuities are used to correct the ERB record, Willson's brightening trend would fade to near zero.

    “I can get either result, depending on how I do it,” says solar physicist Dick White of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. His initial analysis of the ACRIM and ERB data with Werner Mende of the Free University of Berlin gave “basically the same result as Dick [Willson] got,” says White.

    Lee's proxies had not seemed directly enough linked to TSI to warrant the additional corrections, says White, but 2 weeks ago at a meeting, Lean and Chapman made him aware for the first time of the full implications of Chapman's 1996 paper. White was impressed by the more direct connection between Chapman's proxies and TSI. Now, after using the proxies to correct the ERB record, White thinks “the final conclusion will be that the TSI has changed by less than” a fifth of the value reported by Willson.

    But Willson, who became aware of the implications of the Chapman paper only last week, isn't persuaded. “ACRIM data are fundamental physical measurements,” he says. Correlating proxies to TSI “is a statistical construct, not physics. I don't think this kind of analysis can give you precise insight into a subtle trend like this.” The proxy indices have not been measured as precisely as TSI has, says Willson, and the physical relation of TSI to the kinds of solar activity reflected in the indices is not well understood. “When people tell me these statistical indices are better than the observations, I just can't see it. This is a classic difference between experimentalists and theoreticians,” he says.

    A middle ground in the debate may be emerging, however. After analyzing the combined ACRIM/ERB data and including the additional corrections, Frohlich finds no brightening. But that doesn't make him a critic of Willson, either: “I'm not saying one or the other is correct; we're just doing things differently.” What is needed, says Frohlich, is ACRIM-type instruments that could span two solar minima. But that means researchers will have to wait at least another decade before deciphering the sun's role in global change.


    Martian Magnetic Whisper Detected

    1. Richard A. Kerr

    Planetary scientists knew that Mars was no magnetic powerhouse, but for decades they have been frustrated in their efforts either to write it off as magnetically inert, like Venus, or to confirm it as magnetically active, like Mercury and Earth. Last week, the Mars Global Surveyor provided the long-sought answer during one of its first low passes over the planet.

    “It looks like strong evidence for a planetary magnetic field,” says space physicist Mario Acuña of NASA's Goddard Space Flight Center in Greenbelt, Maryland, who is the principal investigator for Surveyor's magnetometer. Previous missions to Mars carried magnetometers that weren't sensitive enough to pick up the field, met with disasters like the 1993 loss of Mars Observer, or, like the 1989 Russian Phobos spacecraft, did not pass close enough to the planet, says Acuña.

    Magnetic after all

    Mars proves to be magnetic but still mysterious.


    But Surveyor's discovery came with a puzzle: At about 1/800 the strength of Earth's field, Mars's magnetism is surprisingly strong. That's about twice as strong as researchers thought it could be based on limits inferred from Russian missions, says Acuña. Planetary physicist David Stevenson of the California Institute of Technology in Pasadena adds that “it's not that easy to get as large a field as the spacecraft has found.” Indeed, it's hard to figure out how Mars could be generating any field, let alone one of the strength that Surveyor has detected.

    Theoreticians assume that Mars is too small to have retained the internal heat needed to drive an Earth-like magnetic dynamo, in which the churning of a molten-iron core produces electrical currents and thus the magnetic field. If Mars ever had an Earth-like dynamo, says Stevenson, it's likely it has turned off. Stevenson has speculated that Mercury's field might be generated thermoelectrically, as in some batteries, if temperature differences across an iron core and rocky mantle could produce a closed electrical circuit. The same process might be at work in Mars, he suggests. Even so, some sort of dynamo would be required to enlarge the internal field into one detectable above the planet.

    Another possibility is that Mars imprinted a field on its crustal rock before the planet's geodynamo wound down, and Surveyor is picking up those imprints. Remnant magnetism has been reported in meteorites from Mars, including ALH84001 with its putative evidence of ancient life. (Indeed, if Mars did have an early, strong field, it might have fended off cosmic rays deleterious to life.) But “it's hard to imagine how you would build up a large, coherent field” from remnant magnetization, says Stevenson. Earth's moon, for example, has remnant magnetism frozen into lavas when they solidified, but it's patchy and doesn't add up to a global field.

    Surveyor will map the field in detail as it settles into orbit around the planet, and its observations could “sort out whether it is a remnant crustal field or a dying dynamo,” says Acuña. “We have a long way to go.”


    Long Afterglows Reveal the Secrets of Distant Fireballs

    1. James Glanz

    HUNTSVILLE, ALABAMA—If you light a bonfire on the beach, the embers will glow for hours after the party is over, and the ashes may smolder for days. For all their mystery and fury, the cosmic blasts that were the focus of the Fourth Huntsville Gamma-Ray Burst Symposium held here last week seem to obey much the same rule. The violent bursts are followed by fading “afterglows” in progressively longer, less energetic wavelengths: x-rays, visible light, and radio waves. Lasting for weeks or months, the afterglows transform the fleeting gamma-ray bursts (GRBs)—which erupt for fractions of a second to minutes in a hard-to-study corner of the spectrum—into a long-lasting phenomenon at easily accessible wavelengths. “Fading counterparts are where all the excitement has been,” says Kevin Hurley of the University of California, Berkeley.

    By now, as astronomers discussed here, study of a handful of afterglows has solidified a picture that, less than a year ago, was just one of several competing hypotheses. Clues ranging from the rate at which the afterglows fade to the “twinkling” of the radio signal all imply that GRBs emerge from shocks within gigantic fireballs billions of light-years from Earth. The small number of afterglows detected so far—just a half-dozen x-ray and two optical afterglows and one in the radio band, among thousands of GRBs—leaves some astronomers feeling cautious. But others think they are closing in on a portrait of these mysterious events. “I think we're beginning to see the light at the end of the tunnel,” says Peter Mészáros of Pennsylvania State University in University Park, who originated the fireball shock theory with Martin Rees of Cambridge University.

    Still smoldering.

    The afterglow of a 28 February gamma-ray burst is still visible on 5 September, near a hazy patch that may be a host galaxy.


    The latest chapter in the GRB story began with last year's launch of the Italian-Dutch BeppoSAX satellite, which carries both x-ray cameras and a gamma-ray detector. BeppoSAX nabbed an x-ray afterglow following a GRB on 28 February, and because x-ray cameras can determine the position of an event much more accurately than a gamma-ray detector can, the x-ray observation in turn guided ground-based telescopes to a fading optical counterpart. It lay near the edge of a fuzzy patch of light, which many interpreted as the burst's distant “host” galaxy. The finding hinted that GRBs originate in the distant universe rather than in the neighborhood of our own galaxy, as a competing theory had it.

    But controversy erupted when several groups suggested that the patch might be fading—which a galaxy wouldn't do—and another group concluded that the pointlike optical counterpart was whipping across the sky so quickly that it must be nearby (Science, 25 April, p. 529). Soon after, the counterpart of another GRB tipped the debate back toward distant sources: Optical spectra of material lying just in front of the x-ray afterglow of an 8 May GRB implied that the source lay billions of light-years away (Science, 23 May, p. 1194).

    But a full resolution had to wait until Earth's motion brought the 28 February burst site from behind the sun again. On 4 September, the orbiting Hubble Space Telescope finally observed it again and found “no significant proper motion and no evidence of fading,” as Andrew Fruchter of the Space Telescope Science Institute in Baltimore reported for his group. As a result, the apparent conflicts “have been reduced to rubble,” says Charles Meegan of NASA's Marshall Space Flight Center here, who was the symposium's organizer. Moreover, the Hubble scientists found that the counterpart was still fading steadily after 6 months—another sign of a distant, energetic fireball rather than a weaker nearby event that would have run out of steam long ago.

    The scale of these fireballs became clear when Dale Frail of the National Radio Astronomy Observatory (NRAO) in Socorro, New Mexico, described his team's follow-up of the 8 May GRB. Frail, Greg Taylor of NRAO, Shri Kulkarni of the California Institute of Technology, and Luciano Nicastro and Marco Feroci of the BeppoSAX GRB team had gone looking for the gradually fading radio emission predicted for a distant fireball. They found the radio signal, but it didn't match the smooth decline they had expected. “One day [the emission] was barely detectable; the next day it was a whopping bright source,” says Frail.

    The radio source was twinkling. As Jeremy Goodman of Princeton University had pointed out, natural striations in the ionized gases of the Milky Way should scatter the radio waves from a point source in the distant universe. The radio source should twinkle as Earth moves in its orbit, just as stars twinkle because of atmospheric motions. As the source grew in size, the twinkling should shut off—just as planets, with their larger apparent sizes, look steady.

    When the twinkling began dying down a couple of weeks later, the apparent size Frail and his colleagues deduced for the radio source implied a fireball about a tenth of a light-year across, moving outward at close to the speed of light. That kind of violent expansion—“extreme, relativistic motion on a scale that is not seen in any other place,” as Tsvi Piran of Hebrew University in Jerusalem puts it—is just what the cosmic fireball theory predicts.
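    The arithmetic behind that conclusion can be sketched in a few lines. This is a minimal back-of-the-envelope calculation, not the team's actual analysis: it assumes the quoted diameter of about a tenth of a light-year and an illustrative elapsed time of roughly a month after the 8 May burst (the article says only "a couple of weeks later" for when the twinkling faded).

```python
# Back-of-the-envelope check of the expansion speed implied by the
# scintillation result. Inputs are illustrative assumptions, not the
# published analysis: a fireball ~0.1 light-year across, reached about
# 30 days after the burst.

LY_PER_DAY = 1.0 / 365.25  # light-years that light travels per day

def implied_expansion_speed(diameter_ly, elapsed_days):
    """Average speed of the fireball's edge, as a fraction of c."""
    radius_ly = diameter_ly / 2.0
    light_travel_ly = elapsed_days * LY_PER_DAY
    return radius_ly / light_travel_ly  # v / c

v_over_c = implied_expansion_speed(diameter_ly=0.1, elapsed_days=30)
print(f"Implied average expansion speed: {v_over_c:.2f} c")
```

    Under these assumptions the edge must move at a substantial fraction of the speed of light, which is why a source of this apparent size, this soon after the burst, points to a relativistic fireball.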

    Now, the afterglows are yielding hints about the settings in which GRBs take place. One clue came with the help of another orbiting sentinel capable of pinpointing some x-ray afterglows: the All Sky Monitor (ASM) on the orbiting Rossi X-ray Timing Explorer. After the ASM determined the position of a 28 August burst, the Japanese x-ray satellite ASCA slewed to observe it and found that the x-rays were being absorbed or briefly boosted at some wavelengths, as if the fireball was expanding in a dense, lumpy region of a distant galaxy.

    All of those data are fresh grist for the theorists. Piran and Re'em Sari of Hebrew University have compared the drastic flickering of the GRBs to the smoother behavior of the afterglows to conclude that dozens of shock waves collide within the fireball during its first few minutes, when the gamma-rays are emitted. Others considered what kinds of cataclysmic events could set off such a fireball in the first place. Candidates include the sudden merger of two neutron stars or the collapse of most of a single, rapidly spinning, massive star to form a black hole. The spinning black hole would act as a huge reservoir of energy, which powerful magnetic fields could transfer outward to the surviving material, explained Princeton University's Bohdan Paczynski.

    Still, the meeting belonged to the observers, whose great successes have made such theorizing possible. “The way everybody's pulling together,” said Don Smith, an ASM team member at the Massachusetts Institute of Technology, “it's been real exciting to be part of the chase.”


    ISO Peers Into the Cool Corners of the Universe

    1. Dennis Normile

    KYOTO, JAPAN—Nearly 2000 astronomers gathered here late last month for the 23rd General Assembly of the International Astronomical Union. A special full-day session at the triennial event was devoted to the latest findings from the European Space Agency's Infrared Space Observatory (ISO). Launched in November 1995, ISO has provided unique insights into the cool and dusty corners of the universe, where stars are born and die, and the atoms and molecules necessary for life are created. The spacecraft's useful life was expected to end soon, but because the liquid helium that cools ISO's cryogenic detectors has been used up more slowly than expected, operations have been extended to next spring.

    Boosting the Birth Rate of Stars

    A group of infrared astronomers says it has confirmed its earlier, controversial claims that a frenzy of star formation took place in galaxies two-thirds of the way back to the big bang. Sebastian Oliver of London's Imperial College announced at the meeting that a new survey of distant galaxies carried out with the ISOCAM camera onboard the ISO satellite shows that studies at optical wavelengths had underestimated the rate at which new stars were being born in these galaxies by a factor of 4.

    The group, led by Imperial's Michael Rowan-Robinson, had presented its preliminary findings last November. The researchers had measured the infrared output of a number of galaxies in the same patch of sky where the Hubble Space Telescope made its Deep Field survey of some of the faintest and most distant galaxies ever seen. Comparing the ISO measurements with the Deep Field results showed that these remote galaxies are giving out 10 to 100 times as much infrared radiation as visible light. Because the dust shrouding star-forming regions absorbs light from young stars, then reemits it as infrared radiation, Rowan-Robinson argued that the galaxies were forming stars 10 times faster than the optical observations had implied.

    Researchers at France's Atomic Energy Commission in Saclay, who developed ISOCAM, raised questions about the group's data-reduction techniques, however. So the Imperial College group resurveyed the same galaxies at a different infrared wavelength in July, narrowed the sample to 11 galaxies, and scaled back the estimated star-formation rate from 10 times what optical observations indicate to four times. Even at this scaled-back level, the higher rates could change current estimates of how many stars the universe held when star formation peaked. The bottom line, Rowan-Robinson says, is that "if you only look in the optical [wavelengths], you're missing a part of the story."

    Even now, not everyone is convinced that the star-formation estimates need to be revised. Lennox Cowie, an astronomer at the University of Hawaii's Institute for Astronomy, says there are still questions about the “quite tricky” interpretation of the ISO data, which can be contaminated by signals from stray cosmic rays. What is more, Cowie's own observations of another set of galaxies point to a different conclusion. He is a member of a team that is using ISO to survey another part of the sky in search of much more primeval galaxies. The survey by this group, led by Yoshiaki Taniguchi of Japan's Tohoku University, also spotted several emission sources in the same age range as the galaxies Rowan-Robinson's team studied. Follow-up optical observations on a few of these with the Keck Telescope in Hawaii convinced Cowie that there was nothing “you have to go into very heavy starburst interpretations to understand.” The discrepancy between the results could mean that such rapid star formation is confined to a few galaxies, Cowie says, or that there is some other explanation for Rowan-Robinson's observations.

    Astronomers may be able to settle the issue soon, says David Elbaz, an astrophysicist at Saclay, because more evidence supporting Rowan-Robinson's claims is on the way in the form of soon-to-be-published results from other ISO surveys. “We can say that in the optical [wavelengths] a large fraction of star formation [evidence] is missing,” Elbaz says. He is not ready to endorse Rowan-Robinson's actual numbers for the star-formation rate, however. Says Cowie: “I just think we don't quite know what the answer is at this point.”

    Picking Brown Dwarfs Out of a Crowd

    A group of ISO researchers is turning up surprising numbers of brown dwarfs, balls of hydrogen and helium that are too small to ignite and sustain the nuclear reactions that make stars shine. Brown dwarfs have long tantalized astronomers: Although they were theoretically predicted 3 decades ago, only a handful have ever been positively identified because they glow so feebly.

    Now, as Thierry Montmerle of France's Atomic Energy Commission in Saclay reported in Kyoto, a group led by Lennart Nordh and Göran Olofsson, both of the Stockholm Observatory in Sweden, surveyed dim stellar objects in four well-known star-forming regions, spotting evidence for between 10 and 30 brown dwarf candidates and doubling the number of known young stars. The size of the brown dwarf collection may allow astronomers to begin estimating how common such bodies are throughout the universe, and overall, the survey should help astronomers understand star formation in dense interstellar clouds. "It will be a significant input for those who try to model and understand the star-formation process," Nordh says. It may also have implications for estimates of the total mass of the universe.

    ISO is so adept at spotting brown dwarfs and dim stars because it can observe at the midinfrared wavelengths that escape from the dust clouds shrouding star-forming regions. For this study, the group chose to survey several well-studied regions to try to find objects that previous surveys might have missed, particularly dim young stars and young brown dwarfs. When first formed, brown dwarfs glow from the heat of gravitational contraction; they later grow dimmer and harder to detect as they cool.

    To determine whether an object is a dim star or a brown dwarf, the astronomers must determine its mass. They can deduce the mass if they know an object's age and total luminosity over the entire spectrum of wavelengths. The team made the assumption that all the objects in each region they studied were formed at roughly the same time. Getting the luminosity was a bit trickier, as ISOCAM was only surveying the regions at 7 micrometers. To deduce an object's total luminosity, they developed a yardstick by looking at a number of stars with known total luminosities and calculating an average ratio of total luminosity to luminosity at 7 micrometers.
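    The yardstick described above can be sketched in a few lines of code. This is a hedged illustration of the ratio method only—the function names and all the numbers are invented for the example, not taken from the survey.

```python
# Sketch of the luminosity "yardstick": calibrate an average ratio of
# total luminosity to 7-micrometer luminosity on reference stars whose
# total luminosities are known, then apply that ratio to objects seen
# only at 7 micrometers. All values are made up for illustration.

def calibrate_ratio(reference_stars):
    """Average L_total / L_7um over stars with known total luminosities."""
    ratios = [total / l7 for total, l7 in reference_stars]
    return sum(ratios) / len(ratios)

def estimate_total_luminosity(l7_observed, ratio):
    """Total luminosity inferred from a single 7-um measurement."""
    return l7_observed * ratio

# (total luminosity, 7-um luminosity) pairs, arbitrary units
reference = [(100.0, 5.0), (80.0, 4.2), (60.0, 2.9)]
ratio = calibrate_ratio(reference)
print(estimate_total_luminosity(1.5, ratio))
```

    With the total luminosity in hand and the assumption of a common formation time, the team could place each object on theoretical cooling tracks and so separate brown dwarf candidates from dim stars.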

    Astrophysicist Hans Zinnecker of Germany's Astrophysical Institute in Potsdam says this trove of dwarfs and dim objects will help astronomers understand the process through which molecular clouds fragment and form stars. For example, it might contribute to an understanding of the relationship between the mass of a cloud and the mass and number of the resulting stars. There is also the long-standing question of the unidentified “dark matter” of the universe. “This can help us estimate how many of these [brown dwarfs] form and what their contribution might be to the dark matter,” he says.

    Watching Dust Grains Form

    None of us would be here today if it were not for supernovae. These violent explosions of old, burned-out stars are responsible for scattering the heavy elements such as carbon, oxygen, and silicon created in the star's nuclear furnace. The fireball is also thought to play a role in forming basic chemical compounds and interstellar dust, which condense into new solar systems like our own. Now, ISO is getting a clearer picture of how and when supernovae create the ingredients of future worlds.

    By training ISOCAM, ISO's infrared camera, on Cassiopeia A, the youngest supernova remnant in our galaxy, astronomers have, for the first time, identified the composition of dust grains in the remnants of the supernova. They have also detected a new addition to the list of elements found in supernova remnants: the inert gas neon. By tracing the signatures of dust and elements through the different regions of the exploded star, astrophysicists should be able to fine-tune their understanding of the processes that produce them.

    Using a filter specially tuned to pick up the thermal emissions from dust, a group led by Pierre-Olivier Lagage of France's Atomic Energy Commission in Saclay reported a year ago that it had determined that dust was forming in so-called fast-moving knots, globules of nuclear fusion products blown off from the outer layers of the exploded star. Now, as they reported in Kyoto, Lagage and his colleagues have succeeded in determining the composition of individual knots. They have found that some of the dusty knots are, as expected, rich in silicates, while others contain traces of argon and sulfur, together with the new addition, neon.

    The knots originated in different layers of the star, and they can be distinguished because knots from outer layers move faster than knots from inner layers. Astrophysicists have wondered how much mixing occurs among these layers as the star explodes, because the mixing would affect the processes that generate elements and dust. Lagage's group is now trying to determine whether the composition of the knots varies, which would indicate that the layering of the original star was preserved when it exploded, or whether its layers got churned up, homogenizing the composition of the knots. “We think, at the moment, that there is not a lot of mixing,” Lagage says, but he cautions that this is a very preliminary analysis of the rich data returned by ISOCAM.

    Eli Dwek, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, agrees that the data are a mother lode. “In trying to work backward and see what the composition of the ejecta was at the explosion, the more elements you sample, the more of a picture you get,” he says.


    HIV Suppressed Long After Treatment

    1. Jon Cohen

    BALTIMORE, MARYLAND—“Anecdote” is one of the most damning things you can say about a scientific report. Still, some anecdotes are provocative, and one caused a stir when it was related at an AIDS meeting here last week: An HIV-infected German man drove the virus down to an “undetectable” level with drugs, stopped taking the drugs, and yet, 9 months later, has not had the virus return. This report comes on the heels of a paper published in the 30 August issue of The Lancet describing two other patients who similarly have not seen their HIV rebound after being off drugs for 1 year. Those results have drawn some skepticism, however.

    The description of the German patient, a man in his 20s who lives in Berlin, came at a meeting put together by Robert Gallo, the head of the Institute of Human Virology in Baltimore. Franco Lori of the Research Institute for Genetic and Human Therapy—which has sites both in Pavia, Italy, and at Georgetown University in Washington, D.C.—said that when the patient first sought treatment shortly after becoming infected, the polymerase chain reaction (PCR) assay showed that he had 85,000 copies of HIV RNA per milliliter of blood—a solid infection.

    Lori says clinicians in Berlin started the man on three drugs: indinavir, ddI, and hydroxyurea. Indinavir inhibits HIV's protease enzyme; ddI jams the virus's reverse transcriptase enzyme; and hydroxyurea, an anticancer agent, boosts the effects of ddI and also suppresses the immune system. The man's HIV levels quickly dropped to those that the most sensitive PCR assays could not detect. After 27 days, he stopped taking his medication for 3 days, and the virus, as expected, quickly came back. When he restarted the drugs, the HIV again went down to undetectable levels.

    Then, 144 days after beginning treatment, the man developed hepatitis A and was so ill that he could not take any drugs for 3 weeks. But before restarting his medications, his physicians checked the amount of virus in his blood. It was still undetectable—and it has remained so for 9 months. “I hate to draw conclusions too early,” says Lori. “We think the virus is there. It just doesn't rebound.”

    Others at the meeting were equally wary. “It doesn't serve any purpose except for the person who took the drugs,” said Jacques Leibowitch of France's Hôpital Raymond Poincaré. Leibowitch and others also criticized the Lancet paper, which was written by Jorge Vila of France's AFAVIR and colleagues, noting that the two patients described there, who also were using hydroxyurea as part of their treatment, had such low HIV levels to begin with that they may never have been infected in the first place.

    Still, many researchers, Leibowitch included, were intrigued by the possible role played by hydroxyurea, which is not an approved AIDS drug. "There must be something that we need to investigate further," says Anthony Fauci, head of the U.S. National Institute of Allergy and Infectious Diseases (NIAID). Maybe, says Fauci, the hydroxyurea suppresses the immune system cells that HIV targets. Fauci's lab reported in the August Journal of Infectious Diseases that a different immune suppressor, cyclosporin A, could lower levels of the AIDS virus in infected monkeys. NIAID's Lawrence Deyton also wonders whether hepatitis A might have stimulated the release of immune system chemicals that kept the HIV in check. "There are many things we have to work out," Lori says.
