News this Week

Science  25 Jul 1997:
Vol. 277, Issue 5325, pp. 466

    Showdown Over Clean Air Science

    Jocelyn Kaiser


    Nine years ago, epidemiologist Joel Schwartz stumbled across a disturbing pattern of death. Schwartz, then at the Environmental Protection Agency (EPA), noted that when soot levels in the air of Steubenville, Ohio, rose on any given day in the 1970s and 1980s, the number of fatalities among residents would jump the next day—even when air pollution levels were supposedly safe. Schwartz went on to document the same chilling pattern in four more cities that track soot: Philadelphia; Detroit; St. Louis; and Kingston, Tennessee. Projecting these findings to the entire U.S. population, Schwartz estimated that 60,000 people could be dying each year—more than the annual number of car crash victims—from heart and lung diseases aggravated by tiny airborne particles. At scientific meetings in 1991 and 1992, recalls Schwartz, now at Harvard, the studies got “a tremendous amount of attention.”

    Today, the analysis is provoking a furor. Schwartz's findings and similar studies by other researchers lit the fuse of a political powder keg: a debate over whether industry should take costly steps to reduce the amount of soot and other pollutants released into the atmosphere. Heeding the results from Schwartz and others, on 16 July the EPA unveiled final rules designed to tighten ozone standards and clamp down on particles. The cost of implementing the rules—which EPA estimates at $9.7 billion per year for measures such as installing new equipment on power plants and diesel trucks—has sparked a fierce protest on Capitol Hill from industry groups and many state and local officials. So far, EPA has stood its ground. The agency has refused to scale back the standards, first proposed in November, and President Clinton has said he supports them. But now the bell has sounded for round two of what is shaping up to be the biggest environmental fight of the decade: Congress is about to consider legislation that would quash the standards.

    Opponents argue that the science fails to support the new regulations, which would lower maximum ozone levels by a third and, for the first time, set acceptable airborne levels of fine particles less than 2.5 micrometers in diameter, called PM2.5, which are generated mainly by burning fossil fuels. Although industry groups have sharply criticized the new ozone standards, arguing that the health benefits would be marginal compared to the costs, most of the scientific debate has centered on the limits on particulate matter.

    Critics charge that Schwartz's population studies and others like them do not link individual pollutants to human health effects; instead, they argue, confounding factors—such as co-occurring pollutants and lifestyle differences—may be responsible for the increased death rate. Moreover, scientists have yet to propose a plausible explanation for how fine particles might harm the body (see sidebar on p. 469). Because of such shortcomings, the Air Quality Standards Coalition, representing 500 petroleum, automotive, and other industry and business groups, derides the science as “totally inadequate.” Adds epidemiologist Suresh Moolgavkar of the Fred Hutchinson Cancer Research Center in Seattle, “EPA is espousing a certainty in its language that is simply not justified by the data.”

    But EPA Administrator Carol Browner contends that there are plenty of data to support the rule, even the particularly contentious PM2.5 standard. The evidence comes from more than 60 published health studies that show a link between soot and adverse health effects in scores of cities, says John Bachmann, associate director for science policy in EPA's Office of Air Quality Planning and Standards. EPA acknowledges, however, that many questions remain about how fine particles cause harm. “All of us agree we need way more science,” says Bachmann. However, he says, “We're not supposed to wait until people are dead in the streets.” But many scientists say the problem is not the standard itself, but the levels EPA has chosen. “These studies can't readily lead to a specific number,” says Johns Hopkins University epidemiologist Jonathan Samet. “It all makes sense to regulate PM2.5. The question is, do we have the quantitative information to do it? That's where the debate begins.”

    Fine particle distinctions

    Researchers have been well aware of the dangers of particles ever since several disastrous air pollution episodes in Europe and the United States in the middle of this century, such as a deadly week in London in 1952 when choking soot and sulfur dioxide—at least 10 times today's average levels—killed thousands, mostly children and elderly people with heart or lung ailments. Such incidents spurred controls on pollutants. Since 1971, EPA has ordered limits on levels of particles, which are composed of dust from soils, bits of carbon spewed by diesel vehicles and power plants, sulfates, and gases such as nitrogen oxides and volatile organics that condense onto seed particles. Initially, these rules covered particles up to 50 micrometers in diameter. But after studies showed that coarse particles tend to be safely expelled from the body's upper airways, the agency in 1987 narrowed its standard to cover only finer particles, those less than 10 micrometers in diameter (PM10).

    By the early 1990s, however, Schwartz's study and dozens like it had convinced many experts that the PM10 standard might not be protective enough, especially for the elderly, children, people with frail immune systems, and other vulnerable groups. In cities in the United States and other countries, death rates and hospital admissions for people suffering from cardiac problems and respiratory problems such as asthma seemed to rise and fall with daily particle levels. For example, a study led by biostatistician Richard Burnett of Health Canada found that in Ontario in the mid-1980s, for every 13 micrograms per cubic meter (μg/m³) rise in daily levels of sulfates—a surrogate for overall PM2.5—hospital admissions for respiratory and cardiac events shot up 3.7% and 2.8%, respectively.
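    Effect sizes like Burnett's are typically reported per fixed pollutant increment and can be rescaled to other increments. The sketch below is illustrative only: the 3.7% figure is taken from the study as quoted above, but the log-linear (Poisson-regression-style) model form and the rescaling step are standard assumptions in such time-series work, not details reported in this article.

```python
# Illustrative sketch: rescaling a reported daily time-series effect.
# The 3.7% per 13 ug/m^3 figure is from the Burnett study as quoted;
# the log-linear model form is an assumption, not from the article.
import math

PCT_PER_INCREMENT = 3.7   # % rise in respiratory admissions (quoted)
INCREMENT = 13.0          # ug/m^3 of sulfate per increment (quoted)

# Implied per-unit coefficient under a log-linear model:
beta = math.log(1 + PCT_PER_INCREMENT / 100) / INCREMENT

def relative_risk(delta_sulfate):
    """Relative risk of admission for a given daily rise in sulfate."""
    return math.exp(beta * delta_sulfate)

# A 26 ug/m^3 rise (two increments) compounds to (1.037)^2:
print(round(relative_risk(26.0), 3))  # → 1.075
```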

    Researchers also began to recognize that they needed to focus on finer particles—PM2.5 or smaller—because animal studies using radioactively tagged particles and lung casts made from human cadavers had shown that such tiny particles are most likely to lodge deep in lungs. “The finer particles represent a completely different class of materials than the coarser PM10, and it is logical that they probably have different activities and types of toxicity,” says toxicologist Joseph Mauderly of the Lovelace Respiratory Research Institute in Albuquerque, New Mexico. Most PM10 is inert crustal dust, while the combustion-generated fine particles contain the nasty stuff—corrosive acids and metals—that can damage tissues.

    Many experts, however, were skeptical of these red flags. Their main beef was that the daily mortality studies were unable to discern whether air pollution levels were significantly shortening lives or perhaps hastening by hours or days the deaths of very sick people already on the verge of dying. “People believed the studies were picking up a real phenomenon, but the interpretation was unclear,” says Columbia University epidemiologist Patrick Kinney.

    A more convincing set of findings came along in 1993, however, when a Harvard team headed by Douglas Dockery examined soot and other pollutant levels and 1429 deaths among the 8111 adults the team followed for 14 to 16 years in six Eastern U.S. cities (known as the Six Cities study). The researchers interviewed subjects about weight, smoking, and other risk factors, correcting for these lifestyle differences, which had not been possible in earlier studies comparing city death rates. They found that the strongest association between any pollutant and death rates was with fine particles, and that the risk of death was 26% higher in the most polluted city—Steubenville—compared to the cleanest—Portage, Wisconsin. The results supported the findings of the daily studies and raised additional concerns by suggesting that the harmful effects of particles can build up over years.

    A second long-term study 2 years later strengthened the case against airborne particles. Tapping an American Cancer Society (ACS) database of smoking, age, occupation, diet, and other data on over 550,000 volunteers in 151 cities, along with sulfate data and PM2.5 readings for 50 cities, the Harvard group and environmental economist Arden Pope of Brigham Young University in Provo, Utah, found a 17% difference over 8 years in death rates between the cleanest and dirtiest cities. “We're not likely to see a study of this quality and magnitude [again] in our lifetimes,” says Alan Krupnick, an economist with Resources for the Future, a Washington, D.C., think tank. “I think that pushed a lot of people over the edge,” adds Kinney.

    A Natural Resources Defense Council study extrapolated the results and came up with 64,000 annual deaths that were up to 2 years premature. Using this “body count” and its own analyses, EPA estimates that its regulations will prevent 15,000 premature deaths each year and 9000 hospital admissions, for a total estimated cost savings of $19 billion to $104 billion a year—about two to 12 times the estimated cost of compliance.
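    The cost-benefit multiple EPA cites is simple division. A minimal check, using only the dollar figures quoted in this story:

```python
# Back-of-envelope check of EPA's cost-benefit comparison, using only
# the annual figures quoted in the article (all in $ billions per year).
COMPLIANCE_COST = 9.7
SAVINGS_LOW, SAVINGS_HIGH = 19.0, 104.0

ratio_low = SAVINGS_LOW / COMPLIANCE_COST    # ≈ 2.0
ratio_high = SAVINGS_HIGH / COMPLIANCE_COST  # ≈ 10.7

print(f"savings run {ratio_low:.1f} to {ratio_high:.1f} times the cost")
```

The division yields roughly 2 to 11; any gap at the top end against the “two to 12 times” span presumably reflects rounding in EPA's own figures.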

    Industry chokes on rules

    After a 1993 lawsuit brought by the American Lung Association forced EPA to stick to its mandated 5-year schedule for reviewing the latest evidence of particle health effects, critics of the science behind the new rules launched their assault. “We'd go to meetings and testify at hearings,” says Dockery, “and they'd say, ‘We get different results.’”

    Critics have saved most of their barrage for the mortality studies. Hutchinson's Moolgavkar, for instance, reanalyzed Schwartz's Philadelphia data on behalf of the American Iron and Steel Institute. When Moolgavkar took other air pollutants—ozone, nitrogen dioxide, and sulfur dioxide—into account and analyzed them all simultaneously, he found it impossible to separate the health effects of particles from those of sulfur dioxide. “It is impossible to say one component is any more responsible than any other,” says Moolgavkar.

    Others point out that the long-term ACS and Six Cities studies captured only a fraction of the total pollution the subjects were exposed to over their lives. “How does that relate to what people are exposed to across their lifetimes? We really don't know,” says Samet of Johns Hopkins, who nonetheless says he believes the link between daily mortality and particles is real. Biostatistician Fred Lipfert, a consultant who has worked for the Electric Power Research Institute in Palo Alto, California, also argues that the Harvard team “kind of just took a first cut at socioeconomic status,” and that a more sedentary lifestyle in, say, Steubenville compared to Portage might account for the differences in mortality that the Six Cities study attributes to fine particles.

    Other concerns center on how EPA estimated the potency of these tiny particles. Because only a few excess deaths and hospitalizations occur when the air contains low levels of particle pollution, the studies lack the statistical power to precisely estimate how dangerous particles are at these levels. So EPA assumed that the health threat increases in a linear fashion with dose, ignoring the possibility that the risk may taper off at lower levels. Adding to the uncertainty, few studies actually measured PM2.5—most used PM10 or a surrogate such as sulfates. “There's very little information on the ratio” between PM10 and PM2.5, says Yale epidemiologist Jan Stolwijk.
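    The modeling choice at issue here can be made concrete with a toy comparison: a linear no-threshold dose-response versus one in which risk vanishes below some cutoff. The slope and threshold below are purely illustrative values, not EPA's:

```python
# Toy comparison of the two dose-response shapes discussed in the text.
# SLOPE and THRESHOLD are hypothetical, chosen only for illustration.
SLOPE = 0.5        # excess deaths per ug/m^3 (illustrative)
THRESHOLD = 15.0   # hypothetical level below which risk vanishes

def linear_risk(pm):
    """EPA's working assumption: risk rises in proportion to dose."""
    return SLOPE * pm

def threshold_risk(pm):
    """Alternative: no excess risk below the (hypothetical) threshold."""
    return SLOPE * max(0.0, pm - THRESHOLD)

# The two models diverge most at low concentrations -- precisely the
# region where, as the text notes, the studies lack statistical power.
for pm in (10.0, 15.0, 30.0):
    print(pm, linear_risk(pm), threshold_risk(pm))
```

Which curve holds at low doses determines how much protection a given numerical standard actually buys, which is why the choice matters so much to both sides.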

    Moreover, without knowing what it is about particles that causes ill health effects, it's impossible to be sure that the regulations are targeting the right source, says toxicologist Roger McClellan, president of the Chemical Industry Institute of Technology in Research Triangle Park, North Carolina. For example, he says, a state might target diesel engines or clamp down on plow dust, when the problem is actually sulfates from power plants. Says McClellan: “We run a real hazard here of putting in place a new standard that we don't know how effective it will be.”

    A lot of hot air?

    EPA scientists disagree, saying they are confident that the science supports their regulations. “We think we've done a totally legitimate, rational analysis of the studies we had,” says the agency's Bachmann. He points to what he calls “overwhelming consistency”—more than 60 of 86 population studies linked health effects to fluctuations in particulate matter levels—and the coherence between deaths, hospitalization, and respiratory disease. Others point to a study published this month in Environmental Health Perspectives by EPA researcher Tracey Woodruff and colleagues at the Centers for Disease Control and Prevention in Atlanta. They found that infants in cities with high particle pollution levels are 25% more likely to die of sudden infant death syndrome than are those in cities with relatively clean air. “It certainly adds support,” says California EPA epidemiologist Bart Ostro.

    Schwartz also takes aim at the argument that pollutants other than particles may be blurring the picture. Cities with only one or two major airborne pollutants—such as Santa Clara, California, which has low air levels of sulfur dioxide and ozone in winter—still show a link between particle levels and health problems, he says. “This whole industry argument that it's all other pollutants is just not supported by the data,” says Schwartz. New York University School of Medicine epidemiologist George Thurston says “it's a valid criticism” that some of the Harvard daily city studies underestimated the effects of other pollutants, but those contributions “just reduce” the estimated danger levels of particles. “It doesn't make [the effects] go away.” Finally, Bachmann says, even if PM2.5 itself is not the bad guy—if sulfates alone are the problem, for example—targeting it should also control whatever pollutant is taking lives.

    Most experts contacted by Science agreed that EPA was justified in setting a standard for PM2.5. “There's enough circumstantial evidence that it does make sense to begin to look at and regulate fine particles as a class,” says Mauderly. At a minimum, Mauderly and others add, setting a standard will force the states to collect data that could help pin down PM2.5 health effects. But they split on just how stringent that standard should be. “We have a tremendous amount of uncertainty as to what the dose-effect relationship is—how dangerous particles might be and under what circumstances,” says Mauderly. “The scientific basis for [EPA's planned levels] is totally lacking,” Stolwijk says. “You have to make several leaps of faith.”

    Yet while the studies “have their limitations,” says environmental health scientist Arthur Upton of the Robert Wood Johnson Medical School in Piscataway, New Jersey, “I'm not aware that we can dismiss their findings as unimportant or irrelevant.” Deciding whether to set a stringent standard, Upton says, “becomes a value judgment. It's not a scientific question. … Do we dismiss the data? Or do we accept them as warning signs and act accordingly?”

    EPA's judgment won't be the final word. The House Commerce Committee is considering a bill that would impose a 4-year moratorium on the standards while EPA does more monitoring and research. Congress may also try to kill the rules through a new law passed last year to shield small businesses from overly burdensome regulations. And the White House announced last month that EPA will conduct another scientific review, starting this year, before it implements the PM2.5 standard. Congress is expected to set aside up to $35 million next year in EPA's budget for research on particles. And Upton is heading a reanalysis of the Six Cities and ACS studies by the Health Effects Institute, an industry- and EPA-funded research organization in Cambridge, Massachusetts. “It's a vexing question, and I wish I were Solomon and knew exactly what the right answer was,” Upton says. “But we'll work on it.”


    Researchers and Lawmakers Clash Over Access to Data

    Jocelyn Kaiser

    In one corner of the battleground over new clean air standards (see main text), scientists and policy-makers are skirmishing over an issue close to their hearts and pocketbooks: who “owns” raw data. Industry groups have charged that the authors of a key study on the health effects of airborne particles have resisted sharing data collected with taxpayer money—a reanalysis of which, they argue, might weaken the scientific basis of the standards. The researchers, meanwhile, are reluctant to make the data widely available because they contain confidential information on their subjects.

    The fight could have repercussions that reach far beyond this year's pollution debate: A House committee earlier this month directed the Environmental Protection Agency (EPA) to publicly release raw data from air pollution research it funds. Not everyone is sure that's a good idea. “The implications of this language could be quite significant in terms of setting precedents,” says Anne Sassaman, extramural grants director at the National Institute of Environmental Health Sciences (NIEHS), part of the National Institutes of Health.

    The data-sharing commotion was sparked by a paper from the so-called Six Cities study, in which a Harvard team led by epidemiologist Douglas Dockery followed the health of about 8000 people over 14 to 16 years and found a link between variations in particulate matter (PM) levels and death rates. Besides tapping public databases on weather and PM levels, the researchers interviewed subjects and obtained death records. The NIEHS funded the data collection, while EPA grants paid for the analyses.

    Last January, however, EPA Assistant Administrator for Air and Radiation Mary Nichols wrote to the Harvard group urging it to share its data. Congress, state governors, and others had requested the raw data, the letter said, and “given the strong interest,” the data “should be made available … as rapidly as possible.” Industry groups appealed to Representative Tom Bliley (R-VA), chair of the House Commerce Committee, who asked EPA and NIEHS to obtain raw data from the Six Cities study and a related Harvard study and hand it over to the committee. Given the importance and cost of the proposed rules, Bliley wrote, “it is important that the public and affected parties have the ability to review all of the underlying data … so they can be confident that EPA is basing its decisions on sound science.”

    Data dust-up.

    Congress wants raw data from the Six Cities study, which correlated relative death rates (with Portage, Wisconsin, as the baseline of 1) with particulate levels. St. Louis came out in the middle.


    The agencies said they did not have the data, and Harvard refused to turn them over to EPA. Dockery says that subjects' medical histories and lifestyle habits, as well as death records from state and local agencies, were obtained on condition that the information would be kept confidential. Even if a subject's name were deleted from a file, Dockery says, simply knowing the date of death could be a big enough clue to identify that person, as three of the six cities in the study have populations under 50,000. The Harvard group has, however, allowed at least 18 scientists over the past 15 years to review its data collection at Harvard.

    Last April, Harvard's dean for academic affairs, James H. Ware, offered an alternative: to share the data with the Health Effects Institute, a research center in Cambridge, Massachusetts, funded by industry and the EPA, which could convene a panel of scientists not affiliated with industry or environmental groups to oversee a reanalysis. EPA agreed, and a nine-scientist panel chaired by Arthur Upton of the Robert Wood Johnson Medical School in Piscataway, New Jersey, is expected to finish its work by June 1999.

    This arrangement hasn't satisfied the industry critics, however. For example, American Petroleum Institute (API) President Charles DiBona told Ware in a 1 May letter that while “we commend” Harvard for “taking this step … we do not believe it goes far enough” and that the data should be available “for review by any professionally qualified investigators who have a legitimate scientific interest,” including API. Bliley lambasted EPA again last week, saying the agency “has so far withheld the facts.”

    Other lawmakers are also not appeased. A report accompanying the House version of the 1998 EPA appropriations bill earmarks $35 million for particle studies that EPA would fund at NIEHS and the Department of Energy, and requires that all the data from these studies “will become available to the public, with proper safeguards” covering such issues as confidentiality, first publication rights, and scientific fraud. The Senate funding bill does not contain such a directive; thus, it may not survive a House-Senate conference later this summer to settle differences in the bills.

    If a data-release requirement were limited to just these studies, and if grantees were to know “up front” about the ground rules, it would not be onerous, says Sheila Newton, director of policy, planning, and evaluation for NIEHS. Some industry groups, such as the 40,000-member Small Business Survival Committee, however, are now lobbying Congress to require that data from all federally funded research be made public. That prospect concerns many researchers, who worry that wholesale release of raw data could lead to “data dredging,” in which hired hands working for industry, environmental groups, or other advocacy groups might analyze them with subpar methods to get answers favorable to their position. “There's no question that if you put in enough variables in a post hoc analysis, you can make these data or any data say whatever you want,” says Dockery. “I would have deep concerns about giving up some of my data if I knew a priori someone wasn't going to do an honest job of analyzing it, if they had a political agenda,” says one state scientist who asked not to be identified.

    The furor has made EPA realize it needs to clarify its policy on data ownership, says Joe Alexander, acting chief of EPA's Office of Research and Development. Like most other agencies, EPA encourages extramural researchers to share data. But EPA told Bliley's committee that it almost never asks for raw data, except when investigating allegations of scientific fraud, or when data are prepared for approval of products. Alexander says one possibility under consideration is to set up a system like that at NASA, which requires agency-funded scientists to submit all their raw data. That, says Carl Mazza, science adviser in EPA's Air and Radiation Office, “would create a major issue for the way in which the scientific community operates.”


    Puzzling Over a Potential Killer's Modus Operandi

    Jocelyn Kaiser

    Experts may clash over the strength of the science behind the new clean air regulations (see main text), but they do agree on one thing: It's still a mystery how airborne particles could trigger a bout of asthma or cause someone to drop dead of a heart attack. A dozen labs are now racing to find a modus operandi.

    This is not the first time that an unknown mechanism has bedeviled researchers trying to assess a potential environmental hazard. But unlike some other alleged risks, such as electromagnetic fields, particles show a clear pattern: the more one breathes, the greater the danger, says Keith Florig, a science policy expert at Carnegie Mellon University in Pittsburgh. “If you observe a strong enough dose response, that's pretty compelling,” he says.

    The best way to unravel a pollutant's mechanism is to study how it triggers health effects in animals. Until recently, however, researchers had drawn a blank. “I've done lots of studies” exposing healthy rats to diesel soot for nearly their entire lives at particle levels more than 10 times what people typically encounter, and “nothing happens,” says toxicologist Joseph Mauderly of the Lovelace Respiratory Research Institute in Albuquerque, New Mexico.

    But in a parallel to the epidemiological studies that first drew attention to the hazards of airborne particles, toxicologists in the last year or two have begun to find that sickly animals exposed to fine particles get sicker and sometimes die. For example, pulmonary biologist John Godleski of the Harvard School of Public Health in Boston found that rats with chronic bronchitis are especially vulnerable. When he exposed the animals to particles smaller than 2.5 micrometers (PM2.5) strained from Boston air, at levels equivalent to about twice the current EPA daily PM10 limit, for 6 hours a day on three straight days, 37% of the bronchitic animals died; all the healthy rats survived.

    Godleski has also tightened balloons around the coronary arteries of dogs to simulate angina, or cardiac chest pain, then exposed the dogs for 6 hours to PM2.5. At particle concentrations of about 116 and 175 μg/m³, levels often reached in heavily polluted cities, the dogs' hearts developed arrhythmias that are commonly observed in people nearing a fatal heart attack. Godleski says these animal studies could help explain the observation that when particle pollution soars, “a lot of people are dying outside of the hospital. These could very well be sudden deaths” from heart attacks, he says.

    Now that researchers have potential animal models for the health effects, they are trying to sort out whether a particle's chemical composition dictates how dangerous it is, and how it triggers health effects. “Nobody is sure what it is in, or on, or of the particles” that causes health effects, notes toxicologist Judith Zelikoff of New York University School of Medicine. Freshly created particles appear to be more toxic than aged particles, so the culprit may be some reactive chemical group—such as an acid, a metal, an organic compound, or a peroxide—attached to a particle's surface, says Morton Lippman, also at New York University School of Medicine. Others think that ultrafine particles, or those less than 0.1 micrometer in diameter, are the problem, because they are much more potent than larger particles at provoking immune responses in the lungs. “The problem is, none of these hypotheses really seems to be a solid explanation for all the effects,” Mauderly says. “Probably they all contribute.”


    NIH Leads Research Gains As Congress Picks Up Pace

    Andrew Lawler, Eliot Marshall, Jeffrey Mervis, and Jocelyn Kaiser

    Trying to make up for a slow start, Congress moved ahead last week on several 1998 spending bills in hopes of completing most of its budget work before a monthlong recess in August. Science programs fared relatively well in last week's votes, easing fears that the government's new plan to balance the budget by 2002 would spark a round of cuts to R&D programs.

    The National Institutes of Health (NIH), as expected, is emerging once more as federal science's big winner, with the National Science Foundation (NSF) again enjoying modest growth. The good news is also spreading to other agencies that have suffered spending cuts in the past few years. Magnetic fusion actually won a boost in the Senate, and committees in both houses approved money for U.S. participation in Europe's Large Hadron Collider project. There were a few skirmishes over science funding, however. In separate House debates, members punished NSF for awarding a grant to study why qualified candidates decline to run for Congress and soundly rejected funding for a new windstorm research center that critics labeled pork-barrel spending. At the same time, a Senate panel provided NSF with a $40 million plant genome initiative it had not requested (Science, 27 June, p. 1960).

    What follows is a summary of congressional actions during the past week. House and Senate members will meet in September to work out their differences.

    NIH: 6% and rising. The news continues to be good for biomedical researchers. A House funding panel voted to give NIH a raise of 6% in 1998, which would boost its budget to $13.5 billion. The proposed hike—$764.5 million—is more than twice the increase sought by the president. The biggest winners would be human genome research, with a proposed 12% increase, and the diabetes institute, with a hike of 7.5%. To offer such a raise without breaking a congressional spending limit, the subcommittee cut other programs in its jurisdiction, including education and labor projects championed by the White House. The bill is expected to clear the House Appropriations Committee this week and go to the floor before recess. Meanwhile, a Senate subcommittee chaired by Arlen Specter (R-PA)—who has said he wants a 7.5% raise for NIH—planned to mark up its bill this week.

    NSF: Growth with strings. The first major science spending bill to clear the House would raise NSF's $3.367 billion total request by $120 million. A Senate panel last week was not quite so generous, adding just $10 million. The discrepancy is largely due to $90 million in the House version to fully fund a new South Pole station. But both bodies took pokes at the agency's merit-review process.

    The House made a symbolic cut of $174,000 in NSF's overall research account to protest a grant to two political scientists to study why many qualified candidates don't run for Congress (Science, 4 July, p. 26). Representative Jerry Lewis (R-CA), chair of the House spending panel that oversees NSF, says the move is intended to send a “message” that the grant was a “misstep … in the applications process.” The House stopped short, however, of telling NSF not to fund the study. In a discussion on the House floor, Representative Barney Frank (D-MA) defended the study and accused his colleagues of acting “as a kind of appellate research council.” But Representative George Brown (D-CA) scolded NSF for failing to “prepare” members for what he labeled a topic “of great delicacy.” Joseph Bordogna, acting NSF deputy director, says the proposal passed muster with peer reviewers and that “NSF is confident that the research is being done properly.”

    The Senate panel took the opposite tack: It directed NSF to spend money on something it had not requested. Senator Kit Bond (R-MO), chair of the Senate panel that oversees NSF, hailed the $40 million plant genome initiative as “critical to future food production and human health.” NSF officials say they support the concept but are unhappy with the size of the proposed effort, pointing out that they already fund several million dollars' worth of research on various plant genomes. “It's a rational idea,” says Bordogna, “but the question is how to do it with constrained resources.” The Senate panel also told NSF to spell out what it hopes to accomplish in initiatives relating to computer networking and life in extreme environments, totaling almost half a billion dollars, before it would release the money.

    Department of Defense (DOD): Upward march. The military appears headed for a budget increase this year slightly larger than what the president wanted, and basic research could benefit. On 15 July, the Senate passed a bill that would raise DOD's overall budget to $247.2 billion, about $3.2 billion more than the Administration requested. Within that total, R&D would increase 0.9% to $37.4 billion, and basic research would rise to $1.17 billion, an 8.7% boost. This is about $10 million more than the White House sought. The Senate bill also contains some specific earmarks for medical studies in the Army budget, including $175 million—$69 million more than this year—for “peer-reviewed breast cancer research” and $5 million for prostate cancer diagnostics, added to $38 million appropriated last year. The House Appropriations Committee takes up its version of the bill this week.

    NASA: Better than expected. Science Committee Chair James Sensenbrenner (R-WI) tried to cut from the agency's funding bill $100 million that NASA wanted to make up for Russian delays in building its portion of the international space station. But Lewis, whose panel also oversees NASA, said the move could derail the partnership with Russia and leave NASA with more delays and higher costs in the long run. The cut was rejected by a vote of 227 to 200. Meanwhile, a Senate panel approved $13.5 billion for the agency in 1998; while that is $148 million below the House level, it is the same as NASA's request and higher than most observers expected, given the Senate's tight funding allocation.

    Wind project blown apart. It's not often that a pork-barrel project is so overwhelmingly defeated on the House floor. After a rancorous debate, lawmakers voted by a margin of three to one to kill $60 million earmarked for a windstorm research center and to shift the money to the veterans' medical care account. Led by Representatives Roger Wicker (R-MS) and Carrie Meek (D-FL), House members from regions affected by hurricanes argued that the $180 million facility was needed to study the effects of wind on homes and materials. A Department of Energy (DOE) lab with a large amount of land in a remote part of Idaho was to be the center's home. But opponents argued that it was pork because the Administration did not request the center, it had not been peer reviewed, and the contract to build it would not be competed. “Here we have an uncooked idea, a half-baked idea,” complained Representative John Olver (D-MA).

    Fusion: A glimmer of support. Magnetic fusion fared well in the full Senate, gaining $15 million above its $225 million request and $8 million above its 1997 level. The House funding panel provided the requested amount. Both the Senate and the House panel also provided $55 million for the last year of design work on the International Thermonuclear Experimental Reactor, a proposed $10 billion fusion project. DOE officials feared the House panel planned to cut some or all of the funding, although House staffers deny any move to kill the project. Even so, Energy Secretary Federico Peña says he worked hard to save it. “It would reflect poorly on our nation if the United States did not fulfill its last year of obligation,” he told reporters. Both bodies also set aside $35 million for DOE's participation in Europe's Large Hadron Collider project, although the House panel would first like a detailed report on the U.S.-European agreement.

    Environmental Protection Agency (EPA): Above the bar. Science and technology would receive $600 million in the Senate panel's version. While that's $56 million less than the House has allotted and $15 million below the president's request, it's still 9% higher than its current budget of $552 million. The Senate mark adds $8 million to EPA's request for particulate air-pollution research, a $19 million program that the House boosted by $35 million.


    Chimp Retirement Plan Proposed

    1. Wade Roush

    In their native habitats, chimpanzees are dwindling toward extinction. But in a little-known legacy of the AIDS pandemic, U.S. biomedical research facilities have filled to bursting with the primates—some 1500 in all. In a report released on 16 July, an expert panel organized by the National Research Council has now weighed in on the chimpanzee overpopulation problem, recommending a long-term plan that observers describe as humane but problematic.

    Rejecting as unethical the easiest option—euthanasia—the panel recommends that the National Institutes of Health (NIH) create an autonomous body, the Chimpanzee Management Program (ChiMP), to acquire the approximately 1000 chimps the government already partially or wholly supports and shelter them for the rest of their lives. (The remaining 500 chimps are in the hands of private research labs.)


    Both research-colony administrators and animal-welfare proponents welcome the report's recognition of chimpanzees' unique ethical status as humans' closest animal relatives. “I am glad that, finally, some consideration is being given to this problem,” says Jane Goodall, the famed British ethologist. Neuroscientist Thomas Insel, director of Emory University's Yerkes Regional Primate Research Center in Atlanta, agrees: “These are not rodents. We've got to make sure these animals are well taken care of over the long term.” Insel and other scientists worry, however, that ChiMP could transform research centers such as Yerkes into mere warehouses. And some animal protectionists fear that the proposed federal takeover could actually increase the number of investigators using chimpanzees.

    A chimpanzee baby boom launched by NIH in 1986 to provide animals for the study of AIDS created the current overcrowding. But only a single chimp among some 200 infected with HIV has succumbed to AIDS, making the species such a poor model for the disease that fewer than 20 are now needed each year. The resulting high unemployment rate among chimps bred for AIDS research but never infected translates into a “financial hemorrhage” for the colonies, says Insel. “Our colony of 200 chimpanzees costs about $1 million a year to maintain, and we get less than half that back from sponsored research,” he says.

    The panel confirmed that the colonies are “heading for a crisis unless something is done,” says chair Dani Bolognesi, a virologist at Duke University. As a first step toward preventing that crisis, the panel—created 2 years ago at the request of NIH director Harold Varmus—said the current informal moratorium on chimpanzee breeding should be extended for at least five more years.

    After extensive debate, a majority of the panel members concluded that reducing overcrowding through euthanasia was not an option. Chimpanzees “are not equivalent to humans, but they are different from other laboratory animals,” says panel member Peter Theran, director of the Center for Laboratory Animal Welfare in Boston. “Euthanasia just because you are finished with them is not appropriate.”

    A better alternative, the panel decided, would be to transfer ownership of the approximately 1000 chimpanzees at the five major colonies to the government, which would support them throughout their 30- to 50-year life-spans. The cost shouldn't amount to much more than the $7.3 million NIH currently spends each year to support the colonies, the panel said, and other agencies that use chimps in research—the Department of Defense, the Food and Drug Administration, and the Centers for Disease Control and Prevention—should be asked to chip in. If it can be achieved, such stable, long-term funding for the colonies would be a godsend, says veterinarian Michale Keeling, principal investigator at a 150-chimp colony maintained by the University of Texas's M. D. Anderson Cancer Center in Houston.

    Leaders of animal-welfare groups are worried, though, because the report predicts that dedicated government support for chimpanzees could eliminate the fees of up to $66,000 that colonies charge to conduct research on a chimp. If researchers didn't have to ask granting agencies for funds to cover these fees, they might use chimpanzees far more often. “Government support for the chimpanzees in a permanent retirement situation would be fantastic,” says Eric Kleiman, research director for In Defense of Animals, a California-based group that plans to build a sanctuary for about 140 U.S. Air Force chimps. “But if they are looking to substantially increase the use of chimpanzees in research, we totally oppose that.”

    There are also questions about where the animals would be kept, because not all centers are eager to become retirement communities for the chimps. “We are a scientific program, and my interest would not be in warehousing animals,” says Yerkes's Insel. “The worst thing that could happen would be if NIH … decides that the primate centers should take all of this on.” Alternatively, surplus chimps could be sent to private sanctuaries or to new national sanctuaries, the panel suggested.

    Louis Sibal, director of the NIH's Office of Laboratory Animal Research, says no decisions have been made about what to do with the animals. “People here at NIH are pleased with the report. … Now we've got to powwow to see how [the recommendations] can be implemented.”


    European Parliament Backs New Biopatent Guidelines

    1. Nigel Williams

    Strasbourg, France: After being subjected to one of the most intensive lobbying campaigns they have ever experienced, members of the European Parliament last week approved the outline of legislation that will determine what biotechnology inventions can be patented in the European Union (EU). It backed proposals that would permit the patenting of genes and genetically modified animals under specific conditions, while banning patents on plant and animal varieties and techniques directly related to human germline manipulation or human cloning.

    The vote—by a surprisingly lopsided margin of nearly four to one—was a victory for Europe's biotech industry, which has long argued that a new, continent-wide policy is needed to replace the outdated current law, framed 30 years ago. But the proposals still have a long way to go before they become law, and opponents—mostly environmental and animal-rights lobbyists—have vowed to continue fighting them.


    The new proposals are the latest in a 9-year effort by the European Commission, the EU's executive, to streamline and harmonize Europe's biotechnology patent system. European biotech companies have long argued that the 1973 European Patent Convention needs to be updated. They also complain that inconsistencies among national patent laws can be problematic, because even when the European Patent Office in Munich, Germany, does grant a patent, it must be validated in each country. “This lack of harmonized patent protection has contributed to Europe's biotechnology industry lagging significantly behind the United States and Japan. Patent protection is crucial,” says Catherine Péchère of the European Federation of Pharmaceutical Industries' Associations.

    The commission tried to respond to such concerns 2 years ago when it sent draft legislation to the European Parliament that would have permitted patents on a range of biotechnology inventions throughout the EU. But the proposals provoked a howl of opposition from groups rallying under the banner “no patents on life,” claiming that the rules would have allowed patents on parts of the human body. The Parliament rejected the proposals and sent the commission back to the drawing board.

    This time around, the commission prepared its case much more carefully. Says EU Commissioner Mario Monti, who was chiefly responsible for developing the new version: “In the new proposal, we want to address those concerns and guarantee research and business within clear limits and ensure respect for the integrity of the human body.”

    The revised proposals try to make a clear distinction between a discovery and an invention. A discovery of a gene, for example, would not be patentable by itself, whereas an invention—defined as a technical process with an industrial application—could be patented. Although “an element of the human body in its natural environment” couldn't be patented, the proposals state that “an element isolated from the human body or otherwise produced by means of a technical process shall be patentable even if the structure of that element is identical to that of a natural element.”

    These proposals won a key endorsement last month from the Parliament's legal affairs committee, led by Willi Rothley of the Parliament's Socialist group. But the committee offered several amendments that would strengthen the ban on patenting human genetic code without reference to an industrial application, bar patents for genetic modification of animals unless there is “substantial medical benefit,” prohibit patents on plant or animal varieties, and set up a bioethics committee. All the committee's amendments were acceptable to the commission, says Monti, and many were supported by Parliament.

    The biotech industry and patient groups mounted a fierce lobbying campaign to persuade the Parliament to approve the proposals, arguing that they will help foster the development of new medicines. Their efforts are widely credited with the dramatic shift in parliamentary support. Gordon Adam (Socialist) said during the debate last week that “there has been a barrage of misinformation orchestrated by the Green movement. … Opponents should ponder what it is they are trying to prevent.” This view was echoed by fellow Socialist Kenneth Collins: “The debate is shrouded in misinformation. Biotechnology is a tremendously important sector in the EU. If we reject this directive, patenting will continue, but we'd have less control over it.”

    Not surprisingly, the biotech industry is pleased with last week's vote. “We welcome their approval and the distinction made between discoveries and inventions,” says Péchère. “I'm very pleased,” adds Alastair Kent, president of the European Alliance of Genetic Support Groups. “It's now very clear what can and what cannot be patented. We've learned from the U.S. experience and added more ethical elements. The balance between suffering and medical benefit in granting patents on genetically modified animals is very welcome. Broad patents without much application would also be much less possible,” he says.

    But one amendment, approved by Parliament, has raised concerns within industry. It would require patent applicants to declare the geographical origin or name and address of the human donor of any biological material, and swear that it had been obtained legally or with consent. “This amendment is not realistic and undermines patient confidentiality. We shall be fighting it,” says Péchère.

    The opponents have also not yet given up. They argue that the proposals would stifle research in the public sector, increase the cost of health care, and shift control of genetic resources into the hands of a small number of powerful companies. “People have been hoodwinked by the arguments. The directive is essentially the same as last time, and our root-and-branch objections hold,” says Ian Taylor of Greenpeace. “We shall continue to make clear our opposition.” In last week's parliamentary debate, Nuala Ahern (Green) said: “Genetic resources must not be controlled by a small number of companies. Treatments could become prohibitively expensive. We are moving toward a U.S. model of health care, and if we do so, our citizens will never forgive us.”

    The proposals are now back in the commission's court: It must redraft them and submit them to the EU's Council of Ministers later this year. After that, they will go back to Parliament.


    Academy Seeks Government Help to Fight Openness Law

    1. Andrew Lawler

    A minor squabble between animal-rights activists and federal officials over a study on the care and use of lab animals has turned into a major debate over how the country's most prestigious scientific advisory body should operate. Next month, the U.S. Supreme Court will be asked to decide whether the National Research Council (NRC)—the operating arm of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine—should abide by government openness rules. The outcome could have profound implications for the future of the academy complex.

    The case pits the Animal Legal Defense Fund against the Department of Health and Human Services and the academy. The animal-rights activists maintain that an NRC panel that updated an animal-care guide under contract to the department should have conducted its work under the Federal Advisory Committee Act (FACA) of 1972. The act stipulates that government advisory panels must meet in open session and make their documents public, and that their membership should be selected to represent a balance of views on any particular issue. Some government officials and the academy maintain that the law would give agencies undue influence over administrative aspects of NRC committees, something they insist legislators did not intend.

    Last year, a lower court agreed with the government and the academy, but in January the U.S. Court of Appeals for the District of Columbia overturned that decision. Its ruling shook the council's leadership, which is now lobbying the Administration to join in its request for a review by the Supreme Court. Petitions must be submitted by 4 August.

    The Justice Department is considering the academy's request. Its decision could have a critical impact on the prospects for the case, say parties on both sides. “If the government does file a petition, it seems likely the Supreme Court would take the case,” says Lucinda Sikes, an attorney at Public Citizen, a nonprofit organization that supports increased government openness. If the government stays on the sidelines, say observers, it is a toss-up whether the court would consider the case.

    At least eight agencies, including NASA, the National Science Foundation, and the Departments of Energy and Health and Human Services, have written to the Justice Department urging it to continue the fight, says William Colglazier, the NRC's executive secretary. Officials in these agencies argue that they would lose an important source of independent advice if the council were forced to follow FACA. Under the current system, agencies fund the NRC but do not have direct control over the membership, schedule, or logistics of a panel. Academy supporters argue that the council's neutrality would be compromised under FACA as agencies became more involved. NRC critics, however, maintain that it is hard to know the degree of independence that now exists when the NRC selects committee members and discusses panel recommendations in secret.

    The Justice Department may be reluctant to push this case, say observers. One reason is that having the Supreme Court take the case could reopen broad legal issues involving FACA that the government might prefer to leave closed. That includes the 1989 decision—Public Citizen v. Department of Justice—in which the high court decided that the American Bar Association need not abide by FACA when advising the president on judicial nominations. “If the Supreme Court takes this, it could open up a Pandora's box for Justice,” says Eric Glitzenstein, a lawyer with Meyer & Glitzenstein in Washington, D.C., which is representing the Animal Legal Defense Fund. He also argued on behalf of Public Citizen in the 1989 case.

    Of course, a decision by the Supreme Court to review the matter would only be the first step in overturning the appellate ruling. “The academy has a tough row to hoe” in making its case, insists Glitzenstein. He notes that the Public Citizen decision repeatedly cites the academy as an example of the kind of quasi-public organization that should abide by FACA, and he says House and Senate reports even mention the academy as falling under the proposed law. But Richard Meserve, a lawyer with the D.C. firm of Covington & Burling representing the academy, points out that the 1989 case did not deal directly with the academy and did not offer a clear definition of a quasi-public organization. Meserve also points to floor speeches and other records that suggest Congress did not want the academy to be ruled by FACA.

    Meserve also argues that the current court's tendency to follow a literal interpretation of the law would favor the academy. FACA applies to groups “utilized” by the government—that is, groups over which the government exerts management and control. Meserve says that because the academy oversees individual committees, they are “insulated” from being utilized by the government in the way the law states. “The fact that federal money pays for the work is insufficient” to apply FACA to the NRC, he maintains. “[The panel] must be managed and controlled by the agency itself.” But Glitzenstein counters that a strict reading of the law could work the other way: Congress failed to exempt the academy from the law as it did with other groups and, therefore, FACA should apply to its activities, too.

    The Supreme Court is likely to decide shortly after it returns in October whether to take the case; if it does, a ruling is anticipated sometime before July 1998. If the academy loses or the court refuses to hear the case, the academy may try to seek legislation exempting it from FACA. Such a request would face an uphill battle, say NRC sources and congressional staffers, given the relative obscurity of the issue in the minds of most members. In the meantime, academy officials are biding their time as the wheels of justice slowly turn.


    High-Speed Materials Design

    1. Robert F. Service


    Xin Di Wu was riding the kind of scientific wave most researchers only dream about. In 1995, as part of a team at Los Alamos National Laboratory in New Mexico, Wu helped discover a way to make one of the most promising high-temperature superconductors—normally a brittle ceramic—into a flexible wire. That development could pave the way for winding the material into coils for superconducting high-field magnets in everything from medical imaging machines to electric motors. But rather than ride that wave, last year Wu backed off and caught another rising swell that he thinks has the potential to grow even larger. “I wasn't looking for a job,” says Wu. “But the opportunity to do something unique got me very excited.”

    Wu landed at a small Silicon Valley start-up called Symyx Technologies, a company that bills itself as the wave of the future for discovering everything from new catalysts to superconductors. The new approach to chemical discovery that Symyx hopes to exploit uses banks of robots and computers to systematically react chemical ingredients in thousands of different combinations at once, then test the products in hopes of finding blockbuster new compounds. It's a vision that has already transformed the way new drugs are discovered, turning the discovery process from a painstaking, one-molecule-at-a-time business to an industry where thousands of new compounds can be created and screened in just days. In less than a decade, this transformation has spawned a host of new drug-discovery companies and attracted hundreds of millions of dollars in investments.

    Now Wu and others are betting that the same high-speed chemistry technique—known as combinatorial chemistry—will pay off in other research-intensive businesses, such as the chemical and electronics industries, that are constantly searching for new compounds that can improve existing products or lead to entirely new applications. Already, there are hints that Wu and his colleagues may have made a smart bet. Last month, the German chemical giant Hoechst AG announced an agreement to pay Symyx $10 million over 2 years, with the possibility of far more to come, for first rights to new catalysts being developed at the company. Large chemical and electronics firms such as DuPont, Kodak, and Lucent Technologies' Bell Labs have started their own exploratory efforts in the field. Others such as Dow Chemical and Engelhard, which produces catalysts, are watching to see whether they should jump in.

    Industry insiders point out, however, that combinatorial chemistry's success in the new arena is no sure bet. Among the many problems: screening arrays of hundreds or thousands of newly created compounds to identify a few potentially useful ones. Perhaps even more worrisome is that new materials rarely command the same high prices as blockbuster drugs, all of which makes research planners cautious. “It's a potentially very useful technique,” says Lawrence Dubois, director of the Defense Sciences Office at the Defense Advanced Research Projects Agency (DARPA) in Arlington, Virginia. “But a number of people [in industry] are waiting for the first breakthrough before jumping in.”

    Promises, promises

    Similar cautions greeted the advent of combinatorial chemistry in the late 1980s. The technique made its industrial debut when entrepreneur Alejandro Zaffaroni set up a small company called Affymax in 1988 to turn out enormous collections of small, proteinlike molecules called peptides, which could then be tested for possible use as drugs. Because enzymes in the stomach readily break down peptides, many doubted that the strategy would be useful for making drugs. But when researchers at the University of California, Berkeley, and elsewhere began showing that the same techniques could be used to turn out libraries of more stable small organic molecules, similar to those that make up most therapeutic drugs, the industry embraced combinatorial chemistry. Just 7 years after its founding, Affymax was bought by the British pharmaceutical giant Glaxo Wellcome for $533 million.

    Now Zaffaroni and Berkeley chemist Peter Schultz—co-founders of Symyx—see that history repeating itself. “We're back to 1988 to '89 in combinatorial chemistry in the pharmaceutical industry,” with small start-ups jumping in with both feet and industrial giants wading in gingerly, says Schultz. “If combinatorial chemistry works so well with the few elements used in drugs, [such as oxygen, carbon, and hydrogen], why not apply it to the rest of the periodic table?” says Schultz, who also has a joint appointment at the Lawrence Berkeley National Laboratory (LBNL) in California.

    Why not indeed. Schultz and his colleagues at Berkeley and LBNL first showed 2 years ago that the idea wasn't farfetched when they applied combinatorial techniques to synthesize a collection, or library, of 128 different superconducting compounds (Science, 23 June 1995, p. 1738). None of them beat the industry leaders, but the proof of principle caught people's attention. Since then, they and others have made libraries of catalysts, semiconductors, light-emitting phosphors, polymers, and magnetic materials.

    In all these efforts the strategy is essentially the same: to use robots and other automation devices to rapidly synthesize and screen the activity of different compounds. With the superconductors, for example, Schultz and his colleagues used a series of masks to pattern the deposition of seven different oxides—each made up of oxygen bound to one or two other elements—creating a gridlike array in which each 1 × 2 millimeter rectangle contained a different combination of elements.
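The masking scheme amounts to a binary choice at each grid site for each precursor: either a given oxide is deposited there or it is not, so seven precursors yield 2^7 = 128 distinct combinations—the size of the 1995 library. The toy Python sketch below illustrates only this counting logic, not the actual deposition chemistry; the precursor names are purely illustrative, not the oxides the Berkeley team used.

```python
from itertools import product

# Seven hypothetical oxide precursors (names are illustrative only)
precursors = ["BiO", "CuO", "CaO", "SrO", "PbO", "BaO", "Y2O3"]

# Each masking step either deposits a precursor on a site or skips it,
# so a site's composition is one subset of the precursor set.
library = []
for mask in product([0, 1], repeat=len(precursors)):
    composition = [p for p, bit in zip(precursors, mask) if bit]
    library.append(composition)

print(len(library))  # 128 combinations from 7 precursors
```

Scaling the same idea to more precursors or graded deposition amounts is what lets later libraries, like the 26,000-member phosphor grid, grow so quickly: the number of candidates multiplies with each added choice.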

    Most of these early efforts have turned out relatively small libraries, containing tens or hundreds of different materials. But at the April Materials Research Society meeting in San Francisco, Wu and several Symyx colleagues reported picking up the pace, creating a library of 26,000 different inorganic phosphors—compounds that make up the heart of televisions and desktop computer displays because they emit light when zapped with electrons. At the same meeting, LBNL's Xiao-Dong Xiang reported making a similar, but smaller, library, which he also described in the 23 June issue of Applied Physics Letters. These are eye-catching efforts, because researchers around the globe are working frantically to create phosphors for flat-screen displays. They will have to emit light with less energy input than current phosphors because electrons can't accelerate to the same speed in the thinner displays.

    Wu and Henry Weinberg, a physical chemist and Symyx's chief technical officer, say their team's initial screens turned up one red-emitting and one blue-emitting phosphor that are as good as or better than those already on the market, although the researchers declined to give further details until they have patented and published their finds. Xiang says his team's screens also turned up several promising phosphors that appear to be more efficient and stable than those found in products today.

    Phosphors, says Weinberg, are an obvious early application of the combinatorial approach to materials science, because the new compounds can be screened simply by shining ultraviolet light on them and seeing which ones shine brightest. At this point, however, the hottest commercial prospect for combinatorial materials may be the production of new catalysts. The reason, says Bob Ezzell, a chemist at The Dow Chemical Co. in Midland, Michigan, is that catalysts are ubiquitous in industrial processing, and a compound that performs new reactions or accomplishes old ones more efficiently “can revolutionize whole areas of chemistry.”

    Until now, finding that magic catalyst has been a tedious process in which researchers strive to understand exactly how the reactions take place so that they can design new catalysts to carry out just the reaction they want. Even when they have a basic idea of what they are looking for, researchers still face a bewildering array of possibilities. Take the class of catalysts known as homogeneous catalysts. These compounds typically consist of a central metal atom connected to organic arms that surround the metal and ensure that it reacts with only the compounds of interest. The chemical makeup of the arms, and therefore the selectivity of the catalyst, can come in untold variations.

    That makes the numbers game an attractive alternative to rational design. “The idea isn't to give up rational design,” says Tom Baker, a combinatorial catalyst developer at Los Alamos. Rather, the idea is to use it to figure out which kinds of catalysts have a better shot of working, then create a library of such materials in hopes that one or more will turn out to have catalytic activity.

    Such efforts are already beginning to show promise. Last year, for example, Amir Hoveyda and his colleagues at Boston University reported in the 18 August issue of Angewandte Chemie, International Edition in English that they had used combinatorial techniques to discover a new catalyst capable of selectively creating single chiral compounds—members of molecular twins that are mirror images of each other—which are useful in synthesizing new drug molecules. “That was a landmark paper,” says Mark Scialdone, a combinatorial chemist at DuPont Central Research and Development Experimental Station in Wilmington, Delaware. “It's getting people to believe that combinatorial chemistry can be useful for finding new catalysts as well as drugs.”

    Not so fast

    Tempering the enthusiasm, however, is a series of knotty scientific problems that confront the new technology. For one, unlike drugs, materials in tiny samples can behave very differently from the large bulk quantities used in the real world, says Donald Murphy, who heads the applied materials research division at Lucent Technologies' Bell Labs in Murray Hill, New Jersey. And even if you manage to create a promising thin film in a 200 × 200 micrometer patch, there's no guarantee that it can be produced in the bulk quantities needed commercially.

    Other key problems include testing the activity of microscopic amounts of hundreds or thousands of new compounds simultaneously. Few materials assays have been adapted for arrays generated by combinatorial methods. Until they are, says Ezzell, there's little point in going to the trouble of high-speed synthesis if you can't screen the results rapidly.

    Rapidly analyzing the materials in a combinatorial grid to tell what's been created is also a potential stumbling block. Even if researchers know what elements they have added into a promising new phosphor, for example, they have no idea whether the elements have mixed evenly or remained segregated, which means that they don't know the exact structure of the resulting material.

    Wu says these concerns are valid, but he notes that researchers at Symyx and elsewhere are developing screens and characterization schemes that work on arrays of compounds all at once. Xiang and his LBNL colleagues, for example, are finishing work on a pair of new microscopes that will be able to assay the electronic and magnetic properties of between 1000 and 5000 superconducting samples a day. Schultz adds that whatever the hurdles in scaling up production from a tiny sample to a useful amount of product, combinatorial techniques speed up the time-consuming front end of the discovery process, identifying promising compounds for further testing and development.

    But even if the technical problems can be solved, there's yet another barrier to be overcome: money. Symyx is spending millions of dollars on robotic synthesis and screening machines, computerized databases, and an array of experts to carry out all the synthesis, characterization, and informatics tasks involved. At the same time, “there's not quite as much of an economic driving force” for new materials as there is for new drugs, says DARPA's Dubois. This combination of high start-up costs and uncertain returns has left many company research directors feeling torn. “It's a very interesting and intriguing approach,” says chemist Gerald Koermer of Engelhard Corp. in Iselin, New Jersey. However, he adds, “most research managers with budget responsibilities don't want to take too big of a gamble” on an unproven technology. “Their tendency is to hold back until the odds get better.”

    As a result, most companies are hedging their bets. DuPont researchers, for example, established a combinatorial chemistry effort to develop new agrochemicals 3 years ago, so they were able to put their in-house machinery and expertise to work in a small, exploratory combinatorial materials effort. Hoechst is using its deal with Symyx to buy access to the technology without having to establish its own effort, a model also being explored by officials at Dow and Engelhard. Such deals represent “a relatively small amount of the overall research budget” for industrial giants, says Zaffaroni. And if the venture doesn't pan out, the big companies can walk away without disbanding an internal effort.

    Of course, Zaffaroni and other true believers are confident that won't happen. And if their optimism about the new technology is justified, says Ezzell, “there will be a lot of people in this sandpile before all is said and done.”


    Biologists Cut Reductionist Approach Down to Size

    1. Nigel Williams

    London: For the past 2 centuries, scientists have been animated by the belief that a complex system can be understood by seeking out its most fundamental constituents. This approach, known as reductionism, views nature as something of a Russian doll: Features at one layer are explained by the properties of the layer below. Hence, physicists search for the basic particles and forces; chemists seek to understand chemical bonds; and biologists scrutinize DNA sequences and molecular structures in their efforts to understand organisms.

    Among biologists, however, resistance is growing to what physicist Steven Weinberg of the University of Texas, Austin, has called “grand reductionism”: the idea that the most fundamental layer of nature holds an explanation for all the features of the outer, higher layers. A group of 30 distinguished biologists and other researchers met here recently at the Ciba Foundation to discuss the future of the reductionist approach. The meeting saw some spirited attacks, as biologists discussed cases where reductionism falters, drawn from fields ranging from cell biology to behavioral strategies. But other talks demonstrated how powerful the approach remains, epitomized by computer models of embryonic development that show how just a handful of molecular signals can give rise to complex patterns. Many participants came away concluding that reductionism is just one of many tools biologists will need to fathom the mysteries of nature.

    Thomas Nagel of New York University, the only philosopher at the meeting, presented a two-part case against reductionism. First, he said, even though nature could, in principle, be explained in terms of universal basic laws, in practice our finite mental and computational capacities mean that we either cannot grasp the ultimate physical explanation of many complex phenomena—or we can't fruitfully link this basic level to higher order phenomena. The second, more controversial, part of Nagel's argument was that additional principles, not evident in the laws governing basic constituents, are needed to explain higher order phenomena.

    Such additional principles might be required to explain what biologists refer to as “emergent properties”—properties that are found at higher levels of organization of an organism but are absent at lower levels, as if one layer of the Russian doll had features in its appearance totally unrelated to the doll below. Emergent properties are found in chemistry and physics: Temperature, for example, is a property of a collection of particles and is irrelevant to individual particles. But biologists can point to an extraordinary array of features, such as genes, eyes, and wings, that are meaningless on the level of atoms and molecules.

    Cell biologist Paul Nurse of the Imperial Cancer Research Fund in London says that even a biological system as small as a single cell exhibits a wealth of features that are absent at the molecular level: positional information, compartmentalization, kinetics of signaling, oscillations, and rhythms, to name a few. By focusing on molecules, he believes cell biologists may be missing out. “Cell biologists haven't yet thought much about this,” he says.

    Many features, however, are not obviously emergent; only a detailed analysis of complex interactions at the organism level shows that they are genuinely new, appearing only at higher levels. For example, physiologist Denis Noble of Oxford University has been studying a key ion-transporter molecule found in heart muscle that is involved both in maintaining normal rhythm and in some life-threatening abnormal rhythms. Attempts to treat the abnormal rhythms by chemically blocking this transporter had been unsuccessful, so Noble set about producing a computer model of its activity. To create an effective model, he found he had to consider not only how the molecule behaves throughout a key population of cells responsible for generating cardiac rhythm, but also how those cells are linked to other regions of the heart. These sophisticated interactions showed why simple blocking drugs had been “spectacularly disappointing,” says Noble.

    Other computer models seemed to even the score for reductionism. Biologist Michel Kerszberg of the Pasteur Institute in Paris has developed computer models of signal-transduction pathways in early embryos created by gradients of chemical signals called morphogens, produced by the embryonic cells. “I can show from the complexity of morphogen concentration gradients in the embryo surprising precision and reliability in signal delivery to the appropriate cells,” he says. “The number of variables needed to create complex states is sometimes very small, often three or fewer,” says biophysicist Benno Hess of the Max Planck Institute for Medical Research in Heidelberg, Germany.

    Another analysis, however, showed that biological complexity can be modeled without any regard to its molecular constituents, presenting a challenge to reductionism. Economic game theory, which was developed to study how humans respond to various economic situations, can often predict complex behavior of organisms without analysis at the molecular level. Computer models in which some individuals behave as “hawks,” taking an aggressive stance in interactions, while others, called “doves,” take a passive stance have helped reveal strategies that bear a stunning similarity to some real situations. “Game theoretical models move up rather than down the reductionist scale,” says biologist John Maynard Smith of the University of Sussex in the United Kingdom, a pioneer in applying game theory to biology.
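    The hawk-dove logic that Maynard Smith pioneered can be sketched in a few lines of code. The payoff values below (V, the worth of the contested resource, and C, the cost of an escalated fight) are illustrative numbers, not figures from the meeting; the point is that replicator dynamics settle on a stable mixture of strategies without any molecular detail:

```python
# Hawk-dove game: a minimal sketch of the game-theoretic models described
# above. V (resource value) and C (fight cost) are illustrative numbers.
V, C = 2.0, 4.0

def payoff(me, other):
    """Expected payoff of playing strategy `me` against strategy `other`."""
    if me == "hawk":
        return (V - C) / 2 if other == "hawk" else V
    return 0.0 if other == "hawk" else V / 2  # dove

def replicator(p_hawk, steps=20000, dt=0.01):
    """Let the hawk fraction of the population evolve under replicator dynamics."""
    for _ in range(steps):
        w_hawk = p_hawk * payoff("hawk", "hawk") + (1 - p_hawk) * payoff("hawk", "dove")
        w_dove = p_hawk * payoff("dove", "hawk") + (1 - p_hawk) * payoff("dove", "dove")
        # Strategies that outperform the average grow; here that reduces to
        # growth proportional to the hawk-dove payoff difference.
        p_hawk += dt * p_hawk * (1 - p_hawk) * (w_hawk - w_dove)
        p_hawk = min(max(p_hawk, 0.0), 1.0)
    return p_hawk

# Starting from very different populations, both converge to the same
# evolutionarily stable mixture of V/C hawks (here 0.5).
print(replicator(0.1), replicator(0.9))
```

    When fighting costs more than the resource is worth (C > V), neither pure strategy can take over: the population settles at a fraction V/C of hawks, a prediction of exactly the "moving up" kind Maynard Smith describes.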

    But perhaps the biggest challenge to reductionism comes from the concept of information. In biology, information is carried and received by molecules, which is fully consistent with reductionist principles of physics and chemistry. Yet natural selection has often produced multiple chemical and physical ways of conveying the same message, so that by looking only at the molecules, researchers may be missing the message. “Information is not reducible,” says psychologist Jeffrey Gray of the Institute of Psychiatry in London. About 50% of the genome of a multicellular organism may code for proteins involved in cell signaling, says biologist Dennis Bray of Cambridge University; hence, organisms can be viewed as complex information-processing systems, where molecular analysis alone may not be sufficient. “There's a need to realize that information may be transmitted in ways that may be lost by studying molecules alone,” says Nurse. “It may not be possible or even necessary to explain all cellular phenomena in terms of precise molecular interactions.”

    Participants argued that genetic redundancy poses a similar challenge to reductionism. Researchers who have created genetically modified organisms in which a single gene has been deleted or blocked, known as “knockouts,” have often been surprised to find that some other gene can take over at least part of its function. “Some mouse knockouts have turned out to be messy,” says Nurse. These lessons have forced researchers to look harder at how genes map onto the form, or phenotype, of developing organisms, particularly their behavior. Biologist Sydney Brenner, of the Molecular Sciences Research Institute in La Jolla, California, argues that studying genes can help researchers understand how organisms are put together, but may not be helpful in describing some of the ways they function. “You can map genes onto behavior, but [the map] doesn't give you a causal explanation,” he says.

    Participants in the conference agreed that reductionism has a future in biology—but only as one approach among many. A growing number of questions will require other approaches. Some delegates felt that a deeper understanding of the role of information may yet throw a spanner in the grand reductionist scheme and that Nagel may be right in his suggestion that additional principles are needed. Asks biochemist Max Perutz of Cambridge's Laboratory of Molecular Biology: “Will there be new laws of biology?”


    Model Explains Internet 'Storms'

    1. Charles Seife
    1. Charles Seife is a writer in Riverdale, New York.

    Every cyber-junkie knows that the Internet is a crowded place. As computers send volumes of data from server to server, phone lines fill up, causing Internet traffic jams—and making Web browsers chug away in fruitless attempts to retrieve information. Then, moments later, the congestion abates. On page 535, two physicists present mathematical and computer models that point to the causes of these Internet “storms.” The explanation, say the researchers, lies not in technology but in social behavior: Millions of users who have no incentive to economize flood the Internet with data, clogging it, and then get discouraged, relieving the congestion—all at roughly the same time.

    The researchers, Bernardo Huberman and Rajan Lukose of the Xerox Palo Alto Research Center in California, aren't the first to recognize that the Internet tends to be overused because most users pay a flat rate for unlimited access. Instead, their achievement is to show exactly how these incentives lead to the spates of congestion seen on the Internet, says Kenneth Steiglitz, a computer scientist at Princeton University. “You look at the Internet and say, ‘My god, it's a mess; nobody's going to understand it,’ but Huberman gets qualitative insights into very complicated problems,” says Steiglitz. Huberman himself thinks these insights might eventually point to ways to unclog the Internet.

    He explains that everyone who logs onto the Internet faces a “social dilemma” like the one posed by a group dinner in which the bill will be split evenly. If you are in a selfish mood, you might order a lobster, hoping that your friends will economize and choose the salad. Because the price of the lobster gets split among the whole group, you would pay little for a sumptuous meal. But your friends see no reason why they should settle for salad while you order shellfish. They order the pricey lobster as well, placing a heavy demand on the lobster chef and leaving the whole group with a hefty bill.
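    The split-bill reasoning can be checked with a few lines of arithmetic. The prices and enjoyment values below are invented for illustration; the structure, not the numbers, is what matters:

```python
# Split-bill dilemma: with the bill divided n ways, each diner pays only
# 1/n of the extra cost of a lobster but keeps all of the extra enjoyment.
# All numbers here are made up for illustration.
n = 10
salad, lobster = 10.0, 40.0        # menu prices
u_salad, u_lobster = 12.0, 20.0    # subjective enjoyment of each dish

def net_utility(my_price, my_enjoyment, others_total):
    """Enjoyment minus my share of the evenly split bill."""
    return my_enjoyment - (my_price + others_total) / n

# Whatever the other nine order, lobster is individually better...
for others in (9 * salad, 9 * lobster):
    assert net_utility(lobster, u_lobster, others) > net_utility(salad, u_salad, others)

# ...yet a table of ten lobster-eaters ends up worse off than a table of
# ten salad-eaters: the same structure as flat-rate Internet use.
print(net_utility(salad, u_salad, 9 * salad))        # 2.0
print(net_utility(lobster, u_lobster, 9 * lobster))  # -20.0
```

    With these numbers, the lobster costs each diner only 3 extra dollars of shared bill but delivers 8 extra units of enjoyment, so ordering it is individually rational; collectively, everyone ends up in the red.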

    The Internet is like one big, expensive dinner where no one expects to pay his share, says Huberman. Because of flat-rate pricing, people have no incentive to limit the size of their downloads, their Web meanderings, their e-mail, or their Internet chatting. As everyone consumes bandwidth—just as when everyone consumes lobsters—there is a price to pay: in this case, congestion. “Individually, their actions are rational, but collectively they're suboptimal,” says John Bendor, a political scientist at Stanford University. The result of this collective display of self-interest, adds Lukose, is “overusing and degrading the value of resources. That's the tragedy of the commons.”

    But unlike the gradual deterioration of other common resources—for example, the atmosphere, where countries see no incentive to reduce greenhouse-gas emissions if other countries don't cut back as well—the Internet's congestion is sporadic. “There are short spikes of congestion,” says Huberman, “on the order of seconds or tens of seconds.” To explain this behavior, he and Lukose created a mathematical model of Internet use in which each user behaves rationally, overusing the Internet most of the time but logging off when congestion becomes too great.

    The model borrows from statistical mechanics, a branch of physics that deals with the collective effects of many simple objects, such as molecules or magnetic spins. It predicted the statistical properties of the network delays as many agents—each representing an Internet user—logged on and off. The result was a so-called lognormal distribution, which resembles a skewed bell curve, with “latency”—the extra time it takes to send a packet back and forth—on the horizontal axis and the frequency with which a user encounters each latency on the vertical axis. Most of the time, the latency fell on the hump of the distribution, and the delays were small. But every so often—on the tail of the distribution—the delays spiked in an Internet storm as a large number of users put a load on the system at the same time.
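    A drastically simplified agent model, in the spirit of (but far cruder than) Huberman and Lukose's, reproduces the qualitative behavior: long stretches of low latency punctuated by spikes when too many users pile on at once. All parameters here are invented:

```python
import random

# Toy congestion model: each tick, active users load the network; delay
# blows up as load nears capacity, discouraged users log off, and idle
# users drift back on. Parameters are illustrative, not from the paper.
random.seed(1)
N, capacity = 100, 60.0
active = [True] * N
delays = []

def latency(load):
    # Queueing-style delay: grows sharply as load approaches capacity.
    return 1.0 / max(capacity - load, 0.5)

for _ in range(5000):
    load = sum(active) * random.uniform(0.4, 0.6)  # noisy per-user demand
    d = latency(load)
    delays.append(d)
    for i in range(N):
        if active[i] and d > 0.05 and random.random() < 0.5:
            active[i] = False   # too slow: give up for now
        elif not active[i] and random.random() < 0.3:
            active[i] = True    # discouragement fades: log back on

# The delay distribution is strongly right-skewed, as a lognormal would be:
# the mean sits above the median, and rare spikes dwarf the typical delay.
mean = sum(delays) / len(delays)
median = sorted(delays)[len(delays) // 2]
print(mean > median, max(delays) / median > 2)
```

    Even this caricature shows why the congestion is intermittent rather than steady: users pour on until delays cross their tolerance, then abandon the network almost together, and the cycle restarts.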

    To see what this statistical behavior would mean in the real world, Huberman and Lukose wrote a computer model based on the equations. By plugging in values for such things as the network bandwidth, the number of users, and how much congestion it takes to discourage a user, they were able to simulate congestion on an actual network. They also tested the mathematical model's predictions by timing how long it took to send packets of data from Stanford to England and back. Sure enough, they measured spikes of congestion distributed roughly in a lognormal curve.

    Explaining Internet storms may prove easier than controlling them, because doing so will entail changing the behavior of vast numbers of users. “As the size of the group grows, it gets tougher to produce collective levels of common good,” says Bendor. The answer, Huberman thinks, will lie in new pricing schemes, such as a pay-per-packet scheme or a priority-pricing method (the Internet equivalent of Federal Express). Huberman hopes to put his model to work studying the effects of various changes in incentives on Internet congestion. But one thing is already clear, he says. “I don't think the idea of a [flat-rate] Internet will go on forever.”

    Cyberspace weather. An Internet weather site maps delays between Austin, Texas, and hosts around the world.


    Gram-Positive Bacterium Sequenced

    1. Nigel Williams

    The first genome sequence from an important group of bacteria that includes both commercially useful and pathogenic strains has been completed by an international team led by researchers in the European Union (EU) and Japan. After 5 years of work, leaders of the team of 37 laboratories announced the complete sequence of the 4.2-million-base genome of Bacillus subtilis at a meeting on the organism at the University of Lausanne in Switzerland last week.

    The new sequence joins the published genomes of nine other microbes. This one, however, is an important industrial source of enzymes used in detergents, baking, and the manufacture of vitamins. It is also a member of the so-called “gram-positive” group of bacteria, which includes notorious pathogens such as Staphylococcus aureus, the scourge of surgical patients; Streptococcus, which causes middle-ear infections, pneumonia, and meningitis; and the pathogens responsible for tetanus, anthrax, and diphtheria. “The sequence information should help boost our understanding of the mechanisms of protein secretion and pathogenesis in gram-positive bacteria,” says team coordinator Frank Kunst of the Pasteur Institute in Paris.

    The EU spent $5.3 million to sequence 60% of the genome, with Japan sequencing a further 30%. One Korean and two U.S. laboratories helped complete the sequence. Although comprehensive analysis of the genome will take several years, researchers have already spotted features of interest. “Several genes encoding proteins with potential antibiotic properties have been identified,” says Kunst.

    The genome, a circular double strand of DNA, displays unusual features at the sites where DNA duplication begins and ends before cell division. The frequency of certain dinucleotides is higher than usual at the site where replication begins and lower at the end. “Compared with other bacteria already sequenced, this may help us understand how the DNA sequence is interpreted by the bacterium to initiate duplication,” says Kunst.
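    Compositional asymmetries of the kind Kunst describes are typically found by sliding a window around the circular chromosome and scoring each window. The sketch below uses GC skew, (G - C)/(G + C), a related and widely used replication signal, on a synthetic sequence (not the real B. subtilis genome); the skew changes sign at the candidate origin and terminus:

```python
import random

# Locate composition flips on a circular "genome" with a sliding window.
# The sequence is synthetic: G-rich on one half, C-rich on the other,
# mimicking the strand bias that replication introduces.
random.seed(0)
half = 5000
genome = ("".join(random.choice("GGATC") for _ in range(half)) +
          "".join(random.choice("CCATG") for _ in range(half)))

def gc_skew(seq, window=500):
    """GC skew, (G - C) / (G + C), for each window around the circle."""
    doubled = seq + seq  # cheap way to let a window wrap around the circle
    skews = []
    for start in range(0, len(seq), window):
        w = doubled[start:start + window]
        g, c = w.count("G"), w.count("C")
        skews.append((g - c) / (g + c))
    return skews

skews = gc_skew(genome)
# Sign changes between consecutive windows mark candidate origin/terminus.
flips = [i for i in range(len(skews)) if skews[i] * skews[i - 1] < 0]
print(flips)  # the two composition regimes meet at windows 0 and 10
```

    On a real sequence the signal is noisier, but the same window-and-score approach underlies most computational searches for replication origins.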

    Researchers will also be paying close attention to genomes within the genome: the integrated sequences of several bacterial viruses called bacteriophages. Under unfavorable conditions, these viruses kill the host cell and infect new ones. But some of the bacteriophages in B. subtilis also appear to contribute genes that aid the host bacterium by helping it resist harmful substances such as heavy metals. Others carry toxin genes, which may be responsible for some of the pathogenic properties of other gram-positive bacteria.

    Now that the sequencing has been completed, the EU plans a follow-up program to study the function of each of B. subtilis's estimated 4000 genes. “Comparison with genomes of other bacteria and other organisms will provide us with the most complete understanding of what is required to sustain microbial life,” says Kunst.


    Meteorite Grains Trace Wandering Stars

    1. David Ehrenstein

    Some of the tiny dust grains found inside meteorites may be the relics of stars that migrated from distant parts of our galaxy, says astrophysicist Donald Clayton of Clemson University in South Carolina. In a report presented this week at the Meteoritical Society meeting in Hawaii and in this week's Astrophysical Journal Letters, Clayton presents what astrophysicist Frank Shu of the University of California, Berkeley, calls a “pioneering” theory. He argues that isotope ratios in the grains record how the stars that made them drifted long distances outward from the galactic center during their lifetimes.

    Scientists have known for a decade that certain microscopic particles found in meteorites—flecks of graphite, aluminum oxide, and other materials—predate our solar system. Isotopes of common elements occur in unusual ratios in the grains, convincing researchers that the grains condensed more than 5 billion years ago in the atmospheres of other stars. The stars then shed them into interstellar space, and they ended up in the cloud of material that coalesced to form our solar system.

    One particular kind of grain, made of silicon carbide, presented a special puzzle: These grains contain a higher ratio of heavy silicon isotopes to normal silicon than is found in the sun. Yet the abundance of heavy isotopes in the galaxy has been increasing over time as they are forged in the cores of massive stars. Because the meteorite grains formed before the sun did, the isotope ratio they captured should be lower than the sun's, not higher.

    There is one region where heavy isotopes might have been plentiful enough to explain the dust grains: toward the galaxy's center, where large numbers of massive stars have lived and died. In Clayton's scenario, stars that formed closer to the center of the galaxy migrated outward. Their lazy orbits around the galaxy, he says, would have taken them past huge gas clouds that gave them a “gravity assist,” boosting them into larger orbits just as Jupiter's gravity gave a boost to the Voyager space probes. Eventually these wandering stars ended up near the sun's future birthplace, where they exploded, depositing their remains.

    This picture, says Clayton, was inspired by a recent suggestion that the sun has itself drifted outward during its 4.5 billion years of life, explaining why our solar system is richer in heavy elements than neighboring stars. The stars that spawned the anomalous grains, he says, were born even closer to the galactic center than the sun. He thinks his new scenario will allow researchers to treat the isotopic compositions of these grains as records of these stellar wanderings. If his theory is correct, he says, meteorite grains “offer a whole new tool for exploring the structure and evolution of our galaxy.”

    Shu points out that such new models are often incorrect in some details, but he thinks the basic picture of stars moving radially through the galaxy is probably correct. And he praises Clayton for showing how only milligrams of dust have “tremendous stories to tell about things that [occurred] very long ago and very far away.”


    SOHO Probes the Sun's Turbulent Neighborhood

    1. Alexander Hellemans
    1. Alexander Hellemans is a writer in Paris.

    Earth's atmosphere may seem like a tumultuous place with its hurricanes and thunderstorms, but it pales next to the atmosphere of the sun. This vast mantle of gas, dubbed the chromosphere in its lower reaches and the corona at higher altitudes, is the scene of processes that heat it to millions of degrees, unleash huge jets and arcing filaments of gas, and launch great bubbles of matter called coronal mass ejections (CMEs) into space. This turmoil is felt throughout the solar system, because it generates a relentless “solar wind” that distorts Earth's magnetic field and blasts material off comets, producing their tails. Now solar physicists are getting their clearest look yet at what drives solar weather, courtesy of the European Space Agency's Solar and Heliospheric Observatory (SOHO).

    From its vantage point 1.5 million kilometers sunward of Earth, SOHO's 11 instruments have been watching the sun's every move, from deep-seated pulsations in its visible surface—the photosphere—to the far reaches of the corona at distances of several solar radii. SOHO data are now beginning to put flesh on physicists' models of how processes such as the relentless dance of magnetic field lines above the solar surface—storing and then violently releasing energy—stir the corona. “It is the first time we have been able to view the sun's atmosphere at different levels and temperatures together,” says Richard Harrison of the Rutherford Appleton Laboratory in the United Kingdom. Last month, 300 solar physicists gathered in Oslo, Norway, at a SOHO workshop—the first since SOHO's launch last year (Science, 10 May 1996, p. 813)—to discuss the results so far. They include hints that the magnetic field is directly involved in accelerating the solar wind; that the field heats the corona by carrying energy outward from the sun's churning surface; and that eruptions of solar prominences and CMEs are the visible signs of global magnetic disturbances.

    Overall, the sun's magnetic field is roughly like that of a bar magnet. At lower latitudes, the field lines are “closed,” arcing from pole to pole. But above the poles, the lines point out into space, and it is there that SOHO's ultraviolet coronagraph spectrometer (UVCS) has gathered evidence for the field's role in accelerating the solar wind. The UVCS, which analyzes the corona's ultraviolet spectrum, shows that where the field lines are open, gas particles “boiling off” the atmosphere seem to be accelerated to very high velocities by the field, smearing out their spectral lines. “Our spectroscopic data are telling us that in the regions where the [field] lines are open, and where there is flow of the solar wind, [spectral] line profiles are particularly wide,” says UVCS team member Ester Antonucci of Turin University in Italy. The correlation between open field lines and high velocities makes the magnetic field the prime suspect in the acceleration of the wind.

    The acceleration mechanism, whatever it is, seems to do its job efficiently. Researchers had always assumed that the magnetic field needed a lot of space to accelerate solar wind particles, but SOHO's visible light telescope, the Large Angle and Spectrometric Coronagraph Experiment (LASCO), has observed particles in the polar regions being accelerated to high velocities close to the surface. “The surprising result is that the acceleration of that wind seems to occur below two solar radii—this is against all theories,” says LASCO team leader Guenter Brueckner of the U.S. Naval Research Laboratory in Washington, D.C.

    SOHO is also gathering evidence that magnetic effects are responsible for heating the corona to its million-degree temperature, which is hundreds of times hotter than the photosphere. For example, the group working with the Michelson Doppler Imager (MDI), which detects oscillations on the sun's surface as well as in magnetic fields, found a correlation between magnetic phenomena in the photosphere and fluctuations in the intensity of emission lines of oxygen ions much higher up, closer to the corona, indicating that energy from the churning of the surface was being transferred to the upper atmosphere. “The key word [for the energy-transfer process] is reconnection,” says Olav Kjeldseth-Moe of the Institute of Theoretical Astrophysics in Oslo. “We have evidence [from the MDI] that in the regions where there are usually closed fields, some of the field lines open up.” Researchers believe that the swirling of the sun's atmosphere then sweeps the lines around until lines pointing in different directions meet and reconnect. Such reconnections release huge amounts of energy from the magnetic field and hence heat up gas in the atmosphere.

    According to SOHO's instruments, magnetic disturbances also drive some of the most spectacular events in the solar atmosphere, in which the corona spews huge clouds of gas into space in CMEs. Two very large CMEs took place in January and April this year, captured in spectacular photographs by LASCO, which blocks light coming directly from the sun to get a clear view of the corona. The largest CMEs launch up to a billion tons of gas into space in one go. “They are really carrying more matter than was thought before, and are explaining 50% of the slow solar wind,” says Antonucci.

    Antonucci reported how the UVCS team observed the twisting of magnetic fields at the onset of a CME. Field lines can be viewed as rubber bands in the ionized gas of the corona, storing up energy and later violently releasing it. “Magnetic energy is transformed into energy supplied to the lifting of the CME,” she says, adding that CMEs often seem to be linked. “Sometimes one [CME] appears from a region of the sun, and immediately another one comes out from another part. This really means that there is a global kind of disruption of magnetic fields.” Jean-Pierre Delaboudinière of the Institute of Space Astrophysics at Orsay, head of the extreme ultraviolet imaging telescope team, also reported that CMEs seem to trigger global disturbances: “One out of three is particularly spectacular and starts up with an exploding protuberance, a mass of gas emerging from the solar surface, which produces a spherical shock wave that propagates throughout the solar surface.”

    While researchers are only just beginning to fill the gaps in their solar models, they have high expectations for SOHO's steady flow of data. One factor contributing to the optimism is that the scientific groups can control their instruments directly, instead of having to follow observation schedules set by the space agency. “You can plan like in a ground-based observatory. … This is unique,” says Harrison. And, with just a few exceptions, the spacecraft and its 11 instruments have been functioning almost flawlessly. Says Harrison: “Overall, it has been a resounding success.”


    How Jet-Lag Hormone Does Double Duty in the Brain

    1. Marcia Barinaga

    Countless travelers pop melatonin pills in an effort to get back in sync when they fly across time zones, and studies have shown that the hormone can be quite effective. But researchers have had a hard time saying why, because they understand so little of how melatonin affects the brain.

    Used as a drug, the hormone is capable of resetting the brain's 24-hour circadian clock, turning it forward a bit when given at dusk and back when given at dawn. However, the shifts melatonin causes are less than an hour long—barely enough for a New Yorker to adjust to Chicago time—and that has led some researchers to expect that the key to the hormone's effect on jet lag lies in other influences it may have on the brain. A new study reported in today's issue of Neuron now offers some clues about what those influences might be.

    By creating genetically altered mice that lack the major melatonin receptor, Steven Reppert at Harvard Medical School in Boston, Val Gribkoff at Bristol-Myers Squibb Pharmaceutical Research Institute in Wallingford, Connecticut, and their colleagues found to their surprise that this receptor is not responsible for melatonin's clock-shifting effects. Those seem to be the domain of a much rarer receptor. Instead, the major receptor underlies a second—and previously largely overlooked—function of melatonin: turning down the activity of neurons in the suprachiasmatic nucleus (SCN), the part of the brain that contains the circadian clock.

    Melatonin is secreted by the pineal gland at night, at which time, Reppert suggests, its normal function may be to keep the clock from being activated and inadvertently reset by stray bursts of neural activity. Others speculate that the SCN inhibition could also account for another of melatonin's effects when taken as a drug—its ability to induce sleep, which may provide the missing link in the jet-lag mystery.

    The discovery that melatonin's effects can be traced to different receptors means such speculations should be easier to test by experimentally isolating and studying melatonin's different actions on the brain, says neuroscientist Michael Menaker, who studies circadian clock evolution at the University of Virginia, Charlottesville. Circadian rhythm and sleep disorder researcher Charles Czeisler of Harvard Medical School adds that the separation of melatonin's actions also has potential value for drug development. It suggests, he says, that “analogs of melatonin could be developed that are specific to one or the other receptor,” acting specifically as clock resetters or sleeping pills.

    The current finding grew out of work in which Reppert's group cloned a family of genes that code for melatonin receptors. Mice have two of the genes, known as 1a and 1b, but because nearly 100% of the detectable melatonin receptors in the brain are of the 1a type, Reppert's team expected that they would carry out essentially all of melatonin's effects.

    But that's not what the researchers found when they knocked out the gene for the 1a receptor in mice. To see how the lack of the receptor would affect melatonin's phase-shifting ability, Chen Liu, a postdoc in Reppert's lab, used a so-called “clock-in-a-dish” method, in which he removed the SCNs from mice and kept them alive in laboratory dishes. Work by several other teams had shown that the clock continues to run for several days under these conditions: The activity of SCN neurons rises during the day and falls at night just as it does in the intact animal. In addition, the clock in a dish responds to melatonin: The hormone suppresses SCN activity, and it sets the clock forward or back when given at dusk or dawn.

    The surprise came when the researchers looked for that phase shift in SCNs removed from the knockout mice. “We got a very potent phase-shift response,” says Reppert. That was unexpected, he says, because “99.9% of the melatonin receptors are gone.” Preliminary evidence suggests that the 1b receptor, although so scarce as to be nearly undetectable, may be responsible for the phase shift. For example, the team found that pertussis toxin, which inhibits both 1a and 1b receptors, blocks the effect. To be sure of the 1b receptor's role, the Reppert team plans to create and study mice lacking the 1b receptor gene.

    While melatonin could still shift the clock in the SCNs from the knockout mice, it totally lost its ability to inhibit SCN activity, suggesting, says Reppert, that SCN inhibition is the main melatonin action mediated by these receptors. “That effect is a bona fide action of melatonin,” says Reppert, “and we have to think more seriously about what that means for the biology of the animal.” There are a lot of things that can reset the clock, such as the neural activity generated if an animal is physically active at an unusual time. Reppert suggests that melatonin may act to quiet the SCN at night, “providing a level of security” by preventing such bits of neural activity from resetting the clock. Light, the most powerful phase shifter, would be the one exception, he says, because it turns off melatonin production.

    That general quieting of the SCN by melatonin could explain why the hormone can relieve jet lag when it is given as a drug, says melatonin researcher Vincent Cassone of Texas A&M University in College Station. Melatonin's phase shifting alone cannot explain the effect, Cassone argues, and so the key to jet-lag relief may instead be its sleep-inducing effects. Those in turn may be rooted in its ability to hush the SCN. “The output of the SCN potently drives waking at certain points,” agrees Harvard's Czeisler. By inhibiting the SCN, melatonin taken as a drug may be “tempering that drive for wakefulness.” If future studies in whole animals can verify hypotheses like these, then the way is open to design drugs to optimize those effects.

    But until researchers sort out the many remaining mysteries about melatonin and its receptors, they advise jet-lagged travelers to lay off the melatonin pills. Too much melatonin could “numb” the SCN, Reppert warns. The hormone also has potent effects on the reproductive system of many mammals, especially seasonal breeders like hamsters and deer. Even though humans aren't seasonal breeders, melatonin may have as yet unknown reproductive effects in humans as well. “All this ignorance,” says Menaker, “makes it doubly important that people don't take melatonin indiscriminately.”


    Quantum Spookiness Wins, Einstein Loses in Photon Test

    1. Andrew Watson
    1. Andrew Watson is a science writer in Norwich, U.K.

    “I cannot seriously believe in [the quantum theory] because it cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance,” wrote Einstein to the German physicist Max Born in March 1947. Einstein was particularly bothered by quantum theory's oddball claim that the states of two particles or photons can influence each other no matter how far apart they are. Despite Einstein's misgivings, researchers have gone on to demonstrate the reality of quantum spookiness, and now—just 140 kilometers as the photon flies from Bern, where Einstein did some of his greatest research—a Swiss group has provided the best demonstration yet of quantum “action at a distance.”

    Photons phone home. A photon source plugged into the Geneva telephone exchange sends pairs of correlated photons to two nearby villages.

    The new result, announced earlier this month at a quantum computation workshop in Turin, Italy, and hailed as “very important” by Boston University theorist Abner Shimony, shows that links between quantum entities persist over distances of up to several kilometers. Some theorists have speculated that these correlations would weaken with distance, says another quantum mechanics expert, John Rarity of the Defence Evaluation and Research Agency in Malvern, United Kingdom. But in the Swiss result, “we've now got to 10 kilometers' separation, and quantum mechanics is apparently still holding.”

    As early as 1935, the Austrian physicist Erwin Schrödinger, a father of quantum theory, pointed out that the theory allows a single, pure quantum state—a particular polarization, for example—to be spread across two objects, such as a pair of simultaneously created photons. In the lingo of quantum mechanics, the photons are “entangled,” and they remain entangled even when they fly apart. Quantum theory then predicts that a measurement on one photon will influence the outcome of a measurement on its distant twin. This is the action at a distance that Einstein detested, as it appears to be at odds with the prohibition of faster-than-light effects in his theory of special relativity. But short-range laboratory experiments, notably those of Alain Aspect and his colleagues in Paris in 1982, have backed the quantum claim.

    With a little help from Swiss Telecom, Nicolas Gisin and his group at the University of Geneva have now demonstrated quantum action at a distance on a large scale by turning the countryside around Geneva into a giant quantum laboratory. Gisin's team created pairs of entangled photons, using a specially constructed, suitcase-sized generator in central Geneva, and sent them through fiber-optic lines to the two small villages of Bellevue and Bernex, 10.9 kilometers apart, where the streams of photons were analyzed and counted.

    The total energy of each entangled pair is fixed, but the energy of each photon in a pair can vary within a narrow range. An analyzer is effectively an energy filter, offering each photon a random choice: either to be counted or to be lost from the experiment. Each photon makes its choice depending on its energy and the setting of the analyzer, explains Wolfgang Tittel, one of Gisin's colleagues. When the photon counts were relayed to Geneva via a second fiber-optic system and compared, they turned out to be correlated. Each photon in a pair knows what its distant partner does, and does the same thing.

    “Even if you change a [setting] only on one end, it has an influence on what happens on the other end,” Gisin says. “There is indeed spooky action at a distance, in the sense that what happens at one detector has some influence on what happens at the other one.” Adds Shimony, “It is spooky in the sense that causation is a more subtle relation than we had ever realized. I think Einstein loses on this point.”

    The implication, says Rarity, is that certain properties of the photon twins aren't defined at the moment the pairs are created. “This is really another nail in the coffin of that world view which says that certain quantities exist before measurement,” he says. “It turns out that they don't.” Instead, the photons acquire a particular state only when a measurement is made on one of the pair, instantly determining the state of the other.

    Shimony adds that the result is “pretty definitive disproof that entanglement falls off with distance,” contrary to proposals by some, including the late British theorist David Bohm. Indeed, it hints that quantum events in a far corner of the universe might influence events here on Earth.

    Gisin points to more down-to-earth implications for telecommunications, implications presumably not lost on Swiss Telecom: “If these correlations hold over very long distances … then they could be exploited for a variety of applications, especially quantum cryptography.” Contrary to Einstein's fears, quantum correlations can't be exploited to transfer information faster than light. “You cannot control what will be transmitted; therefore, you can't send an SOS message, or any other … message, by means of quantum correlation,” says Shimony. But these correlations could in principle create two perfect copies of random digits in two places, and those shared digits could serve as a cryptographic key.

    Not only would the transmission be error free; it might also be uncrackable. “Any eavesdropper who tried to eavesdrop these quantum channels would break the correlation,” explains Gisin; the two parties could detect the intrusion by comparing parts of the received signals, which should be identical. Quantum spookiness might be just the thing to foil a spook.

    Human-Dominated Ecosystems

    1. Richard Gallagher,
    2. Betsy Carpenter

    Ecologists traditionally have sought to study pristine ecosystems to try to get at the workings of nature without the confounding influences of human activity. But that approach is collapsing in the wake of scientists' realization that there are no places left on Earth that don't fall under humanity's shadow. Furthermore, many scientists now believe that eventually all ecosystems will have to be managed to one extent or another, and, to do this well, managers will need sound scientific advice.

    In this Special Issue, Science takes a look at the progress made to date in the study of human influence on the planet. The focus is on the science rather than the politics of the topic, but authors were invited to explore the policy implications of their findings. The global reach of their efforts is underscored by the fact that the authors are from five different continents.

    The Editorial is contributed by a politician, Gro Harlem Brundtland, who realized very early on the importance of considering scientific findings when crafting legislation for resource management. She chaired the World Commission on Environment and Development, and the publication a decade ago of the commission's report, “Our Common Future,” was a pivotal event, helping to spark the current political interest in global environmental issues.

    Opening the Articles, Peter Vitousek and colleagues provide a synthesis of the biotic and abiotic influences of humans, demonstrating beyond all doubt that this is indeed a human-dominated planet. F. Stuart Chapin III and co-authors discuss the influence of individual ecosystem components on ecosystem functioning as a whole. Current practices and the possibilities for sustainable agriculture, fishing, and forestry are assessed by Pamela Matson and colleagues, Louis Botsford and colleagues, and Ian Noble and Rodolfo Dirzo, respectively. The Articles conclude with a frank assessment from Andrew Dobson and co-authors of the prospects for restoring degraded land.

    The News reports focus on the seas. The first piece questions the old assumption that the oceans are too big and their inhabitants too prolific for humans to threaten many marine creatures with extinction. The second asks whether “no-take” marine reserves will help replenish dwindling fisheries. The third takes a look at what has been learned about the world's coral reefs since the late 1980s when they were first described as globally imperiled. New reports suggest that the decline may be more local or regional than global in scope.


    Extinction on the High Seas

    1. David Malakoff
    1. David Malakoff is a writer in Bar Harbor, Maine.

    Biologists have long assumed that the oceans are too vast, and their inhabitants too prolific, for humans ever to extinguish any marine species. But now that assumption is under attack.

    When former marine ecologist Ted Tutschulte got the news, he could hardly believe it. Thirty years ago, as a graduate student, he had taken a deep breath, dived into the shallow seas off California's Catalina Island, and scooped up as many of the big marine snails called abalone as he could hold. Later, as part of his doctoral work, he estimated that a single hectare of his study area harbored up to 10,000 white abalone, one of the three abalone species he was studying. But last month, as Tutschulte sat in his Mariposa, California, home, he learned that a recent census in his old study area had turned up just three white abalone, and scientists were predicting that the species would soon be extinct in the wild. “It just doesn't seem possible,” he said.

    Tutschulte is not alone in his thinking. For centuries, biologists have doubted that humans could ever extinguish white abalone or any other species that spends its whole life in the ocean. According to conventional wisdom, the sea was just too big and deep—and its inhabitants too numerous, prolific, and widespread—for humans to leave that kind of permanent biological scar. Even the most persecuted marine creatures, biologists said, would always be able to find refuge somewhere in an ocean's vastness and eventually repopulate the seas.

    Such views are reflected in government lists of endangered and threatened species and in the scientific literature. Currently, there are just a handful of marine creatures on the national and international lists of species known to be in danger of extinction. In the United States, for instance, just one fish that spends its entire life cycle in the sea, the Gulf of California's totoaba, is protected under the Endangered Species Act. Similarly, over the last 200 years, scientists have documented the extinction of only one totally marine mammal, Steller's sea cow, and just four marine mollusks.

    But despite such reassuring evidence, the conventional wisdom about marine extinctions is, itself, now under threat. A small but growing number of scientists say that widespread, human-caused marine extinctions are a possibility. And one researcher—Marjorie Reaka-Kudla of the University of Maryland, College Park—argues that we have already erased thousands, if not hundreds of thousands, of species from the sea. “Humans have already caused many more extinctions in the marine environment than we are aware of,” says Reaka-Kudla. She estimates that at least 1200 marine species have become extinct in the last few hundred years, mostly unknown species that inhabited coral reefs. To Reaka-Kudla and like-minded biologists, the paucity of documented extinctions only shows how few marine biologists are out there looking for missing forms of marine life.

    Marine myths

    Few marine biologists endorse numbers as high as Reaka-Kudla's, but many have lost the old complacency about the resilience of marine life. They point out that Earth's growing human population is putting unprecedented pressure on life in all parts of the sea. Pollution, overfishing, the introduction of exotic species, and habitat destruction already have wrought dramatic changes in shallow coastal waters. Now, new technologies—from ruthlessly efficient deep-water fishing nets to rugged, seabed mining equipment—have opened up even the deep sea to exploitation. And these scientists no longer take comfort in what had been received wisdom: that most marine creatures have sex lives that render them “extinction proof.”

    “Many of us who study the ocean have been embarrassingly slow to reject widespread myths that purport to describe the reproductive lives of the majority of marine organisms,” says Jeremy Jackson of the Smithsonian Tropical Research Institute in Balboa, Panama. In particular, Jackson rejects “the dangerous notion that most marine species are ‘extinction-proof’ because they produce huge numbers of planktonic larvae that drift vast distances with the current, and hence have large populations with very wide geographic ranges. … People invoke this idea that the ocean holds this homogeneous larval soup that rains down everywhere,” he says. “So they assume that we can remove a population of fish or mollusks from one place, and that there will always be a ‘somewhere else’ that is a source of replacements.”

    This idea is false, says Reaka-Kudla, “because the reality is that many macroscopic marine organisms have limited ranges.” The misperception, she concluded in the 1995 book Biodiversity II (Joseph Henry/National Academy Press), has at least two sources. One is the fact that marine biologists have tended to study larger, more visible marine organisms, such as starfish, crabs, and fish. These larger organisms generally do produce relatively long-lived larvae that can drift great distances. But the majority of marine species, she says, are smaller and tend to produce fewer, shorter-lived larvae that do not travel far.

    Another source of the misperception is that biologists have long assumed that widely distributed organisms belong to a single species. New genetic studies, however, have revealed that many commonly found organisms are in fact groups of distinct, “concealed sibling species” that can have smaller ranges and significantly different life histories. In 1988, for example, researchers discovered that the popular, edible blue mussel, found throughout the North Atlantic and North Pacific, was actually three species. The same year, scientists discovered that two commercially important deep-water crabs were in fact 18 distinct species.

    Human ignorance of such basic biological facts can have dire consequences for marine species such as the white abalone, researchers say. At first glance, a big range and a fecundity remarkable by human standards make Haliotis sorenseni an unlikely candidate for extinction. The snail lives along 1200 kilometers of Pacific coastline south of California's Point Conception, clinging to rocky reefs 26 to 65 meters or more below the surface. A single mature female can release 15 million eggs a year.

    Commercial abalone divers began harvesting the species in 1965, after they had overfished stocks of other abalone species living in shallower water. To regulate the fishery, California officials imposed minimum size limits. In theory, the scheme allowed the animals to reproduce for several years before harvest, assuring a steady supply of marketable snails. In practice, however, the fishery collapsed in just 9 years. And what biologists didn't realize for almost another 20 years was that the species had been pushed to the brink of extinction.

    An early sign of trouble came in 1980 and 1981, when National Park Service biologist Gary Davis and a team of divers surveyed the sea floor around the Channel Islands National Park. The area had been an abalone hotbed, but they found only 21 snails in a hectare of prime habitat. Over the last 5 years, Davis has conducted broader and more thorough searches. But his team found a total of only eight live white abalone on 8 hectares of sea floor. The same habitat supported between 16,000 and 82,000 abalones 20 years ago, Davis notes, citing Ted Tutschulte's doctoral work. Other research suggests similar declines have occurred throughout the white abalone's range.

    Lonely at the bottom

    The abalone population plummeted, Davis and other biologists say, because regulators overlooked a critical fact about the snail's reproductive biology: To breed successfully, the snails must be close together so that the eggs and sperm released into the water can find each other. “We're dealing with animals that need to be within a meter of each other to have effective reproduction,” Davis says. “The harvest apparently reduced their population density below a critical level. It looks as if the last successful breeding season was in 1969, and those animals have been dying from natural causes ever since. Extinction is imminent unless there is human intervention.”

    Some state biologists and commercial divers disagree, claiming that remnant white abalone populations remain in deep water. But even if that's true, researchers point out that the white abalone is “ecologically” extinct. “Even if the species is not biologically extinct,” says Paul Dayton of the Scripps Institution of Oceanography in La Jolla, California, “its population has been reduced so low that it cannot exert its former ecological role.”

    Currently, white abalone is the only totally marine species that scientists confidently claim is in immediate danger of extinction due to overexploitation. Some researchers worry, however, that even apparently prolific commercially exploited fish could also have hidden vulnerabilities in their population biology. “There is no reason to reject the idea that many marine fish also have critical but unknown population thresholds,” argues Carl Safina, who directs the National Audubon Society's Living Oceans Program. Just because cod, tuna, and other common food fish can produce millions of wide-ranging larvae, he says, does not mean “that reproduction will always compensate for the rate at which we are killing them.”

    That idea, however, is the subject of a heated debate sparked by a decision last year by the International Union for the Conservation of Nature (IUCN) to add 118 marine fish, including overexploited food-fish species such as Atlantic cod, haddock, and bluefin tuna, to its Red List of threatened animals. At the center of the debate are IUCN criteria calling for species with populations that have dropped by at least 20% in 10 years to be categorized as “vulnerable” to extinction, and species with 50% declines to be categorized as “endangered.” While species appearing on the Red List gain no legal protection, listing does give them increased visibility in national and international policy forums.

    Many biologists say the criteria and the categories are not appropriate for many fish species, because they experience dramatic natural population fluctuations from year to year. “Anybody who has worked with marine fish knows that the IUCN categories are meaningless for many commercial species, which have high reproductive potentials or extreme natural population fluctuations,” says John Musick of the Virginia Institute of Marine Science in Norfolk, Virginia, who was on the IUCN scientific team. “Often, the goal in fisheries management is to reduce the standing stock by at least 50%, so even commercial species that are properly managed could be classified as endangered under these criteria. So would species, such as herring, that might naturally have 100,000 individuals one year and millions the next.”
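
    Musick's objection follows directly from the arithmetic of the decline rule. A minimal sketch of the criterion as the article describes it (the function name and threshold-only logic are illustrative; the actual IUCN assessment weighs several criteria together):

```python
def iucn_category(decline_fraction):
    """Classify a species by its population decline over 10 years,
    using only the decline thresholds described in the article:
    a drop of at least 50% -> "endangered", at least 20% -> "vulnerable"."""
    if decline_fraction >= 0.50:
        return "endangered"
    if decline_fraction >= 0.20:
        return "vulnerable"
    return "not listed under this criterion"

# A well-managed fishery deliberately fished down to half its unfished
# size shows a 50% "decline" -- and so classifies as endangered.
print(iucn_category(0.50))  # endangered
```

    This is exactly the paradox Musick describes: a mechanical decline threshold cannot distinguish deliberate, sustainable stock reduction, or natural boom-and-bust swings like herring's, from a genuine slide toward extinction.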

    “I believe we can demonstrate that population fluctuations that would be of concern in terrestrial vertebrates are simply not relevant for many marine species,” says Jake Rice, a fisheries biologist who is leading a special scientific review of the issue for Canada's Department of Fisheries and Oceans. “Atlantic cod may have been severely overexploited in parts of its range, but it is not threatened with extinction. There are still billions of cod on the Canadian side of the Atlantic alone.”

    “The passenger pigeon was one of the most abundant birds in the world 75 years before it became extinct in 1914—there is a humbling lesson in that,” responds Elliott Norse of the Marine Conservation Biology Institute in Redmond, Washington. He contends that fisheries biologists don't know enough about marine ecology to predict confidently that even seemingly prolific fish species can cope with what he calls “multiple assaults” on the ocean environment, from fishing to pollution.

    Even researchers who are skeptical about the extinction threat to many fish species agree that some, such as sharks, have a reproductive strategy that puts them at risk. Musick, for example, notes that “some sharks take decades to sexually mature and then produce a relatively small number of young,” explaining why he pushed to have six sharks added to the Red List. “I believe you could drive some sharks to extinction. Despite their expansive ranges, they often live in coastal populations and tend not to cross the open ocean. If the economic incentives were strong enough, the fishery could move from population to population until you've wiped out the species.”

    Deep disturbances

    The growing demand for seafood has put even fish living in the deep sea at risk. In the 1980s, for example, pioneering New Zealand fishers developed deep-sea netting techniques for catching orange roughy—a fish that lives more than a kilometer down—and reduced some populations by 70% in just 6 years. Although the species is not threatened with biological extinction, researchers with the Fisheries Research Centre in Wellington, New Zealand, documented “significant” reductions in genetic diversity in three major spawning populations. Such losses could make it more difficult for the species to adapt to future environmental changes, the researchers say.

    Increasing use of a common fishing technique known as trawling—in which nets are repeatedly dragged across the sea floor—is also putting new pressures on sea-floor creatures, including ones that are not the intended catch. The technique, which Leslie Watling of the University of Maine's Darling Marine Center in Walpole compares to “ransacking a house two or three times a year,” is particularly damaging to organisms, such as some tubeworms, that lose the ability to rebuild their homes as adults. Creatures dwelling on the deepest sea floors may have particular difficulty coping with such physical disturbances because they grow slowly: A clam less than 2.5 centimeters long, for example, may be more than 100 years old.

    Unless greater steps are taken to protect these and other vulnerable marine species, Maryland's Reaka-Kudla fears that the number of marine extinctions could soon be staggering. For example, she estimates that unless steps are taken to slow coral reef destruction, up to 1.2 million reef species alone could be extinct within 40 years. Her estimate, calculated using equations originally developed to predict how many species can live on an island of a certain size, rests on theoretical assumptions that coral reefs are as species-rich as tropical forests and that 30% of reefs will be gone in 40 years.
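
    The island-biogeography equations behind such estimates take the form S = cA^z, where S is the number of species an area A can support and z is an empirical exponent. A minimal sketch of the resulting extinction arithmetic (the value z = 0.25 and the function below are illustrative assumptions, not figures from Reaka-Kudla's analysis):

```python
# Species-area relationship: S = c * A**z. If habitat shrinks from A to A',
# the fraction of species expected to persist is (A'/A)**z, so the fraction
# eventually lost is 1 - (A'/A)**z. The constant c cancels out.

def fraction_extinct(habitat_remaining, z=0.25):
    """Expected long-run fraction of species lost when only
    `habitat_remaining` (a value between 0 and 1) of the original
    area survives. z is the species-area exponent; 0.25 is a
    commonly assumed default, not a value from the article."""
    return 1 - habitat_remaining ** z

# The scenario in the article: 30% of coral reefs gone in 40 years.
loss = fraction_extinct(0.70)
print(f"{loss:.1%} of reef species eventually lost")  # about 8.5%
```

    With 30% of reef area gone, roughly 8 to 9 percent of reef species would eventually be lost; applied to an assumed reef fauna numbering in the millions, figures on the order of a million species follow, though the result is sensitive to both z and the assumed total species richness.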

    Other biologists are wary of such predictions, noting that Reaka-Kudla's theoretical work has yet to be backed by documented examples of coral reef extinctions. But they say the work that she and other biologists are doing is prompting growing interest in the issue and discussions of what should be done to reduce the risk of marine extinctions. Canada's Jake Rice, for example, says that governments around the world are increasingly concerned about the ecological impacts of fishing and are looking for advice on how to avoid potentially mortal blows to their fisheries. The trick, he says, will be gaining a better understanding of the biology of marine organisms, so that proposed solutions match the problems. The best strategy for protecting a mollusk threatened by habitat destruction, he says, could be quite different from the best one for protecting an overexploited fish. Other researchers note that the increasing use of marine reserves may help preserve some species, but afford others little protection (see next story).

    While scientists try to sort out these issues, they may be missing marine extinctions occurring just outside their labs, says James Carlton, director of the Williams College-Mystic Seaport Maritime Studies Program in Mystic, Connecticut. As Carlton points out, the demise of even once-common creatures can pass unnoticed. He should know. In 1991, he became the first scientist in modern history to document the extinction of a marine invertebrate: a limpet that lived along the North Atlantic coast. About 1930, the limpet apparently succumbed after a blight killed most of its major food source, eelgrass. “What does it tell us that we didn't notice for 60 years that a once-common species became extinct, literally under the noses of marine biologists?” Carlton asks, noting that the New England coastline is “dotted with some of the nation's most prestigious marine biological laboratories.”

    Many other marine creatures have not even been described yet (see sidebar), making it all the more likely that no one would note their passing. Moreover, the world's classically trained marine taxonomists and biogeographers—who would be the first to notice the disappearance of a species—are themselves dying out. “Future historians of science may well find a crisis was upon us at the end of the 20th century,” Carlton concluded in a 1993 American Zoologist paper on marine invertebrate extinctions. “[It was] the extinction of the systematist, the extinction of the naturalist, the extinction of the biogeographer—those who could tell the tales of the potential demise of global marine diversity.”


    The International Union for the Conservation of Nature's 1996 Red List of Threatened Animals

    U.S. National Marine Fisheries Service Office of Protected Resources

    SeaWeb (a nonprofit advocacy organization) Background Articles on Marine Conservation Issues

    “Marine Biodiversity,” Chapter 11 of World Resources 1996–97: A Guide to the Global Environment (book published by World Resources Institute)

    National Academy Press DocuWeb: Search for “Understanding Marine Biodiversity” (128-page National Research Council report, issued in 1995)

    Marine biological diversity: Some important issues, opportunities and critical research needs (paper by Cheryl Ann Butman and James T. Carlton)


    Seas Yield a Bounty of Species

    1. David Malakoff

    How many marine species are there? Concerns that a wave of marine extinctions may be taking place (see main text) have made that a pressing question. Biologists are now coming up with some stunningly high estimates.

    Only about 275,000 marine species have actually been described, says University of Maryland, College Park, biologist Marjorie Reaka-Kudla, out of a total of 1.8 million known species in all habitats, on land and in the sea. But she says such numbers dramatically undercount marine species, because life in the sea has been studied so little.

    But assuming that tropical seas harbor the kind of biological diversity found in tropical forests, Reaka-Kudla estimates that the ocean's coral reefs alone support at least 1 million species, and possibly up to 9 million. The deep sea's expansive floor, once thought to be barren, may be home to another 10 million species, estimates Fred Grassle of Rutgers University in New Brunswick, New Jersey. He and his colleagues arrived at that estimate after finding more than 1500 deep-sea species, including polychaete worms, crustaceans, and mollusks, in North Atlantic sea-floor samples collected in the early 1990s. Many of the species in the samples were rare: Almost one-third of them were collected only once.

    While the total number of species in the ocean is still unknown, scientists are already certain that the sea boasts a world's record when it comes to body plan variations—the basic designs that distinguish large groups of organisms. Reaka-Kudla notes that while land hosts 28 phyla, or major groups of living organisms, the sea harbors 43.


    'No-Take' Zones Spark Fisheries Debate

    1. Karen F. Schmidt
    1. Karen F. Schmidt is a science writer in Greenville, North Carolina.

    An unusual experiment is getting under way this month on a 30-square-kilometer patch of coral reefs, sea grass meadows, and mangrove swamps off the Florida Keys. Federal officials are banning all fishing from this part of the Florida Keys National Marine Sanctuary—in order to help replenish fisheries elsewhere. The hope is that the Western Sambos Ecological Reserve, as it's called, will serve as a source of fish, larvae, and eggs that will spill over into surrounding waters to help restock populations suffering from overfishing, pollution, and heavy tourism.

    The reserve is the first no-fishing zone set up for this purpose in U.S. waters, but many ecologists and fisheries scientists hope it will fuel a trend. They argue that no-take reserves are crucial for preserving marine biodiversity and healthy ecosystems, and for restoring the ocean's dwindling fisheries. “[No-take marine reserves] are being seen as the last great hope for fisheries management in many parts of the world,” says Kim Holland, a fish physiologist at the Hawaii Institute of Marine Biology in Kaneohe.

    But although the Florida Keys reserve is finally a reality after six tumultuous years of back and forth between scientists, fishers, divers, aquarium fish collectors, local business leaders, and county, state, and federal officials, the idea that it and others like it will help enhance fish stocks is still very much a theory. “We have no idea, really, if the Western Sambos Ecological Reserve will have an effect on replenishment,” says John Ogden, director of the Florida Institute of Oceanography in St. Petersburg and an ardent advocate of the reserve.

    No-take marine reserves are “showing signs of being a fad, and fads don't necessarily promote good science,” says Nicholas Polunin, a marine ecologist at the University of Newcastle in the United Kingdom. “How can we justify them to people for whom fishing is their livelihood [when] we cannot predict what the benefits will be?” Even those who support the strategy acknowledge that, in most instances, researchers don't know where many fish species spawn and how they disperse, making it difficult to pick out the best areas for protection.

    Few and far between. Historically, proposals to shut down fishing grounds have sparked fierce opposition, and that's why few no-take zones of any substantial size have been established so far. Most marine sanctuaries around the world place restrictions on fishing but don't ban it altogether. For instance, various kinds of fishing are allowed in more than 99% of the Florida Keys National Marine Sanctuary. And among the few reserves that do ban fishing, many are just “paper parks” where the ban is not effectively enforced, says Jane Lubchenco, a marine biologist at Oregon State University in Corvallis.

    Saving fisheries requires tougher action, says Tony Pitcher, director of the Fisheries Centre at the University of British Columbia in Vancouver, pointing to the dismal record of fisheries management. In the last decade, at least 20 major fisheries have collapsed around the world, he says. As he points out, some were quite closely monitored, including the North Atlantic cod fishery. “We've obviously screwed up. The idea of closing off areas as a hedge against this imperfect science is a powerful one,” he says.

    Fish species also are losing their natural refuges as improved fishing technologies give fishers access to ever more remote corners of the sea, says Callum Roberts, a marine ecologist at the University of York in the United Kingdom: “As we erode away their natural refuges, [reserves] are the only way to protect [vulnerable species].” Although reserves could not entirely replace current restrictions on fishing gear—such as mesh sizes of nets, he asserts—a “no fishing here” rule should be easier to enforce and produce more ecologically sound results than catch quotas.

    Rare beauty. A pillar coral in the Western Sambos Ecological Reserve. (Photo: William Harrigan)

    At the annual meeting of the Society for Conservation Biology held in June in Victoria, British Columbia, Lubchenco and others called for a bold step to halt these trends: boosting no-take marine areas from today's one-quarter of 1% of the ocean's surface area to 20% by 2020. Doing so could help replenish fisheries in two ways, say some proponents. First, reserves would allow fish inside them to live longer, grow larger, and produce a bounty of eggs. Over time as the population density increased, adult fish would leave each reserve, adding to catches in neighboring areas. Currents also would transport eggs and larvae from the reserve to surrounding fishing grounds, reseeding them. “The country that has the courage to set up no-take reserves now is the country that will have a thriving fishing industry in 20 to 30 years,” says Pitcher.

    Bigger fish. It is still not clear, however, how much of this will stand up to scientific scrutiny. There is good evidence from around the world that fish are, in fact, larger (presumably because they are older) and more abundant inside marine no-take reserves. For instance, in tropical Kenya the red lion triggerfish, which has been nearly wiped out from other coral reef areas, has rebounded inside no-take zones, says Tim McClanahan, an ecologist who works for the New York City-based Wildlife Conservation Society. Rockfish living in two no-take reserves off the coast of northern California grow up to 48 centimeters long, compared to just 36 centimeters outside the reserves, says Michelle Paddack, who surveyed the fish populations for a master's degree at the University of California, Santa Cruz, and presented her findings at the Victoria meeting.

    A study of a tiny, 400-meter-square coral reef reserve in the Philippines called Apo Island also supports the idea that adult fish migrate from reserves into surrounding fishing areas. Surveys of large, predatory fish, such as groupers and tropical snappers, over 10 years in areas 200 to 300 meters outside the reserve revealed an increase in fish density over time, with significantly more fish after 9 years, says Garry Russ, a marine biologist at James Cook University in Queensland, Australia. McClanahan has also reported that adult fish from a 6-kilometer-square reserve in Kenya can venture into neighboring waters.

    But so far there's little evidence that this spillover of adults is swelling catches. McClanahan found that in one study area where a reserve was set up, total catches dropped by 35% by the third year. He says he's now skeptical that closing more areas to fishing will increase catches overall: “I don't think marine no-take areas are the golden goose. A lot of the claims are too hopeful and are made because there's almost no data to show the opposite.”

    Even those researchers who argue that reserves will up catches in the long run say they don't expect the spillover of adult fish to do the job. Many fish species may not move around much as adults, particularly if they live on reefs. For reserves to really enhance fisheries, say researchers, most adults need to stay inside the protected area, producing massive quantities of juveniles and larvae to be transported by ocean currents into fished areas. While there's no evidence yet that replenishment by this mechanism has occurred at any reserve, it is known that fish larvae disperse widely. In the Caribbean, for instance, larvae have been found to ride on ocean currents for 50 days on average and can settle throughout an area 1900 kilometers by 800 kilometers, says the Florida Institute of Oceanography's Ogden.

    Joshua Sladek Nowlis, an ecological modeler at the University of the Virgin Islands on St. Thomas, created a computer model that tested the effects a no-take reserve could have on fisheries if adult fish stayed put and larvae traveled far and wide. He found “a huge fisheries enhancement,” he says. The reserve also appeared to save some species that otherwise would have gone extinct. Concludes Nowlis, “What we need to do is design reserves in a way that larvae get distributed well and adults are maintained inside.”
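
    The mechanism in Nowlis's description—sedentary adults, widely dispersing larvae—lends itself to a toy simulation. The sketch below is not his model; it is a minimal two-patch caricature with entirely hypothetical parameters, intended only to show how closing part of a habitat can sustain a yield that open access destroys.

```python
# Toy two-patch sketch in the spirit of reserve models such as Nowlis's
# (his actual model is not reproduced here). Adults stay in their patch;
# larvae from both patches pool and settle in proportion to patch area.
# Every parameter value below is hypothetical.

def annual_catch(reserve_frac, harvest_rate, r=0.8, K=1.0, m=0.1, steps=500):
    """Long-run annual catch when a fraction `reserve_frac` of the habitat
    is closed to fishing. Logistic larval production, natural mortality m
    in both patches, harvesting only in the fished patch."""
    fished_area = 1.0 - reserve_frac
    n_f = 0.5 * fished_area   # adult biomass, fished patch
    n_r = 0.5 * reserve_frac  # adult biomass, reserve
    catch = 0.0
    for _ in range(steps):
        total = n_f + n_r
        # pooled larval production, logistic in total biomass
        recruits = r * total * max(0.0, 1.0 - total / K)
        # larvae settle in proportion to patch area; then natural mortality
        n_f = (n_f + recruits * fished_area) * (1.0 - m)
        n_r = (n_r + recruits * reserve_frac) * (1.0 - m)
        # fishing removes a fraction of the fished patch only
        catch = harvest_rate * n_f
        n_f -= catch
    return catch

# Under heavy fishing, the open-access fishery collapses, while closing
# 20% of the habitat sustains a positive long-run catch.
open_access = annual_catch(0.0, 0.9)
with_reserve = annual_catch(0.2, 0.9)
```

    Because adults never leave the closed patch, it acts as a larval source for the fished grounds; raising `harvest_rate` further or shrinking `reserve_frac` toward zero in this caricature reproduces the open-access collapse.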

    Researchers are still debating how big reserves should be to accomplish these goals. If a reserve is very small, too many adults and eggs may leak out and the population inside may not be self-sustaining (although many tiny reserves studied so far don't appear to suffer from this problem). Conversely, with an enormous reserve, the lost catch would be so great that replenishment may not be enough to compensate for it and catches overall could decline, says Nowlis.

    Even more important than size, however, is location. Says James Cook's Russ, “The key to whether marine reserves will have any positive effect on fisheries is if they protect critical spawning sites.” Those that do, he says, could help conserve even those species that migrate vast distances, such as the beleaguered bluefin tuna. No single reserve could ever protect adult bluefins, but a reserve placed where they spawn, such as in the Gulf of Mexico, could increase their numbers by aiding the survival of juveniles. Reserve designers could also focus on other “source” areas such as certain coral reefs, where myriad fish larvae of many species are produced before dispersing on ocean currents. Reserves may be most effective when placed where overfishing is most severe, says Nowlis. One of his computer models turned up the surprising result that heavily fished populations show a faster turnaround than lightly fished populations when reserves are set up.

    Strategic planning. Such factors have typically been given short shrift in the design of existing reserves, most of which have been placed where public resistance is lowest, says Lubchenco. But designers have also been hampered by a dearth of information—on the basic biology of species, including ranges and locations of spawning grounds, as well as on ocean currents and patterns of larval dispersal. To reduce the risk of selecting a site that's too small or in a less than optimal location, most proponents of no-take reserves say it's best to set aside a network of reserves, rather than one big area of ocean. By spreading reserves over a larger area, networks allow designers to sidestep the all-the-eggs-in-one-basket problem, which looms particularly large when the precise locations of spawning sites and larval sources are unknown. Networks also help get around the size question by including more types of habitat that are likely to meet the needs of more species, says Bill Ballantine, a marine ecologist at the University of Auckland in New Zealand, a country that has established 13 marine reserves over the past 20 years and continues to work toward building a linked network.

    Still, even strategically placed networks of no-take marine reserves cannot fully conserve all species and ecosystems. Reserves can best protect species that dwell at well-defined sites, such as coral reefs, says Julia Parrish, a conservation biologist at the University of Washington, Seattle. By contrast, it would be nearly impossible to design a reserve to protect the Pacific salmon because the fish spawn in coastal river beds and migrate through the open ocean, swimming different routes each year depending on prevailing ocean current temperatures. In addition, marine reserves can't protect species and ecosystems from oil spills, pollution from coastal runoff, or climate change. Says Parrish, “Marine reserves are one tool and a good one. We should be serious about using them. But we would not be able to go to sleep at night thinking that everything is taken care of.”

    Ultimately, the success of no-take marine reserves is likely to depend as much on building and maintaining political support as on improving the science, proponents say. Take the Florida Keys case: Planners considered the best available science in coming up with their original proposal for a network of four reserves, says Rod Fujita, a marine ecologist at the Environmental Defense Fund in Oakland, California, who was involved in the planning. Although little was known about where important species—snappers, lobsters, and groupers—spawn and how their larvae disperse, the planners tried to infer crucial locations from the known water circulation patterns.

    But in the final hour, some commercial fishing and “wise use” groups fought the proposal, and a nonbinding referendum last November in Monroe County, Florida, revealed that 55% of the voters opposed the entire marine zoning proposal for the sanctuary. As a result, the final plan included just one no-take reserve—the Western Sambos. “We shed a lot of blood to come up with the original network of four ecological reserves … [and] we just barely got a scrap of what's needed,” contends Fujita.

    Researchers are now preparing to monitor fish density, diversity, and size, and look for signs of spillover into waters outside the Western Sambos. But because the site is quite small, may not include any spawning grounds, and is located in the path of runoff full of silt, excess nutrients, and algal blooms from Florida Bay, some researchers fear that studies of the reserve may yield little useful data. “What worries me is that if we do not get a big response, we won't know if it's because reserves don't work, or because the pollution from Florida Bay is killing everything,” says Fujita.

    Even with the uncertainties, most ecologists and fisheries scientists remain steadfast in their support of no-take marine reserves. “Despite the controversy over the fisheries enhancement benefit, there's little controversy over the other benefits: protecting ecological integrity and biodiversity from the direct effects of fishing,” says Fujita. “Reserves placed almost anywhere are going to be better than no reserves at all,” says York's Roberts. “Let's just get on and do it.”

    In fact, some argue that the whole debate should be turned on its head. No-take marine reserves should not be viewed as an experiment at all, contends James Bohnsack, a research fisheries biologist at the National Marine Fisheries Service in Miami: “The reserves are the controls, and everything else is the experiment.” By allowing fishing throughout the ocean, he says, “we've been conducting a giant, uncontrolled experiment over the entire ocean for years.”




    Brighter Prospects for the World's Coral Reefs?

    1. Elizabeth Pennisi

    Just a few years ago, scientists sounded the alarm that coral reefs around the world were seriously ailing. Some were bleaching a ghostly white as warmer than usual sea temperatures caused corals to expel their symbiotic algae. Others were being buried in silt, overrun by seaweed, or devastated by violent storms and disease. Scientists convened meetings, launched new research initiatives, and declared 1997 the International Year of the Reef to promote a greater awareness of the plight of these rich marine ecosystems.

    But now, midway through that year, some coral reef scientists are beginning to suspect that reefs may not be quite as widely imperiled as they once thought. Increasingly, researchers are wondering whether the decline may be local or regional rather than global in scope. “I don't think reefs remote from centers of population are as bad as the horror stories [we've heard],” says marine geologist Robert Ginsburg of the University of Miami in Florida. Although reefs in the Philippines are dying, for instance, those of Palau, French Polynesia, the Marshall Islands, Micronesia, Fiji, and the Cook Islands seem, for the most part, to be thriving. Likewise, although corals throughout the Caribbean are crumbling, those in the Gulf of Mexico seem to be stable.

    No one is suggesting that the major threats facing reefs have diminished or disappeared. In many fast-growing regions of the globe, such as the Caribbean and Southeast Asia, ship groundings, oil spills, and fishing with dynamite or cyanide are damaging reef communities, possibly beyond recovery. But new research indicates that some of the more tractable problems, such as simple overfishing, may be playing a larger role in reef decline than was once believed. Further, there's growing evidence that reefs do recover when given a chance. When communities or nations tighten restrictions on reef fishing or clean up pollution, reefs have rebounded.

    “If we can begin to curb these [stressors], I think the oceans would be much healthier, [and] I think you would see reefs respond,” says coral ecologist Phillip Dustan of the University of Charleston in South Carolina. Says Barbara Brown, an ecophysiologist at the University of Newcastle in the United Kingdom, “Some reefs will certainly deteriorate, and they are certainly going to change. But I don't think coral reefs are going to disappear.”

    Globalized anxiety. Concerns about a global decline began to gel 10 years ago, when a severe wave of coral bleaching in the Caribbean coincided with rising concern in the United States about global climate change. At congressional hearings in 1990, scientists and environmentalists portrayed reefs as fragile sentinels warning of the dire consequences of global warming. A handful of studies showed that corals were sensitive to even small temperature changes, which, to many marine biologists, suggested that ocean warming ultimately would lead to the loss of many reef communities.

    The focus—although not the level—of concern shifted in 1991, when scientists gathered in Miami to discuss the implications of global climate change for reefs. They concluded that the gradual warming expected in coming years was the least of their worries. “Most coral reef scientists [were] concerned that by the time reefs had to cope with global warming, they would be dead anyway” from pollution, destructive fishing, and other more immediate threats, says Judy Lang, a coral reef scientist at the University of Texas, Austin. The assembled scientists realized that “the coral reefs were disappearing so fast from [direct] human impacts that we had to get a handle on that first,” says John Ogden, director of the Florida Institute of Oceanography at the University of South Florida, St. Petersburg.

    In a much-cited study the following year, coral reef ecologist Clive Wilkinson at the Australian Institute of Marine Science in Townsville calculated that these activities had already destroyed 10% of the world's reefs. He also estimated that another 60% would effectively collapse in the next 2 decades or so if no actions were taken and if human populations along tropical coastlines continued to skyrocket. In Southeast Asia, which encompasses almost a third of the world's reefs, he warned that a mere 5% of reefs were safe.

    What the experts overlooked in the frenzy to convey the severity of the problem, however, was how narrowly reefs had been surveyed. Scientists don't even have a good handle yet on the location and extent of all coral communities. Over the past 20 years, published estimates of the total area of reefs have ranged from 100,000 to as high as 3.9 million square kilometers. “They are just wild, seat-of-the-pants estimates,” says Charleston's Dustan. The navigation charts often used to estimate the size of reefs tend to include only those that pose a shipping hazard. Figures also vary according to how the researcher defines a reef, explains Joanie Kleypas, a marine biologist at the National Center for Atmospheric Research in Boulder, Colorado. For instance, some surveys only include reefs that break the surface and ignore those in deeper waters.

    Moreover, many known reefs have not been surveyed. “There is not much published that quantitatively and objectively asks the questions: Are reefs of the world dying; if so, exactly where and at what rate?” notes Robert Steneck, a marine ecologist at the University of Maine's Darling Marine Center in Walpole. Wilkinson, for example, had based his global estimates primarily on his work in Southeast Asia, but it appears that less than 10% of those reefs have been thoroughly surveyed, says John McManus, a coral biologist at the International Center for Living Aquatic Resources Management (ICLARM) in Manila, the Philippines. Scientists' grasp of the condition of reefs is just as patchy in the Pacific, where some 90% are unexplored. “We need more data,” says McManus.

    Reef gladness. Further, some reefs that have been extensively surveyed in recent years have been found to be in good shape, particularly when they are surrounded by deep water and protected from land-based runoff. In Palau, Charles Birkeland of the University of Guam resurveyed 12 spots studied by Japanese researchers in the 1930s and found them “richer and apparently better off” than before, he says. All but a few spots along Australia's 2000-kilometer Great Barrier Reef are thriving. Even Wilkinson now finds cause for optimism. “Most reefs in the Pacific are in good health,” he notes, and those in the Indian Ocean's Maldives and Chagos Archipelago, southwest of India, “are in great shape.”

    Just as researchers are realizing that they may have overgeneralized about the extent of the damage, they also are realizing that they may, at times, have jumped to conclusions about the causes of reef mortality. Over the past 25 years, for example, scientists have watched Jamaica's once spectacular reefs become smothered in seaweed. Some biologists have argued that the algae are thriving because unchecked development in Jamaica has dosed coastal waters with high levels of nitrogen, phosphorus, and other nutrients. Corals, unlike algae, they say, evolved to deal with a low nutrient environment and fare badly in enriched water.

    But coral reef paleobiologist Jeremy Jackson at the Smithsonian Tropical Research Institute in Panama and Terence Hughes, a coral reef biologist at James Cook University in Townsville, Australia, contend on the basis of a close examination of historical demographic and fishing records and experimental data (Science, 9 September 1994, p. 1547) that centuries of overfishing set the stage for the algal takeover. In the Caribbean, 17th- and 18th-century hunters almost eradicated algae-eating turtles, and subsequent generations of fishers have rid reefs of most of the large herbivorous fishes. For a long time, Jamaica's reefs seemed to function just fine without these creatures, because algae-grazing sea urchins kept the seaweed under control. But in recent years, two hurricanes and an epidemic nearly wiped out the sea urchins, and the reef communities could no longer keep down the algae. As a result, reefs quickly turned into beds of seaweed, Jackson reported in June at the annual meeting of the Society for Conservation Biology, held in Victoria, British Columbia. “This is happening all over [the Caribbean],” he asserts.

    Similarly, there is new evidence from the Florida Keys that attributing algal overgrowth to excess nutrients in the water may be oversimplifying the situation. While high nitrogen and phosphorus concentrations in water carried across the Keys from Florida Bay correlate with the decline of some reefs along this island chain, in other cases there doesn't seem to be a link. “It is true that coral reefs can be overwhelmed by nutrients, [but] it takes quite a bit of nutrients to do that,” says Alina Szmant, a physiological ecologist at the University of Miami.

    Birkeland found no indication that sewage outfall from a treatment plant harmed a reef site in Palau, which he studied in 1976 before the plant was built and again 17 years later. “The coral and fish communities hardly changed at all,” he notes.

    The lack of a clear smoking gun, even for reefs as well studied as Florida's have been, is due partly to the tendency of reef scientists to focus their research on a single parameter—say, water quality or fish abundance—in a small geographic area. Such narrow projects haven't been able to tease apart the effects of several insults, says Steneck, or reveal global trends. “In my opinion, too many folks are looking at the mechanistic level without having demonstrated that there is a pattern [to the decline],” says Steneck.

    And in those few instances where patterns have been sought, the results have been mixed. Florida's Ogden has coordinated a 16-country, 25-site reef monitoring program called CARICOMP to look for regional trends in the Caribbean, but after 5 years, no clear patterns have emerged. In some cases, the source of the problem is a “no brainer,” says Ogden, but in general, the picture is still quite murky: “What we are finding are patterns of variation across the whole range.”

    Scientists' efforts to unravel the causes of reef decline also have been complicated by the fact that reefs respond to stresses in a nonlinear way, contends Jackson. A coral community may appear to be unaffected by heavy fishing, excess nutrients, or some other stress until a certain threshold is reached. Then the decline may be precipitous and may appear to have been prompted solely by the most recent assault.
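
    Jackson's threshold picture can be made concrete with a one-line model. In the hypothetical sketch below, live coral cover regrows logistically while a constant stress load chips away at it; nothing here is calibrated to real reefs, but it shows how cover can erode only modestly under mounting stress and then collapse outright once a threshold is crossed.

```python
# Illustrative only: a minimal caricature of the nonlinear response
# Jackson describes. Coral cover follows logistic regrowth minus a
# constant stress term; all numbers are hypothetical.

def equilibrium_cover(stress, r=0.4, steps=2000):
    """Iterate cover to (near-)equilibrium under a constant stress load."""
    n = 0.9  # initial fraction of reef covered by live coral
    for _ in range(steps):
        n = n + r * n * (1.0 - n) - stress
        n = max(0.0, min(1.0, n))  # cover stays between 0 and 1
    return n

# Cover declines only modestly as stress rises, until a threshold
# (here r/4 = 0.1) past which the community collapses to zero.
covers = {s: round(equilibrium_cover(s), 2) for s in (0.02, 0.06, 0.09, 0.11)}
```

    The collapse happens because logistic regrowth can offset at most r/4 of cover loss per step; any sustained stress beyond that has no stable equilibrium, so the most recent small increment of stress looks like the sole culprit.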

    Call to action. For some scientists, the data on the extent and causes of reef decline are already convincing enough to warrant immediate action. “We don't need one more bit of science to know there really is a crisis,” says Jackson. Tim McClanahan, who works for the New York City-based Wildlife Conservation Society overseeing coral reef monitoring programs in Kenya and Belize, argues that burgeoning populations will only increase assaults on reefs. “The decline is worse than most people think—and scientists admit,” he says.

    But even he supports efforts to fill out the picture of global reef health by compiling results of scattered studies in more useful ways and by conducting more extensive surveys. In one such effort, ICLARM's McManus is coordinating the development of a database containing information on some 7000 reefs. Called ReefBase, the database is available on a compact disc that can be queried by both scientists and conservation managers. At this point, “the data matrix is quite sparse and geographically patchy,” McManus points out, because researchers have taken many different approaches to studying reefs. “To do any serious analyses, we have to develop a solid block of information through standard techniques,” he says.

    There are several efforts under way to collect data more systematically. ReefBase will include information from a new monitoring program, the Global Coral Reef Monitoring Network (GCRMN), which Australia's Wilkinson is setting up as part of the International Coral Reef Initiative, a multilateral agreement signed in 1994 to encourage coastal zone management and more sustainable use of reefs. Another program, called ReefCheck (Science, 6 June, p. 1494), is enlisting groups of recreational divers led by coral reef scientists to take a global snapshot of reef conditions. A fourth effort, the Rapid Assessment Protocol, developed by Steneck and Austin's Lang, calls for coral reef scientists to complete a standardized survey of their particular reefs in addition to carrying out their normal research.

    Uncertainty about how best to assess reef health may well hamper efforts to get a better understanding of global conditions. “We still don't have a good definition of an unhealthy reef,” asserts McManus. For instance, coral reef biologists have long assumed that if a high percentage of a reef was inhabited by living coral, the reef was healthy; less coral cover meant the reef was in trouble. Coral cover is what the GCRMN divers will use to assess reef health, for instance. “The problem is that some reefs have low coral cover regardless [of their health],” explains Miami's Ginsburg.

    Indeed, some scientists now think that a better measure of long-term reef health may be whether tiny coral animals, called polyps, are gaining a foothold on a reef's limestone skeleton. A reef with a lot of coral cover may look healthy enough, but in some cases the lack of new recruits means it's “on its way out,” asserts Birkeland, who studied reefs in American Samoa between 1979 and 1995. At times, a region's reefs may be so damaged that there's no longer a local source of polyps. In other areas, reefs may be so overrun by mats of algae that recruits that do drift in don't stand a chance. Steneck and Lang also argue that it is most important to get a handle on the overall dynamics of the reef's corals. They want to know the rate at which polyps are dying, and how quickly they are laying down new skeletal limestone.

    By any criterion, though, there has been a little progress toward stemming the decline of reefs. Governments and conservation groups have begun to set up marine reserves (see p. 489). Although many are designed primarily to replenish fisheries, those around reefs should also help the corals, because more fish can help keep algae in check. For instance, in 1990, Bermuda established no-fishing zones on its deteriorating reefs, which bring in some 9 million tourist and recreation dollars a year. The government paid fishers $75,000 each to stop pot fishing on the reefs and then helped them develop other marine-related occupations. Similarly, in 1994, Palau, recognizing that coral reef fish were worth more as a tourist attraction than as an export product, passed a law to phase out the export of reef fish.

    Reefs are now a priority for both the public and the policy-makers, says Wilkinson. “There's a much greater awareness at the level of vice presidents and presidents. … It's gone from the departments of environment and fisheries to departments of state.” If countries and their citizens curb overfishing, stop destructive harvesting practices, and improve water quality, he says, then reefs may be able to come back.

    This “sentinel” is still breathing, adds Wilkinson. The reefs that are still flourishing may fuel this comeback, by providing new coral recruits. Others point to the fossil record, which indicates that reefs have persevered for many millennia, disappearing and reappearing several times. “I've seen reefs thriving under arduous conditions and coming back under [even] worse conditions,” says Newcastle's Brown. “Maybe they are not so fragile as people have portrayed them.”


