News this Week

Science  09 Apr 1999:
Vol. 284, Issue 5412, pp. 230
  1. IMMUNOLOGY

    Alternatives to Animals Urged for Producing Antibodies

    1. David Malakoff

    A National Academy of Sciences (NAS) panel has concluded that biomedical researchers should produce most types of monoclonal antibodies using methods that don't require killing mice. But it argues that the use of mice is essential in some cases and should not be banned. Observers say that the committee's report,* released this week, could help prevent a long-running feud from escalating into a high-stakes legal fight.

    [Figure: Tail end. Producing monoclonal antibodies requires a mouse at the start, but in vitro alternatives exist for extraction. Source: adapted from “Monoclonal Antibody Production,” National Research Council.]

    Two animal rights groups—the American Anti-Vivisection Society (AAVS) of Jenkintown, Pennsylvania, and its research arm, the Alternatives Research and Development Foundation (ARDF) of Eden Prairie, Minnesota—have threatened to sue the National Institutes of Health (NIH) to prevent researchers from using a technique, known as the mouse ascites method, to manufacture monoclonal antibodies. Researchers using the method inject an antigen, or disease-causing agent, into a mouse so that its spleen cells begin producing antibodies—immune system proteins that react to the antigen. Then, spleen cells producing the desired antibody are removed and fused with fast-growing cancer cells to produce a hybridoma, or tumor, that manufactures one kind of antibody. To increase production, researchers inject the hybridoma into the abdominal cavity of a mouse, where the cells grow and secrete the antibody. Technicians harvest the antibodies from the swelling abdomen using a syringe. Typically, scientists can “tap” a mouse only a few times before it dies or must be killed.

    Many monoclonal antibodies can be grown by culturing the hybridoma in plastic flasks or bioreactors, then isolating the antibodies. But U.S. researchers still tap an estimated 1 million mice per year to produce monoclonals used for everything from analyzing tissue samples to attacking cancer.

    In an April 1997 petition, the AAVS charged that NIH was ignoring its own animal care guidelines by not doing enough to promote alternatives to the ascites method. It demanded that the agency prohibit researchers it funds from using the method unless they could show it was essential. Such rules, the group noted, would bring the United States in line with four European nations—the United Kingdom, Germany, the Netherlands, and Switzerland—that ban routine use of the ascites method, with some exceptions. But NIH concluded that a ban was “not appropriate” and that, although many alternatives appear promising, some antibodies cannot be grown outside mice or are too expensive to culture.

    Unwilling to take no for an answer, however, AAVS revised its petition in March 1998 and threatened to sue if the agency again rejected its request. Seeking an outside opinion, NIH asked the National Research Council, the NAS's contracting arm, to convene a blue-ribbon panel to assess the alternatives.

    The report, by an 11-member panel led by pathologist Peter Ward of the University of Michigan Medical School in Ann Arbor, estimates that alternatives to mice are available about 90% of the time. And it concludes that “tissue culture methods for the production of monoclonal antibodies should be adopted as the routine method unless there is a clear reason they cannot be used.” The panel opposed a European-style ban, however, noting that some antibodies—such as one widely used to prevent transplant patients from rejecting their new organs—resist being raised in a flask, for reasons that are still not understood. And it said that culturing might be too expensive for researchers who need only small quantities. “This is not the time to abandon the ascites method,” says Ward.

    Although neither NIH nor animal rights advocates had seen the report as Science went to press, one activist was cautiously optimistic that his group's concerns had been heard and that a courtroom showdown could be avoided. “We recognize some researchers are going to have to use mice,” says the ARDF's John McArdle, a former animal researcher. “But they should be obligated to consider alternatives before just doing what they've always done.”

    • *Monoclonal Antibody Production, a report of the Institute for Laboratory Animal Research, National Research Council.

  2. HUMAN EVOLUTION

    Forming the Robust Australopithecine Face

    1. Virginia Morell

    Some 2 million years ago, three species of hominids roamed the savannas of Africa, showing the world a most peculiar face. With their massive molars, tall jaws, and bony skull crests, these three robust australopithecines are generally regarded as a side branch to human evolution. But there the agreement ends. Older analyses suggested that, like fashion designers who converge on a similar style, these hominids were distantly related creatures who evolved their heavy-jawed, Darth Vader look independently. But on the basis of their many facial similarities, recent analyses have concluded that the three form their own small hominid family. Now on page 301 of this issue, a researcher offers a new explanation for why robust australopithecines look the way they do—and suggests that they may not be so closely related after all.

    Researchers have identified 50 or more skull characteristics shared by all the robust australopithecines, but anatomist Melanie McCollum of Case Western Reserve University in Cleveland, Ohio, says that the facial traits are the developmental consequences of a single character—a unique combination of cow-sized molars and small front teeth. “There are not 50 or 70 traits in the [hominid] skull that evolve independently,” and studies that assume so are deeply flawed, says McCollum. Instead, she argues that the robust australopithecines look alike because their unusual teeth force the hominid face to take on its distinctive robust shape. Even if the robust australopithecine species evolved separately on opposite sides of Africa, “as long as they have big molars and small front teeth, their faces will look alike,” she says.

    Although some researchers note that previous analyses have raised similar cautions, many say that the paper is a needed tonic for the field. “It's high time this kind of thing was said,” says Tim White, a paleoanthropologist at the University of California, Berkeley. The anatomical features used for phylogenetic analysis “have become too atomized,” he says. Adds Daniel Lieberman, a paleoanthropologist at George Washington University in Washington, D.C.: “She's created a challenge for us to better define what a good trait is biologically.”

    To analyze the way australopithecine faces grew, McCollum studied how the differently shaped skulls and faces of living hominoids—humans, chimpanzees, gorillas, and orangutans—grow during postnatal development. The comparison showed that teeth drive the shape of much of the rest of the face. For example, the australopithecines' massive molars require a tall back jaw, along with big jaw muscles and the skull-crowning crests that serve to anchor them. And their small front teeth change the configuration of the floor of the nose. To balance the competing demands of the growing mouth and nose, including the tall back jaw, the palate (the boundary between all these areas) thickens, forming a massive bone in the center of the face. The rest of the face then has to adjust to this bone, with the net result being a face so tall that it almost rises above the brain.

    The analysis “shows that if you have similarities in dental pattern, then you're going to get similarities in facial features,” says McCollum. Selection—perhaps for crunching tough nuts and tubers—shaped the teeth, and the striking facial shape just came along for the ride. Thus it doesn't make sense to count up facial changes when deciding who's most closely related to whom, says McCollum. “We've been chasing a red herring.” To sort out the robust lineage, researchers should instead “look for traits in the shape of [australopithecine] teeth,” she says. And although she doesn't do the analysis, she points out that variations in tooth shape suggest the robust australopithecines may not be closely related. If she's right, then paleoanthropologists will be heading back to the bench with only their dental calipers in hand.

    Bernard Wood, a paleoanthropologist at George Washington University, notes that others have argued before that teeth are the best features to use in phylogenetic analyses of human ancestors. But others welcome the work's larger implication: that any traits used in phylogenetic studies should be scrutinized from a developmental perspective. “I'm thrilled,” says developmental biologist Rudy Raff of Indiana University, Bloomington, who has long argued for explicit consideration of development in evolutionary studies. “She's looked at the growth consequences—what big teeth do to the shape of the skull during development. That adds a dimension that's not usually thought about.”

  3. COLUMBIA UNIVERSITY

    Earth Institute Director Bows Out

    1. Constance Holden

    An ambitious attempt to bring scientists from diverse disciplines together to study global problems is about to get fresh leadership. Peter Eisenberger, the controversial director of Columbia University's Earth Institute, resigned on 24 March, citing differences over the institute's direction as well as his health. Columbia has named executive provost Michael Crow, a key force behind the creation of the Earth Institute, as its interim leader until a replacement is found.

    Columbia lured Eisenberger from Princeton University, where he had founded the Materials Institute, to head the new Earth Institute in 1995. Eisenberger's mandate was to bring members of a vaunted physical sciences team at Columbia's 50-year-old Lamont-Doherty Earth Observatory (LDEO)—renowned for their research on topics like plate tectonics—together with experts on the main campus, in research cultures ranging from biology to social science, to work on climate change and other pressing societal issues. Not surprisingly, the wrenching changes drew resistance, with many scientists complaining that Eisenberger was slighting traditional areas like petrology and rushing headlong into squishy realms such as the economics of global climate change (Science, 22 May 1998, p. 1182).

    The culture clash and Eisenberger's management style may have precipitated his resignation, observers say. LDEO geochemist Wallace Broecker, who doesn't hide his distaste for Eisenberger's leadership, says he's “not a good manager,” and he “does not know that much about the Earth.” Broecker says he's “delighted” he'll be getting a new boss. He's not the only Columbia scientist whom Eisenberger rubbed the wrong way. Oceanographer Taro Takahashi, associate director of LDEO, says the hard-driving Eisenberger “didn't listen to people very well,” although he says, “I thought he was getting better.” In Takahashi's view, Eisenberger “likes to handle [global] problems … not as a scientist but as a politician.”

    Ironically, Columbia provost Jonathan Cole expressed confidence in Eisenberger's leadership in a letter to staff last December, saying that despite “bumps … in the road,” the institute was “making excellent progress.” Some colleagues agree. Eisenberger “did an excellent, courageous job under difficult circumstances,” says Columbia mathematician and economist Graciela Chichilnisky.

    Eisenberger did not return repeated calls from Science. But in his resignation statement last month, he cited “differences on matters of principle and how best to proceed with the growth of the Institute, and more recently my personal health.”

    Crow's most pressing task will be to bring some equanimity to the institute. Crow could not be reached for comment, but Takahashi says one big issue is whether the Earth Institute and LDEO directorships, both of which were held by Eisenberger, should be offered to two people instead.

    “The Earth Institute is a great idea,” says Broecker. “It's just got to be done in the right way.” Few would disagree—especially if somebody can figure out just what the right way is.

  4. JAPAN

    New Career Path Seen for Young Scientists

    1. Dennis Normile

    Four years ago, Japan set out a 5-year plan to create 10,000 postdoctoral positions to provide more opportunities for younger researchers. The government will meet its goal this year, ahead of schedule. That success, however, leads to the next challenge: how to find jobs for these scientists at a time when public payrolls are being reduced. The answer, according to a government advisory committee, is to loosen up the research tenure system, which traditionally bestows lifetime appointments, by offering fixed-term positions to both “superpostdocs” and more established researchers. In exchange for giving up job security, the researchers would receive greater freedom to explore their ideas. “It would be a new career path for researchers in Japan,” says Ken-ichi Arai, director of the University of Tokyo's Institute of Medical Science and a member of the committee, which last week submitted its report to the Science and Technology Agency.

    Young scientists typically begin their careers as lecturers or researchers, advance to associate professors or group leaders, and eventually become professors or heads of research departments. Although they have a job for life, they achieve full independence only after reaching the top of the administrative ladder. The committee's recommendations envision an alternative starting point with much more autonomy: superpostdocs for younger researchers who have finished one postdoctoral position and are ready to work on their own.

    The committee—which was asked to reconcile the need for more research positions with growing political pressure to help close a budget deficit by reducing the number of public employees—says such flexibility also should extend up the career ladder. It is recommending that fixed-term independent researcher positions be created for senior people capable of directing a team. The committee hopes that these positions, filled through an open competition, will appeal to scientists who want to switch from a traditional career track. The trade-off for this impermanence, says Yuji Kamiya, a plant scientist at the Institute of Physical and Chemical Research (RIKEN) and a member of the committee, would be “more money and more freedom.” Those who have completed a superpostdoc or a term as an independent researcher would be free to seek tenured positions at national universities or laboratories.

    One model for such an arrangement exists at RIKEN, whose status as an independent research entity gives it greater flexibility than national institutes in personnel matters. Hitoshi Okamoto, a developmental biologist working with zebrafish, gave up a tenured position at the private Keio University for a position at RIKEN's Brain Science Institute. Okamoto says the level of financial support made it “a great chance.” And he is confident that his productivity will win him a renewal of his current 5-year term. “I think a lot of Japanese young people would be willing to apply for those positions,” he says.

    Miho Ohsugi, a postdoc in the oncology department at the University of Tokyo's Institute of Medical Science, says she would be interested in the new career path. “It would be very attractive to be able to work on what you want to work on, even if the position has a limited term,” says Ohsugi, who is studying proteins involved in spermatogenesis.

    Many details must be worked out before Ohsugi and her peers can apply for a position, however. Ohsugi wonders how a superpostdoc would affect the government's promise to forgive most, if not all, of her graduate school loans if she joins a national university faculty. It's also not clear if superpostdocs would have access to existing equipment. And Okamoto notes that Japan's pension schemes heavily penalize those who change jobs.

    There is also the question of how the new positions would be attached to existing institutes and who would pay for them. Arai says institutes would want money to cover the indirect costs of supporting a new researcher. Introducing fixed-term employment at national universities and labs might also require amendments to public servant employment laws.

    The committee's recommendations will be passed along to the Council for Science and Technology, the nation's highest science advisory body, which is reviewing the results of a 5-year plan adopted in 1996 to boost the nation's scientific prowess. Any decision on a new career track is likely to be part of a broader set of R&D policies.

  5. METEOROLOGY

    Link Between Sunspots, Stratosphere Buoyed

    1. Richard A. Kerr

    Everything from the stock market to climate has been linked to the 11-year cycle of sunspots—dark splotches on the sun's surface that mark an increase in solar activity. Almost all such correlations fall apart soon enough, but one has held up: For more than four sunspot cycles, the “weather” in the stratosphere has varied in time with solar activity, with atmospheric pressure peaking in a mid-latitude ring and plummeting over the North Pole at solar maximum. Yet solar output changes so little over the sunspot cycle that it's hard to see how the cycle could affect any earthly activities, even in the wispy stratosphere. Now on page 305 of this issue, a group of climate modelers presents the most promising mechanism yet for amplifying the effects of the solar cycle—and they suggest that sunspots' effects may even work their way down to the surface.

    [Figure: Ozone makes the match. Allowing ozone to vary with sunspots causes a model to react (red) like the real atmosphere (blue). Source: Shindell et al.]

    The mysterious amplifier, say modelers Drew Shindell of NASA's Goddard Institute for Space Studies (GISS) in New York City and his colleagues, is the stratosphere's much lamented ozone. By including ozone and its ability to absorb the sun's ultraviolet radiation in their computer model, Shindell and colleagues were able to mimic the high-latitude seesaw of pressure seen in the real stratosphere at altitudes of 25 kilometers. Their model runs also produced subtle climate change at the surface, including a few tenths of a degree warming of Northern Hemisphere high latitudes.

    The stratospheric effect seems reasonable enough, says modeler Jerry D. Mahlman of the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey, who regards the mechanism as the first plausible means of linking sunspots and Earth's atmosphere. But “there's some skepticism” of a trickledown effect stretching all the way to the surface, says theoretician Lorenzo Polvani of Columbia University. Meteorologists have long doubted that the vanishingly thin stratosphere can affect the massive, turbulent lower atmosphere, called the troposphere. And although researchers back in 1987 reported that surface climate does vary in step with solar cycles, that correlation didn't hold up for long.

    But a different correlation has lasted. More than a decade ago, meteorologists first reported that when sunspots hit their peak, a ring of relatively high pressure encircles a cap of low pressure over the North Pole in the winter stratosphere (Science, 11 May 1990, p. 684). When the sun's output falls, the pressure pattern reverses. Yet the sunspot cycle alters the sun's total output by only 0.1%, too little for any direct effect on Earth's climate. What could be causing the connection?

    Modeling studies had already suggested that the answer might involve ozone. Ozone warms the stratosphere by absorbing ultraviolet light, and the sun's UV output rises and falls significantly during the sunspot cycle, varying 10 times more than does its total output at all wavelengths. Because the north polar region is cloaked in darkness during the winter, the UV-induced warming is limited to lower latitudes. That geographical disparity can drive circulation in the stratosphere, raising atmospheric pressure there and so boosting the westerly stratospheric winds that blow around the pole at 30 to 50 degrees north. And in a positive feedback that could amplify this effect, the increased UV light at solar maximum creates more stratospheric ozone from oxygen, triggering more stratospheric warming and perhaps a greater pressure difference.

    Although this scenario is not new (Science, 4 August 1995, p. 633), earlier models left out the upper half of the stratosphere, and hence part of the ozone layer, to save computing time. Shindell and colleagues are the first to include a complete stratosphere as well as a chemical simulation that can produce more or less ozone depending on the amount of ultraviolet light. In their model, the 1% extra UV light at the solar maximum produced the characteristic stratospheric high-pressure ring and low pressure over the pole. A similar sort of pattern is seen in the Arctic Oscillation, a hemisphere-wide driver of northern climate (see p. 241).

    Shindell even sees changes down at the surface. In his model, the stratosphere doesn't strong-arm the muscular troposphere but rather uses the troposphere's power against itself, creating a weak and indirect link between sunspots and surface climate. The GISS researchers found that at solar maximum, the mid-latitude, high-pressure ring deflects atmospheric waves that propagate up from the troposphere and carry energy from place to place in the atmosphere. The deflection of these waves back into the troposphere alters circulation in such a way as to produce a high-pressure ridge at 40°N that intensifies winds at the surface and redirects storms into Canada and northern Eurasia. The net result is to warm the high latitudes by a few tenths of a degree.

    Even for researchers who find Shindell's sun-stratosphere connection reasonable, the step from the stratosphere to the surface is a stretch. “I'm skeptical about the models,” says Polvani. “Other groups have similar models, and they haven't been able to reproduce those results.” Indeed, the tropospheric changes are nearly lost in the noise, and this GISS model has been criticized as “rather crude” (Science, 10 April 1998, p. 202). Still, the idea that the stratosphere may influence the troposphere is “picking up momentum,” says meteorologist Marvin Geller of the State University of New York, Stony Brook. If the history of sun-climate relations is any guide, it's got a long way to go.

  6. ACOUSTICS

    Miniaturizing the Mike, in Silicon

    1. Alexander Hellemans*
    1. * Alexander Hellemans is a writer in Naples, Italy.

    The microphone is being reincarnated in silicon. At a recent meeting+ in Berlin, several groups reported progress in converting the standard elements of a microphone—a vibrating membrane that picks up the sound and circuits that convert the vibration into an electrical signal—into structures on a silicon chip. Silicon microphones may not yet be as sensitive as conventional microphones, but they will be robust and cheap. “You can make thousands of them on a wafer,” says physicist Gerhard Sessler of the Technical University of Darmstadt in Germany. “It is the coming thing,” adds acoustic engineer Allan Pierce of Boston University in Massachusetts.

    Most silicon microphones still rely on vibrating membranes to capture sound, but these membranes are micromachined from silicon and measure just 1 millimeter or so on a side and a micrometer thick. In the type of silicon mike that is closest to commercial production, known as a condenser microphone, the membrane is positioned next to a charged electrode. Together, the electrode and membrane form a capacitor, a structure that can store charge. Its capacitance, or ability to hold charge, depends on the distance between electrode and membrane. As the membrane vibrates in response to sound, the distance changes and so does the capacitance, creating an electrical signal in a circuit connected to the device.
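
    The transduction step is easy to see in a back-of-the-envelope calculation. The sketch below uses a parallel-plate approximation with illustrative dimensions, gap, and charge (assumptions, not figures from any of the Berlin presentations) to show how a sub-micrometer change in the electrode-membrane spacing shifts the capacitance and, at roughly constant charge, the voltage across the device.

    ```python
    # Minimal sketch (illustrative values only): parallel-plate estimate of how a
    # silicon condenser-microphone membrane turns motion into an electrical signal.

    EPS0 = 8.854e-12          # vacuum permittivity, F/m

    def capacitance(area_m2, gap_m):
        """Parallel-plate capacitance C = eps0 * A / d (air gap assumed)."""
        return EPS0 * area_m2 / gap_m

    area = 1e-3 * 1e-3        # ~1 mm x 1 mm membrane, as described in the text
    gap = 4e-6                # assumed electrode-membrane gap of 4 micrometers
    charge = 1e-12            # assumed fixed charge on the electrode, in coulombs

    c_rest = capacitance(area, gap)
    c_deflected = capacitance(area, gap - 0.1e-6)   # membrane moves 0.1 um closer

    # At (approximately) constant charge, the voltage V = Q/C tracks the gap,
    # so the vibration appears as a voltage swing in the attached circuit.
    v_rest, v_deflected = charge / c_rest, charge / c_deflected
    print(f"C: {c_rest*1e12:.2f} pF -> {c_deflected*1e12:.2f} pF")
    print(f"V: {v_rest*1e3:.1f} mV -> {v_deflected*1e3:.1f} mV")
    ```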

    In a variation on this theme, the field-effect microphone, the membrane is given an electric charge and positioned near a semiconductor channel that separates two contacts. The channel's ability to carry current varies in an electric field; as the membrane vibrates, it subjects the channel to a varying electric field, modulating the amount of current flowing through it.

    In early prototypes of condenser and field-effect microphones, the membrane was etched out of one chip and the other part of the device was built on another, and the two were pressed together. At the meeting, Sessler reported a new technique for creating the whole device on a single chip. “On the chip you deposit a so-called ‘sacrificial layer’ … and on top of that layer you deposit the membrane,” he says. Chemically etching away the sacrificial layer leaves a free-floating membrane anchored to the chip at its edges.

    Other presentations described microphones in which piezoelectric and piezoresistive materials are deposited on top of the silicon membrane. These materials generate a current or a change in resistance, respectively, in response to changes in pressure. The result is a varying electrical signal as the membrane flexes in response to sound waves.

    A few microphone designs presented at the meeting translate the vibration into an optical signal rather than an electronic one. The advantage of these designs, explains Sessler, is that optical signals don't interfere with each other via magnetic fields, so large numbers of optical mikes can be packed close together. The optical output can also travel long distances through optical fibers without degrading. “You don't have to preamplify directly at the microphone,” says Sessler.

    In one such device, developed by Sessler's group, the vibration of the membrane deforms an optical waveguide, altering its ability to transmit light. Two other designs pick up vibrations by bouncing a laser off a silicon membrane and recording variations in the reflected signal—a scaled-down version of a Cold War eavesdropping technique that picks up conversations that are taking place inside a room by playing a laser beam off a window. Pierce and his team at Boston have created small portable arrays of over 10,000 tiny microphones of this design connected to a small display device. The result is an acoustic imaging system, which can reconstruct the shape of objects by detecting differences in the arrival time of reflected sound pulses. The team is now developing an “artificial eye” for use underwater that would send out ultrasound pulses and detect reflected waves to distinguish objects as small as 1 millimeter.
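
    The ranging step behind such an imaging array can be sketched in a few lines. The example below is an illustration under stated assumptions (arrival times are invented, and the speed of sound in air is used); it is not the Boston group's system, and an underwater "eye" would use the speed of sound in water, roughly 1500 meters per second, instead.

    ```python
    # Minimal sketch (assumed numbers): converting the arrival times of reflected
    # sound pulses into distances, the basic step behind acoustic imaging with a
    # microphone array.

    SPEED_OF_SOUND = 343.0    # m/s in air at ~20 C

    def echo_distance(round_trip_time_s, c=SPEED_OF_SOUND):
        """Distance to a reflector from the pulse's round-trip travel time."""
        return c * round_trip_time_s / 2.0

    # Hypothetical arrival times at three microphones in the array (seconds).
    arrivals = [2.915e-3, 2.930e-3, 2.951e-3]
    for i, t in enumerate(arrivals):
        print(f"mic {i}: echo after {t*1e3:.3f} ms -> reflector at {echo_distance(t):.3f} m")

    # Differences among these per-microphone distances are what let the array
    # reconstruct the shape of the reflecting object.
    ```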

    One of the new designs even shuns the traditional membrane. Jörg Sennheiser of Sennheiser Electronic Corp. in Wedemark, Germany, presented a microphone that consists simply of two tiny wires placed close together and heated electrically. The small flows of air molecules generated by sound waves cool the wires. “The temperature difference in the two wires depends directly on the velocity of the air particles,” explains Hans-Elias de Bree, who developed the concept several years ago while still a student at Twente University in Enschede, the Netherlands. The microphone, which Sennheiser says could be realized in silicon, cannot respond to sound frequencies any higher than about 10 kilohertz, making it usable for telephones but not for high-fidelity recording. But it can stretch down to waves below 20 hertz, which are important in seismology. “A pressure microphone simply cannot do this,” says de Bree.

    Although no silicon microphones are yet produced commercially, researchers in the field are bullish about their prospects. In a few years, says Sessler, “nobody will use conventional microphones anymore, only silicon ones.”

    • + The Joint 137th Meeting of the Acoustical Society of America and the 2nd Convention of the European Acoustics Association Integrating the 25th German Acoustics DAGA Conference, Berlin, 14–19 March.

  7. FISHERY MANAGEMENT

    Plan Would Protect New England Coast

    1. Karin Jegalian*
    1. * Karin Jegalian is a science writer in Cambridge, Massachusetts.

    BOSTON—For centuries, the gravel and sand of Georges Bank and the great canyons, muddy basins, and shallow ledges of the Gulf of Maine have supported one of the world's most productive fishing regions. But big boulders have historically protected a 1050-square-kilometer region at the bank's northeastern tip from dredging boats in search of scallops and trawlers hunting down groundfish. However, those boulders are becoming less of a deterrent against improved and sturdier gear. So when geologist Page Valentine of the U.S. Geological Survey in Woods Hole, Massachusetts, stood before his colleagues last month and defended his proposal to safeguard this rare, undisturbed gravel bed, he knew that he was also standing at the crossroads of science and politics.

    Valentine's presentation was part of a 2-day workshop held at the New England Aquarium here to build support for Marine Protected Areas (MPAs), a controversial concept aimed at preserving biodiversity in coastal waters. The meeting, organized by Elliott Norse, founder of the Marine Conservation Biology Institute in Redmond, Washington, featured talks by 21 experts across a range of marine habitats and species and represented the marine community's biggest push for MPAs.

    The discussion generated a map nominating 29% of the ocean floor off the coast of New England and Canada's Maritime Provinces for protection, as well as 25% of pelagic (open-ocean) waters. The next step will come in the fall, when the scientists discuss the plan with government officials, commercial stakeholders, and environmental activists—meetings that are likely to be contentious. “The conservation groups will want to see if various species are covered. And various fishermen will be convinced that their livelihood is threatened,” says Mike Pentony, an analyst for the New England Fishery Management Council, who was an observer at last month's workshop. The areas could be established by the National Marine Fisheries Service or under existing U.S. and Canadian laws to protect endangered species and habitats.

    Existing MPAs in the United States cover only small regions in the Florida Keys and off the coast of Seattle, and there is no consensus among scientists on what they should protect or how. An MPA could merely be spared from oil drilling and sand mining, or it could restrict any activity with the potential to harm marine life, including whale watching and research. There are even protected areas in Georges Bank—about 14% of U.S. waters near New England are closed to groundfishing—but the lines have been drawn by regulators focused more on the welfare of economically important fish than on science. “Fisheries closures are for the simple purpose of rebuilding overused fish stocks,” says Peter Auster, an ecologist at the University of Connecticut, Avery Point, “and they will be repealed when the target populations rebound.” Norse and others say that the extensive research already done on the Gulf of Maine and Georges Bank makes them prime targets for MPAs.

    As a first step toward that goal, the scientists at the workshop chose 36 areas that warrant closer attention. Some are particularly rich in biodiversity; others support fish nurseries or contain rare or fragile species like barndoor skates and corals. Any discussion of exactly how to protect them was postponed until the fall, and although the researchers all said they wanted to remain above the political fray, they agreed to adjust many of the boxes to make them more acceptable to competing interests, such as fishers. For example, Ransom Myers, a fish biologist at Dalhousie University in Nova Scotia, had proposed a protected area for the barndoor skate that included areas in the gulf where the skate has not been observed for many years. Although he argued that the larger regions might restore the skate to levels not seen in decades, the committee decided that a more reasonable goal might be preservation of existing habitats.

    John Williamson, a former commercial fisher who serves on the New England Fishery Management Council, thinks that fisheries management “has embraced” habitat protection as a way to restore marine ecosystems crucial to many species, including commercially important ones. But he calls it “a delicate situation” and compares working with the fishing community to herding cats. In addition, he chides scientists for “talking in isolation.”

    To help break down those barriers, scientists hope to repeat the New England workshop in other regions, notably the Pacific Northwest. Ideally, workshop members say, scientific participation in future debates over ocean zoning also will help officials avoid the mistake made a century ago when beauty rather than ecological importance was the driving force behind the creation of national parks.

    Norse admits that the process is far from perfect. “We're working with the best people and the best data,” he says, “but there's still a lot that's arbitrary” about the recommendations. “It's art based on science.” Still, he says, “it's good that we're doing this now and not in 10 years.”

  8. PSYCHOPHARMACOLOGY

    Can the Placebo Be the Cure?

    1. Martin Enserink

    A promising new drug for depression failed to clear efficacy tests this year, illuminating a decades-old problem in psychopharmacology that deserves more study, researchers say

    Last winter, psychiatrists and drug company executives were eagerly anticipating the arrival of a new product to fight depression. A novel compound—a Merck invention known as MK-869—then in several clinical trials, seemed set to become a new millennium drug for millions of people who take antidepressant medication every day. Results published in Science (11 September 1998, pp. 1624, 1640) had shown that it worked well and caused almost no sexual dysfunction, a side effect of many other pills on the market. Merck assured financial analysts in December that MK-869 was likely to be a big moneymaker. But on 22 January, those hopes were dashed when Merck, in an abrupt reversal, disclosed that MK-869 would be shelved as an antidepressant, although it may find a limited market as a treatment for nausea during chemotherapy. What went wrong?

    Merck was struck by “the curse of the placebo effect,” some researchers concluded. A Merck press release explained that when the company analyzed data from a new clinical trial in January, it found that patients who had received a dummy pill had done unexpectedly well. They did almost as well, in fact, as those on MK-869, wiping out the rationale for the new drug. The news was a downer for Merck and Wall Street: The price of the company's stock dropped 5% on the day Merck broke the news. It rebounded within the week, however, in part because Merck is already testing a new antidepressant that could be more potent and “much better than MK-869,” according to Reynold Spector, executive vice president of Merck Research Laboratories in Rahway, New Jersey (see sidebar).

    The MK-869 reversal may have been a temporary setback for Merck, but it highlights a chronic problem for psychopharmacology—the placebo effect. It's a phenomenon that bedevils many trials of antidepressant drugs, spoiling some and driving up the cost of others, as clinicians are forced to recruit more patients to obtain statistically significant data. Drug developers regard it as an occupational hazard that masks the effects of potentially useful compounds. But there's more to it than that. Some psychiatrists and clinical psychologists are fascinated by the power of the placebo effect, viewing it not as a problem but as a source of insight into mental health. And a few—such as University of Connecticut, Storrs, psychologist Irving Kirsch—go further, challenging the scientific basis of much of the multibillion-dollar market for antidepressant drugs: They argue that many compounds, even those with good scientific pedigrees, may be little more than sophisticated placebos themselves.

    This is a minority view, but one that's getting new attention as researchers try to understand how promising drugs like MK-869 can fail. Even mainstream scientists agree that the subject has been neglected. William Carpenter, director of the Psychiatric Research Center at the University of Maryland, Baltimore, says the placebo effect has been “kind of a soft underbelly” that both academic and industry researchers “have been more comfortable leaving out of sight.”

    Miracle cures

    The placebo effect has complicated medical research ever since its miraculous powers were discovered in the 1950s. Administering a simple sugar pill or injecting water, for instance, can alleviate symptoms or even cure a disease—as long as patients believe they could be getting a real drug.

    To ensure that new drugs have “real” value, companies test them in a trial where the patients are randomly assigned to a group that gets the placebo or the drug. Because hopeful patients and doctors can unknowingly skew the results, most trials are double-blind: Neither party knows what the patient gets. Only afterward, when the blind is broken, does a comparison between the results in both groups show whether a new drug is a hit or a miss.

    For afflictions that have a strong psychological component, like pain, anxiety, and depression, the placebo response rates are often high, making it more difficult to prove a drug's efficacy. In trials of antidepressants, says Dennis Charney, director of the Yale Mental Health Clinical Research Center, it's not uncommon for 65% of the patients on the new drug to get better. But 35% of the patients in the placebo group also typically improve. Frequently, the differences between the two groups are so small as to be statistically insignificant. “That's probably the most common reason for depression studies to fail,” says Thomas Laughren, team leader for the psychiatric drug products group at the U.S. Food and Drug Administration (FDA).

    Researchers think several different factors play a role in helping some people get better on a dummy pill. Depressions often wax and wane, so improvements observed during a trial may be part of the disease's natural cycle. Simply enrolling in a trial helps some patients, says Charney, no matter what's in the capsules they take home: “You come in, you haven't gotten any help, and [now] you're seeing somebody who cares about you, who is asking about your life. That will improve symptoms.”

    Trial results may also be blurred because it's difficult to measure depression. When testing the value of, say, a cholesterol-lowering drug, says Spector, scientists can count the deaths in each group at the end: “You don't have to be a rocket scientist to do those trials.” But depression is usually measured using the Hamilton scale, which gives patients one to four points on items like mood, guilty feelings, suicidal thoughts, and insomnia. Generally, patients are recorded as “responders” to a drug if the “Ham” score drops by at least 50%. But many patients in the placebo group also fit that criterion.
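
    The responder criterion itself is simple to state explicitly. The sketch below applies the 50%-drop rule to a handful of invented baseline and endpoint Hamilton scores; the numbers are placeholders, not data from any trial, and are chosen only to show how a placebo group can contain responders.

    ```python
    # Minimal sketch (illustrative numbers only): applying the responder criterion
    # described in the text -- a patient "responds" if the Hamilton ("Ham") score
    # falls by at least 50% from baseline to the end of the trial.

    def is_responder(baseline, endpoint, threshold=0.5):
        """True if the Hamilton score dropped by at least `threshold` (50% by default)."""
        return (baseline - endpoint) / baseline >= threshold

    # Hypothetical (baseline, endpoint) Hamilton scores for a placebo group.
    placebo_scores = [(24, 10), (26, 20), (22, 14), (25, 18), (28, 13)]
    responders = sum(is_responder(b, e) for b, e in placebo_scores)
    print(f"placebo responder rate: {responders}/{len(placebo_scores)} = "
          f"{responders / len(placebo_scores):.0%}")
    ```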

    Poor patient selection may play a role, too. When many participants in an experiment aren't really suffering from the affliction under study or have mild symptoms, the results may be ambiguous. Laughren says his experience at the FDA supports that idea. When companies started testing drugs for obsessive-compulsive disorder back in the mid-1980s, he recalls, the placebo response rate was almost zero. “As time went on, you began to get a creep upward—up to a point where you could reasonably conclude that some trials failed because of high placebo response rates,” he says. One possible cause is that as more and more studies are done, competition for patients increases, and clinicians loosen criteria, admitting people who are more likely to respond to a placebo.

    Because a high placebo response rate can make a drug look less effective, the FDA recommends that drug companies add a third “arm” to every trial—a group of patients that gets a drug whose effectiveness has been demonstrated in previous trials. If the trial doesn't prove the new drug's effectiveness, but also fails to find a difference between the placebo and the old drug, “at least you can chalk it up to a failed trial, rather than concluding that your drug doesn't work,” says Laughren. In regulatory review, a failed trial—unlike a negative outcome—isn't scored against a drug. Laughren adds, “It's sort of an insurance policy to protect the company.” Many companies heed this advice; in its MK-869 trial, for example, Merck included an established drug from the Prozac generation—the company declines to say which one—and found that it, too, failed to beat the placebo.

    What is “real”?

    Researchers agree that a clever trial design may reduce, but will never eliminate, the placebo response. And the sheer size of the phenomenon, Kirsch argues, suggests that it is an integral part of the effectiveness of almost all antidepressant drugs. To test this idea, Kirsch and his colleague, psychologist Guy Sapirstein from Westwood Lodge Hospital in Needham, Massachusetts, carried out a meta-analysis of 19 antidepressant drug trials last year. They found the usual placebo response but expressed it in a different way—not as an independent factor but as a percentage of the “real” effect of the test drugs. Their conclusion: Antidepressants in these trials probably relied on the placebo effect for 75% of their effectiveness. If an antidepressant caused a 12-point drop on the Hamilton scale, for example, the placebo effect might be responsible for nine of those 12 points. They published these findings in June in Prevention and Treatment, a new, peer-reviewed online journal of the American Psychological Association (journals.apa.org/prevention).
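
    The arithmetic behind that 75% figure is straightforward. The short sketch below reproduces the 12-point example from the text; the placebo-group drop is an assumption chosen to match Kirsch and Sapirstein's reported average, not data from a specific trial.

    ```python
    # Worked sketch of the calculation Kirsch and Sapirstein describe: express the
    # mean improvement in the placebo arm as a fraction of the mean improvement in
    # the drug arm. The Hamilton-scale drops below are illustrative.

    drug_drop = 12.0        # mean Hamilton-score drop in the drug group (example from text)
    placebo_drop = 9.0      # assumed mean drop in the placebo group

    placebo_share = placebo_drop / drug_drop     # fraction of the "drug" response
    drug_specific = drug_drop - placebo_drop     # what the drug adds beyond placebo

    print(f"placebo accounts for {placebo_share:.0%} of the drug response")  # 75%
    print(f"drug-specific effect: {drug_specific:.0f} Hamilton points")      # 3 points
    ```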

    The study triggered a series of angry commentaries, all published on the same Web site. Many criticized the way the authors had drawn numbers from a series of different trials—an “unacceptable methodology,” fumed Columbia University psychiatrist Donald Klein. Other researchers objected that even if the authors were right, a small difference on the Hamilton score can represent a big difference in a patient's condition.

    What really stirred things up, however, was an even more provocative contention: Kirsch and Sapirstein argued that even the 25% “real” drug effect might be little more than a disguised placebo effect. They noted that patients often see through the carefully applied double-blind mask. Because real antidepressants have noticeable side effects—like dry mouth, nausea, dizziness, or sexual dysfunction—trial participants may figure out whether they have swallowed a drug or a placebo. Indeed, some studies have shown that up to 80% of patients could guess correctly to which group they were assigned. Such unblinding may cause a greater improvement in the drug group, not because of the drug's psychoactive effect but because both the patient and the doctor expect the drug to work. The patients in the control group, on the other hand, suspecting they're not getting that potential new cure, may do less well. Even small differences between the drug and placebo group may exaggerate the drug's power, says Kirsch.

    [Figure: Controversial study. Kirsch's meta-analysis of 19 antidepressant trials, each represented by a dot, revealed a pattern: the placebo effect on average accounted for 75% of the effect of real drugs. Source: APA, I. Kirsch and G. Sapirstein.]

    Some of Kirsch and Sapirstein's colleagues have supported their findings. For example, Roger Greenberg, head of the Division of Clinical Psychology at the State University of New York Health Science Center in Syracuse, reached similar conclusions in several studies and in a 1997 book that reviewed the evidence, From Placebo to Panacea. “If people get physical sensations in the context that they may be on a real drug, they tend to be responsive,” says Greenberg. He points to a few studies in which tricyclics (the pre-Prozac generation of antidepressants) were tested against a compound like atropine, which mimics these drugs' side effects but is not psychoactive. In these, Greenberg says, the differences between drug and placebo were small. And in a meta-analysis of Prozac trials published in 1994, Greenberg found that the severity of side effects correlated with the drug's efficacy. “This was ironic, because [Prozac] was marketed as having few side effects,” he says. “We found that the more side effects, the better it did.”

    The idea that antidepressants may be just a tiny bit better than a placebo doesn't sit well with pharmaceutical companies. Eli Lilly of Indianapolis, Indiana, the manufacturer of Prozac, declines to discuss the issue because, a spokesperson says, “Prozac's efficacy has been well established.” Spector concedes that the blind in some trials may not be perfect but says the effect on the outcome is “exceedingly speculative.” He thinks “the antidepressants on the market actually do work, and it cannot be explained by placebo effect or anything else.” He adds, “I would give them to my mother.”

    Many academic researchers feel the same way. Only a moderate percentage of all candidate drugs make it through FDA's screening process, says Klein, so “if active placebos did the job, they would all get through.” To him, Kirsch's idea “doesn't make much sense.” Steven Hyman, director of the National Institute of Mental Health (NIMH) in Bethesda, Maryland, calls Kirsch's interpretation “rather radical. … As a doctor, it would be very miraculous if all the people I've seen getting better were getting better only by virtue of placebo,” he says. Kirsch says he isn't surprised by such reactions. “Antidepressant drugs have become the mainstay of the psychiatric profession,” he notes.

    To bolster his case, Kirsch is now working on a new study together with Thomas Moore of the Center for Health Policy Research at George Washington University Medical Center in Washington, D.C. Using the Freedom of Information Act, the duo obtained data from 30 trials submitted to the FDA for the approval of five modern antidepressant drugs: Prozac, Zoloft, Paxil, Effexor, and Serzone. “I'm just beginning to write this up,” says Kirsch, “But again, we get about 78% of the drug effect duplicated by placebo.” He hopes the uniform methodology imposed by FDA guidelines will preempt the criticism this time.

    But to understand exactly how the placebo response influences results, Kirsch says he would like to try a new design. In this setup, which he has used in a study of caffeine, half the patients are told they'll be on placebo, the other half, that they will get the active drug. In reality, however, each half is subdivided into a placebo and a test group. The design makes it possible to find out what a drug does when people think they're not getting it. Kirsch admits that it involves deceit, but he thinks it is ethically acceptable if the research is important and the patients are debriefed afterward. Kirsch says he wants to approach Merck to see if the company is interested in running such a trial with MK-869. But Spector dismisses the idea out of hand: “That's a no-no,” he says. “You can't lie to patients, because then they can't give informed consent.”
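
    The design Kirsch describes, sometimes called a balanced placebo design, crosses what patients are told with what they actually receive. The sketch below uses invented improvement scores purely to show how the four cells would let researchers separate an expectancy effect from a pharmacological one; it is not an analysis of any real trial.

    ```python
    # Minimal sketch of a 2 x 2 "balanced placebo" layout: what patients are *told*
    # is crossed with what they actually *get*. The improvement scores are invented
    # placeholders used only to illustrate the bookkeeping.

    # (told, given) -> assumed mean improvement on some symptom scale
    cells = {
        ("told drug",    "given drug"):    10.0,
        ("told drug",    "given placebo"):  8.0,
        ("told placebo", "given drug"):     5.0,
        ("told placebo", "given placebo"):  3.0,
    }

    # Expectancy effect: difference made by what patients were told, averaged over
    # what they actually received.
    expectancy = ((cells[("told drug", "given drug")] + cells[("told drug", "given placebo")])
                  - (cells[("told placebo", "given drug")] + cells[("told placebo", "given placebo")])) / 2

    # Pharmacological effect: difference made by the drug itself, averaged over what
    # patients were told -- including the "told placebo" arms, which a standard
    # two-arm trial cannot observe.
    pharmacological = ((cells[("told drug", "given drug")] + cells[("told placebo", "given drug")])
                       - (cells[("told drug", "given placebo")] + cells[("told placebo", "given placebo")])) / 2

    print(f"expectancy effect: {expectancy:.1f} points")            # 5.0
    print(f"pharmacological effect: {pharmacological:.1f} points")  # 2.0
    ```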

    Greenberg agrees that the standard trial should not be abandoned, but says one or two refinements might be worth it if they help uncover the importance of the placebo effect. For instance, subjects could be asked what group they think they were in, and drugs could more often be compared with “active placebos” that mimic side effects.

    Who will support such experiments? “You can be assured that the pharmaceutical industry is not about to finance studies that minimize their claims,” says psychologist and neuroscientist Elliot Valenstein of the University of Michigan, Ann Arbor. Nor have many academics been very interested in exploring the issue in depth, says Carpenter. “It would help us have a better appreciation of the limitations of our treatments,” he says, but “the truth is, as biomedical scientists, we see all this information, we know it matters, but we don't really grapple with it.”

    But that may be about to change. Hyman says he would like to see more research into the role of the placebo. “If we got a [grant] application to study placebo in depression and it was good science, I would be really interested,” he says. Hyman says he would also like to cooperate on such studies with a new center for alternative and complementary medicine, which the National Institutes of Health is currently setting up at the request of Congress. “The more we understand about the role of placebo … the better trials we can design without fooling ourselves.”

  9. PSYCHOPHARMACOLOGY

    Drug Therapies for Depression: From MAO Inhibitors to Substance P

    1. Martin Enserink

    Antidepressants have evolved through several generations since the 1950s, each a huge improvement over its predecessor—or so advocates have claimed. But a government-sponsored study published last month confirmed what other analyses had shown before: The fashionable antidepressants of the 1990s are no more effective than those of previous generations. Even the heavy-duty drugs of the Eisenhower era appear to be on a par with those used today. The newer drugs do have a plus, however: fewer side effects.

    The study, a meta-analysis commissioned by the Agency for Health Care Policy and Research (part of the Department of Health and Human Services) and performed by the Evidence Based Practice Center in San Antonio, Texas, covered 315 studies carried out since 1980. It focused primarily on the hottest pills that have hit the market since 1987, the “selective serotonin reuptake inhibitors” (SSRIs), a group that includes such brands as Prozac, Paxil, and Zoloft. The study found that on average, about 50% of patients in SSRI treatment groups improved, compared to 32% in placebo groups. But in the more than 200 trials that compared new drugs with older ones, the two classes proved equally efficacious. Because the newer drugs appear to have less severe side effects, however, patients may be able to stay on them longer.

    The failure to find evidence of progress is disappointing, scientists admit. And one of the biggest disappointments is that researchers still don't understand what causes—or relieves—depression. Most antidepressant drugs are based on the assumption that depression results from a shortage of serotonin or norepinephrine in the brain. Both are neurotransmitters, chemical messengers that cross the synapse, the cleft between two nerve cells. The MAO inhibitors, the first generation of antidepressants, discovered during the early 1950s, block monoamine oxidase, an enzyme that breaks down serotonin and norepinephrine. This allows the neurotransmitters to linger in the synapse, increasing their effect. Another type of drug discovered in the late 1950s, the tricyclics, prevents the nerve cells that release the neurotransmitters from reabsorbing them shortly afterward. Blocking this “reuptake” also prolongs their effect. Because studies pointed to serotonin shortage as the main culprit in depression, industry developed the selective serotonin reuptake inhibitors, which now dominate the market. But even the SSRIs have side effects.

    Psychiatrists are “desperately waiting for effective antidepressants that use other mechanisms,” says Steven Hyman, director of the National Institute of Mental Health. In the past few years they have zeroed in on a mechanism that drives the “fight-or-flight” reaction—the hypothalamic-pituitary-adrenal (HPA) pathway. Studies have suggested that in depressed people, this system churns out cortisol, increasing alertness but depressing sexual drive and appetite. Some researchers think this may lead to depression, and several companies are studying drugs that block activation of the HPA pathway.

    Another hot target is substance P, a short neuropeptide that is abundant in brain regions controlling emotion and that may be involved in depression. Substances like Merck's MK-869 block the natural receptor for substance P and prevent its action. Other companies are testing similar approaches to regulating emotions. But this strategy has yet to deliver: After disappointing trial results, Merck sidelined MK-869 this year as an antidepressant (see main text). Hoping for good news soon, Hyman says: “All of us are holding our breath.”

  10. ATMOSPHERE

    A New Force in High-Latitude Climate

    1. Richard A. Kerr

    The Arctic Oscillation is vying to be seen as the prime mover of high-latitude climate shifts, and may even be an agent of greenhouse-induced warming

    El Niño, the periodic warming of the tropical Pacific that roils rainfall patterns from Indonesia to Brazil and the Horn of Africa, is the undisputed king of climate shifts. But although El Niño's reach extends to latitudes as high as North America, some researchers are concluding that an oscillation of another sort holds sway over climate in high-latitude parts of the globe.

    According to new analyses by a pair of meteorologists, an erratic atmospheric seesaw, which alternately raises pressures over the pole and in a ring passing over southern Alaska and central Europe, is the master switch for climate over high northern latitudes. When this Arctic Oscillation, or AO, is in its so-called positive phase, pressure drops over the polar cap and rises in high latitudes around 55 degrees, which in turn strengthens westerly winds there. Ocean storms steer more northerly, wetting Scandinavia and Alaska, for example, and drying Spain and California. Ocean warmth blows into Eurasia, thawing Moscow. After weeks or years, the AO flips to the opposite phase, reversing these climate extremes—wetting Spain and chilling Moscow.

    Meteorologists David Thompson and John M. Wallace of the University of Washington, Seattle, think the AO is a natural atmospheric response that can change minor perturbations into major climate shifts. Researchers are still analyzing many possible triggers for the AO, but whatever its cause, understanding it is “going to help make sense out of” high-latitude climate variability, says Wallace. The pressure oscillation has largely been stuck in the same phase for decades, for example, and may be responsible for an ominous warming trend in the high-latitude Northern Hemisphere that has been viewed as a sign of human-induced warming.

    Thompson and Wallace add that the AO also seems to encompass a smaller scale oscillation already recognized as the reigning climate maker in the North Atlantic region: the North Atlantic Oscillation or NAO, a seesaw of atmospheric pressure between Iceland and Lisbon (Science, 7 February 1997, p. 754) that skews climate near the North Atlantic but not the North Pacific. “The NAO is just a regional manifestation of the AO,” says Thompson. Wallace calls recognition of the AO “a paradigm shift in the way we think.”

    Some other atmospheric scientists agree that Thompson and Wallace are onto something big. “This AO is a significant mode of variability,” says meteorologist David Karoly of Monash University in Clayton, Australia. But others point to uncertainties in the data and suggest that Thompson and Wallace are just seeing the effects of the NAO, whose climatic impact is largely limited to the North Atlantic and northern Eurasia. “I think the AO and the NAO are largely the same thing,” says meteorologist James Hurrell of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado.

    The AO, if it's real, has a counterpart in the Southern Hemisphere, which is why Thompson and Wallace went looking for it in the first place. In the Southern Hemisphere, westerly winds in the troposphere—the lowermost 10 kilometers or so of atmosphere—form a complete ring around a latitude of about 55°, as do the overlying stratospheric winds. Atmospheric pressure in the polar region and in the encircling ring rises and falls in a seesaw fashion, and this pressure difference in turn causes the winds to oscillate in strength. A stratospheric vortex swirls all the way around the North Pole too, but there was no such obvious ring in the troposphere, only the NAO's north-south pressure seesaw over the Atlantic.

    Yet computer models of the atmosphere implied that the complete oscillating ring should be there. “You strip the models down to the barest essentials and the [annular winds] are still there,” says Wallace. “They're very, very fundamental,” even though it's hard to tell what is driving them.

    But if these oscillations are so fundamental, where were they in the Northern Hemisphere troposphere? Thompson and Wallace sought them in the latest, most complete compilation of data from weather stations and balloons, covering 1900 to 1997. From the surface to the top of the troposphere, they found the seesaw pattern of the NAO. And for the first time, they also found another part of the ring: The troposphere over the northern North Pacific pulsed in time with that over the North Atlantic, so that the climate of both places varied in synch. Parts of the ring over land masses are still missing, but Thompson and Wallace put that down to obstacles like the Rocky Mountains and the Tibetan Plateau and to temperature contrasts between land and sea that can disrupt weather patterns. In a paper published in Geophysical Research Letters last year, they dubbed the pattern the Arctic Oscillation.

    According to Thompson and Wallace's analysis, the AO explains northern climate better than the NAO alone. When the high-latitude tropospheric winds of the AO are strong, they blow the ocean's warmth onto the continents, moderating winter chill in northern Canada and Eurasia; weaker winds let the continents cool. Over 30 winters, Thompson and Wallace found that the AO's varying winds accounted for 42% of Eurasian wintertime temperature variation, while the NAO's winds explained only 32%. And in a paper in press in the Journal of Climate, Thompson and Wallace detail the AO's strong resemblance to the long-studied southern annular oscillation, which they now call the Antarctic Oscillation, or AAO. In both hemispheres, they found, when the pressure over the poles falls, high-latitude westerlies strengthen, high latitudes are unusually warm, the polar caps are cold aloft, and the stratospheric vortex intensifies; as a side effect, the stratospheric cold helps destroy ozone in the springtime, deepening polar ozone losses.
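
    The 42% and 32% figures are variance-explained statistics: regress a winter temperature record on an oscillation index and ask what fraction of the record's year-to-year variance the fit captures. A minimal sketch of that calculation follows, using made-up 30-winter series rather than Thompson and Wallace's data; the coefficients and random seed are placeholders.

        import numpy as np

        def variance_explained(index, record):
            """Fraction of the record's variance captured by a linear fit to the index."""
            slope, intercept = np.polyfit(index, record, 1)
            residual = record - (slope * index + intercept)
            return 1.0 - residual.var() / record.var()

        # Hypothetical 30-winter series: an oscillation index and temperature anomalies.
        rng = np.random.default_rng(0)
        ao_index = rng.standard_normal(30)
        winter_temps = 0.8 * ao_index + 0.9 * rng.standard_normal(30)

        print(f"variance explained: {variance_explained(ao_index, winter_temps):.0%}")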

    Reaction to the AO's debut has been mixed. “I think it's real,” says climate modeler Ngar-Cheung Lau of the Geophysical Fluid Dynamics Laboratory at Princeton University. Thompson and Wallace “are fairly convincing that there's something there,” agrees meteorologist Brian Hoskins of the University of Reading in England. Others are more cautious. Meteorologist Clara Deser of NCAR isn't sure that the AO is different from the NAO. In particular, she is not sure that the Pacific and Atlantic are connected tightly enough to warrant lumping their variability together.

    If the AO does rule high-latitude climate, it may be behind the dramatic Northern Hemisphere warming trend over recent decades. For the past 30 years, the northern oscillation has drifted into a “positive phase,” so that while pressure continues to rise and fall, it is almost always below normal over the polar region, subpolar westerlies are stronger than normal, and high-latitude land is warmer than normal. The wind shifts also tend to open more breaks in Arctic Ocean ice. If the trend continues, says Wallace, in a few decades “we may be on the verge of an ice-free Arctic Ocean in the summer.” In a second paper in press at the Journal of Climate, Thompson and Wallace calculate the amount of warming expected from these changes in wind patterns. They conclude that the AO trend can account for about half of the winter warming observed over Eurasia during the past 30 years and about 30% of the winter warming seen over the whole Northern Hemisphere.

    It was this very warming, particularly over northern land areas, that helped convince an international panel of scientists, the Intergovernmental Panel on Climate Change (IPCC), that the first glimmer of greenhouse warming had been spotted (Science, 8 December 1995, p. 1565). The link to the AO doesn't necessarily negate the greenhouse connection; greenhouse advocates could argue that the intensifying greenhouse is tipping the AO, notes Karoly, who is a co-leader of the next IPCC report, due in late 2000. A computer modeling study by John Fyfe and his colleagues at the Canadian Center for Climate Modeling and Analysis in Victoria supports that possibility. In their model, described in an upcoming issue of Geophysical Research Letters, increasing greenhouse gases triggered a positive trend in the AO.

    Something's up.

    Both the AO and NAO indices have been on the rise, perhaps fueled by global change.

    CREDIT: TODD MITCHELL/UNIVERSITY OF WASHINGTON

    But, as Karoly points out, a greenhouse skeptic could say that much of the observed warming is simply part of a natural AO cycle that will soon reverse. Deciding who's right will require either a reversal of the AO trend—something a natural cycle, but not an anthropogenic push, would produce—or a better understanding of what drives the AO.

    Researchers are beginning to suspect that many of the possible triggers originate in the stratosphere. That's a startling idea, because meteorologists have traditionally assumed that the connection goes the other way, from the massive troposphere to the wispy stratosphere. A stratospheric influence on the troposphere seemed too much like “the tail wagging the dog.” Only powerful forces in the lower atmosphere or at the surface, such as El Niño's ocean warming or the ocean changes implicated in the NAO, seemed capable of roiling the troposphere. Still, some researchers had noticed that the northern oscillations are several times larger in winter, a time when stratospheric circulation favors a link between the stratosphere and troposphere. That suggests to them that this connection strengthens the oscillation.

    Completing the ring.

    The NAO (top), a seesaw of atmospheric pressure between blue and yellow areas, may be part of the larger AO (above).

    CREDIT: TODD MITCHELL/UNIVERSITY OF WASHINGTON

    And a study of 40 years of weather data from throughout the atmosphere, submitted to the Journal of Geophysical Research by meteorologists Mark Baldwin and Timothy Dunkerton of Northwest Research Associates in Bellevue, Washington, shows that short-term switches in the AO tend to propagate from the upper stratosphere downward to the surface. The change beefs up or weakens the stratospheric vortex, spreads to the troposphere, and shifts storm tracks across the North Atlantic. With the trip from the upper stratosphere to the surface taking about 3 weeks on average, forecasters might conceivably have a chance to predict shifts in storminess, precipitation, and temperature across Europe, says Dunkerton.
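
    One way to see such downward propagation in index form is a lagged correlation: correlate a stratospheric index with the surface index delayed by a range of lags and look for where the correlation peaks. The toy sketch below is not Baldwin and Dunkerton's analysis; it uses synthetic daily series with a 21-day lag built in by construction, purely to illustrate the diagnostic.

        import numpy as np

        def lag_correlation(strat, surface, lag):
            """Correlation between the stratospheric series and the surface series `lag` days later."""
            if lag == 0:
                return np.corrcoef(strat, surface)[0, 1]
            return np.corrcoef(strat[:-lag], surface[lag:])[0, 1]

        # Synthetic daily indices: the surface series echoes the stratospheric one 21 days later.
        rng = np.random.default_rng(1)
        strat = rng.standard_normal(2000)
        surface = np.roll(strat, 21) + 0.5 * rng.standard_normal(2000)

        best = max(range(61), key=lambda lag: lag_correlation(strat, surface, lag))
        print(f"correlation peaks when the surface lags the stratosphere by {best} days")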

    But no one knows which factors in the stratosphere might be calling the tune long-term, and there are a number of possibilities. Volcanoes are one: In recent years, both observations and modeling have suggested that stratospheric debris from the eruptions of El Chichón and Mount Pinatubo triggered warm winters in the high latitudes (Science, 28 May 1993, p. 1232), just as the AO can. And in this issue of Science (see News story and p. 305), researchers report that in a computer model, subtle variations in the sun's brightness accompanying the 11-year sunspot cycle trigger swings in stratospheric and even tropospheric circulation; during the solar maximum, the resulting pattern resembles a positive AO, warming Northern Hemisphere high latitudes in the winter. Loss of stratospheric ozone can also drive a long-term trend in the AO, according to a modeling paper submitted to the Quarterly Journal of the Royal Meteorological Society by E. M. Volodin and V. Ya. Galin of the Institute of Numerical Mathematics in Moscow. And greenhouse gases also may be behind the trend, as they too affect the stratosphere. Stratospheric greenhouse gases radiate more heat to space, cooling the polar stratosphere, which in turn strengthens the winds of the stratospheric vortex and eventually warms high latitudes.

    Of course, other, slightly different models don't produce an AO trend with such forcings. Because not everyone believes that the AO actually exists, perhaps it's not surprising that reproducing the trend in the oscillation “is still somewhat model-dependent,” as Lau puts it. It will take models that consistently produce an AO trend through an understandable mechanism—and confirmation of the AO's existence by independent analyses of weather data—before this oscillation can be crowned as the climate maker of the north.

  11. AMERICAN CHEMICAL SOCIETY MEETING

    Chemists Mix It Up in California

    1. Robert F. Service

    ANAHEIM, CALIFORNIA—Nearly 15,000 scientists gathered here from 21 to 25 March for the American Chemical Society's (ACS's) semiannual research powwow. The week's highlights included an effort to combine tissue engineering with gene therapy, new ways to synthesize polymers, and the welcome news that caffeine doesn't affect the brain like more potent stimulants.

    New Routes to Polymer Synthesis

    In the chemical industry, fine-tuning a reaction can make a giant difference in the bottom line. Because the quantities of product are so large, a more efficient synthesis can mean millions of dollars in cost savings. And a better product can open up whole new markets. At the meeting, chemists described schemes to improve the synthesis of two polymers—one a type of polycarbonate used in making ceramics and the other a plastic used in semiconductor manufacture.

    In an effort to use cheap and abundant CO2 as a feedstock for polymer synthesis, Geoffrey Coates and his colleagues at Cornell University developed a novel zinc-based catalyst that makes a specialized family of polycarbonates. Used as binders in making ceramics, these polycarbonates are chainlike molecules formed by condensing two chemicals, carbon dioxide and an oxygen-containing hydrocarbon known as an epoxide. This is not the first time a zinc catalyst has been devised for coupling CO2 and epoxides. But previous versions have been notoriously slow on the job, taking an hour for a gram of catalyst to synthesize a gram of plastic, and thus have found only limited use.

    Swift action.

    A novel zinc-based catalyst speeds production of polycarbonates.

    SOURCE: G. COATES/CORNELL UNIVERSITY

    Still, Coates says, the earlier catalysts provided lessons that helped in the current work. Those catalysts consisted of zinc atoms, each attached via oxygen atoms to bulky hydrocarbon groups, and analysis suggested that they failed because the oxygen links broke and the hydrocarbons fell off during the reaction. That allowed the zinc atoms to clump together, shutting down the catalyst. To combat this problem, the Cornell researchers came up with a new bulky hydrocarbon arm that is securely bound to the zinc with a pair of nitrogen atoms.

    The new catalyst also carries another group—either an acetate or methoxide—bound to the zinc. By controlling the chemistry at the reactive end of the polymer, that component helps the CO2 and epoxide molecules link up in an alternating sequence. It also helps the polymers all grow to roughly the same length. In contrast, those produced by typical catalysts can vary in sequence and length.

    But more important, the new catalyst works over 50 times faster than its predecessors, and at a temperature of 20 to 50 degrees Celsius and a CO2 pressure of 100 pounds per square inch (psi)—mild conditions compared to those required by conventional catalysts. As a result, the new catalyst may help keep down the costs of polycarbonate synthesis and be environmentally friendly to boot. And because the catalyst can polymerize a wide variety of epoxides, it should help polymer chemists create a range of new polycarbonates. Indeed, Robert Waymouth, a polymer catalyst expert at Stanford University in California, describes the achievement as “great science.”

    The other polymer-making scheme, by researchers at the IBM Almaden Research Center in San Jose, California, and the University of California, Santa Cruz (UCSC), could open a new application for the polyacrylate polymers now found in everything from paints to rubber. In particular, they could come in handy in crafting computer chip circuitry.

    Chipmakers currently use patterned films of a different polymer, polystyrene, as “photoresists” to control which regions of a silicon wafer are eaten away by chemicals. Patterning these polymers requires that they be semitransparent, so that a polymer layer can absorb light throughout its full depth. Yet in an effort to create ever finer features on the chips, chipmakers want to reduce the wavelength of the ultraviolet light used to create the patterns. Polystyrene absorbs such short wavelengths too strongly, preventing the light from penetrating the full depth of the layer. So the IBM team, led by Craig Hawker, was looking for an alternative.

    The IBM team turned to polymer building blocks known as acrylates, which are transparent to the shorter wavelength light. But there was a problem: To get the uniform etching needed for chip patterning, all the chains in a polymer have to be roughly the same length. For polystyrenes, a capping group called TEMPO (for tetramethylpiperidine-N-oxyl) ensures this consistency. During synthesis, TEMPO essentially jumps on and off the growing polymer chain, ensuring that neighboring chains don't react with each other and suddenly double their length. The capping group thus helps all the polymer chains grow to be the same length. But TEMPO doesn't work for polyacrylates.

    For completely unrelated studies, however, Rebecca Braslau and Vladimir Chaplinski at UCSC had synthesized a library of TEMPO-like compounds called nitroxides. And when Hawker's team tested the nitroxides, they found one that controls the polymerization, producing uniform chain lengths.

    The new work is an “important development,” says Xerox polymer researcher Peter Odell—only in part because of the polyacrylates' potential as new photoresists, he adds. Hawker also found that depending on the sequence in which different monomers are added to the reaction, the nitroxide can control whether the polymer develops as a linear chain or branches like a tree. It also works with other polymer building blocks, such as acrylamides. That could open new uses for these materials as well.

    Merging Tissue and Gene Engineering

    Engineering new tissues to grow in the human body is a hot area of research. So is developing ways to introduce healthy genes into cells with missing or defective copies. Now these two hot areas may be merging. At the meeting, University of Michigan, Ann Arbor, chemical engineer David Mooney reported creating polymer scaffolds that both seed the growth of cells and provide them with new genes. Preliminary results show that the scaffolds are already as effective at delivering new genes to cells as viruses, the most successful gene transfer vehicles to date.

    “I think it's a pioneering contribution” that other groups will definitely pick up on, says Anthony Mikos, a tissue engineering specialist at Rice University in Houston, Texas. The approach, say Mikos and others, could prove useful for treating wounds, heart disease, cancer, and other conditions that might benefit from genes added locally to, say, increase blood flow or block cell growth.

    Groups including the Michigan team had already shown that, as Mooney puts it, “these scaffolds can be used for a lot more than just their mechanical properties.” Although tissue-engineering scaffolds are primarily designed as just a friendly surface onto which cells can bind, grow, and spread, implantable capsules made from the same polymers can release therapeutic proteins, such as growth factors that induce the growth of new blood vessels. But the capsules can exhaust their protein supplies quickly. So Mooney and his colleagues wanted to see if they could use the polymers to deliver the genes for making the proteins instead.

    First, Mooney and fellow Michigan researchers Lonnie Shea and Jeff Bonadio had to find a way to trap a lot of DNA inside a polymer scaffold and then have it be released to cells over time. They started with a rigid biodegradable tissue engineering polymer, known as polylactide coglycolide, or PLG, to which they added a mixture of salt and DNA. To encourage the polymer to take up the mix, the researchers also exposed it to pressurized carbon dioxide, which dissolves in the polymer, softening it and creating a network of gas bubbles. The DNA and salt could then diffuse into the polymer, where they became trapped in the bubbles. Finally, the Michigan researchers immersed the polymer in water, which washed out the salt, leaving the DNA behind.

    The researchers next shaped the DNA-laden polymer into 1.5-centimeter-wide disks, which they implanted under the skin of rats. Mooney and his colleagues hoped that as the DNA diffused out of the polymer, the cells growing on the scaffold would take it up.

    In their first trial, the DNA stored in the polymer coded for a test protein called β-galactosidase. After waiting 4 weeks, Mooney and his colleagues retrieved tissue from around the disks and stained the cells to see which were making the protein. They found that up to 1000 times more cells expressed the introduced gene than in controls that just had the naked DNA injected into similar wounds.

    The researchers then repeated the study with a gene for a protein called platelet-derived growth factor, which stimulates the growth of new blood vessels. After 4 weeks, they tested the tissue next to the implant and found increased vascularization compared to control animals that had received the tissue scaffold with the β-galactosidase gene.

    Mooney says that a key advantage of the gene-bearing polymers is that, unlike viral vectors, they are not likely to trigger an immune response that can limit the effectiveness of the gene transfer. He adds that by altering the composition of their scaffold polymer, “we can control the time release [of DNA] from a couple of days to over 1 month.” That may help tissue engineers further control how much DNA finds its way into neighboring cells.

    This localized approach to gene therapy isn't likely to be useful for treating illnesses such as muscular dystrophy, where genes must be delivered to muscle cells throughout the body, says Robert Langer, a tissue engineering specialist at the Massachusetts Institute of Technology in Cambridge. But for treating local conditions, gene-therapy scaffolds may soon find themselves to be a hot commodity.

    Coffee Cravers Are Not Addicts

    Sure, we may have a few jitters and facial tics. But fellow caffeine drinkers: Rest easy. Animal studies presented at the ACS meeting show that the juice in java is not an addictive drug.

    Found in coffee, tea, and chocolate, not to mention many soft drinks, caffeine is the most widely used psychoactive drug in the world. Few researchers contend that the mild stimulant is as dangerous as potent and illegal stimulants such as cocaine, but some behavioral scientists have argued that because users seek out caffeine repeatedly, it should be considered a drug of dependence. Other experts counter that caffeine use doesn't bear other hallmarks of dependence, such as increased usage over time and the inability of users to give it up.

    To try to settle this dispute, neuroscientist Astrid Nehlig and her colleagues at the French National Institute for Health and Medical Research (INSERM) in Strasbourg decided to see if caffeine triggers the same kind of brain effects as cocaine and other addictive stimulants. Those drugs are thought to foster dependence partly by increasing activity in certain brain regions, such as the nucleus accumbens, that are involved in the brain's reward system.

    For their experiments, the INSERM team injected rats with a radioactive form of glucose, followed by varying caffeine doses, equivalent to the amounts consumed by people drinking one to 10 cups of coffee. Next they killed the animals and determined how much radioactive glucose the nucleus accumbens and several other brain regions contained. Higher levels indicate higher metabolic rates and therefore higher activities.

    By that measure, Nehlig's team found, the brain activity of animals given caffeine rose in brain regions involved in locomotion, mood, and wakefulness. But the researchers found virtually no added activity in the nucleus accumbens except at an extremely high caffeine dose—the equivalent, for a person, of drinking seven cups of coffee in one sitting. Those findings were further buttressed by the results of a study of one human subject—an epilepsy patient who was undergoing brain scans before surgery.

    With the human subject, Nehlig and her colleagues adopted a different strategy for mapping brain activity: a relative of the common PET brain-imaging technique called SPECT, which tracks blood flow in different parts of the brain. The images showed no change in nucleus accumbens activity after the person consumed caffeine equivalent to that found in three cups of coffee. “I do not think caffeine shows any evidence of dependence,” concludes Nehlig.

    Dan Steffen, a caffeine expert at Kraft Foods—which manufactures some caffeine-containing products—says that if the new work holds up in humans, it should end the debate over whether people become addicted to caffeine. “It's pretty powerful in that it indicates the reward system is not being activated at levels where caffeine is normally consumed,” says Steffen. Instead, Nehlig proposes, people become regular caffeine users because of the positive reinforcement of feeling more alert and able to concentrate. With that said, would you please pass the cream?

  12. AVALANCHE RESEARCH

    Computer Models Aim to Keep Ahead of Snowslides

    1. Robert Koenig

    This year's severe avalanche season in the Alps was a test for Switzerland's avalanche forecasters, but the data should improve future predictions

    DAVOS, SWITZERLAND—For Swiss avalanche researcher Perry Bartelt, work came uncomfortably close to home one Saturday morning in February when he was making breakfast. Suddenly, across the valley, a wall of snow slid down the Brämabüel mountainside, snapping tree trunks, burying the main road under 10 meters of snow, and then, to his surprise, surging up the steep incline toward his neighborhood. The avalanche stopped 50 meters short of the lowest chalet, but cut off the access road for several days.

    “I never expected that,” says Bartelt, the head of avalanche dynamics and numerical modeling for Switzerland's Federal Institute of Snow and Avalanche Research (SLF). He and his colleagues had used a computer model to simulate how snow might slip down that slope, and it had predicted a smaller avalanche. “Our dynamics model was pretty much on target, but we had underestimated the volume of snow. Now we need to reassess our assumptions on ‘fracture depths’”—the point at which the snow slab breaks to start the avalanche. Instead of fracturing at the typical depth of 1 to 1.5 meters, the Brämabüel avalanche began as a 3-meter-deep slab of snow that picked up more mass as it hurtled into the valley.

    For Bartelt and his team of half a dozen snow and avalanche modelers at the Davos institute, this winter's severe Alpine avalanche season—with more than a thousand avalanches and 31 fatalities in Switzerland alone—is a watershed. Although SLF's researchers have made great strides in understanding avalanches in the 63 years since the institute was founded, this year's season—the worst since 1951—revealed some key weak points in their numerical models. No one claims to be able to predict that an avalanche will occur at a particular spot at a specific time, but the researchers want to give better assessments of the threat and more accurately estimate runout distances so local authorities can clear hazard areas.

    Before this winter, the Davos researchers had already been refining their techniques, for example, by developing computer models that, based on data on recent snowfalls and weather, can predict how likely the snowpack is to fracture and slide. They are also supplementing models of avalanche dynamics that essentially treat the moving snow as a continuous fluid with more sophisticated “granular flow” models, which take into account the properties of the fast-moving granules in a flowing avalanche. The researchers hope that the wealth of data gleaned from this year's avalanches will accelerate that effort. “1999 should prove to be a breakthrough for avalanche modeling,” says geographer Urs Gruber.

    Nearly 50 years ago, in the severe winter of 1950–51, a tragic series of avalanches in the Alps spurred Swiss researchers to begin work in earnest to study the problem. A Swiss pioneer in the field, Adolf Voellmy, published the first treatise on avalanche dynamics in 1955, using a relatively simple model to calculate runout distance and maximum flow velocities. An improved Voellmy model is still widely used to prepare avalanche-hazard maps, but avalanche models became more sophisticated in the following decades, and the effort broadened to include researchers in France, the United States, Norway, Japan, and Iceland. Most in this field acknowledge the preeminent position of the SLF, however. Robert L. Brown of Montana State University in Bozeman, one of the foremost U.S. avalanche researchers, calls it “the premier research institute in the world when it comes to snow and avalanches.”

    Avalanches usually occur within a few days of heavy snowfalls, when snowpacks fracture at unstable points—such as “depth hoar,” a buried layer of large, weakly bonded crystals—underneath fresh snow. They can be big or small, fast or slow, depending on the amount and wetness of the snow, the topography, and meteorological conditions.

    To predict when and how the snowpack is likely to fracture, researchers need to be able to model the mechanical strength of the snow. But that depends on a complex array of factors, such as the size and shape of the snowflakes in each snowfall, the strength of the wind that redistributes the snow, the temperature at the time of snowfall, and the pressures exerted by new snowfalls that compact the older snow beneath. One lesson from this winter's avalanche season, for example, is that some older layers of snow—which tend to be more compact and stable—should be taken into account in estimating likely fracture heights and runout distances. Existing guidelines are based on the belief that only 3 days' worth of snowfalls should be factored into such calculations, but the almost continuous heavy Alpine snow in February greatly increased the expected volume of snow in that month's avalanches.

    To cope with all these factors, Bartelt and his team are now putting the finishing touches to an innovative simulation, called SNOWPACK, which models snow as a three-phase (ice, water, and air), porous, phase-changing medium. It takes into account the microstructural properties of snow (such as the size and shape of grains of snow, as well as their bonding strength), and traces the way the snow is transformed under changing pressure and temperature.

    Atmospheric and environmental scientist Michael Lehning says the aim of SNOWPACK is to provide more details about the snow base so that avalanche forecasters can make more accurate predictions. The model is fed with meteorological data—including snowfall, temperature, wind speed, humidity, and solar radiation—that is sent hourly from the dozens of automatic measuring stations across the Swiss Alps. Lehning then uses that data to predict snowpack conditions through much of the Swiss Alps, including the rate at which the snow is densifying, the state of stress, and the microstructures of snow layers—all of which are used to determine the stability of the snowpack. Lehning is also developing a new model that seeks to predict the influence of wind on snow deposits—a significant factor in avalanches because it affects the accumulation and characteristics of the snow.

    Meanwhile, SLF is refining its ability to predict what happens once an avalanche is unleashed—how fast and far it will travel. In the past, models of avalanche dynamics have treated the snow as if it were a flow of shallow water down a steep slope. But such equations offer only a simple, if well-calibrated, description of avalanche motion, because an avalanche is actually a complex granular flow. So Bartelt says his researchers are now trying to make avalanche models more realistic by using granular-flow simulations. Such models track the motion of individual “particles”—clumps of ice formed during the avalanche's downward motion. Granular-flow models require so much computer power that it is not yet feasible to use them for practical avalanche prediction and warnings. But Bartelt says “we are using granular-flow models to make the existing hydrodynamic flow laws more rational.”
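
    For a sense of what the shallow-fluid description boils down to, the classic Voellmy picture gives a dense avalanche on an open slope a terminal speed set by a dry-friction term and a turbulent-friction term. The sketch below is only illustrative: the friction coefficients are typical textbook-style values rather than SLF's calibrated parameters, and the 30-degree slope is an assumption.

        import math

        def voellmy_terminal_speed(flow_depth_m, slope_deg, mu=0.2, xi=1000.0):
            """Steady-state speed (m/s) of a dense flowing avalanche in the Voellmy model.

            mu is the dry (Coulomb) friction coefficient and xi the turbulent friction
            coefficient (m/s^2); both are tuning parameters, given illustrative defaults here.
            """
            slope = math.radians(slope_deg)
            driving = math.sin(slope) - mu * math.cos(slope)
            if driving <= 0:
                return 0.0  # friction dominates: the flow decelerates and stops
            return math.sqrt(xi * flow_depth_m * driving)

        # A 3-meter-deep slab, like the Brämabüel fracture depth, on an assumed 30-degree slope.
        print(f"terminal speed: {voellmy_terminal_speed(3.0, 30.0):.0f} m/s")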

    The SLF recently asked the Swiss National Science Foundation to help fund a major new effort to develop state-of-the-art particle models of snow entrainment (how the avalanche picks up greater mass as it descends) and sliding friction in dense snow avalanches. Researchers also want to apply these techniques to model how snow flows around avalanche defense structures, including earthen banks built up to protect towns.

    Besides modeling avalanches and studying their aftermath in the field, the SLF researchers also test their models with experimental avalanches. On a cordoned-off mountainside near Sion, scientists—positioning themselves in a small bunker near the end of the runout zone—set off real avalanches using dynamite charges and then use radar to measure the flow velocities and other equipment to measure the pressure the snow exerts on various structures placed in its path. When the next big avalanche season comes, the SLF researchers plan to be ready for it.

  13. ECOLOGY

    The Exxon Valdez's Scientific Gold Rush

    1. Jocelyn Kaiser

    Ten years after the worst oil spill in U.S. waters, scientists are learning valuable lessons from the research done in the disaster's wake

    ANCHORAGE, ALASKA—To study how seabirds forage, David Duffy used to have to chase after a flock in a skiff or bargain his way onto an oceanography ship to steal a few moments of observation time. No longer. Four years ago, the University of Alaska, Anchorage, ecologist found himself aboard whalers racing up to 60 kilometers an hour after radio-tagged kittiwakes and tracking schools of herring by sea and by air. “The amazing thing is, we were given enough resources” to mount the ambitious, expensive studies, says Duffy, now at the University of Hawaii, Honolulu. “It's like being let loose in the toy store.”

    Greasing the wheels.

    The Trustee Council has spent $110 million on research, much of which probes how environmental factors affect wildlife.

    SOURCE: EXXON VALDEZ OIL SPILL TRUSTEE COUNCIL

    Duffy's spree comes courtesy of the Exxon Valdez Oil Spill Trustee Council, a government body that has overseen the $900 million civil settlement fund set up after the infamous supertanker ran aground on 24 March 1989, disgorging 42 million liters of crude oil into pristine Prince William Sound. The fund was established to restore and conserve the sound's natural resources, but researchers like Duffy have snared a big chunk—$110 million over the past 8 years—to probe how the region's ecosystems have recovered from the spill. Their work is beginning to unravel how relationships spanning the food web—from the lowliest plankton to killer whales—and shifts in ocean temperatures have driven alarming species declines in the Gulf of Alaska, which supports some of the richest fisheries in the United States.

    Scientists gathered at a symposium* here last month, 10 years after the worst oil spill in U.S. waters, to trot out findings from what may be the most expensive ecology program ever. Many researchers say that, after a rocky start afflicted by subpar studies done on the fly in the initial months after the spill, the fund has transformed Exxon Valdez science from a scientific pariah into a respected effort. Marine ecologist Charles Peterson of the University of North Carolina, Chapel Hill, who's helped guide the program as a reviewer, calls the recent work “just magnificent.”

    But like most aspects of the Exxon Valdez disaster, the research program has sparked controversy. Some observers question its underlying philosophy, which is to restore resources by understanding them better. “We can't fix what was broken here. The notion that studying it was helping it is perverse,” says University of Alaska, Anchorage, outreach adviser Richard Steiner, a longtime critic of the council-funded science. And some prominent ecologists question the vast expenditure on studying the fallout of a local calamity when research efforts on more-imperiled species and global problems like tropical deforestation are scrambling for funds. “I still have some big problems with this way of doing science, the philosophy of requiring a catastrophe of this sort to generate this effort to understand nature,” says Jim Estes of the U.S. Geological Survey in Santa Cruz, California.

    The research bonanza is now drying up, as the fund shifts its focus from restoration to long-term monitoring. As scientists debate the program's value, they acknowledge they may never see its like again: The law that brought the Trustee Council to life has been revised to encourage faster restoration efforts and fewer field studies in the wake of future environmental debacles.

    Into the breach. Scientists were caught off guard in the hours after the tragic spill. Few data existed on the sound's ecology, so some, in desperation, scrambled out on the water to snap Polaroid photos of the shoreline before the oil started washing up. “It was a crisis atmosphere,” says Stan Senner, science coordinator for the Trustee Council. The panic-driven initial studies often featured inadequate controls and ignored possible explanations other than oil for wildlife declines; some of the data were later thrown out.

    In the following weeks, an intense media circus shifted from dying birds and otters coated with oil to the lawsuits shaping up against Exxon. By the fall of 1989 scientists had been called in to review government studies, but lawyers with the Department of Justice and Alaska, says Senner, “guided the studies that were ultimately carried forward, and the thing they were after was recovery of damages.” That meant focusing on charismatic species, like sea otters and seabirds, to which lawyers could attach a dollar value, and ignoring ecological interactions among species.

    Barred by lawyers from both sides from sharing data, Exxon and government scientists carried out redundant studies. And many scientists outside Alaska who offered to help were rebuffed. “The agencies wanted to keep their fingers in the trough,” asserts Chris Haney, an ornithologist with The Wilderness Society and a reviewer of the council's science program. All told, government agencies sank $86 million into narrowly limited studies in the first 2 years after the spill, Senner says, before a civil settlement between Exxon, the U.S. government, and the state of Alaska in 1991 at last took the lawyers out of the picture.

    Bucking public pressure to spend the $900 million settlement only on purchasing land to set aside for conservation, the trustees—acting on behalf of the federal government and the state of Alaska—opted to carve out $110 million for research, restoration, and monitoring. Another $40 million may be spent by 2002.

    Ironically, it was Alaskan fishers, not scientists, who played a major role in steering the program onto a successful trajectory. In 1993, diseases devastated the Pacific herring population in Prince William Sound. Dealt a crippling blow, fisheries wanted to know whether the collapse as well as erratic yields of pink salmon were related to the spill. Scientists didn't know. Urged by fishers to troll the whole ecosystem for answers, the Trustee Council “stopped looking at how many dead bodies we had and started to really look at [ecological processes],” says Peter McRoy, an oceanographer at the University of Alaska, Fairbanks.

    With the addition of more outside expertise, trustee scientific advisers laid a course for a series of major research undertakings that would undergo rigorous peer review, Senner says. The council is still bound legally to focus on restoring and “enhancing” natural resources. “We can't carry out a project that doesn't have an application somehow,” Senner says. But his staff has managed to fold in basic science.

    Taking stock. Now researchers are reeling in the data. The largest project, the $23 million, 7-year Sound Ecosystem Assessment (SEA), began collecting data in 1994 on everything from the sound's bathymetry to plankton and fish counts in an effort to understand which factors are critical to the survival of juvenile herring and salmon. The key for herring, it turns out, is how much fat they can store up before winter. Salmon fry, on the other hand, depend heavily on the timing of spring zooplankton blooms—a big bloom around the time the fry emerge helps shield them from predation, as hungry pollock will gorge on the zooplankton instead of the fry.

    Other projects have sought to find out why populations such as pigeon guillemots and harbor seals are declining or recovering poorly. The leading suspects were oil pollution or changes in food supply linked to long-term climate change. Paul Anderson of the National Marine Fisheries Service and his colleagues in the Alaska Predator Ecosystem Experiment focused on the influence of climate change on fish populations. They combed a database of over 10,000 trawler catches since the 1950s, looking for signs that a slight warming—about 2 degrees Celsius—in the Gulf of Alaska in the late 1970s has affected fish populations. They found a distinct shift from a preponderance of fatty fish like capelin, sand lance, and shrimp to a system now dominated by leaner pollock, flounder, and cod.
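
    In outline, that trawl-database analysis is a tally: classify each catch record by species group and compare the catch composition before and after the late-1970s warming. The schematic below uses a handful of made-up records and an arbitrary grouping purely to show the shape of the calculation; it is not Anderson's dataset or method.

        from collections import defaultdict

        # Hypothetical (year, species, count) trawl records; the real analysis drew on >10,000 catches.
        records = [
            (1972, "capelin", 900), (1972, "pollock", 150),
            (1975, "sand lance", 700), (1975, "cod", 200),
            (1985, "pollock", 1100), (1985, "capelin", 120),
            (1990, "flounder", 800), (1990, "shrimp", 90),
        ]
        FATTY_SPECIES = {"capelin", "sand lance", "shrimp"}

        totals = defaultdict(lambda: [0, 0])  # era -> [fatty count, total count]
        for year, species, count in records:
            era = "pre-1977" if year < 1977 else "post-1977"
            totals[era][0] += count if species in FATTY_SPECIES else 0
            totals[era][1] += count

        for era in ("pre-1977", "post-1977"):
            fatty, total = totals[era]
            print(f"{era}: {fatty / total:.0%} of the catch from fatty species")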

    To investigate the impact higher up the food chain, Duffy and others are sampling what seabirds regurgitate, studying metabolism, and feeding chicks different diets in the lab to see how quickly they grow. With one more field season left, they're leaning toward climate change—not oil—as the key ecological force in the Gulf: The documented trend toward leaner fish seems to account for much of the seabird declines, Duffy says. Experts are impressed with the wide-ranging study. “We've tried to do it [off California] but we've never had sufficient funding or vessel time,” says Bill Seideman, an ornithologist at Point Reyes Bird Observatory in California.

    Some studies are beginning to pay off for the Prince William Sound community, too. The SEA results, for example, are helping hatchery managers decide when to release fry. And by probing how oil, a fish's age, and the time of year might make herring more susceptible to the pathogens that devastated stocks in 1993, researchers hope to refine models for predicting herring yields. Besides helping fishers, says co-principal investigator Gary Marty of the University of California, Davis, the herring work “is the most comprehensive disease study of any wild fish population.”

    The vast overhaul of Trustee Council research has not eliminated all controversy, however. Council-funded scientists still spar with scientists who receive money directly from Exxon to study lasting effects of oil on species. (The company disputes the trustees' assertion that only two species, bald eagles and river otters, have fully recovered after the spill.) And some scientists say the program still shuns outsiders. “It's mostly the same old,” says University of Washington, Seattle, ecologist Dee Boersma, who has Exxon funds to study murres on the gulf's Barren Islands. According to Peterson, scientists in Alaska who had been studying the spill from the beginning often crafted stronger proposals.

    Research spawned by future environmental disasters may be even more controversial. The regulations that guided the Trustee Council science were revised in 1996 to encourage more cooperation between a guilty party and the government to make restoration the top priority. That can mean sampling a few species and using models to project damage to others instead of doing comprehensive field studies, says Roger Helms of the U.S. Fish and Wildlife Service (FWS), who adds that moving quickly to restoration is a worthy goal.

    The new law made its debut with a small oil spill off the Rhode Island coast in January 1996. The postaccident research, Peterson says, amounted to “a few collections of dead things that washed up on the shore and a few other odd data sets.” This approach is sure to miss chronic or subtle effects seen in the long-term Valdez studies, he says, calling it “a godawful disgrace.” Doug Helton of the National Oceanic and Atmospheric Administration's Damage Assessment and Restoration Program demurs, arguing that the harm done by oil “doesn't all have to be shown with original research.”

    Council-funded scientists, meanwhile, are looking forward to a trickle of funding to keep portions of projects going. Last month, the trustees decided to use $115 million left over from the restoration fund to endow a long-term research and monitoring program starting in 2003 that will have a budget of up to $10 million a year. To give the program the credibility that has at times eluded earlier efforts, the trustees plan to ask the National Research Council to vet its design.

    Early plans are to “take the pulse of the northern gulf” and to fund research on key species like sea lions and harbor seals, says Molly McCammon, the council's executive director. That, along with the solid work already funded, says council chief scientist Robert Spies, will “truly leave something behind of lasting value.”

    • *Legacy of an Oil Spill—10 Years After Exxon Valdez, 23–26 March, Anchorage, Alaska.

  14. GENE ENGINEERING

    EPA, Critics Soften Stance on Pesticidal Plants

    1. Michael Hagmann

    Four years after airing a controversial plan to regulate “plant-pesticides,” battle-weary opponents are finding common ground

    For 6 years John Sanford, inventor of the gene gun, led a handful of researchers on a mission to endow roses, petunias, and other ornamental flowers with genes that help plants resist mildew. Last year Sanford threw in the towel, selling his Waterloo, New York, firm—not because the research was sputtering, but because he feared a new rule from the Environmental Protection Agency (EPA) would put him out of business.

    EPA intends to require companies to submit data showing that plants equipped with new or foreign genes coding for pesticides or other resistance traits are safe for humans and the environment. The agency also wants the seeds to carry a label saying they make their own antipest substance. “It's our legal mandate and obligation to regulate these substances,” says Janet Andersen, director of EPA's Biopesticide and Pollution Prevention Division. But many scientists and politicians are pressing EPA to narrow the rule. “This new regulation has large implications for the U.S. biotech sector,” Representative Thomas Ewing (R-IL), chair of the House Subcommittee on Risk Management, Research, and Specialty Crops, said at a 24 March hearing on Capitol Hill.

    The gap between the feuding sides appears to be closing, however. At the hearing, EPA officials said they plan to make changes—for example, expanding a list of modifications exempt from regulation—before issuing a final rule this year. “We're very close to getting these things clarified,” says R. James Cook, a plant scientist at Washington State University in Pullman and a spokesperson for 11 societies* that have banded together to fight the rule. EPA's relaxed stance, however, may raise the hackles of some groups that want to see even more stringent regulation.

    Citing its authority over pesticides, EPA aired a proposed rule in November 1994 that it said would ensure the safety of plants altered to express pest-resistant traits or protective substances. The rule cast a broad net, covering everything from the genes for making Bacillus thuringiensis (Bt) toxins, bacterial proteins that kill many insect pests, to genes that would tell plant cells to self-destruct upon attack. Among the exemptions are traditionally bred plants and gene transfers within a species.

    The rule has drawn fire from all quarters. Because it includes so many exemptions, the proposal is “far too weak,” says Margaret Mellon of the Union of Concerned Scientists in Cambridge, Massachusetts. But she and others support the philosophy of the rule, which would protect against hazards that may arise if, say, a potent horseradish protein conferring disease resistance were spliced into vegetables that are eaten in much larger portions.

    Other critics, however, have assailed the rule's scientific basis. The main shot came from a consortium assembled by the Institute of Food Technologists in January 1996. The consortium issued a report that year calling the rule “scientifically indefensible” because, it argued, the EPA was essentially proposing to regulate the process—gene engineering—rather than the product. It has since suggested that EPA regulate only plants modified to express substances found to be toxic to other species. “Nobody's suggesting that if you insert a highly toxic substance into a plant that this shouldn't be regulated. That's risky,” says plant pathologist Arthur Kelman of North Carolina State University in Raleigh.

    He and others claim a broad rule could jeopardize confidence in the safety of the food supply. “The label ‘pesticide’ has the connotation of danger; it means ‘kill.’ That doesn't do much to lower the anxiety in the public,” Kelman says. Walking a fine line, the Biotechnology Industry Organization backs oversight of what it, along with the societies, prefers to see labeled “plant-expressed protectants.” Such regulation is “critical for public acceptance” of new products, says spokesperson Joseph Panetta.

    EPA announced at the hearing that it is considering revisions that should help mollify critics. For instance, says James Aidala, EPA's associate assistant administrator, the agency is willing to adopt the term plant-expressed protectants. EPA also plans to broaden its exemptions to include a wider range of plants given viral proteins that—like vaccines—immunize them against viruses. “The rule is mainly about what doesn't need regulation,” Aidala says.

    Environmental groups say they are disappointed by EPA's narrowed focus. But Cook and his allies are optimistic. “If the EPA had started where they are today,” he says, “we would probably not have issued the report in the first place.”

    • *American Institute of Biological Sciences, American Phytopathological Society, American Society for Horticultural Science, American Society for Microbiology, American Society of Agronomy, American Society of Plant Physiologists, Crop Science Society of America, Entomological Society of America, Institute of Food Technologists, Society of Nematologists, Weed Science Society of America
