News this Week

Science  02 Apr 1999:
Vol. 284, Issue 5411, pp. 18

    Colorado Nobelist Chosen to Lead Howard Hughes

    Jon Cohen

    Joining a list of world-class life scientists who have taken on major management positions without abandoning their research, chemist Thomas Cech has been named as the next president of the $11 billion Howard Hughes Medical Institute (HHMI). The 51-year-old Cech, who won the 1989 Nobel Prize for his work on the enzymatic activity of RNA—which influences everything from the origin of life to treating diseases—will keep his lab at the University of Colorado, Boulder, where he is an HHMI investigator. In doing so, he is following in the footsteps of fellow Nobelists Harold Varmus, who leads the National Institutes of Health, and David Baltimore, president of the California Institute of Technology, as well as Bruce Alberts, president of the National Academy of Sciences.

    Cech replaces current HHMI president Purnell Choppin, who is retiring in December after 12 years. With the help of chief scientific officer Maxwell Cowan, who is staying on—and an endowment that has more than doubled to $11.4 billion during his tenure—Choppin established HHMI as a leading supporter of basic biology research through its network of more than 300 university-based investigators.

    Cech's colleagues think he's just the person to continue building HHMI. “It's a great call,” says Nobel laureate J. Michael Bishop, chancellor of the University of California, San Francisco. “I couldn't think of anyone better.”

    In taking the new job, Cech resisted the HHMI board's initial request that he shut down his HHMI-funded lab. “Things are going so well in the research lab, I hate to interrupt it,” he says about recent work on x-ray crystallography of RNA and the study of telomerase, an enzyme that helps dividing cells protect their chromosomes. The board eventually agreed to let Cech spend 1 week a month in Colorado.

    Heading the Chevy Chase, Maryland-based organization, however, means reducing his HHMI funding and downscaling his lab. Cech says he'll give up many outside activities, too, including his work with biotechnology companies. Cech also is a deputy editor of Science, a job he says he may keep without compensation. Cech's salary has not yet been made public; Choppin earned $600,000 last year.

    Cech declined to discuss his plans for HHMI but says he is particularly interested in merging biology with other disciplines, bioinformatics, and science education. He'd also like to explore ways to mesh HHMI's research program, which last year gave its investigators $424 million, with the grants program that spent another $99 million on science education, postdoctoral training of physicians, and an international scholarship program for researchers.

    One perennial management issue is resource allocation. Yale University's Sidney Altman, who shared the 1989 Nobel with Cech, says HHMI is “open to a lot of criticism” about who and what it funds. “They can't help but fund people who would be funded otherwise,” says Altman about its ongoing support for genetics, immunology, and neuroscience. He says HHMI is at its best when it exercises leadership in a field, like it did 20 years ago in helping to build up structural biology.

    Cech concedes that there might be questions about his abilities to manage an organization as big as HHMI, with more than 2500 employees and an annual budget that exceeds half a billion dollars. “I've never run anything larger than my research group,” he says. But he says he's learned a lot in the past decade from sitting on the boards of several large research institutions. A former researcher in his lab, Michael Been of Duke University, says Cech is an excellent manager who “handles a lot at once, and very efficiently.”

    Running HHMI does have its downside, however. For his wife, Carol, vice president at Baxter Hemoglobin Therapeutics, it means leaving her job and moving to the Washington, D.C. area. “I'm really grateful that she's seen it possible to allow us to do this as a family, even though it's not a good career move for her,” he says.

    To Cech, the Hughes presidency is “a bit of a dream job,” allowing him to have a “high impact” on the direction of science without having to raise money. Columbia University neuroscientist and fellow HHMI investigator Eric Kandel says Cech's task will be made easier by HHMI's wealth and the absence of any looming crises. “Cech can do visionary things,” says Kandel. “It's not like walking into your typical academic situation, where you have enormous debts and the faculty is demoralized and worried about health care. The faculty does not bitch at Hughes.”

    Although Hughes's financial stability is part of what attracted Cech, the diversity it fosters may also present him with his biggest challenge. “Tom has to find his way through the forest,” says Altman. “But he's a very creative scientist, and I'm sure he'll do fine.”


    Research Shutdown Roils Los Angeles VA

    Jon Cohen

    In a stunning one-two punch from the federal government, all research projects affiliated with the Veterans Administration (VA) in Los Angeles, California, were put on indefinite hold last week. On 22 March, the National Institutes of Health (NIH) announced that the VA Greater Los Angeles Healthcare System could no longer conduct human studies supported by NIH's parent, the Department of Health and Human Services, because of lax procedures for approving and overseeing these trials. Hours later, the VA's home office extended the suspension to all test tube and animal research, citing additional administrative problems. With more than 1000 research projects shut down, investigators are confused, frustrated, and worried about losing the confidence of their patients and funders like the NIH.

    “This is a devastating situation,” says Matthew Goetz, who heads the infectious disease department at the VA Medical Center-West Los Angeles (VAMC-WLA). Hospital management is working overtime to address the government's concerns, and investigators are hoping to win exemptions for many projects. “How grave the impact is will be determined by how the process is handled from here on out,” says Goetz. The federal actions were first reported in the 24 March Los Angeles Times.

    The VAMC-WLA, the largest of the VA's 173 hospitals and an affiliate of the University of California, Los Angeles, had long been scrutinized by the NIH's Office for Protection from Research Risks (OPRR), which oversees experiments that involve humans. Gary Ellis, OPRR's director, explains that several factors led to the decision to suspend ongoing clinical studies and prohibit enrollment of patients into new ones. “It's a very serious step in response to unusual circumstances,” Ellis says. “There are serious, systemic deficiencies. It's a pattern of nonresponsiveness to our concerns over 5 years.”

    A 22 March memo from OPRR faults the VAMC-WLA primarily for the way its Institutional Review Boards (IRBs)—the groups within each VA that approve and monitor clinical trials—conducted meetings. Specifically, OPRR found that these IRBs repeatedly violated procedures by holding meetings without including community representatives, convening a quorum, or adequately briefing members about trial protocols and the like. OPRR further charged that the VAMC-WLA had failed to establish independent Data Safety and Monitoring Boards to review results from ongoing psychiatric research in which the investigators also served as primary care physicians for the patients under study.

    In a memo broadening the suspension to other research, the VA's undersecretary of health, Kenneth Kizer, stressed that “there currently is no evidence to suggest any actual harm to either human or animal research subjects.” He described the suspension, which went into effect on 26 March, as “a preemptive measure.” Kizer offered no clear explanation for the scope of the suspension but said that administrators in Los Angeles had failed “to correct deficiencies in fiscal and personnel management” and had been “unresponsive” to probes by the VA's own investigators. Ultimately, he said, their mismanagement “adversely affects individual investigators” and “jeopardizes the public's perceptions of VA's entire research enterprise.”

    When asked why the restrictions had such a broad scope, John Feussner, chief of research and development at VA headquarters, explained that officials assume human studies are carried out more carefully than any others. “We inferred that since there were difficulties with the human components, we need to verify for ourselves that this was not the case with other studies.” As for the financial problems, he said, “it's a simple matter of poor accounting, poor record keeping, and not being able to follow the dollars as easily as we'd like to.” His office now has teams in Los Angeles investigating the research and financial issues.

    Investigators can ask for exemptions to the suspension if interrupting a research project poses a threat to animals or humans, and they immediately began flooding administrators with requests to spare their studies. “We all have put in exemptions,” says Alan Lichtenstein, a hematologist-oncologist who has worked at the VAMC-WLA for 20 years. But Lichtenstein said none of his colleagues has a clue about the timeline. “We're working with bureaucracies,” he says. “Who knows?”

    A personnel switch has further confused the situation. On 24 March, VAMC-WLA's head of research, Stephen Pandol, was “reassigned to other duties,” according to a spokesperson, and replaced by Peter Eggena, who came from the VA's Sepulveda campus across town. Eggena was scrambling to determine how many research projects were under way and fielding exemption requests.

    Approval for any of the 400 or so human trials to resume would have to come from OPRR, and Ellis says he does not anticipate granting many exemptions. “It would be very rare,” says Ellis. “Patients need treatment. But people don't ordinarily need to be research subjects.” Ellis says that OPRR will evaluate research requests on a project-by-project basis as soon as the VA repairs the IRBs.

    The VAMC-WLA suffered another blow on 25 March, when reporter Terence Monmaney of the Los Angeles Times followed up his original article with a detailed account saying that some VAMC-WLA patients were put at risk in trials to which they had not consented. The VAMC-WLA had documented the alleged infractions in an internal report, but officials at both VA headquarters and OPRR said they had no knowledge of them.

    The article further damaged morale at the VAMC-WLA. “If only part of this is true, it's a terrible blow to the institution,” says Lichtenstein. “One patient brought in the article to the rheumatology clinic asking, ‘Are you guys going to do this to me?’” Lichtenstein says he and many of his colleagues hope that some good may ultimately come from the added scrutiny, “but we're going to have to go through a very, very difficult time.”

    Feussner said he hopes that with site visits now under way, processes will be in place to correct any problems by 23 April. “I suspect the total suspension may not be lifted at that time,” predicted Feussner, but he said that by then it may be limited to human studies.


    Court Views Engineers as Scientists

    Jeffrey Mervis

    When engineers seek to testify in court as expert witnesses, judges should hold them to the same standards as scientists, the U.S. Supreme Court ruled last week. The 23 March decision, in a case called Kumho v. Carmichael, says judges may disallow testimony from engineers that doesn't meet broad scientific standards for reliability. The ruling was applauded by the National Academy of Engineering (NAE) and other organizations that had submitted briefs urging the high court to recognize the scientific basis of engineering. However, legal experts say that it leaves plenty of leeway—and uncertainty—in judging the validity of expert testimony in fields, including clinical medicine and forensics, that often rely on experience rather than scientific practices such as publication and peer review.

    “I feel good about this decision,” says William Wulf, president of NAE, which had argued that although engineering differs from science in trying to modify rather than understand nature, its methods are no less scientific. Adds attorney Richard Meserve, a former physicist who prepared the NAE brief, “It should reinforce the obligation of trial judges to serve as gatekeepers, to look at the background of the expert witnesses and examine how they arrived at their conclusions.”

    The gatekeeper role was spelled out in a 1993 case, Daubert v. Merrell Dow Pharmaceuticals, in which the Supreme Court proposed four factors that judges could weigh in deciding whether expert-witness testimony from scientists was relevant and reliable. The court suggested that judges consider an analysis's testability, its error rate, and its degree of acceptance in the scientific community, including whether results had been peer reviewed and published (Science, 2 July 1993, p. 22).

    The current case (97–1709) began with a suit filed by the Carmichael family of Alabama against Kumho Tire Co. after a blowout in 1993 caused an accident that killed one of their children. The plaintiffs' case rested on testimony from a mechanical engineer and tire analyst, Dennis Carlson Jr., who said the blowout resulted from a defect in the tire's design or manufacture rather than from wear or improper care and use. The lower court excluded his testimony, submitted in a deposition, saying the analysis was scientifically flawed. An appellate court reversed the decision, ruling that Carlson's testimony was based on his experience rather than scientific analyses and was therefore not covered under Daubert. The company appealed to the high court, which heard the case in December.

    Last week's decision, written by Justice Stephen Breyer, reverses the appellate court and extends Daubert to engineering. But legal experts say that it still gives judges great discretion to accept or reject expert testimony. “It does not knock out experience [as a basis for expert knowledge], but it emphasizes reliability and relevance,” says Margaret Berger of the Brooklyn (NY) Law School. “I suspect that the way it's applied will vary from circuit to circuit.”

    That variability worries some scholars. “When Justice [Harry] Blackmun wrote the Daubert decision, he was clearly thinking of what it is that scientists do,” says law professor Michael Green of the University of Iowa, Iowa City. “But what about accident reconstructionists? They wouldn't think of publishing their work in a journal or having it peer reviewed. What Breyer did is invite trial judges to look carefully at an expert's methods and reasoning and to throw it out if it's flawed. But what's acceptable to one judge may be unacceptable to another judge. And uncertainty means more litigation.”

    Meserve and others disagree. “I think the ruling sends a message to judges that [weighing expert witnesses] is an important job that they must take seriously,” he says. Berger says she's “amazed” at the detailed discussion of tire composition and tread wear in Breyer's decision and speculates that he may have wanted to show trial judges how to approach such questions. Meserve also hopes the decision may weed out frivolous suits by raising the stakes for plaintiffs' lawyers and experts themselves. “After Kumho,” he says, “they ought to be embarrassed if a judge finds their testimony not acceptable.”


    Shedding Light on Visual Imagination

    Marcia Barinaga

    In the past decade, two little acronyms, PET and fMRI, for positron emission tomography and functional magnetic resonance imaging, have permeated the literature of cognitive neuroscience. That's because these powerful techniques allow researchers to see activity in the living human brain. But both have a drawback: Although they can show a correlation between brain activity and a given function, they can't show a causal connection. Now a relatively new, little-known technique called transcranial magnetic stimulation (TMS) may provide that missing link.

    On page 167, Stephen Kosslyn and his colleagues at Harvard Medical School report that they have used TMS, which directs a magnetic field to temporarily disrupt the functions of specific brain areas, to address a decades-old question in cognitive psychology: Does the visual imagery that occurs when the brain imagines an image work the same way as when the brain processes a real image from the retinas? Their results support the hypothesis that it does, because they indicate that the primary visual cortex, the first part of the cerebral cortex to receive retinal information, is necessary for at least some visual imagery as well.

    “This is a very exciting finding,” says cognitive neuroscientist Randy Buckner of Washington University in St. Louis—and not just for its contribution to the imagery debate. If TMS works as it seems to, he adds, it is “exactly what the field needs, an ability to safely manipulate cognitive processing in humans,” partially inactivating brain areas to help pin down their functions.

    Kosslyn began exploring the brain's strategies for imagining—as opposed to viewing—a scene more than 20 years ago. In his early experiments, he measured the time it took people to shift their attention from one feature in an imagined scene to another. That time grew with the distance between the features, suggesting, but not proving, that the brain was panning across an imagined scene, depicted in the brain with the same spatial topography as a retinal image.

    When brain imaging techniques became available, they provided further support for that idea. V1, the primary visual cortex, is “retinotopically organized,” which means that it encodes images in a way that preserves the same spatial arrangement that falls on the retinas. In 1995, Kosslyn and Nathaniel Alpert at Massachusetts General Hospital in Boston used PET to show that visual imagery activates V1. They also showed that changing the size of the imagined image changes the area of activation in V1, further evidence that the image is represented retinotopically.

    But the possibility remained that V1 activation was merely a side effect and that some other brain area actually produces visual imagery. To address that issue, Kosslyn teamed up with Alvaro Pascual-Leone of Boston's Beth Israel Deaconess Medical Center to try TMS, which works by focusing a magnetic field on targeted brain areas, inducing electrical currents that temporarily disrupt their functions.

    The technique has been used for years for mapping brain areas responsible for movement, and in 1997, Pascual-Leone, working with Leonardo Cohen and Mark Hallett of the National Institute of Neurological Disorders and Stroke (NINDS), used TMS to show that V1 plays a role in Braille reading. In that study, TMS was delivered as a rapid barrage, and the subjects were tested during the stimulation. But high-frequency TMS has on rare occasions caused seizures, and Pascual-Leone also worried that magnetic stimulation during testing may generally disrupt attention, casting doubt on the role of brain areas such as V1. A recent study showed, however, that the effects of safer low-frequency TMS on the motor cortex linger for up to 10 minutes. So Pascual-Leone and Kosslyn applied low-frequency TMS to V1, turned it off, and then tested the subjects.

    After treating eight subjects, they had them compare the lengths of pictured bars, either while looking at the picture or while holding its image in memory. TMS impaired the subjects' abilities at both perception and imagery when compared to a sham treatment that focused the magnetic field outside the brain, creating the same scalp sensations as real TMS without affecting any brain areas.

    “Their effect looks very strong,” says neurologist Eric Wassermann of NINDS. He cautions, however, that the effects of low-frequency TMS are even less well understood than those of the high-frequency form used in the Braille study, and warns that the team has not ruled out the same concern Pascual-Leone had for high-frequency TMS—that it may cause a general disruption of brain function.

    Others, including cognitive neuroscientist Nancy Kanwisher of the Massachusetts Institute of Technology, question the technique's ability to uniquely pinpoint V1. It is likely to be affecting adjacent visual areas as well, says Kanwisher. But she adds, “I don't think that matters,” as those areas are also retinotopically organized. “The point is being able to say ‘There is the image, and it is in the retinotopic cortex.’”

    Some skeptics don't agree. Zenon Pylyshyn of Rutgers University in New Brunswick, New Jersey, has maintained for decades that visual imagery is encoded not spatially but in what he calls “the language of thought, a symbolic language.” Even if disrupting V1 reduces performance, he argues, “that still doesn't show that the retinotopic aspect of V1 is being used.” Instead, he says, V1 may encode information in nonretinotopic ways as well. But even if this result doesn't finally settle the imagery debate, it may foreshadow a time when TMS—if its safe form proves reliable—will be as familiar a tool for cognitive neuroscientists as PET and fMRI.


    Dispute Over a Legendary Fish

    Constance Holden

    It must have been like spotting a koala in New York's Central Park. Strolling in a fish market on the island of Sulawesi, Indonesia, in September 1997, Mark Erdmann, a biologist at the University of California (UC), Berkeley, and his wife Arnaz caught a glimpse of what appeared to be a coelacanth, just before the hefty lobe-finned fish was whisked away by a buyer. Almost 60 years had passed since the stunning news that a coelacanth—a species believed to have gone extinct 80 million years ago—had turned up off South Africa, 10,000 kilometers from Indonesia. No one thought the living fossil survived anywhere else in the world until Erdmann, almost a year after the initial sighting, at last laid his hands on a live specimen. Now it turns out that Erdmann's find may be not just another coelacanth but a second coelacanth species.

    Erdmann, however, isn't celebrating the announcement, because the report in the April issue of Comptes Rendus de L'Académie des Sciences, published by the French Academy of Sciences, comes not from his group but from geneticist Laurent Pouyaud of the French Institute for Development Research (IRD) in Jakarta and colleagues at the Indonesian Institute of Sciences (LIPI) in Cibinong. Erdmann calls the preemptive strike a “dishonorable act of scientific piracy”; Pouyaud says it was aboveboard.

    Erdmann, who studies shrimps, was no expert in coelacanths when he moved to Indonesia in 1991. But he is an expert now. After spotting the fish, Erdmann spent the next 10 months interviewing fishers, monitoring catches, and gathering temperature and depth data from fishing sites in an attempt to track down another specimen. He finally succeeded in July 1998 and last September published a report in Nature describing the find.

    After taking some tissue samples, Erdmann donated the fish to LIPI. But he claims that in an oral “gentleman's agreement” LIPI had agreed that a team led by David Hillis of the University of Texas (UT), Austin, to whom Erdmann had provided samples, would be the first to publish an analysis of the fish's DNA, after which the LIPI scientists could name the new species—if that's what the Indonesian coelacanth turned out to be. Shortly thereafter, LIPI scientists got Pouyaud, who is advising the Indonesian government on aquaculture, to help them with their own analysis. Pouyaud submitted a report to Nature last January, just days after the UT group's analysis arrived at the journal (where it is still under review). In February Nature rejected the paper from Pouyaud, who then offered a revised version to the Comptes Rendus, which published it a month later.

    Based on an analysis of two swatches of mitochondrial DNA, which is thought to accrue mutations at a regular pace and thus can be used to time how long two populations have been evolving separately, Pouyaud and his group report that the Indonesian coelacanths diverged from their African cousin, Latimeria chalumnae, between 1.2 million and 1.5 million years ago. The genetic and morphological distinctions between the two populations are great enough to merit classifying the Indonesian coelacanth as a new species, they conclude, naming it L. menadoensis, after the volcanic island, Manado Tua, where the fish was found. “We have not only found a new population of coelacanths but a new species,” Pouyaud told The London Sunday Times on 28 March. In a commentary accompanying the Comptes Rendus report, evolutionary biologist Claude Combes of the University of Perpignan in France agrees that the Indonesian specimen falls “outside the range of measures … of the Comorian specimens.” The naming of a new species “appears justified,” he writes.

    The Hillis team doesn't go that far. From their analysis of mitochondrial DNA they conclude that the two coelacanth populations began diverging earlier, around 5 million to 7 million years ago. “We think it is a new species,” says UC Berkeley's Roy L. Caldwell, a co-author. However, he adds, “we did not name it. … We feel it's premature to name a new species based on one specimen.”

    But the fine points of speciation aren't the issue here. “We were unaware there was any other study going on,” says Hillis. “The whole publication process apparently involved stealth and subterfuge.” LIPI scientists could not be reached for comment. But Erdmann says several LIPI co-authors of the paper told him that “Pouyaud went ahead without their consent.” He adds that he would not have complained if the Indonesians had named the fish. But he is outraged that Pouyaud stands to get the lion's share of credit. “All this guy did was stick some meat in a sequencer,” Erdmann says. According to Susan Jewett, an ichthyologist at the Smithsonian Institution in Washington, D.C., “for somebody to move in on such a high-profile thing, where everybody knew who all the key players were, is highly unethical.”

    Pouyaud calls Erdmann's distress sour grapes. “Two scientific research teams were competing,” he told Science. “At the end, little David beat Goliath.” He adds that his group's Nature submission contained only the genetic analysis. Senior French scientists, he says, “urged us … to name the species” in the paper for Comptes Rendus. Pouyaud's employer is squarely behind him. “We know nothing about any agreement between Dr. Erdmann and the rightful owners of the specimen” at LIPI, says IRD's Patrice Cayre. “LIPI has every right to do whatever it wants with the specimen.”


    A Deep Look Beneath Tall Mountains

    Richard A. Kerr

    Geologists are adept at scratching the surface, but they have trouble delving into deep mysteries. That's because they can only collect rocks from the topmost parts of the 40 kilometers of crust floating on the underlying mantle, and those surface rocks had seemed unlikely to reveal much about deep-seated geological processes. But now, mineralogists at the University of California, Riverside (UCR), have new evidence that a large chunk of rock high in the Alps may have originated hundreds of kilometers underground. If so, it may have brought some of those deep secrets to the surface.

    [Figure] Relic from below. Half-millimeter grains of diopside (green) contain signs of ascent from hundreds of kilometers down.


    The UCR researchers, Krassimir Bozhilov, Harry Green II, and Larissa Dobrzhinetskaya, base their conclusion on an analysis of the rock's mineral composition. As they report on page 128, it suggests that the rock rode to its current position on a geologic elevator from at least 250 kilometers down. “It's a strong case that these things come from deep in the mantle,” says mineralogist Thomas Sharp of Arizona State University in Tempe. Confirmation that such deep minerals can rise to the surface, presumably during the upheavals sparked by colliding tectonic plates, would open up a new window on the mantle and how it participates in surface geology.

    In the current work, the UCR researchers were pursuing a trail that Green and Dobrzhinetskaya picked up 3 years ago (Science, 29 March 1996, pp. 1811, 1841). While examining rock from the Alpe Arami massif, an 800-by-500-meter mass embedded in the mountains of southern Switzerland, they had found that the rock's magnesium-rich olivine contained 20-micrometer rods of the iron-titanium oxide mineral called ilmenite. These crystals exhibited ordinary enough structures, but they were extremely rich in titanium and displayed an odd variety of low-pressure crystal structures. These strange features, the researchers realized, could be hallmarks of a rock that had risen from the depths: The ilmenite must have sweated out of the olivine and crystallized as the extreme pressures that had kept the iron and titanium dissolved in the olivine eased. Thus, the UCR group suggested that the Alpe Arami rock had once been 300 to 400 kilometers down.

    That provocative conclusion has remained controversial, but Bozhilov and his UCR colleagues are now reporting the discovery of more persuasive relics of high pressures in Alpe Arami rock. When they took a look with a transmission electron microscope, they found thin plates of the mineral clinoenstatite that had been exuded by the surrounding mineral diopside. Within the clinoenstatite plates, which are some tens of nanometers wide, they could image what appear to be boundaries between crystal domains of subtly differing orientation. High-pressure lab studies have shown that such “antiphase boundaries” form when the high-pressure structure of a mineral relaxes to the low-pressure form. In the case of the Alpe Arami rock, that would have occurred at a depth of around 250 kilometers, the researchers concluded.

    “I think the antiphase domains are reasonably good evidence” that high-pressure clinoenstatite gave rise to the present mineral, says mineralogist Charles Prewitt of the Geophysical Laboratory in Washington, D.C. However, to be certain that some fine-scale distortion seen in the clinoenstatite is not fooling them, both Prewitt and Sharp would like to see more study of antiphase boundaries created in clinoenstatite in the laboratory.

    In the meantime, geophysicists are considering how a chunk of rock such as Alpe Arami could have risen from hundreds of kilometers down. Green invokes “the Ivory soap principle.” It requires two colliding continents to drag buoyant continental crust to mantle depths, where it breaks free and bobs back to the surface, like a floating bar of soap, sometimes stealing a bit of the inherently dense mantle as it goes. If so, geologists could delve into very deep matters indeed.


    Laser Light From a Handful of Dust

    Alexander Hellemans*
    *Alexander Hellemans is a writer in Naples, Italy.

    Physicists have sparked laser action in a light-trapping powder, they reported last month in Physical Review Letters. They say the effect, which causes the powder to radiate intense light in all directions, might one day be used to brighten some kinds of flat-panel displays.

    Conventional lasers use a pair of mirrors to bounce a light wave back and forth through a cavity containing a material or gas whose atoms are “pumped” into a higher energy state by an external light source. Each time a photon hits an excited atom, the atom falls back to a lower energy state while emitting a photon with the same wavelength and direction. When enough atoms in the laser cavity are excited, the process can sharply amplify a light beam.

    A team led by Hui Cao from Northwestern University in Evanston, Illinois, has produced a similar effect in a finely ground semiconductor powder. In 1997, a team of Dutch and Italian scientists, including Ad Lagendijk of the University of Amsterdam and Roberto Righini of the European Laboratory for Non-Linear Spectroscopy in Florence, Italy, demonstrated that such a powder can trap or “localize” light. Because of its high refractive index, the powder strongly scatters light waves, bouncing photons back and forth like balls in a pinball machine.

    If the grains are close enough—less than the wavelength of the scattered light—the paths of the photons should form closed loops. “No matter which way [the waves] try to go, they will be scattered,” explains Cao. “Depending on the local configuration of the scatterers, you have different probabilities for loops.” As a result, the light passes many times through the same grains, just as an ordinary laser's light passes many times through the cavity between mirrors. “You can compare this with a cube made of six mirrors; a light wave will then run around continuously—just like in an optical cavity,” says Lagendijk. If the atoms in the grains have been pumped to a higher energy state, the process could amplify light.

    To test this idea, the team at Northwestern University prepared powder films of zinc oxide and gallium nitride, with particles about 100 nanometers in diameter. They shined laser light onto the films to pump their atoms. Then they directed a probe beam at the sample and measured the total intensity of the scattered light. The team noticed that when the pump laser reached a certain power, the intensity of the light emitted by the sample increased sharply, by 10 to 100 times. They concluded that the light was amplified; the powder had become a laser. “You are actually getting stimulated emission,” says team member Eric Seelig. “Light travels in those loops, and each of these closed loops forms a cavity.”

    Righini says it's the first time researchers have demonstrated that laser amplification can take place in a powder. “The paper is rather convincing,” he says, predicting “this research will trigger more experiments.” One way to exploit the phenomenon, says Cao, might be to shrink the phosphor grains that emit light in flat-panel field-emission displays. In these displays, each pixel consists of a tiny electron emitter placed in front of a tiny screen. The electron emitter, says Cao, excites the atoms of the phosphor; in small enough grains, it might spark laser amplification and brighten the pixels. “We are working on that,” she says.


    University Cash Crisis Blocks Career Paths

    1. Lone Frank*
    1. * Lone Frank is a writer in Copenhagen, Denmark.

    COPENHAGEN—A bitter row has broken out at the University of Copenhagen, which has been forced to cut its scientific staff to close a yawning budget gap. The university science department decided in January that, instead of assessing who were its least productive researchers and firing them, it would simply not fill any junior tenured positions that became vacant—in effect blocking the career path of young tenure-track researchers. As the consequences of that policy have begun to bite, aspiring young scientists say they now have little prospect of advancement and a whole generation of young researchers will either have to leave academic science or pursue a career abroad. “There is an atmosphere of hopelessness among students and postdocs whose possibilities for embarking on an academic career at the university now seem extremely limited,” says plant molecular biologist Lars Østergaard, a postdoctoral fellow.

    The crisis was precipitated late last year when the Danish government cut overall funding to Denmark's five universities. The science department at Copenhagen—the country's largest—was hit especially hard because it was already running a budget deficit. Forced to cut 15% of the tenured science positions—which translates into 70 people—the administration eliminated about 12 posts by offering early retirement to older faculty members and will cut the rest over time by not filling junior positions. “[It's] outrageous to prevent the necessary staff renewal and infusion of new ideas by forcing out young, talented, tenure-track scientists,” says molecular biologist Olaf Nielsen, an associate professor.

    The new policy is likely to exacerbate a simmering age problem in Danish universities. During the 1960s and '70s, the university system expanded rapidly and a large number of tenured positions were created. That generation of scientists is now approaching retirement age. “There will be an acute need for replacements when 30% to 40% of the currently tenured staff retire in 5 to 10 years,” says associate professor of zoology Peter Arctander. “But because of what is happening now, there will be a lack of qualified young scientists.”

    Dean of science Henrik Jeppesen defends the policy. Although “it is sad that a number of young researchers have to leave,” he says, “it would have created a very bad atmosphere to fire faculty members who have worked here for many years.” This view is supported by university president Kjeld Møllgaard, who says it would be “an unfair personnel policy to simply get rid of the least productive as if it were a horse race.” Jeppesen and Møllgaard both acknowledge, however, that the reaction of Denmark's powerful unions was a consideration. Per Clausen, president of the academics' union, the Magisterforeningen, says: “In principle we regard the science department's handling of the situation as the only proper response, but at the same time it is clear that blocking staff renewal will badly hurt the university and its research.”

    Indeed, research is already hurting. Several research groups have been reduced to the point that ongoing projects have effectively come to a halt. For example, one group in the department of genetics investigating the silencing of chromatin has been shut down after 4 years of successful work because the tenure of its leading assistant professor has been canceled. Science teaching will also be seriously hit, because courses are largely taught by assistant professors and junior associate professors. Leif Søndergaard, an associate professor of genetics, says that “because of the cuts, many courses will no longer be offered every year and others will be generating waiting lists.”

    Young researchers are now beginning to make their voices heard. Østergaard has sent a highly critical letter, signed by 90% of the graduate students and postdocs in the department of molecular biology, to the Copenhagen University Journal. Among other things Østergaard describes a widespread feeling that “the university is shooting itself in the foot by not identifying and getting rid of those researchers whose scientific contribution is minimal.”


    DESY Puts the Spin Into Gluons

    1. Alexander Hellemans*
    1. * Alexander Hellemans is a writer in Naples, Italy.

    In the microworld of subatomic particles, metaphors quickly reach their limits. Quarks, the building blocks of protons and neutrons, are held together by a haze of force-carrying particles referred to as subatomic “glue.” But it now appears that, unlike any glue in the macroworld, these gluons have a property known—metaphorically, again—as spin.

    The finding, reported last week at a Moriond meeting+ in the French Alps by physicists from DESY, Germany's particle physics lab near Hamburg, is a step toward solving a long-standing puzzle about protons and neutrons, collectively called nucleons: What gives these particles their spin? The three quarks that permanently inhabit a nucleon appear to contribute only a small part of its spin. The swirling sea of “virtual” quarks, which flash in and out of existence inside each nucleon, seems to add even less, and the total contribution by all the quarks is only 30%. That leaves the gluons. “The question of how much of the proton spin is carried by gluons as compared to quarks has been at the forefront of people's minds for the last several years,” says Frank Close of Britain's Rutherford Appleton Laboratory. Now, a new technique for reaching into protons and gauging the spin of their gluons has yielded evidence that gluons do indeed carry part of a nucleon's spin, although the precise amount isn't clear.

    Studying the interior of nucleons is not easy, and some of the world's largest particle accelerators have been involved in this endeavor, including machines at the CERN particle physics center near Geneva, the Stanford Linear Accelerator Center in California, and DESY. Physicists use these accelerators to smash beams of leptons—pointlike charged particles such as electrons, positrons, and heavier electronlike muons—into targets containing protons. Occasionally a lepton exchanges a force-carrying photon with a quark inside a proton and scatters off it.

    In these experiments, both the leptons and the protons are spin-polarized: Their spins are aligned in one specific direction. From the way the scattering probabilities change when the spin of the particle beam or the target is reversed, the physicists can calculate the spin contributed by the quarks. Until now, however, the gluons inside protons have escaped scrutiny simply because they are not charged and so cannot interact with leptons electromagnetically, via a photon.

    But the researchers who operate the HERMES detector on DESY's Hadron-Electron Ring Accelerator (HERA) reported at the Moriond meeting that they had followed a suggestion put forward by other European researchers and looked at a different interaction between gluons and a probe beam of positrons, known as photon-gluon fusion. As positrons enter a proton, some are strongly decelerated by its charge, causing them to shed high-energy photons in a process known as bremsstrahlung. “The photon comes in, materializes as a quark-antiquark pair, and one of these quarks scatters from the glue,” says HERMES spokesperson Edward Kinney of the University of Colorado, Boulder. So a combination of electromagnetism and the strong nuclear force is responsible for the scattering.

    Quarks can't be detected directly, but after scattering they disintegrate into a stream of other particles, which can be picked up by particle detectors. Because the quarks produced by the bremsstrahlung photons retain the polarization of the original leptons, their probability of being scattered in a given direction depends on the gluons' spin. The results so far—an analysis of collision data collected in 1996–97—indicate that gluons do carry spin. But so far, says Kinney, “we cannot conclude from our data what the total contribution of the glue is to the nucleon spin.”

    Dietrich von Harrach of Mainz University in Germany, one of the physicists who is studying similar data from the SMC collaboration at CERN, adds that interpretation of these streams of particles can be tricky. Instead of resulting from photon-gluon fusion, he says, they may be generated when one of the relatively low-energy positrons in HERA's beam exchanges a photon with a quark, which then emits a gluon, a process called Compton scattering. “The predominance of the Compton process over pair production may be a real problem,” he says. However, von Harrach expects that the muon beams in CERN's COMPASS experiment, now under construction, will have high enough energies to remove that ambiguity and make a definitive measurement of the spinning glue of the microworld.

    • + Rencontres de Moriond, QCD and High Energy Hadronic Interactions, Les Arcs, Savoie, France, 20 to 27 March.


    Battle Over a Dying Sea

    1. Jocelyn Kaiser

    Scientists are at odds over whether to save the Salton Sea, an engineering mistake that has become a deathtrap for wildlife; the remedy they choose could influence how environmental debacles are dealt with around the world

    SALTON SEA, CALIFORNIA—The thousands upon thousands of shards of barnacle shells heaped along the shore are one clue that something bizarre is happening to this vast desert lake 150 kilometers east of San Diego. “It's not normal to have barnacles in inland lakes,” says San Diego State University limnologist Stuart Hurlbert. Other dead giveaways are the washed-up tilapia, their eyes plucked out by seagulls, and the moody water: “The color goes from black coffee, to orange, to red, to green depending on the algae,” Hurlbert says. The bitter water stinks in summertime, thanks to rotting algae and fish. And desert gales sometimes stir currents that dredge up pockets of hydrogen sulfide and ammonia, a toxic brew that can kill fish by the ton.

    Nothing, in fact, is normal here at the Salton Sea, a 984-square-kilometer salt-ridden lake that biologists describe as an artificial ecosystem gone haywire. Created by an engineering debacle almost a century ago that redirected the entire lower Colorado River out of its banks and into a depression in the desert, the Salton Sea is now a bird watcher's winter paradise. It is a migratory pit stop or wintering ground for millions of birds, including the brown pelican and several other threatened or endangered species. Lately, however, the sea has become a deathtrap for the birds, too: Over 200,000 have succumbed to avian cholera, botulism, and unknown causes since 1992. “We'll see a lot more birds and fish dying,” predicts Tonie Rocke, a wildlife disease specialist at the U.S. Geological Survey (USGS) in Madison, Wisconsin. No wonder, then, that the Audubon Society has taken to calling the Salton Sea “an environmental Chornobyl.”

    What to do about this widening ecological tragedy has become a major controversy. Last year, Congress ordered the Department of Interior to consider ways to restore the sea to its former glory, but some of the possible fixes could cost billions of dollars. Officials are contemplating such sums because they believe the Salton's salvation would spur tourism and create fisheries, as well as provide crucial protection for wildlife. Developers have drained over 91% of Southern California's wetlands; the sea compensates for lost habitat. “This sea sits in a critical pathway,” says wildlife disease specialist Milton Friend, executive director of the federal Salton Sea Science Subcommittee. The birds can't “just go somewhere else.”

    But others question whether the accidental sea should be saved at all. Given the uncertainties and huge costs of trying to manage it as a stable ecosystem, “it might be a safer place all around if they just let the fish disappear and the lake become salty,” says Ed Glenn, a University of Arizona, Tucson, environmental biologist who studies the Colorado delta in Mexico. The delta's wetlands could become a new and safer home for migratory species, he says.

    The steps taken in the coming months to deal with the Salton mess will test whether the government can undo the environmental damage it has wrought over the past century by damming and diverting scarce Western waters to spur development, observers say. It could also offer lessons for dealing with other calamities, from Florida's Everglades to central Asia's Aral Sea (see p. 30). Like those ailing watersheds, the Salton's “entire ecosystem has been damaged, and we need to bring a lot of science and policy to bear to see if the situation can be addressed,” says David Hayes, counselor to Interior Secretary Bruce Babbitt. “How our country deals with the Salton Sea tells us a lot about who we are as a people,” says fisheries scientist Barry Costa-Pierce of the University of Southern Mississippi in Ocean Springs. “We destroyed all the wetlands and created the sea. Are we willing to let the areas we have left go away?”

    A gruesome soup

    The Salton Sea began life as an accident. In 1905, engineers, trying to tap the Colorado River to replenish irrigation canals, lost control of the swirling waters. The entire river spilled into the Salton Trough for 16 months before engineers managed to steer it back into its bed. Fed by salty runoff from irrigated fields and drained only by evaporation, the lake became a marine ecosystem that wildlife managers in the 1950s stocked with croaker, corvina, and sargo for sport fishing. Seaside hotels prospered.

    In the 1960s, however, the lake's waters—rising as inflow outpaced evaporation—began to flood many of the docks and beaches. Then, about 10 years ago, state officials issued warnings against eating the Salton's fish, found to be tainted with selenium, an essential mineral in tiny doses but a liver toxin in larger ones. Health officials also sounded alarms over the risk of pathogens such as coliform bacteria, dumped into the lake by a polluted stream from Mexico.

    Most of the fish that survive in this soup are tilapia, an African species raised in fish farms and released into irrigation ditches to eat exotic weeds; they found their way into the sea in the early 1960s. But even this hardy breed can be overwhelmed: During frequent die-offs, fish sometimes pile up on the shore as far as the eye can see. With most tourists scared off by the lake's maladies, the boarded-up hotels and gas stations today lend the sea an eerie feeling of failure.

    It took some massive bird die-offs to put the Salton Sea on the nation's radar screen. First 150,000 eared grebes, a black ducklike bird, died mysteriously in 1992. Then about 3 years ago, botulism felled several thousand American white pelicans—10% of their western population—and more than 1000 endangered brown pelicans. Newcastle virus, a paralyzing pathogen better known for its devastating effects on poultry, recently wiped out many double-crested cormorants, too.

    Bring out your dead.

    Pelicans poisoned by botulism were carted off for incineration in summer 1996.


    The Salton's morbid reputation stirred federal officials to action in December 1997, when Babbitt launched a multi-agency review of options for restoring the sea. A few months earlier, Congress had formed a Salton Sea Task Force, co-chaired by Sonny Bono, who reminisced about water skiing on the lake as a teenager. Spurred by Bono's death last year, Congress passed the Salton Sea Reclamation Act, which requires Interior's Bureau of Reclamation to prepare a study on how to help the sea and submit it by 1 January 2000. Congress also earmarked $5 million to reconnoiter the lake's biology and geochemistry.

    One point on which most scientists agree is that more data must be gathered on the Salton's ecology before engineers forge ahead with any fixes. After mostly ignoring the lake since the mid-1950s, scientists are now pouring in to sample its sediments and creatures, working out of a six-person trailer in the state park. The lake can be an unsettling place to do science. “It's really strange to be out sampling and see a fish just die in front of you, just pop up,” says Brandon Swan, a graduate student of Hurlbert's.

    Some of the Salton's ailments are obvious. It's 25% saltier than ocean water, which stresses the tilapia. “The fish community is doomed” if the salinity level rises much higher, says Costa-Pierce. The lake is also choked with algae fed by a surfeit of nitrogen and phosphorus from fertilizers and sewage—classic eutrophication that sucks up oxygen and suffocates other life-forms. Fueling the algal blooms is the Salton's peculiar geometry: just 15 meters at its deepest, the lake turns over, or mixes, every few weeks in the summer (a time when most lakes are stable). When layers of warm water form near the surface and cooler, oxygen-poor water pools at the bottom, winds sweeping across the shallow lake whip up nutrients and a toxic brew of ammonia, hydrogen sulfide, and deoxygenated water. The sea may be edging toward catastrophe, but at the moment, “it's not a dying lake,” says Hurlbert. “It has too much life.”

    Indeed, microscopic life is abundant. “It's sort of a parasite microbial haven,” says Costa-Pierce. Hurlbert's group has identified an alga called Chattonella that's blamed for fish kills off Japan and Australia; another San Diego State group has found a parasite thought to live only in aquaria, Amyloodinium ocellatum, plastered on tilapia gills. The Salton “is like an aquarium that nobody has cleaned,” says USGS water chemist Jim Setmire.

    USGS wildlife biologists are trying to pin down which diseases are felling the birds. The botulism outbreaks are Type C, which is rarely seen in fish-eating fowl (it's usually spread through maggots). Apparently the tilapia, their immune systems stressed from fighting off bacteria, are prone to infection by botulism spores, which grow in their gut and produce toxin. But “we're still really puzzled as to what the mechanism is,” says Rocke. Another utter mystery is the grebe die-off; some suspect toxic algae, but Rocke says nobody has fingered a species.

    No simple fix

    The scientific data are meant to inform a hellishly complex management problem. To lower the salinity, engineers want to give the sea an outlet so that it can be diluted by relatively fresh drainage water. The bureau and the local Salton Sea Authority are now leaning toward diking off part of the lake to form evaporation ponds. By pumping water from the salty lake into these ponds while drainage kept flowing in, engineers could stabilize the sea's level and salinity. Then, within a couple of decades, they would build a canal to import fresher water—perhaps wastewater from San Diego or Arizona. Authorities are also considering carving a second canal to pump Salton water to the Gulf of California. However, this scheme is controversial, because brine and nutrients might harm the ecology of a Mexican biosphere reserve.

    Other problems are waiting in the wings. Choking off the flow of phosphorus and nitrogen compounds that fuel algal growth, for one, would require a politically touchy plan to clean up agricultural runoff and city wastewater. Cleansing the sea itself is a different story. “Even if you significantly control the inflow, you have all these nutrients that can recycle for years and years,” Setmire says. One proposal for removing nutrients is to mount a fishing operation to harvest tons of tilapia, which sequester nutrients in their tissues.


    Thousands of gulf croakers washed ashore last year, apparently victims of anoxia.


    Although the leading plan has several potential showstoppers, experts say fixing the Salton is neither a scientific nor a political quagmire. Restoration proponent Patrick Quinlan, an engineer on the staff of Representative George Brown (D-CA), points to projects like the $8 billion Everglades restoration in Florida and an even more ambitious plan under way to restore wetlands and improve water quality in the vast delta that feeds San Francisco Bay. Like these projects, the Salton Sea restoration would be a major hydrological undertaking buffeted by competing agricultural and ecological interests. “It's going to take a lot of work, but I don't think it's totally intractable,” Quinlan says. Others are more skeptical. “The idea that one can manage something as large as the Salton Sea seems awfully daunting to me,” says USGS hydrologist Roy Schroeder.

    Many experts doubt that it's possible to come up with a scientifically sound solution by January. Scientists “haven't answered some of the key questions,” like the connection between pollution and the die-offs, says Phil Pryde, an environmental policy expert at San Diego State and chair of the California Audubon Society's Salton Sea task force. “They're rushing it.” Others worry that the goals laid out by Congress—to lower the lake's salinity and stabilize its level—are skewed, given that a more serious threat to the lake's ecology may be its high nutrient levels.

    Some observers argue that the crisis has been overblown and that it might be better to allow the sea to grow even saltier. This strategy, coupled with efforts to draw off nitrogen and other nutrients, got major play in a report co-authored by Glenn and released last February by The Pacific Institute, a policy think tank in Oakland, California. Skeptics also point out that previous seas that formed centuries ago in the sizzling Salton basin all evaporated, in a natural cycle. “This is a situation where you're really fighting nature,” says aquatic ecologist Eugenia McNaughton of the U.S. Environmental Protection Agency in San Francisco, who's overseeing the environmental assessment of Interior's evolving plans. She favors “a management strategy that takes into account the history of the place.”

    If the Salton's salinity increases and the fish disappear, the sea would turn into a brine shrimp lake—like Utah's Great Salt Lake—that would still support plenty of wildlife, says Glenn. Other birds, he contends, could find habitat if steps were taken to manage wetlands in the Colorado delta in Mexico. The Salton Sea, agrees grebe expert Joe Jehl of the Hubbs-Sea World Research Institute in San Diego, “wouldn't be a dead lake, it would be a different lake.”

    Arguments that nature should take its course “cause my blood to boil,” says Friend, who contends that the disruption to wildlife would be far greater than Glenn and others believe. The Salton's birds may not do so well in Mexico's delta wetlands, adds Hurlbert, who points out that these waters have a different mix of vegetation, fish, and invertebrates. Friend argues that the issue of protecting important wetlands is larger than the Salton Sea itself, noting that an “explosion” of bird disease across North America in recent years has been linked to birds forced to live in close quarters. “We should be fighting for all the habitat we can sustain,” he says. A Salton success story, Friend says, could set an example for how to use wastewater to provide needed wildlife habitat in other water-scarce regions in the world.

    Although the Salton Sea's fate may remain as cloudy as its water—if Interior does opt for an engineering solution, Congress will have to find money to pay for it—scientists agree that steps must be taken to help the area's wildlife. Setmire recalls a visit to the sea during a die-off in August where he helped collect sick and dead birds and wound up holding an injured white pelican in a pillowcase. “I was holding this big bird. Not pretty, but majestic. It really gave you a feeling about wanting to save this ecosystem.”

    ARAL SEA

    Coming to Grips With the Aral Sea's Grim Legacy

    1. Richard Stone

    There's no undoing this sea's demise, perhaps the most notorious ecological catastrophe of human making. But scientists are hoping to soften the impact

    NUKUS, UZBEKISTAN—Standing on the roof of the 10-story Uzbek Academy of Sciences (UAS) building, Yusup Kamalov has watched, more times than he would care to remember, the ground-hugging, dirty gray clouds that churn across the salt-streaked desert beyond the city's outskirts. Sometimes the chubby-cheeked engineer has to beat it indoors before one of these dust storms barrels into Nukus. The screaming grit blots out the bronze statue of famed 15th-century Uzbek astronomer Mirzo Ulugbek in front of the academy's local headquarters and chokes anyone unlucky enough to be caught outdoors.

    The drowning desert.

    As the Aral shrinks, stranding boats, inefficient irrigation from the Darya rivers is blighting the land.


    Dust storms are common in deserts, but here in the Republic of Karakalpakstan, a province in the northwestern corner of Uzbekistan, they may also be harbingers of sickness and death. After decades of zealous Soviet efforts to yoke a huge swath of central Asia to the single-minded task of growing cotton, the locals are reaping an ill wind. It carries sulfates, phosphates, chlorinated hydrocarbons, and their ilk—fertilizers and pesticides whipped up from the bare floor of the shriveled Aral Sea and the poisoned land around it. According to the United Nations Development Program (UNDP), the death rate from respiratory illnesses in Karakalpakstan—167 per 100,000 people in 1993—is among the world's highest. “The level of health and the quality of life are profoundly poor, and deteriorating,” says Ian Small, country manager for Doctors Without Borders (DWB), a medical relief agency (also known as Médecins Sans Frontières) that has logged high rates of anemia among Karakalpaks. “It is a tragic humanitarian disaster.”

    The toxic dust storms are just one symptom of the environmental and social catastrophe that is engulfing this region. After decades of wanton irrigation, once-fertile fields produce next to nothing. And the shrinkage of the Aral Sea from a vast body of fresh water teeming with fish to a salty remnant has marooned ports and killed the fishing industry. Even local officials are resigned to the sea's eventual breakup: “We will be witnesses to the disappearance of the Aral Sea,” says Karakalpak health minister Damir Babanazarov.

    Ignored by Soviet planners for decades, the 35 million people who live in the Aral Sea's watershed have finally caught the attention of the rest of the world. A major campaign, spearheaded by the World Bank and the UNDP, is under way to improve the region's drinking water, revamp its agricultural practices, and sustain its biodiversity. The goal is not to turn back the clock and restore the Aral to its former grandeur. Rather, the massive cash infusion is meant to assuage the disaster's social consequences and avert a scramble—or even a war—over water among the fledgling democracies in central Asia.

    Western water managers are hoping to learn lessons of their own. Central Asia is “attempting to implement many of the sustainable [agricultural] practices that the rest of the world is grappling with,” says Daene McKinney of the University of Texas's Center for Research in Water Resources in Austin. Steps to mitigate the Aral's problems, he says, “can teach us many things” about water management in the United States.

    White gold. The Aral region's plight traces its roots to the early days of the Soviet Union, when communist authorities hatched a plan to grow all the cotton the budding superpower would need by irrigating vast plains in central Asia. The Soviets revved up cotton production in the mid-1920s, then 30 years later began carving hundreds of kilometers of unlined canals from the Aral's two tributaries—the Amu Darya and the Syr Darya—into the surrounding desert to nourish new cotton fields. The strategy paid dividends: The Soviet Union soon joined China and the United States as the world's leading cotton exporters. But by the early 1960s, the first signs of trouble began to appear: The Aral Sea was unmistakably shrinking as irrigation projects sucked billions of liters of water from its feeder rivers.

    What was once the world's fourth largest lake, slightly bigger than Lake Huron, has lost 80% of its volume over the last 4 decades, exposing 3.6 million hectares of seabed. Evaporation and agricultural runoff have left much of the Aral saltier than the ocean, which in turn has killed off most fish. (All 24 of the Aral's native fish species have long since perished.) The collapse of the Aral's fisheries and other economic tribulations have displaced as many as 100,000 people, says Small. “We view the Aral situation as a real-life example of how unsustainable planning can cause severe and irreparable damage,” McKinney says.

    One of the grimmest spots is Muynak, the site of a former cannery. Today, rusting hulks of fishing boats litter the sand on the town's northern fringe, because the Aral's shores have shifted 70 kilometers to the north and are still receding. In the early 1980s, “we could still take a bus a few kilometers from Muynak to go swimming at the shore and watch the blue seagulls,” recalls DWB's Valeria Slabolitskaya. Today, scores of unemployed people laze around on the dusty streets. Just outside town, says Small, “you can drive for kilometers and see nothing but thick white salt. It looks like snow.”

    Not even “white gold,” as Soviet officials called central Asian cotton, can bail out the troubled region. Wasteful irrigation schemes that allowed farmers unlimited water have raised the water table, blocking drainage and clogging the fields with as much as 700 tons of salt per hectare, says Kamalov. Although Uzbek scientists have developed salt-resistant cultivars that require half as much water to grow as normal cotton, farmers have resisted planting them. “People have said, ‘If we introduce this new cotton, the authorities will not give us water,’” says Kalbai Myrzambetov of the UAS's Institute of Bioecology in Nukus.

    Cotton yields continue to plummet as more and more hectares are blighted by salt. Today's yields in Uzbekistan, about 2 tons per hectare, are less than a third of those in Israel, a country with a similarly arid climate, says biologist Saparbai Kabulov of the Institute of Bioecology. The rising water table has also tainted drinking water supplies. “We have not a single hectare in which the water is fresh,” he says. After rainstorms in springtime, adds Kamalov, “that's when the water is the worst. We can't drink tea. It tastes bad.”

    Going, going … After first acknowledging the ecological disaster during glasnost in the mid-1980s, the Soviets drafted grand plans for saving the imperiled Aral, perhaps by diverting water southward from Siberian rivers. Those plans were never realized, and after the Soviet Union dissolved in 1991, five new countries inherited the disaster. Since then, international organizations have tried to forge a game plan for helping central Asia tackle the legacy of Soviet-style cotton production.

    The international community is now rallying around a series of initiatives approved last year, in which the World Bank, the Global Environment Facility (GEF)—an agency managed by the World Bank and the United Nations that funds projects on water, biodiversity, and other environmental issues in developing countries—and several other organizations plan to spend $600 million through 2002 to address a panoply of Aral problems. A large chunk of the money is for engineering projects to purify drinking water and upgrade irrigation and drainage systems throughout the Syr Darya-Amu Darya basin. The stakes are high: According to a 1997 World Bank report, if salt continues to leach into the fields and the water supply, much of the land “will be unfit for irrigated agriculture within a few decades” and the water will become undrinkable. “The economic, environmental, and social impacts would be incalculable,” the report states.

    As for the Aral Sea itself, the prospects are grim. In theory, enough fresh water flows through the Aral's watershed to replenish the sea, says Kamalov. But experts say that restoring the sea to about twice its current size—enough to sustain a diverse aquatic ecosystem—would require stopping all irrigation from the two Daryas for the next half-century. “There is no way to get the requisite water without destroying the countries' economies,” says Aral expert Philip Micklin of Western Michigan University in Kalamazoo. Experts hope that constricting the massive leaks from the region's irrigation network will result in less water being siphoned from the Darya rivers and thus greater inflows to the Aral. More efficient irrigation should also lower the region's water table—a process that could take decades—and flush salt from croplands, boosting productivity and the region's economic vitality.

    But at the present rate of shrinkage, Kamalov predicts, in a decade or so “the Aral will break up into three small lakes and disappear as an ecosystem.” Two lakes, mostly in Uzbekistan, likely would continue to shrivel. Kazakh officials hope to preserve the northern brackish lake by diking it off from the main water body to the south and allowing the Syr Darya to gradually refill it. They hope this will allow native fish to return to the north lake and revive its fisheries. There's a “real possibility” the Kazakhs will succeed, says Micklin.

    A similar project, now gearing up, seeks to save an important wetland in the 28,000-square-kilometer Amu Darya delta, just south of the Aral Sea in Uzbekistan. Half of the delta's wetlands have already dried up. The project focuses on Lake Sudoche, a roughly 500-square-kilometer lake southwest of Muynak that, by international convention, has been designated a critical habitat for waterfowl. Together with the surrounding marshlands, Sudoche is home to several endangered species, including the Bukhara deer, the Dalmatian pelican, the Siberian crane, and the bastard sturgeon.

    The $3.9 million project will include the construction of earthen dams between the dry Aral Sea bed and Sudoche, which is becoming saltier and more oxygen-poor every year. Once completed, the dams should corral as much as 600 million cubic meters of rainwater during the fall and winter. The fresh water is expected to flush the wetlands and raise oxygen levels, improving conditions for wildlife. If the project succeeds in saving Lake Sudoche, GEF managers hope it will stimulate the local economy through increased fishing and hunting.

    Project managers acknowledge, however, that the first big flush could have unintended ecological consequences, such as water temperature changes, which could in turn harm native wildlife. “The level of risk is unknown,” states a GEF report released last year. It points out, however, that “if nothing is done, Sudoche would become even more saline, the oxygen content of the waters would continue to drop, and the wetlands would lose much of [their] biodiversity and fish life.” Adds Micklin, “It's hard to see how the project would make things worse.”

    The human dimension. As engineers and agronomists try to improve water management and stem the environmental destruction, organizations like DWB are focusing on human health. In one project, DWB staff members are collaborating with Muynak doctors to improve drug therapies at a local tuberculosis (TB) clinic. “We're building a brand-new dispensary,” says DWB doctor Darin Portnoy. “They just didn't have any money to do something like this.” Karakalpak officials welcome the foreign intervention. “We might not be able to save the Aral Sea,” says health minister Babanazarov. “But we may be able to save the people living around it.”

    Karakalpakstan has dire health problems besides TB—rampant anemia and high infant mortality rates, for example—that also beg for resources. DWB epidemiologist Joost van der Meer is trying to ascertain the causes. “I'd bet on the toxic dust storms,” he says. In Nukus, the frequency of major dust storms has increased from about one storm every 5 years in the 1950s to about five a year, says Kamalov. Toxic dust storms, van der Meer says, “are the one thing you can't find anywhere else.” But few data exist on the dust's constituents. “I'm not sure of any reliable chemical analysis,” says Ross Upshur of McMaster University in Ontario, who has analyzed the scant Aral health data. “This is one of the key areas for initial research.”

    Van der Meer acknowledges that it will take a lot of outside help to get to the bottom of the region's health woes. “We have no capacity to do this ourselves,” in either labor or lab facilities, he says. Local experts are also appealing for foreign partners. Thus van der Meer hopes to become a matchmaker of sorts, hooking up Western and Uzbek scientists for projects on everything from tracking disease rates to probing the dust's toxicity. What the Karakalpak researchers lack in data or equipment, however, they compensate for in access to a unique research site. As Kabulov points out, “There's no experience for science around the world in which a whole sea has disappeared.”


    Heart Failure Simulated

    1. Robert F. Service

    New computer models suggest why failing hearts show diminished contractility and an increased susceptibility to fatal rhythm disturbances

    Heart attacks may be the most feared heart ailment. But the most common is a slower but potentially equally deadly disorder, a steady weakening of the heart muscle known as chronic heart failure. Every year in the United States alone, more than 400,000 people develop the condition, which often causes fatal disturbances in heart rhythms. New results, some from computer simulations of the heart, are now helping clarify what causes heart failure and makes it so dangerous.

    Heart failure occurs when the cardiac muscle cells contract less effectively, with individual beats becoming longer and less forceful. It often sets in after a heart attack damages the muscle, but exactly what causes the altered contractility has been hard to pin down. The new work, described in the 19 March issue of Circulation Research by a team led by Eduardo Marbán, Raimond Winslow, and Brian O'Rourke of The Johns Hopkins University School of Medicine, suggests that it's largely due to the altered production of two proteins that help control the concentrations of calcium ions in cells.

    The simulations, which mimic the interplay of many different proteins controlling heart muscle contraction, testify to the power of studying cells as systems (see the special section on Complex Systems beginning on p. 79). They may also have important medical implications, because they show how the biochemical changes might trigger the fatal arrhythmias. “If that turns out to be true, that's important, because half of the people with heart failure die from arrhythmias,” says Steven Houser, a cardiac cell specialist at Temple University School of Medicine in Philadelphia. It suggests, he says, that drugs capable of restoring the proper balance of calcium in cardiac cells could be used to treat heart failure and prevent the arrhythmias.

    Researchers have known for some time that in heart failure, cardiac muscle cells produce abnormal amounts of key proteins, although they don't know why. For example, two of the proteins that form the membrane channels that funnel potassium ions in and out of cells drop by as much as 70%. Because potassium ions flowing out of muscle cells help reverse the electrical change, or action potential, that triggers muscle contraction, this discovery led to widespread speculation that reduced potassium outflow is what leads to the prolonged action potential and weaker heart muscle contraction in heart failure.

    But the malfunctioning heart cells also contain higher than normal amounts of a shuttle protein for calcium, another ion that is important for muscle contractility, and less of a protein that helps store calcium within cardiac cells. “So many things are different in [these cells] that it's impossible to sort out the relative importance of one thing versus another,” says Marbán.

    So Winslow and colleagues constructed a computer model of a cardiac cell, incorporating everything known about the various proteins involved in ion movements and their interactions. Then, as they altered the concentrations of the various components to match what's seen in heart failure, they tracked the effect on the cardiac cell's action potential and subsequent muscle contraction. Contrary to expectations, they found that decreased potassium currents “had a minor effect on the action potential duration,” says Winslow. But changes in the calcium-handling proteins dramatically lengthened the action potentials and the contractions.

    That makes sense in light of calcium's role in muscle contraction, says O'Rourke. Its release from an internal storage site known as the sarcoplasmic reticulum in response to an action potential first sets off a contraction, then helps shut off the action potential, resetting the system. What apparently happens in heart failure is that the decline of the calcium storage protein reduces the amount of calcium available for muscle contraction and for the negative feedback on the action potential. As a result, the cell's contraction is weaker and the action potential is prolonged. The cell partly compensates by turning up production of the shuttle protein, which moves calcium into and out of the cell. But the compensation falls short, because far less calcium can move across the cell membrane than in and out of the internal storehouse.
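The qualitative logic of these results can be caricatured with a deliberately crude back-of-the-envelope model (all numbers below are invented for illustration; this is not the Hopkins group's simulation, which tracks the interplay of many currents). Suppose the action potential ends once cumulative repolarizing current cancels a fixed depolarizing charge, and that repolarization is driven by a potassium conductance plus a term proportional to the calcium released from the internal store:

```python
# Toy illustration only: hypothetical parameters, not physiological values.
# The action potential (AP) ends when repolarizing current has canceled a
# fixed depolarizing charge Q. Repolarization = K current + Ca-dependent
# feedback proportional to the sarcoplasmic reticulum (SR) calcium release.

def ap_duration(g_K, sr_load, Q=100.0, k_ca=2.0):
    """Time (arbitrary units) for repolarizing current to cancel charge Q."""
    repolarizing_current = g_K + k_ca * sr_load
    return Q / repolarizing_current

normal = ap_duration(g_K=1.0, sr_load=10.0)
low_K  = ap_duration(g_K=0.3, sr_load=10.0)  # K channels down 70%
low_SR = ap_duration(g_K=1.0, sr_load=5.0)   # SR calcium store halved

# Because the calcium term dominates the total, slashing g_K barely
# lengthens the AP, while halving the SR store nearly doubles it.
```

In this caricature the potassium term is only a small fraction of the total repolarizing drive, so even a 70% cut barely changes the duration, echoing the counterintuitive result Winslow describes.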

    But that's not all the Johns Hopkins group found. Another, as yet unpublished, computer model—this time of the whole heart—showed that elongated action potentials in a small number of cardiac cells could have grave consequences for the heart as a whole. Previous work has shown that elongated action potentials can lead to an altered electrical rhythm of cardiac cells, known as early afterdepolarization, or EAD, which in turn has been linked to arrhythmias. In their global heart model, the Johns Hopkins team found that EADs in a small region of the failing heart could have a ripple effect, triggering global abnormal electrical activity typical of arrhythmias.

    “This is really valuable, high-quality work along the way to coming up with new treatments for heart failure,” says Donald Bers, a physiologist and cardiac cell specialist at Loyola University in Chicago. The models, he says, suggest that if researchers can boost the amount of calcium available to cardiac cells, they should see changes in the duration of action potentials. Winslow says they've already begun such studies—in one case by adding a hormone that increases the activity of the storage protein—and that “the preliminary results are looking very promising.”


    Hawking Blesses the Accelerating Universe

    1. James Glanz

    ATLANTA—Stephen Hawking clearly wished to say a word about the cosmological constant, or lambda, the mysterious energy that seems to be permeating space and counteracting gravity on cosmic distance scales. In an overflowing third-floor room at the Ritz-Carlton Hotel here on 23 March, the celebrated cosmologist painstakingly answered a list of written queries from the press, generally with good humor, sometimes with impatience (“That is a ridiculous question,” he responded at one point), and always with a razor-edged wit. But after apparently noticing a short discussion between his assistant, Chris Burgoyne, and the Science reporter about whether a question about Hawking's views on lambda could be added to the list, Hawking interjected with his synthesized voice: “The question about the cosmological constant.”

    It was a question he had answered a year ago, shortly after observations of exploding stars called supernovae began suggesting that lambda was causing cosmic expansion to accelerate (Science, 30 January 1998, p. 651, and 27 February 1998, p. 1298). At that point, Hawking had expressed doubts, calling the results preliminary and apparently regarding lambda as unnecessary in light of his own views of cosmic origins. But the staying power of the results seems to have impressed him along with the rest of the cosmology community. “I have now had more time to consider the observations, and they look quite good,” he said. “This led me to reconsider my theoretical prejudices. I now think it is very reasonable that there should be a cosmological constant.”

    Hawking's new public stance comes a few months after similar statements by Alan Guth of the Massachusetts Institute of Technology, who originally devised the theory of inflation, the most influential explanation for how the big bang expansion got started. The simplest versions of inflation predict a universe filled with far more matter than it appears to hold, so Guth had been exploring alternative, low-density versions of inflation. But the supernova results now have Guth favoring a universe fleshed out, or “flattened,” by a combination of matter and lambda (whose energy is equivalent to matter). “With these observations, I am comfortable with an inflationary universe that is flat,” he told Science during a January meeting of the American Astronomical Society in Austin, Texas.

    No one yet knows just what might produce a cosmological constant of the size indicated by the supernova results. Some theories, in fact, predict that it should be as much as 10^123 times larger than that. But such a powerful cosmic repulsion would presumably keep galaxies, stars, and intelligent life from forming. Uncomfortable with the idea that physical parameters like lambda are simply lucky accidents, some cosmologists, including Hawking, have suggested that there have been an infinity of big bangs going off in a larger “multiverse,” each with different values for these parameters. Only those values that are compatible with life could be observed by beings such as ourselves.

    Such “anthropic” reasoning was the subject of another question put to Hawking. “Do you believe that intelligence determines the nature of the universe rather than vice versa?” asked Phillip Schewe of the American Institute of Physics. Hawking first made light of the question, asking, “What intelligence?” before calling the anthropic principle “fairly obvious” and reaffirming his support for it.


    From Lasers, Tabletop Nuclear Bursts

    1. James Glanz

    ATLANTA—Physicists gathered here last week to celebrate longevity—the APS centennial—but one of the most remarkable experiments described was above all a triumph of brevity. The tabletop experiment used laser light concentrated into unimaginably brilliant pulses lasting just 35 femtoseconds (1 femtosecond is 10^-15 seconds) to spark nuclear fusion in a chilled gas containing clusters of deuterium, or heavy hydrogen, atoms. The spike of energy caused the clusters to explode; fast nuclei from the explosions collided and fused, creating helium and a burst of neutrons.

    The usual fusion laser is a warehouse-sized behemoth that trains long pulses on pellets of fusion fuel, crushing their atoms together—hardly a tabletop operation. “To my knowledge, it has never been done before,” said Misha Ivanov of the National Research Council of Canada in Ottawa, who chaired the session at which Todd Ditmire of Lawrence Livermore National Laboratory in California presented the results for a team of six researchers. Tabletop laser fusion is an unlikely energy source. But the new work could lead to compact sources of neutrons for testing materials that might be used in real fusion reactors. The virtues of brevity were also illustrated elsewhere in the meeting, when another team described using a larger short-pulse laser to induce fusion's opposite number—fission—and create specks of antimatter called positrons.

    Ditmire did not describe his tabletop laser in the talk and declined to speak to reporters about the fusion result, which will appear in next week's Nature. But his previous work has relied on “chirped-pulse amplification” to produce short pulses, says Howard Milchberg, a laser physicist at the University of Maryland, College Park. The technique uses optical gratings and special amplifiers to compress a modest-energy laser pulse into one that packs trillions of watts of power into tens of femtoseconds (Science, 26 November 1993, p. 1379).

    Ditmire and his colleagues aim the pulses at deuterium clusters that are “bigger than a molecule, smaller than a bread box,” as he put it. Formed spontaneously in a jet of deuterium gas cooled to about 100 kelvin, the clusters probably contain from a few hundred to a few thousand atoms each. In the laser's glare, the clusters explode.

    The blast probably occurs in several stages. First, electrons are stripped from their atoms and then rattle around inside the clusters, heating each cluster into a tiny, superhot ball of charged gas, or plasma. Milchberg explains, “It's the electron pressure that drives the explosion of the plasma ball,” which throws off ions with thousands of electron volts (eV) of energy. The deuterium ions then collide and fuse, producing helium and 2.45-million-eV neutrons.

    The team detected the neutrons directly, estimating that each laser pulse liberated roughly 10,000 of them. The process seemed to convert laser energy into neutrons at about the same efficiency as what Ditmire called a “modest-yield” shot on Livermore's giant fusion laser, Nova. Because the total energy is small, however, his setup probably has no hope of producing net energy from fusion. “The answer is never,” said Ditmire when asked about those prospects.

    Livermore's L. John Perkins notes, however, that with a more reactive mix of deuterium and another hydrogen isotope, tritium, the technique should produce about 100 times more neutrons, at an energy of 14 million eV. “The world fusion program needs a neutron source [for materials testing], and it doesn't have one at the moment,” says Perkins, who adds that neutrons from fission reactor sources generally have less than a million eV of energy.

    One of Ditmire's collaborators in the deuterium experiments—Livermore's Thomas Cowan—showed that fusion is not the only nuclear reaction that short-pulse lasers can drive. Cowan and his colleagues on a multi-institution team blasted a layered, solid target of gold and uranium with the Petawatt laser, a device that applies chirped-pulse amplification to one of Nova's beamlines. The intense pulse kicked up a storm of fast electrons, which then drove a cascade of energetic processes. The electrons bounced off gold nuclei, producing gamma rays, which banged neutrons out of other gold nuclei and cracked uranium nuclei into smaller pieces.

    The gamma rays also occasionally split into particle pairs consisting of electrons and their antimatter counterparts, positrons, which Cowan says is “the first time laser energy has been converted to antimatter.” He adds that the work shows that lasers have crossed an energy frontier into a domain that was once the sole province of large particle accelerators. The initial results had a kinetic effect on more than the targets, says Cowan: “They floored us.”


    Baby Giants of the Cosmos

    1. Robert Irion*
    1. Robert Irion is a science writer in Santa Cruz, California.

    ATLANTA—A supercomputer has opened the baby photo album of stars in the universe to page one, and this is what it shows: brilliant giants up to 100 times bigger than our sun. These stars began lighting up the cosmos about 50 million years after the big bang, according to research presented at the meeting last week.

    Astronomers have long wondered what the first objects in the universe looked like. Theories predict that after the big bang, gravity slowly pulled parts of the expanding gas into clumps, but the agreement ends there. Some models show the clumps coalescing into Jupiter-sized bodies or small dim stars. Others predict titanic stars or even black holes. Computer simulations—often used to model clusters of galaxies later in cosmic history—were little help, because they lacked the three-dimensional resolution needed to track the collapse of myriad parcels of gas into small primordial clouds.

    Now a team led by cosmologist Michael Norman of the National Center for Supercomputing Applications (NCSA) in Urbana-Champaign, Illinois, has broken that barrier. The team used “adaptive mesh refinement,” in which the program zooms in on developing clumps of gas and increases the resolution only in those areas. Each simulation starts in a vast cube of space measuring 18,000 light-years on a side. But by the end, it can resolve details as small as 0.3 light-year—about the size of the cloud of comets thought to surround our solar system. “The higher resolution allows them to follow the process in far greater detail, essentially to the stellar scale,” says astrophysicist Jeremiah Ostriker of Princeton University.
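The refinement idea itself can be sketched in one dimension (the threshold, depth limit, and clump profile below are invented for illustration; the NCSA code is a full three-dimensional cosmological solver): cells are recursively subdivided only where the density contrast across a cell is large, so resolution piles up around collapsing clumps and stays coarse everywhere else.

```python
# Toy 1-D sketch of adaptive mesh refinement (hypothetical parameters).
import math

def refine(cells, density, threshold=2.0, max_depth=6, depth=0):
    """Split any cell whose density contrast exceeds `threshold`."""
    if depth == max_depth:
        return cells
    out = []
    for (a, b) in cells:
        mid = 0.5 * (a + b)
        samples = [density(a), density(mid), density(b)]
        if max(samples) / min(samples) > threshold:
            # High contrast: subdivide and recurse on the two halves.
            out += refine([(a, mid), (mid, b)], density,
                          threshold, max_depth, depth + 1)
        else:
            out.append((a, b))
    return out

def clump(x):
    """A single narrow overdensity at x = 0.5 in a unit box."""
    return 1.0 + 100.0 * math.exp(-((x - 0.5) / 0.01) ** 2)

# Cells containing the clump are subdivided down to width 1/64, while the
# smooth regions keep cells as coarse as 1/4.
grid = refine([(0.0, 1.0)], clump)
```

The payoff is the same as in the team's simulations: computing effort concentrates where the physics demands it, letting a fixed budget resolve scales thousands of times smaller than the starting volume.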

    In this way, the team can track the growth of a nebulous blob until it forms a tight knot ready to spawn a star. The simulation indicated that most such knots were a few hundred times more massive than our sun. The program can't yet track the gas until it becomes dense enough to ignite nuclear fusion. But Norman says, “The most likely result is a star with 10 to 100 times the sun's mass.”

    Such stars were giants that lived fast and died young, consuming their fuel within a few million years. Then they blew up and began seeding the cosmos with the heavy elements forged in their cores, such as carbon, oxygen, and iron. Those elements grew more abundant with each stellar cycle of birth and death. That explains why subsequent generations of stars were smaller, says Norman. Heavy molecules such as carbon monoxide radiate heat far more efficiently than the molecular hydrogen that filled the infant universe. That allowed smaller masses of gas to lose energy and collapse.

    It's still possible that tiny stars could have formed in the first generation if turbulence—a process the current simulation does not capture—split some of the gas into smaller clouds. “That's a possibility, but we have every indication that most of the initial stars were massive,” says Norman.

    So far, observations support him. Stars smaller than 80% of the mass of our sun would have survived to this day, rationing their nuclear fuel in long, slow burns. But astronomers have searched without success for these primitive stars within the ancient globular clusters that swarm around the Milky Way. A few dim stars contain just a dash of heavy elements—about 1/10,000th as much as the sun. That makes them old, but not first-generation objects.

    Despite the uncertainties, the work by Norman's team impresses Ostriker. “This is the best work that has been done on seeing the conditions that led to the formation of the first stars,” he says.


    A Species' Fate, By the Numbers

    1. Charles C. Mann*,
    2. Mark L. Plummer*
    1. *Mann and Plummer are the authors of Noah's Choice.

    A popular approach for predicting a population's survival is coming under scrutiny now that its use in critical decisions on endangered species is on the rise

    SAN DIEGO—When the National Marine Fisheries Service (NMFS) announced on 16 March that it was adding nine populations of Pacific Northwest salmon to the endangered-species list, the agency had barely begun to consider the question of how, exactly, to save this regional icon. The fish face threats from many quarters, including water pollution, dam spillways, and logging practices that harm river ecosystems. Which threats should government officials spend precious dollars trying to address? No single field study can provide the data needed to answer this question. Instead, NMFS scientists must rely, at least in part, on a technique called population viability analysis (PVA).

    First developed more than 20 years ago, PVA has become “conservation biology's greatest scientific contribution,” according to Steven R. Beissinger, an ecologist at the University of California (UC), Berkeley. The technique focuses on the likely fate of a population and what factors can determine or alter that fate. In its most common form, PVA combines stochastic models of population dynamics with field data on a species and its habitat—everything from birth and death rates to the frequency of natural disasters—to predict how long a given population will persist under given circumstances. PVA has had some notable achievements, such as helping to identify measures for boosting grizzly bear populations in Yellowstone National Park. And as one of the few predictive tools ecologists can call on, PVA has become “practically mandatory in planning for endangered species,” says Michael Gilpin of UC San Diego.

    But increasingly, PVAs are being attacked as too simplistic, overly demanding of data, error-prone, and hard to verify. “Even good PVAs are almost always fraught with very serious statistical problems,” says Mark S. Boyce, an ecologist at the University of Wisconsin, Stevens Point. “The confidence intervals are enormous, the error bars explode into the future, and they're very rarely field-tested.” Still, Boyce says, ecologists must make do with this imperfect approach to predicting species survival, because it's the best they've got. “At the moment,” he says, “there's no other choice.”

    To assess the state of the art of what PVA pioneer Michael Soulé of the Wildlands Project in Hotchkiss, Colorado, calls conservation biology's “flagship industry,” 330 scientists gathered here last month for the first-ever major conference on the technique. They discussed hurdles facing attempts to extend PVA to cover a wider range of species, and how to factor in the behavior of our own species. And, in an important development, one scientist described how he crash-tested PVA models in the lab, a practice that could help ecologists refine the technique.

    Growing pains. For decades empirical studies of wildlife populations resembled stock market analyses, with graphs projecting future trends based solely on historical upturns and downturns. That began to change in 1978 when Mark Shaffer, then a Ph.D. student at Duke University, examined the fate of Yellowstone's grizzly bears, which had been on the endangered list since 1967 and were under increasing stress from tourists. His analysis for the first time incorporated randomly occurring demographic and environmental events, such as unusually low birth rates or sparse food supplies, into a computer model of population growth. From this hybrid, Shaffer, now at the nonprofit Defenders of Wildlife, estimated the likelihood that bear populations of a given size would survive over given periods of time. From this he derived the “minimum viable population,” which he defined as the smallest bear population with a 95% probability of surviving 100 years.
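A minimal version of such a calculation is easy to sketch (the growth rates, noise level, and extinction threshold below are invented for illustration, not Shaffer's grizzly parameters): simulate many 100-year trajectories with random yearly growth, then search for the smallest starting population whose estimated survival probability reaches 95%.

```python
# Hypothetical illustration of a Shaffer-style viability analysis.
# All parameters are invented; a real PVA uses measured birth/death rates.
import math
import random

def survives(n0, rng, years=100, mean_r=0.01, sd_r=0.15):
    """One stochastic trajectory; True if the population lasts `years`."""
    n = float(n0)
    for _ in range(years):
        n *= math.exp(rng.gauss(mean_r, sd_r))  # random yearly growth
        if n < 2:                               # below a single breeding pair
            return False
    return True

def survival_prob(n0, trials=2000, seed=1):
    """Monte Carlo estimate of the chance a population of n0 persists."""
    rng = random.Random(seed)
    return sum(survives(n0, rng) for _ in range(trials)) / trials

# "Minimum viable population": smallest n0 with >= 95% estimated survival.
mvp = next(n0 for n0 in (10, 25, 50, 100, 250, 500, 1000)
           if survival_prob(n0) >= 0.95)
```

The point is the procedure rather than any particular number: a survival curve estimated by Monte Carlo simulation, with the viability threshold read off it.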

    Shaffer's analysis, coupled with field data, revealed that the foremost factor determining how long a bear population would survive is the death rate of adult females. Spurred by this finding, federal officials adopted new rules in 1983 that, among other things, blocked tourists from areas in Yellowstone frequented by mothers with cubs. Since then, the grizzly population has increased 5% per year, and some experts say the bear could be removed from the endangered list. “By all accounts,” says Boyce, “the success has a lot to do with the PVAs.”

    Shaffer's approach appealed to the U.S. Forest Service, which by law must maintain “viable populations” of vertebrates in the national forests (Science, 26 March, p. 1996). The agency held PVA workshops in the early 1980s, kindling broader interest in the technique. Subsequently, the appearance of software—freeware, at first, then commercial programs—for running PVAs on a personal computer got everybody doing it. Some ecologists, however, started using the models willy-nilly, even when data were lacking. Barnstorming consultants “would fly in, get together local experts in a species, try to take the information and plug it into the software, and get a result,” Beissinger complains. These often slapdash efforts, he says, triggered a backlash from wildlife managers and “did not do PVA any good.”

    Beissinger and fellow Berkeley ecologist Dale R. McCullough set up last month's conference in part to address these concerns. A main thrust was stretching PVAs to cover a broader range of life-forms—a daunting task, argued Daniel F. Doak of UC Santa Cruz. Endangered plants, he noted, might seem to be obvious subjects for PVA, but their life cycles can defy analysis. For one, legions of individuals can lie dormant for years in natural seed “banks.” Impossible to measure and therefore invisible to a routine PVA, these banks can germinate en masse after unpredictable events like floods or fires, confounding PVA predictions. Doak was “not optimistic” that the problem could be solved.

    Scientists have also struggled with what some regard as a glaring oversimplification in most PVAs: The models ignore the genetic problems of small populations, such as the accumulation of harmful genes or the loss of beneficial gene variants. Zoos and captive-breeding programs, which work with very small populations, would like to include these factors in PVAs. Russell Lande of Oregon State University in Corvallis argued that ecologists should seek ways to modify PVA equations to reflect the loss of genetic fitness. Indeed, he suggested, negative “genetic factors may be operating at larger population sizes than we thought.” But Beissinger and Berkeley colleague M. Ian Westphal argued last year in the Journal of Wildlife Management that eroding genetic fitness is unlikely to be the deciding factor in a population's fate—thus modelers should continue to ignore it. The importance of genetics in PVA, says Boyce, is the “hottest debate” in the field.

    In perhaps the most ambitious effort to broaden PVA, ecologists are trying to factor humankind into their models. Because hunting, habitat loss from development, and other human acts often are the biggest threats to a species, these researchers believe that neglecting social forces in modeling a species' fate is nothing less than foolish. In one ongoing effort, Philip S. Miller and his colleagues at the World Conservation Union are augmenting PVAs with estimates of human population growth and land use to model how the habitat destroyed in Rwanda's civil war may affect mountain gorilla populations.

    Test to extinction. Even as modelers seek ways to make PVAs more complex and realistic, critics decry a lack of empirical verification. A key problem is that no ecologist wants to experiment with natural populations in danger of extinction. If PVAs predict a rapid demise, says Gilpin, “we immediately take steps to make sure those predictions don't come true.”

    One way to sidestep this constraint is to arrange lab extinctions. At the meeting, ecologist Gary E. Belovsky of Utah State University in Logan unveiled results from the first long-term extinction experiment. For 4 years, Belovsky's team monitored more than 600 containers of brine shrimp, keeping track of how long each population would last before going extinct. Inventorying each population every 2 to 3 days, they eliminated the sampling errors that plague ecologists in the field. The key experimental variables were the initial complement of adults in each population and a container's carrying capacity, set by the food supply. Belovsky spiced some containers with environmental stochasticity by randomly varying the food supply, creating more true-to-life conditions.

    After the final population winked out last December, the Utah State team compared the results to predictions from five PVA models, including two software packages. Under stochastic conditions, for example, most models underestimated the average time a brine shrimp population lasted—that is, the models predicted that the populations were less resilient than they actually were. In addition, the initial population size, a focus of many PVAs, turned out to be less important than the container's carrying capacity in determining persistence. Belovsky found that simpler models performed better, which some ecologists took as a hint that making PVAs more complicated may backfire.
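
    For readers who want the flavor of such a model, a count-based PVA in the spirit of those Belovsky tested can be sketched in a few lines of Python. Everything here is invented for illustration (Ricker growth with demographic noise, and a carrying capacity jittered to mimic the randomized food supply), not Belovsky's actual models or parameters.

```python
import math
import random

def poisson(lam):
    """Draw a Poisson count (Knuth's method; fine for the small rates here)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

def persistence_time(n0, k, r=1.2, stochastic_food=False, max_t=2000):
    """Census steps until a container's brine shrimp population hits zero."""
    n = n0
    for t in range(1, max_t + 1):
        # a randomly varying food supply means a randomly varying capacity
        cap = k * random.uniform(0.25, 1.75) if stochastic_food else k
        n = poisson(n * math.exp(r * (1.0 - n / cap)))  # Ricker growth
        if n == 0:
            return t
    return max_t

random.seed(1)
constant = [persistence_time(4, 4) for _ in range(200)]
varying = [persistence_time(4, 4, stochastic_food=True) for _ in range(200)]
mean_constant = sum(constant) / 200
mean_varying = sum(varying) / 200
# populations under a randomly varying food supply die out sooner on average
```

    Averaging many such runs gives the model's predicted persistence time, the quantity Belovsky's containers let him check against reality.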

    Despite PVA's spotty showing, Belovsky says he's not ready to abandon the technique. “We might like to think these results would carry over to other species,” he says, “but we just don't know.” He plans to take his extinction experiments a step further by adding corridors between the containers, creating a set of distinct brine shrimp populations in linked habitat patches—a metapopulation—that will allow migration and will resemble real life more closely.

    Scientists agree that they must improve PVAs quickly, as the technique is likely to gain wider use in federal policy-making on the fates of species. In deciding to add the nine salmon populations to the endangered list, says Robin Waples, director of NMFS's Conservation Biology Division in Seattle, the agency used the “smattering” of existing PVAs but relied on other information, including expert opinion. Restoring the populations, an undertaking that could take years and cost billions of dollars, will require examining all the factors that limit recovery. Waples hopes PVAs will play a bigger role as the agency ferrets out the “baddest factors on the block.”

    With the future of Pacific salmon and other important species hanging in the balance, Katherine Ralls of the Smithsonian Institution, Beissinger, and two other ecologists issued a plea to the community to help draft guidelines for conducting PVAs and quality standards for evaluating them. Ralls suggests, for instance, that ecologists make it a common practice to ignore all models lacking error bars or discussions of the limits of their data and conclusions. It may take some time to forge a consensus, but Beissinger says he's confident that guidelines and standards will emerge. No doubt, he concedes, it's time to refurbish the 20-year-old technique. But, he adds dryly, “it is also notable that none of us are ready to get rid of PVA.”

    • Population Viability Analysis: Assessing Models for Recovering Endangered Species, 15 to 16 March.


    Exploring the Systems of Life

    1. Robert F. Service

    No longer content to inventory cells' molecular parts, biologists are teaming up with physicists and engineers to study how these parts work together

    A rule of thumb among drugmakers is that the more tightly a compound binds to its molecular target, the more potent it will be. But not always, it turns out. Take cytokines, natural protein messengers that bind to receptors on cells and cause them to proliferate during wound healing or an immune response. A cytokine molecule follows a complex life history before and after it binds to its receptor. It shuttles in and out of cells, risking destruction by proteases, and eventually finds its way into a recycling bin once its work is done. These steps interact, adding to the complexity. When proteases destroy a cytokine molecule, for example, they can also wipe out its receptor, in a feedback that further reduces the compound's effectiveness.

    By modeling these and other interactions on a computer, Douglas Lauffenburger and his colleagues at the Massachusetts Institute of Technology have found that in many cases the best way for genetic engineers to boost the potency of a cytokine drug is not by remodeling it to bind more tightly to its receptor but by altering other steps in the chain. Tweaking the structure to help it avoid destruction within the cell, for example, increases its chances of being recycled. “You would think that the stronger the binding, the more potent it would be,” says Lauffenburger. “But that's often not the case.”

    As he and his colleagues have realized, understanding how parts of a biological system—genes or molecules—interact is just as important as understanding the parts themselves. It's a realization that's beginning to spread. Leading research universities around the United States have begun shelling out tens of millions of dollars to set up new interdisciplinary institutes and departments that will bring together specialists from physics, chemistry, engineering, computer science, mathematics, and biology to document how all the different cellular players work together in complex tasks such as determining when a cell divides and how gene expression is regulated. Says Lucy Shapiro, a developmental biologist at Stanford University: “The convergence of chemistry, physics, biology, and engineering is upon us.”


    The new centers will take a variety of approaches to exploring the complex systems of life. A proposed center at Stanford, for example, is likely to focus on biophysics, while one at Princeton will lean toward probing networks of genes and proteins. Drug companies, too, such as the Palo Alto, California-based start-up Entelos, are turning to computers in the hope that “in silico” biology will lead to improved therapeutics. All these efforts are a response to the growing sense that gene sequencing and other techniques will soon have isolated all the cell's individual parts and spelled out their isolated functions. Now, it's time to move beyond reductionism.

    Complex system.

    A web of interactions among a virus's genes and promoters determines whether it will lie dormant or replicate.

    SOURCE: TRENDS IN GENETICS 15 (2), 67 (1999)

    “We have generated an enormous mass of information on the molecular events that occur in cells,” says Marvin Cassman, director of the National Institute of General Medical Sciences (NIGMS) in Bethesda, Maryland. “Now we need to know how all these things are integrated.” John Doyle, an electrical engineer at the California Institute of Technology in Pasadena who is turning his attention toward biology, puts it this way. “Biology has spent decades trying to be like physics,” trying to understand complicated systems by understanding each part at its most basic level. “Now they're interested in putting it all back together.”

    Doing so, says Shapiro, will take “physicists, engineers, and biologists at lab benches next to one another working on the same problem.” Foremost among these problems, say Shapiro and others, will be understanding the complex chemical networks that govern cell functioning. Genome analysis, for example, has already isolated hundreds of genes that code for transcription factors, proteins that help regulate the expression of other genes. “The expression of individual genes is not being regulated by one, two, or five proteins but by dozens,” says Shirley Tilghman, a molecular biologist at Princeton University. Some regulate specific genes; others work more broadly. Some sit on DNA all the time, while others bind temporarily. “The complexity is becoming mind numbing,” says Tilghman.

    Simply determining the individual role of each protein only gets you so far. “Even if you have all the chemistry, it's hard to understand how the cell functions,” says Adam Arkin, a physical chemist at Lawrence Berkeley National Laboratory (LBNL) in California. That's because interactions between different molecules can have a feedback effect that increases or decreases the expression of other compounds. “When we get to a certain network complexity, we completely fail to understand how it works,” says Arkin.

    Such complexity is well known in fields such as engineering. Take the latest Pentium chip in your desktop computer. The chip contains millions of individual elements, such as transistors, connecting wires, and gate arrays. The behavior of each element is understood to many decimal places. But for the engineers designing the chip, predicting how all the different elements would interact was a trickier proposition. Chip designers have to rely on sophisticated modeling programs to simulate how different collections of the elements interact and predict their collective behavior, so that they can iron out bugs in advance.

    Now researchers are hoping to bring similar types of analyses to bear on understanding biological networks. At LBNL, for example, Arkin and his colleagues have begun using computer models together with experiments to track how viruses that infect bacteria “decide” whether to replicate inside their host or lie dormant, waiting for a better opportunity. Years of painstaking experimental measurements by numerous teams have shown that the five genes that push the virus either to replicate or lie dormant are controlled by six other genes: four promoters that turn on gene transcription, and two terminators that either partly or entirely shut it off. Embedded in this genetic interplay are numerous positive and negative feedback loops: When one gene called cI that promotes the dormancy path is expressed, for example, it feeds back to amplify its own expression while diminishing the output of Cro, a gene that pushes immediate viral replication and release. Outside factors, such as the availability of nutrients and the presence of competing viruses, also act as inputs controlling which promoters are turned on and off.

    In most cases that feedback leads to predictable results: If food is present and competition is absent, the virus proliferates. But by modeling the entire network of interactions on the computer, the LBNL researchers found that the feedback control is inherently “noisy,” so not all the viruses make the same decision under identical conditions—an adaptation that ensures some viruses will survive should the other path prove fatal. Understanding how to control such genetic switches could ultimately lead to new ways to control infections, says Arkin.
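
    A toy version of such a noisy genetic switch can be sketched in a few lines of Python. The two mutually repressing regulators, the Hill-type repression terms, and the noise model below are all invented for illustration; they are not the actual viral network Arkin's team modeled.

```python
import random

def decide(steps=400):
    """One virus's 'decision': two mutually repressing regulators race
    from identical starting levels; expression noise breaks the tie."""
    a = b = 0.0
    for _ in range(steps):
        # Hill-type repression with multiplicative expression noise
        a_new = a + random.uniform(0, 2) / (1 + (b / 5.0) ** 4) - 0.1 * a
        b_new = b + random.uniform(0, 2) / (1 + (a / 5.0) ** 4) - 0.1 * b
        a, b = max(a_new, 0.0), max(b_new, 0.0)
    return "dormant" if a > b else "replicate"

random.seed(0)
outcomes = [decide() for _ in range(200)]
frac_dormant = outcomes.count("dormant") / len(outcomes)
# identical conditions, yet both fates occur: noise-driven bet hedging
```

    Run deterministically, with the random factors replaced by their average, every virus would make the same choice; the noise is what spreads the population across both fates.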

    Still, even with these and other initial modeling efforts (see sidebars on pp. 80 and 82), many researchers argue that biological models have a long way to go before proving themselves. “[Models] haven't had a lot of respect among biologists,” says Marc Kirschner, a cell biologist at Harvard Medical School in Boston. “They don't have enough of the biological character built in,” and thus often don't reflect the true complexities of real biological systems. Arkin, Lauffenburger, and others say, however, that the new research in this area will improve the sophistication of the models by identifying common circuit motifs used in biological networks and incorporating more complex and realistic feedback mechanisms. Over time, the models will also benefit from better inputs, such as the amount of each protein present in real cells and their reaction and diffusion rates.

    Other challenges loom. Among the biggest concerns, say researchers and administrators, are differences in research cultures. In physics, for example, postdocs are often treated like junior faculty, whereas in biology they typically have far less autonomy. Ironing out such differences is “one of the biggest problems we face,” says Shapiro.

    Promotions and tenure decisions could also prove to be sticking points. “People who work at the boundaries between disciplines are at a real disadvantage,” says Chris Overton, who directs the University of Pennsylvania's bioinformatics center. “Who evaluates you for tenure and the quality of your work?” he asks rhetorically. Often, he says, people in one discipline or another fail to appreciate the work's full scope. What's more, discipline-bound funding departments within agencies such as the National Institutes of Health (NIH) or the National Science Foundation can be reluctant to fund interdisciplinary work seen as lying largely outside their area, and grant review panels made up of researchers in a single discipline may not fully understand an interdisciplinary project. Whether the money will be there to support new interdisciplinary programs “is a question we are all worried about,” says Carlos Bustamante, a biophysicist at the University of California, Berkeley.

    But NIGMS's Cassman says that his agency and others are creating niches for interdisciplinary science. Last year, for example, NIH announced a new bioengineering initiative to fund multidisciplinary research (Science, 5 June 1998, p. 1516). And interdisciplinary review panels, he says, are likely to follow. “When we've been able to promote an area of science, it is because it is ready,” says Cassman. “From everything I hear about [the systems approach to biology], I think it is.”


    Building Working Cells 'in Silico'

    1. Dennis Normile*
    1. With reporting by Elizabeth Pennisi.

    Cells provide living proof of that old saw about the whole being greater than the sum of its parts. “Even if you construct a complete list of all the processes known to occur within a cell, that won't tell you how it works,” says Masaru Tomita, a professor of bioinformatics at Keio University in Fujisawa, near Tokyo. But Tomita, who is a computer scientist as well as a biologist, has a scheme for exploring the effects that only emerge when those many processes interact: a simulation program that can reproduce, in simplified form, a cell's biochemical symphony.

    His group's E-CELL simulation software will go on the Web for public “beta” testing this June. Other computer models of the cell are being developed, but they often try to reproduce individual cellular processes in detail. E-CELL, in contrast, is designed to paint a broad-brush picture of the cell as a whole. Such efforts “are a next logical step” now that genome sequencing is giving biologists the complete parts lists for living things, says Peter D. Karp, a bioinformaticist at Pangea Systems, a bioinformatics software company in Menlo Park, California.

    E-CELL is actually a model-building kit: a set of software tools that allows a user to specify a cell's genes, proteins, and other molecules, describe their individual interactions, and then compute how they work together as a system. It should ultimately allow investigators to conduct experiments “in silico,” offering a cheap, fast way to screen drug candidates, study the effects of mutations or toxins, or simply probe the networks that govern cell behavior.

    Stripped-down cell.

    Biochemistry simulated by E-CELL software.

    Written to run under the UNIX or Linux operating systems, the software relies on the user to input a cell's molecules, their locations and estimated concentrations within the cell, and the reaction rules that govern them. E-CELL then computes how the abundance of each substance at a particular location changes at each time increment. With a single mouse click, the user can knock out particular genes or groups of related genes, expose the cell to a foreign substance or deprive it of a nutrient, and then run the simulation again. Graphical interfaces allow the user to monitor the cell's changing chemistry.
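
    The computation E-CELL performs at each time increment can be caricatured in a few lines of Python. The substances, rules, and rate constants below are hypothetical, and real E-CELL reaction rules are far richer; this sketch only illustrates the update loop.

```python
def step(state, rules, dt):
    """Advance every substance's abundance by one time increment."""
    deltas = {name: 0.0 for name in state}
    for rule in rules:
        for name, change in rule(state).items():
            deltas[name] += change * dt
    for name in state:
        state[name] = max(0.0, state[name] + deltas[name])

def uptake(s):       # hypothetical rule: import glucose from the medium
    rate = 0.1 * s["glucose_medium"]
    return {"glucose_medium": -rate, "glucose_cell": +rate}

def metabolize(s):   # hypothetical rule: turn cellular glucose into lactate
    rate = 0.2 * s["glucose_cell"]
    return {"glucose_cell": -rate, "lactate": +rate}

state = {"glucose_medium": 100.0, "glucose_cell": 0.0, "lactate": 0.0}
for _ in range(1000):
    step(state, [uptake, metabolize], dt=0.1)
# by now nearly all the glucose has been taken up and exported as lactate
```

    In this scheme the one-click gene knockout E-CELL offers amounts to dropping a rule from the list: remove `metabolize` and glucose simply piles up inside the virtual cell.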

    Tomita's group has used early versions of E-CELL to construct a hypothetical cell with 127 genes, which they figured was a minimal set for a self-sustaining cell in their system. Most of the genes were based on those of Mycoplasma genitalium, a microbe that has the smallest known gene set of any self-replicating organism. But the genes for some vital cellular processes still have not been identified in the mycoplasma, so the group added genes from other organisms. The virtual cell “lives,” maintaining a simple, stable metabolism: It takes up glucose from the virtual culture medium, generates the enzymes and proteins to sustain internal cell processes, and exports the waste product lactate.

    This bare-bones cell has already delivered one surprise. As expected, starving it of glucose causes a drop in levels of adenosine triphosphate (ATP), a key compound that provides the energy for many intracellular processes. But unexpectedly, before ATP levels drop they briefly rise. The reason, Tomita suspects, is that the early part of the ATP-producing pathway itself consumes ATP. Cutting the supply of glucose shuts down the early stages of the pathway, stopping ATP consumption there even while ATP continues to be produced from intermediary metabolites further down the pathway. Tomita thinks the effect may eventually be confirmed in living cells.
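
    Tomita's proposed mechanism is easy to reproduce in a toy model: a two-stage pathway in which the upper, ATP-investing stage feeds an intermediate pool whose breakdown pays ATP back. The rate constants below are invented, and real glycolysis has many more steps; the point is only that cutting glucose halts ATP investment before ATP payoff stops.

```python
k_up, k_down, k_use = 1.0, 1.0, 0.5    # uptake, payoff, background ATP use
glucose, inter, atp = 1.0, 1.0, 2.0    # initialized at the model's steady state
dt, cutoff, history = 0.01, 2000, []

for i in range(6000):
    if i == cutoff:
        glucose = 0.0                  # starve the cell of glucose
    upper = k_up * glucose             # upper stage: costs 1 ATP per unit flux
    lower = k_down * inter             # lower stage: pays 2 ATP per unit flux
    inter += (upper - lower) * dt      # explicit Euler updates
    atp += (-upper + 2.0 * lower - k_use * atp) * dt
    history.append(atp)

atp_at_cutoff = history[cutoff - 1]
peak_after = max(history[cutoff:])
# ATP rises briefly after starvation, then collapses as the intermediate drains
```

    The brief rise appears because the ATP-consuming upper stage shuts off instantly while the ATP-producing lower stage coasts on its remaining intermediates.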

    More surprises could be forthcoming when E-CELL is eventually put to work simulating whole cells of real organisms. Tomita admits that because building model cells with E-CELL depends on understanding the functions of large numbers of genes, the software is not likely to prove really useful for molecular biologists for some time. But he and his colleagues designed the program so that it should easily scale up to simulating the thousands of genes in a real cell. “Tomita and his group have done a fantastic job of engineering a ‘graphical cockpit’ for initializing and monitoring a whole-cell simulation,” says Karp.

    For greater realism on a smaller scale, users can turn to a different model-building kit: the Virtual Cell developed by physiologist Leslie Loew and computer scientist James Schaff of the University of Connecticut Health Center in Farmington. Rather than downloading software to run on their own computer, Virtual Cell users will simply run their simulation on Loew's host computer via the Internet. More important, rather than simulating an entire cell at once, as a biochemical system, Virtual Cell will eventually enable cell biologists to study how a cell's shape, volume, and other physical features affect individual biochemical processes.

    Loew's team builds its Virtual Cell models using precise measurements of how molecules diffuse and react within living cells, which they make by labeling key molecules and observing them with a video microscope. The result is a computerized cell with physical properties resembling those of real cells—a framework in which users can unleash specific biochemical reactions. For example, a researcher can add a certain amount of calcium—a key intracellular messenger—and then sit back and let the Virtual Cell solve equations describing reaction and diffusion rates for each of the molecular participants affected by calcium. Then the program generates a movie of the process. “The simulations are comfortable for the biologists to use because they are based on real image data,” Loew explains.

    In the case of calcium, the simulation not only looked much like the calcium waves measured in actual cells—indicating that the simulation was realistic—but it also predicted the dynamics of an intermediary molecule called IP3, which cannot be monitored inside the cell itself. (Demonstrations of Virtual Cell can be accessed online.)
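
    The kind of computation Virtual Cell performs can be caricatured as a one-dimensional reaction-diffusion solver. The grid, diffusion coefficient, and buffer-binding rate below are invented for illustration; Virtual Cell itself works from measured rates and real cell geometries.

```python
nx, dx, dt = 50, 1.0, 0.1      # grid points, spacing, time step
D, k_bind = 1.0, 0.05          # diffusion coefficient, buffer binding rate
ca = [0.0] * nx
ca[nx // 2] = 100.0            # calcium released at the midpoint

for _ in range(500):
    new = ca[:]
    for i in range(1, nx - 1):
        lap = ca[i - 1] - 2.0 * ca[i] + ca[i + 1]        # discrete Laplacian
        new[i] = ca[i] + dt * (D * lap / dx**2 - k_bind * ca[i])
    new[0], new[-1] = new[1], new[-2]                    # no-flux boundaries
    ca = new
# the pulse has spread along the strip and been largely soaked up by buffer
```

    Each successive snapshot of `ca` is one frame of the kind of movie the Virtual Cell generates for a full reaction-diffusion run.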

    “These two approaches can complement each other very well,” Tomita says. And both are attracting growing interest from other biologists. Tomita says that when he first started describing his plans for E-CELL, “I was dismissed as a naïve computer scientist.” Now he gets e-mail requests for information on his simulation software nearly every day. Loew, too, has found that “interest has begun to mushroom.” He adds, “[Cell biologists] are getting to the point that they are realizing that without computers we are never going to be able to organize all this information.”


    Unraveling Bacteria's Dependable Homing System

    1. Elizabeth Pennisi

    For more than a century, microbiologists have marveled at the ability of bacteria—seemingly simple organisms—to home in on a food source and navigate toward it. Over the years they've picked the process apart, identifying some proteins that “smell” the nutrient source, others that propel a microbe toward it by driving flagella, and still others that convey the necessary signals. But they never quite understood how this process could work reliably in spite of variations in the microbes' own genetic makeup or in their environments.

    That's where Stanislas Leibler decided he might be able to make a contribution. Several years ago, this Princeton molecular biologist and Princeton colleague Naama Barkai brought skills from their former lives as physicists to bear on the problem. Today, their success in mathematically representing how this robust behavior arises from the complex interactions of proteins and pathways has earned kudos from both theorists and experimentalists. “They've taken a biological pathway and tried to ask something about its fundamental properties as a unit,” says Leland Hartwell, a yeast geneticist at the Fred Hutchinson Cancer Research Center in Seattle. That approach “is really fundamental for the next step in biology.”

    Leibler and Barkai started with the broader question of how organisms could be different, biochemically speaking, and still carry out the same behavior. Most cellular processes depend on interactions among many different proteins. And many biologists had thought that the cell had to keep tight control over the concentrations and activities of these various molecules to keep everything functioning smoothly.

    Yet the more DNA geneticists have sequenced, the more they have realized that the same gene often differs slightly from one individual to the next—differences that affect how much of its protein product a gene produces, or how well the protein works. And even when genes are identical, protein concentrations can vary for other reasons. Yet more often than not, the organism functions just fine in spite of the variations.

    When the Princeton duo pondered this puzzle, they wondered whether the biochemical details are less critical than the way the details fit together. Perhaps organisms have evolved networks of interactions that work reliably in spite of either overactive or underactive genes or proteins. “One cannot understand this by looking at one protein,” Leibler realized. “One has to consider the whole system … to see if [this robustness] comes from systemic properties.”

    They decided to look at this question by trying to make sense of chemotaxis. “There is no simple system,” Leibler explains, “but we were able to build on many years of beautiful work done by other people. That made this one the best known and best studied system.” Typically, chemotactic microbes zigzag as they swim, changing direction by tumbling periodically in random directions. However, when a bacterium senses a desirable substance, such as an amino acid, it follows a steadier course toward this target.

    Historical circle.

    A 1966 experiment showed that bacteria will move from the center of a petri dish outward toward undepleted nutrient supplies, forming a ring.

    PHOTO: JULIUS ADLER, SCIENCE 153, 708 (1966)

    In one such microbe, Escherichia coli, chemotaxis gets kicked off when an attractant links up with a receptor protein that sits in the cell membrane. Then several Che (for chemotaxis) proteins get involved and alter the movement of the rotating flagella to stop or start a turn. The result is that the bacterium tumbles less frequently, and it moves in a relatively constant direction toward a greater concentration of the attractant. When it no longer senses a rising concentration gradient, it returns to the original tumbling rate, thereby ensuring it can detect further changes in the gradient.

    For years, biologists have thought that most aspects of cell function, including this ability to return to a steady tumbling rate over a wide range of attractant concentrations during chemotaxis, depended on precise titration of the various molecular components of the system. If that were the case, too much or too little of any of the Che proteins would throw the system off.

    To find out if this is indeed the case, Barkai and Leibler built a mathematical model of the interactions. Like others who had modeled chemotaxis before them, they assumed that the receptor was either on or off, depending mainly on whether an attractant molecule had docked at the receptor. They translated this “two-state” model into a series of differential equations that describe the interactions between the various Che proteins.

    “The model correctly reproduced the main features of bacterial chemotaxis” when first tested 2 years ago, Leibler recalls. The simulated microbe responded and adapted to changes in the concentration of the attractant much as the real bacterium does. Moreover, it was able to do so even when the researchers changed the amounts and activities of Che proteins by several-fold. These simulations showed “there are some properties which are not sensitive [to perturbation],” Leibler explains.
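
    The heart of the Barkai-Leibler result, that exact adaptation is built into the network's wiring rather than into finely tuned rate constants, can be sketched with a toy two-state model. The equations and rates below are a simplified caricature of the published model, not the model itself.

```python
def simulate(che_r_rate, ligand_step=9.0, dt=0.01, steps=20000):
    """Toy receptor: CheR methylates at a fixed rate, while CheB
    demethylates only *active* receptors, so steady-state activity is
    pinned at che_r_rate / b no matter how much attractant is present."""
    b = 1.0                                    # CheB rate constant
    m = che_r_rate / b                         # start fully adapted
    trace = []
    for i in range(steps):
        ligand = ligand_step if i >= steps // 2 else 0.0
        activity = m / (1.0 + ligand)          # attractant lowers activity
        m += (che_r_rate - b * activity) * dt  # CheR adds, CheB removes
        trace.append(activity)
    return trace

trace = simulate(che_r_rate=0.5)
baseline = trace[9999]   # activity just before the attractant appears
final = trace[-1]        # activity long afterward: back at baseline
```

    In this caricature the adapted level scales with `che_r_rate`, but the property that survives any such change is exact adaptation: activity always returns precisely to its own pre-stimulus level, the robustness the mutant bacteria displayed.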

    He and Barkai then teamed up with Princeton physicist-turned-microbiologist Uri Alon and with microbiologist Michael Surette to examine whether this was the way real bacteria worked. They created mutant bacteria that either underproduced or overproduced several Che proteins. Comparing strains that made one protein, Che R, at levels ranging from less than normal to 50 times the normal amount, they found that the time it took the bacteria to return to their usual tumbling rate after sensing an attractant dropped from 23 minutes to less than one. Yet as the model had predicted, all of the mutants, no matter what their Che R activity, were able to return to those precise tumbling rates. The work “shows that for some properties, the cell doesn't seem to care” about the amount of these proteins, says Leibler. A feedback loop that enables the cell to measure the tumbling rate and adjust accordingly must be responsible for this robustness.

    Although robustness in chemotaxis may not seem all that important in the grand scheme of cell biology, the work is impressive because “it shows how variability can be accommodated in a circuit,” says Hartwell. Some “emergent property” of the chemotactic pathway buffers it against variation in its individual components. Thus each individual can function just fine while being a little different.

    This mix of sameness and variation is an asset in the game of evolution. As Harvard cell biologist Marc Kirschner points out, “If you have flexibility, you've essentially designed something that is capable of being modified, [and that's] evolvability.” That's a level of understanding that could only come from incorporating the biochemical details of the system into a bigger picture. And, says Hartwell, “this is something that all of us are going to be trying to do.”


    Life After Chaos

    1. Carl Zimmer*
    1. Carl Zimmer is the author of At the Water's Edge.

    After years of hunting for chaos in the wild, ecologists have come up mostly empty-handed. But the same equations that failed to find chaos are turning up stunning insights into how environmental forces and internal dynamics make populations rise and fall

    The complexity of nature may be a beautiful thing, but it came pretty close to crushing Maria Milicich's spirit. On a typical morning 10 years ago she would take her motorboat out to the Great Barrier Reef, where she was studying the ecology of damselfishes. These brightly colored aquarium fish lay their eggs in nests at the reef's bottom. Each month the full moon triggers the larvae to hatch and emerge; they leave the reef and 19 days later return as mature larvae. Milicich wanted to figure out what determined how many larvae reached maturity, so she set up 2-meter-tall traps floating from buoys, each rigged with a light to attract the fish.

    You might expect that Milicich would have found a regular pulse of new adults every month. Instead, she logged a wild gyration. When she checked her traps during some pulses, she found only a few fish, but during other months she would find thousands. On one visit to the reef she discovered that the trap had been dragged to the sea floor by a load of 28,000 fish.

    Milicich searched for a cause for the fluctuations, seeking a link between the number of new adults and measurements she had made at the reef—everything from rainfall to the brightness of the moon. She tried hundreds of variables but came up empty-handed. Of course, many marine biologists had failed before her and simply labeled the supply of mature larvae as nothing more than random. That wasn't much consolation to Milicich. “To say that I felt depressed is an understatement,” says Milicich, who now works as an ecological consultant to the Hong Kong government and private companies. “Something was clearly wrong.”

    Then Milicich had an epiphany. In 1990, she stumbled onto a paper in Nature that had invoked a strange kind of math to describe the abundance of phytoplankton off the coast of California. To decode her damselfish, Milicich had been trying to use linear equations—which produce results that are proportional to the values that go into them. But the paper's author, ecologist George Sugihara of Scripps Institution of Oceanography in La Jolla, California, had exploited nonlinear equations. What comes out of a nonlinear equation isn't proportional to what goes in; unlike linear equations, nonlinear ones can contain feedbacks, thresholds, and other features that yield complicated results. Sugihara's data looked as intractable as hers, yet they surrendered to his analysis. “When I read the paper, I thought, ‘Bingo—this is what my data is, and this is what it needs,’” says Milicich.

    Last month Milicich published a report in Science (5 March, p. 1528) with Sugihara and his graduate student Paul Dixon in which they cracked the damselfish cycle. Modeling it with nonlinear equations, they could account for the maddening dynamics with three factors: the moon's phase, turbulence around the reef, and winds blowing over the water. “From hundreds and hundreds of potential correlates, all of a sudden three dropped out, and they made perfect ecological sense,” says Milicich. “An awesome feeling for an ecologist, I have to say.”

    Ecologists first began applying nonlinear dynamics to understanding the ups and downs of populations almost 30 years ago, and the field has gone through some drastic changes in recent years. When researchers began building nonlinear models of the ways that organisms might interact, they stumbled across what people in other fields were already calling chaos—a random-looking pattern produced by simple, nonrandom equations. Models were so rife with chaos that ecologists began searching for it in the real world, because it promised to overturn the old ideas ecologists had about the balance of nature. But it's a sign of the times that nowhere in the damselfish paper does the word “chaos” appear. Although chaos has become well established in other sciences such as physics, in ecology it remains elusive. “It's this great idea that really hasn't panned out all that well,” says Dixon.

    Yet chaos isn't the be-all and end-all of nonlinear dynamics; it is only one type of pattern nonlinear equations produce. The same nonlinear equations that have failed to prove chaos in ecosystems are now helping researchers uncover how the fiendishly complex interactions of organisms with their own kind, with other species, and with weather send populations on erratic trajectories. “This is an area whose time has come,” says ecologist Stuart Pimm of the University of Tennessee, Knoxville.

    The rise and fall of chaos

    The jagged oscillations in populations are nothing new to ecologists, but before the 1970s, they put most of the patterns down to the unaccountable effects of weather, disease outbreaks, and other sources of so-called environmental noise. If not for noise, they assumed, a population should naturally hover at an equilibrium. That assumption was shaken by the work of Sir Robert May in the 1970s. May was originally trained as a physicist, but while at the Institute for Advanced Study in Princeton, New Jersey, he was drawn to the thorny complexities that ecologists and biologists have to cope with. He started to explore simple ecological models, tracking how populations changed generation after generation. In a typical model, a population would swell toward an equilibrium level at a set rate; above that level, the population would decline.

    May's model was simple, but the population of each generation wasn't directly proportional to that of the preceding one. It might be more, it might be less, it might be the same. In other words, it was nonlinear. And May discovered that a nonlinear model of ecology could produce complex patterns even if it was far simpler than anything in nature. When May ran his model at low growth rates, the population would hit equilibrium and stay there. But when May had the population reproducing like bunnies, it overshot its carrying capacity, triggering a population crash, followed by another rise. Rise and fall would then follow regularly, in a pattern known as a limit cycle. At even higher growth rates, the population tripped into a more complicated cycle. Instead of moving between one high and one low, it might hop between two of each, or four, or more. Finally, when the growth rate soared above a threshold, the population went berserk. From generation to generation, it hopped around in what looked like a purely random fashion.
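    The model May studied is commonly identified with the classic logistic map. A minimal sketch of its three regimes, where the function name, starting value, and growth rates are our illustrative choices, not details from the article:

```python
# Logistic map: x_next = r * x * (1 - x), where x is the population as a
# fraction of carrying capacity and r is the growth rate.
def logistic_orbit(r, x0=0.5, skip=500, keep=8):
    """Iterate the map, discard the transient, return the settled values."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit(2.8))   # settles to a single equilibrium value
print(logistic_orbit(3.2))   # alternates between two values: a limit cycle
print(logistic_orbit(3.9))   # chaos: no repeating pattern
```

    Raising r walks the same one-line equation through exactly the sequence described above: equilibrium, limit cycle, period doubling, and finally apparently random wandering.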

    Chaos was the name bestowed on this sort of random-looking pattern produced by a nonrandom equation. As other scientists were seduced by the erratic charms of chaos, they invented a more formal way to recognize it: By nudging the initial conditions of a chaotic system just a hair, you will drastically alter its future path. The rate of this divergence is called the Lyapunov exponent. A negative exponent means limit cycles and other at least somewhat regular behavior. A positive exponent means chaos.
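    For a one-dimensional map, that divergence rate can be estimated by averaging the logarithm of the map's local stretching along an orbit. A sketch using the logistic map as the test case; the parameter values are illustrative assumptions:

```python
import math

def lyapunov(r, x0=0.5, skip=500, n=20000):
    """Estimate the Lyapunov exponent of the logistic map
    f(x) = r*x*(1-x), whose derivative is f'(x) = r*(1-2x)."""
    x = x0
    for _ in range(skip):               # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))   # local stretching rate
        x = r * x * (1 - x)
    return total / n

print(lyapunov(3.2))   # negative: a regular limit cycle
print(lyapunov(3.9))   # positive: nearby orbits diverge, i.e. chaos
```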

    May's work was “hugely influential,” says Pimm, “because it showed if you take the simplest population model you can imagine that you'll get cycles and this special thing called chaos. What that told us immediately was that lurking in these descriptions that looked simple you've got very strange dynamics.” Simple intrinsic factors such as growth rates might alone be enough to produce a lot of nature's complicated signal. It was so easy to find chaos in models, in fact, that it seemed likely that strong cases of chaos could be found in nature.

    The excitement that many ecologists felt over the possibility had two sides. There was hope that the jagged oscillations found in nature could be explained by a few basic ecological rules. On the other hand, the sensitivity that chaotic systems had to their initial conditions meant that it would never be possible to predict what an ecosystem would do very far into the future. “You may even find all the simple rules, and yet prediction may be impossible,” says May.

    But from the start, May warned his colleagues that they would have a hard time finding chaos in the wild. By definition, it would look like a random pattern produced by the pushing and shoving of environmental noise. Ecologists struggled to find ways to filter out the noise in their data to get at the underlying dynamics, but they were not the ones to get the first strong signal of ecological chaos. Instead it came from the laboratory, where scientists can keep noise at a minimum. In 1997, biologist Robert Costantino of the University of Rhode Island, Kingston, and his colleagues reported bona fide chaos in captive flour beetles (Science, 17 January 1997, p. 389).

    Costantino's lab has been raising the beetles in flasks of Blue Bonnet flour and brewer's yeast for over 20 years. After they hatch, the beetle larvae need about 2 weeks to grow into pupae, and another 2 weeks to reach reproductive age. Flour beetle dynamics are drastically nonlinear, because the beetles are cannibals, the adults eating eggs and pupae (and the larvae eating eggs as well). Cannibalism undermines the younger generation of beetles and can trigger a population crash. But it eventually leaves fewer adults around, which in turn means less cannibalism. A new batch of larvae can then reach adulthood in such high numbers that the population rebounds.

    The researchers built a mathematical model of flour beetle population dynamics and tinkered with it, changing variables such as the adult mortality and the number of larvae each adult produced, and watched what kind of dynamics played out. They discovered that if adult mortality was high, the model became very sensitive to the rate of cannibalism, in some cases jumping to cycles and in others to chaos as they changed the rate. The researchers next turned to the actual beetles to see if they could create this behavior. They raised the adult mortality rate simply by regularly plucking out mature beetles. Then they mimicked different cannibalism rates by removing pupae from the flasks. At some rates the flasks reached an equilibrium; at others they fluctuated through cycles; at others they raged chaotically. Those were exactly the dynamics that Costantino's group had predicted from their model.
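    Costantino's model is known in the literature as the LPA (larva-pupa-adult) model. The sketch below follows its general shape, with cannibalism entering through exponential survival terms; the parameter values and function name are illustrative assumptions, not the published estimates:

```python
import math

def lpa_step(L, P, A, b=6.6, mu_l=0.5, mu_a=0.1,
             cel=0.012, cea=0.009, cpa=0.004):
    """One census step of an LPA-style flour beetle model.
    Cannibalism appears in the exponentials: larvae and adults eat
    eggs (cel, cea), and adults eat pupae (cpa)."""
    L_next = b * A * math.exp(-cel * L - cea * A)    # eggs surviving to larvae
    P_next = L * (1 - mu_l)                          # larvae surviving to pupae
    A_next = P * math.exp(-cpa * A) + A * (1 - mu_a) # new plus surviving adults
    return L_next, P_next, A_next

state = (100.0, 50.0, 100.0)
for _ in range(50):
    state = lpa_step(*state)
print([round(v, 1) for v in state])
```

    Raising mu_a corresponds to plucking out mature beetles, and replacing the pupal survival term with an imposed fraction mirrors the removal of pupae, the two experimental manipulations described above.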

    Outside the comfortable confines of the lab, though, things haven't gone so well. To find chaos in the wild, ecologists usually resort to historical records consisting of a few dozen data points. “Most of the data sets are really very bad; they're just awful,” says Costantino, “and I don't mean to discredit any of the researchers who did the work.” To try to make sense of them, researchers sometimes build a model out of the biology they consider important to the case they're studying—such as the rate at which a predator eats its prey. They can then turn these rates up and down like stereo knobs so that the equations produce a pattern like the real one. Other times they fit the data to an equation without bothering to figure out its biological meaning first. Then, by perturbing the model, they can find its Lyapunov exponent and determine whether the wild population is chaotic or not. For over 20 years ecologists have been using methods like these to hunt for chaos. And the result? “There is no unequivocal evidence for the existence of chaotic dynamics in any natural population,” declares ecologist David Earn of Oxford University.

    In 1995, for example, theoretical ecologists Stephen Ellner of North Carolina State University in Raleigh and Peter Turchin of the University of Connecticut, Storrs, surveyed all the long-term observations of wild populations they could find in the scientific literature and measured their Lyapunov exponents. They concluded that some populations were stable and many verged on chaos, but only a few ambiguous, weak cases of chaos turned up.

    To some ecologists, the way nature seems to sit on the edge of chaos, and not plunge deep into it as models might predict, is a fascinating puzzle. “I haven't seen any theory I believe that would predict this,” says Turchin. It may be that a population's tendency toward chaos is buffered in some way that the models have missed. To study food webs, for example, ecologists often simplify them into linear chains. All the primary producers get thrown into one level; next up the chain are the herbivores, then the intermediate predators, and so on up to the top predators. These models can turn chaotic because oscillations in population density at one level generate oscillations at other levels. But some researchers argue that these chains ignore some important messiness in nature. A predator may depend strongly on a single species of prey, but it may sometimes switch to other species. Killer whales, for example, can switch from sea lions to sea otters (Science, 16 October 1998, pp. 390, 473). Or they may be omnivores like people or bears, picking their meals from many levels. Some predators may even snack on other species at the same rank in the food chain, or on their own species.

    Last August a group of ecologists at the University of California (UC), Davis, showed how these additional connections could tame the tendency toward chaos. Ecologist Kevin McCann and his colleagues looked at the dynamics of a predator in a simple food chain. Next they compared this model to more complicated chains in which the predator switched between two prey species, or the prey had to compete with another species for its own food. They spent a lot of effort giving these more complicated models a realism that many earlier models lacked. For example, the efficiency with which their predators could catch prey was based on actual animal metabolism. A predator could only boost its success at hunting one prey species at the expense of its ability to hunt others.

    Predator populations that depended solely on one prey species slipped into chaos. But if the ecologists added in other connections between species—even if they were weak—the chaos disappeared. Changes in the population of one species no longer hit a linked species with full force. “It seems that for species to persist, nature is biased toward inhibitors and away from oscillators,” says McCann. “That's just going to decrease the likelihood of chaos, no matter what.”

    At the mercy of weather.

    Harsh gales on islands off Scotland can synchronize fluctuating populations of feral sheep.


    Other ecologists don't take such a dim view of chaos. They still think it's out there in nature but playing hard to get. “There aren't a large number of examples that you can catalog, because there aren't a large number of systems out there for which we have long runs of data for all the variables,” says May, who is now the Chief Scientific Advisor to the U.K. government. But if finding chaos means tracking a species for decades or centuries—as well as all its predators and pathogens and prey and the rainfall and so on—few ecologists may have the stamina (or the funding) to keep up the hunt.

    The powers of prediction

    Whatever the final verdict on chaos in nature may turn out to be, the success of nonlinear dynamics won't stand or fall on it. “In the last few years we've been using the nonlinear techniques, but not focused on ‘chaos versus nonchaos,’” explains Turchin. “We are now more interested in what are the forces that drive the spectacular population dynamics” seen in many species. Ecologists probing these forces were once limited to cumbersome experiments, such as closing off parts of a forest to predators. With the help of nonlinear mathematics, they can now get additional information from historical records.

    Turchin, for example, is studying a pest known as the larch bud moth, which denudes larch trees in the Swiss Alps. The bud moth goes through cycles of 8 or 9 years in which its numbers can multiply 100,000-fold. Ecologists have been debating the cause of the cycles for as long as they've known about them. At one point a bud moth virus seemed to be the best candidate, but more recently the larches have taken the lead. An exploding moth population destroys larch needles faster than the trees can recover; the following year the trees muster only stubby needles that are a poor energy source for the moths.

    Turchin and colleagues, working at the National Center for Ecological Analysis and Synthesis at UC Santa Barbara, have sifted through a 40-year bud moth census, as well as related ecological records. They then wrote out nonlinear equations representing the possible effects of each ecological factor—viruses, food quality, and so on—on the bud moth and tested them to see how closely they fit the bud moth's actual history. Their preliminary findings suggest that the plants have something to do with the cycle, but they're not powerful enough on their own to produce it. The collapse of the needle supply does bring the explosion of bud moths to a stop. But a parasitoid wasp that lays its eggs in the caterpillar then seems to take over. The rise of the wasp lags behind the moths, and it continues after the moths have stopped their ascent. As a higher and higher proportion are parasitized and killed, the moth population crashes. When the moths bottom out, a window opens for the larches to recover. The bud moth crash spurs a wasp crash, then the cycle starts all over again.

    The same kind of interaction from above and below in a food chain emerged when Nils Stenseth, an ecologist at the University of Oslo, looked at the snowshoe hare of Canada. Stenseth used a different method: Rather than make biologically plausible equations from the data, he let the actual data guide him through a statistical search for the best nonlinear equations. After he had a robust model, he looked at the variables. The animals, he discovered, were controlled by two factors; changes in food supply and populations of predators (mainly lynx) fit the job descriptions best. “People tend to belong to different schools—either it's the food supply or predation,” says Stenseth. “But you really have to have both.”

    Bud moth boom and bust.

    The supply of the caterpillar's food, larch needles, and the depredations of a parasitic wasp interact to produce its population cycles.


    You also have to have noise in the environment, ecologists are learning. Most ecological models (including nonlinear ones) have only looked at a particular species, or perhaps its food supply and predators. They haven't taken into consideration the effects of random variability coming into the model from the outside. In these models, every day is sunny. Now researchers are getting a better understanding of population dynamics by bringing noise into nonlinear models.

    Bryan Grenfell of Cambridge University and his colleagues have been studying feral sheep on islands off Scotland using methods similar to Stenseth's. They found that at low populations, the sheep multiply in a straightforward, linear fashion. But above a certain threshold, as the sheep overgraze their island, they suddenly fluctuate in a nonlinear fashion. Randomly adding or subtracting a few sheep to a crowded island brings big changes to the dynamics of the population.

    Their records also show that the population of sheep on neighboring islands has been rising and falling in tight synchrony since ecologists first started their census in the 1950s. Researchers have suspected that weather might synchronize separate populations, in the same way adjusting a slow clock every hour keeps it in sync with a faster one. But the sheep populations are so sensitive to random noise that weather ought to have the opposite effect, throwing them out of sync.

    Grenfell and his colleagues resolved this paradox by incorporating weather into their nonlinear models, adding variables to their equations that described the harshness of the March gales that scour the islands, as well as the respite of calm Aprils. Their analysis showed that the weather is so intense that it can overcome the sensitivity of the sheep's dynamics. Not only does it bring down the sheep's numbers on neighboring islands at the same rate, but both populations subsequently cross the crucial threshold in the same year.

    A powerful interaction between animals and their environment is responsible for the damselfish cycle as well, according to Dixon and his colleagues. Three days after hatching, the larvae have depleted their yolk sac and must start feeding in the outside world. Unable to swim far, they depend on turbulence to sweep them into contact with zooplankton. Too little turbulence won't give them enough food to survive; too much won't give them enough time to get it in their mouths. The full moon that triggers the larvae to hatch also brings with it high tides, which sweep the larvae away from the reef, letting them avoid predators while they mature. They return as mature larvae, but to get back, they need favorable winds to set up the right currents. Because their survival depends on several interacting factors, the fish can react dramatically to what looks like small amounts of noise.

    If the turbulence and wind both jibe with the fish's needs at the right time relative to the full moon, they can reach adulthood in vast numbers. But if the factors go against the fish, their individual effects are multiplied. Say 90% of the fish get killed because of turbulence. If the returning winds also create a 90% mortality rate in the survivors, only 1% of the fish will reach adulthood. “If you play around with these losses, that alone can produce huge fluctuations,” says Dixon.
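    The compounding of losses in Dixon's example is just multiplication of survival fractions. A trivial check, using the article's 90% figures (the function name is ours):

```python
def survivors(*mortalities):
    """Fraction surviving a sequence of independent mortality events."""
    frac = 1.0
    for m in mortalities:
        frac *= (1 - m)
    return frac

print(survivors(0.9, 0.9))   # about 0.01: only 1% reach adulthood
```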

    Damselfish and feral sheep are only two examples of a growing list of organisms in which nonlinear dynamics seems to amplify noise. “The emergence of noise amplification as a very general factor is very exciting,” says Ellner of North Carolina State. “Apparently there is some generality after all, even if it isn't the one that we looked for initially—that is, deterministic chaos.”

    Although nonlinear models are flexing their muscles at explaining the ebbs and flows of wild populations, experts say it is far too soon to apply them to conservation biology—designing reserves, for example, or understanding when a population drop is a natural fluctuation and when it's a sign of trouble. But the models are showing promise for helping scientists destroy unwanted organisms. Grenfell, for example, applies the same approach he brings to island sheep to diseases like measles. His work suggests that vaccination campaigns might work better if the constant low-level efforts now mainly practiced were punctuated by massive spurts. That would tend to synchronize disease levels in all regions of a country in the same way that March gales synchronize sheep, so that the crests and troughs of its cycle would be the same everywhere. If every town hits a low part of the cycle together, neighboring towns won't reinfect each other, and chances are better that the disease won't resurge.

    For now, though, ecologists are just enjoying the fact that their models are working. “No one ever thought that the models were that good,” admits Alan Hastings of UC Davis. “That to me is the biggest sign of progress.”
