News this Week

Science  10 Jul 1998:
Vol. 281, Issue 5374, pp. 148
  1. HUMAN GENOME PROJECT

    A Planned Boost for Genome Sequencing, But the Plan Is in Flux

    1. Elizabeth Pennisi

    The National Human Genome Research Institute (NHGRI) last week awarded $60.5 million to seven centers across the United States to scale up their efforts to sequence the human genome. The awards should enable the centers to crank out 117 million bases next year—almost double the total produced so far by all U.S. groups combined. But exactly what the centers will do with the money isn't clear, for NHGRI has asked them to spend the next 2 months evaluating a proposal for a radical change in the plan to sequence all 3 billion base pairs that make up our genetic code.

    Until now, the Human Genome Project has been marching methodically toward producing a highly accurate sequence of the entire genome by 2005. For the past 2 years, eight groups in the United States have been honing their techniques and procedures, and last week's awards were originally designed to permit the best of them to start churning out sequence in earnest. Next year, according to the plan, the field would be narrowed to perhaps five groups that would spearhead the final assault on the genetic code. At the same time, Britain's Wellcome Trust has been funding a program at the Sanger Centre near Cambridge, U.K., that would complete one-third of the genome; the NHGRI grantees are expected to sequence about 60%. But now, NHGRI is considering backing a crash effort to create a “rough draft” of the genome by 2001, with the final, detailed blueprint completed perhaps 4 years later. “We're in a period of ferment,” says Eric Lander, who heads the sequencing effort at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts. “We know [the sponsors] are entertaining all sorts of changes.”

    The ferment began in May, when J. Craig Venter, president of The Institute for Genomic Research (TIGR) in Rockville, Maryland, stunned the genome community by announcing that he will team up with Perkin-Elmer Corp. in Norwalk, Connecticut, to launch a private effort to sequence all the important parts of the genome in 3 years (Science, 15 May, p. 994). Venter insists his results will be about as complete and accurate as those the government-funded program will produce, but many fellow sequencers are not convinced. They also worry that the effort could lock up large amounts of genetic data in proprietary claims. That concern prompted leading genome researchers to call on NHGRI last month to consider ways to generate sequence faster. Thus was born the idea of producing a rough draft, which would help pinpoint genes and provide important information on most of the coding regions, followed later by a rigorous, completed genome.

    That approach, however, would require a shift throughout the program from a slow and narrow analytical style to a broader—but riskier—attack. So far, the U.S. centers and the Sanger Centre have been advancing clone by clone—completely sequencing DNA in about 150,000-base chunks, the amount contained in the bacterial clones used to replicate human DNA. Researchers cut each chunk into many smaller, overlapping bits, which are sequenced and then pieced together by computer programs and by experts called finishers. Venter, in contrast, plans to chop the entire genome into small bits, sequence them with a new generation of Perkin-Elmer machines, and use supercomputers to fit the data together—an approach called whole-genome shotgun sequencing.
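
    In outline, the computational piecing-together step works by finding reads that overlap and merging them. The toy Python sketch below is purely illustrative (made-up fragments, exact matches only); real assemblers of the era, and the supercomputer pipeline Venter envisioned, had to cope with sequencing errors, repeats, and millions of reads.

    ```python
    def overlap(a, b, min_len=3):
        """Length of the longest suffix of `a` that exactly matches a prefix of `b`."""
        for n in range(min(len(a), len(b)), min_len - 1, -1):
            if a.endswith(b[:n]):
                return n
        return 0

    def greedy_assemble(reads):
        """Repeatedly merge the pair of reads with the largest exact overlap."""
        reads = list(reads)
        while len(reads) > 1:
            best_n, best_i, best_j = 0, None, None
            for i, a in enumerate(reads):
                for j, b in enumerate(reads):
                    if i != j:
                        n = overlap(a, b)
                        if n > best_n:
                            best_n, best_i, best_j = n, i, j
            if best_n == 0:                      # nothing overlaps any more
                return "".join(reads)
            merged = reads[best_i] + reads[best_j][best_n:]
            reads = [r for k, r in enumerate(reads) if k not in (best_i, best_j)]
            reads.append(merged)
        return reads[0]

    # Made-up overlapping fragments of a short target sequence
    fragments = ["GATTACAGGT", "CAGGTTCAAC", "TCAACGGATT"]
    print(greedy_assemble(fragments))            # GATTACAGGTTCAACGGATT
    ```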

    The NHGRI-funded centers are not planning to try whole-genome shotgunning, but they are looking at ways to speed up their processes. One possibility is not to wait until one chunk of DNA is finished before going on to sequence the next. “Ideally the [sequencing and finishing] would ramp up equally,” says Richard Gibbs, who runs the sequencing center at Baylor College of Medicine in Houston. But this summer, he and others plan to uncouple the two processes. In this way they will determine just how difficult it is to generate data and finish the sequence independently.

    Other awardees are looking to reduce the amount of redundant data needed for each piece of DNA sequenced. Currently, most bases are sequenced about 10 times in overlapping stretches of DNA. The redundancy helps in piecing the stretches together, and it reduces errors by pinpointing aberrant sequences. Bruce Roe, a biochemist who directs the sequencing center at the University of Oklahoma, Norman, among others, is now evaluating how well his team can piece together sequence where each base is represented only two, four, or six times.
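
    The trade-off the centers are testing can be roughed out with the classic Lander-Waterman estimate: if reads land at random and each base is covered c times on average, a fraction of roughly e^(-c) of the genome is never touched at all. A quick illustrative calculation (the formula is an idealization, not the centers' actual evaluation method):

    ```python
    import math

    def uncovered_fraction(coverage):
        """Lander-Waterman estimate of the fraction of bases left unsequenced
        when reads are placed at random with mean depth `coverage`."""
        return math.exp(-coverage)

    GENOME = 3_000_000_000                       # ~3 billion base pairs
    for c in (2, 4, 6, 10):
        missed = uncovered_fraction(c)
        print(f"{c:>2}x: ~{missed:.3%} uncovered, "
              f"~{missed * GENOME:,.0f} bases in gaps")
    ```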

    At the University of Texas Southwestern Medical Center at Dallas, human geneticist Glenn Evans is investigating just how useful various rough drafts would be. “The issue of completeness and utility has never been determined,” says Richard McCombie, a sequencer at Cold Spring Harbor Laboratory in New York. “We don't know anything about intermediate sequencing products,” notes Jane Peterson, an NHGRI cell biologist who oversees the human genome sequencing effort. “We have to be sure it will be useful.” And the first requirement, adds Lander, is that a draft “can't impede our ability to finish.”

    In early September, NHGRI will evaluate the results of these efforts and decide whether to proceed with its original plan—to produce a detailed sequence over the next 6 years—or shift gears to focus on an interim rough draft. The awards announced last week (see table) are based on the original plan. They assume that the seven centers—which include all those that took part in earlier phases, except for TIGR—will be generating 117 million bases of detailed, finished sequence. If the program is refocused, next year's sequence output should be considerably higher.

    That possibility pleases Evans. His team was dismayed by the prospect of being beaten to the complete genome by Venter, and he says doing a rough draft would be “a legitimate way of not being scooped.” And, he adds, “it's politically the right thing to do.”

  2. NEUROSCIENCE

    First Images Show Monkey Brains at Work

    1. Marcia Barinaga

    Monkey brains have gotten plenty of close scrutiny from researchers studying functions such as perception and memory. But monkey researchers have been unable to use one promising technique: functional magnetic resonance imaging (fMRI), which maps out active brain areas and has revolutionized the study of human brain function. The problem is convincing a monkey to sit perfectly still and perform a thought task inside the claustrophobic, banging magnet that creates the magnetic resonance images. Now Tom Albright and his colleagues at the Salk Institute and the University of California, San Diego, have overcome the difficulties. In the June issue of Neuron, they published the first fMR images of activity in a monkey's brain. A second team, headed by Richard Andersen at the California Institute of Technology (Caltech) in Pasadena, has a similar study coming out next week in NeuroReport.

    These successes, achieved by patiently training the monkey and designing a special seat to restrain it in the magnet, could ultimately help neuroscientists get more out of human fMR images. This noninvasive technique, based on the magnetic signal of oxygen in the blood, records the increases in blood flow that result from changes in neural activity. But what individual neurons are doing in the areas that light up on an fMR image is open to interpretation, because researchers can't stick electrodes into the brains of healthy humans. “Monkey fMRI will allow us to test our interpretations,” says neuroscientist Robert Desimone, who directs intramural programs at the National Institute of Mental Health.

    The technique should also benefit traditional electrode studies of monkey brain activity. “Say I am interested in a perceptual phenomenon, but I don't have much evidence about the part of the brain that underlies it,” says Albright. “fMRI gives me a way of identifying the relevant parts of the brain, which will then guide my microelectrode studies.”

    The first monkey fMR images, which show activation of the visual system as the animal watched a children's cartoon, don't offer any new scientific insights—just a proof of principle. To make them, both Albright's and Andersen's groups designed chairlike apparatuses made of nonmagnetic materials that hold the monkey still inside the magnet, in variations of a position Andersen describes as “sphinxlike,” on haunches and elbows and looking forward, down the length of the magnet. “We worried that it would be difficult to get the monkey to cooperate,” says Albright, but they found that by rewarding the monkey with juice, they were able to train it to relax in the magnet.

    Both groups worked with ordinary hospital MRI machines—horizontal magnets designed to accommodate a prone human. But several labs, including those of Nikos Logothetis at the Max Planck Institute in Tübingen, Germany, and Carl Olson at Carnegie Mellon University in Pittsburgh, are working with manufacturers to develop a new generation of MRI machines specifically for monkey research. The magnets are vertical, allowing the monkey to sit upright, and have greater magnetic fields, which will increase the resolution of the images, says Andersen, who hopes to have such a facility at Caltech within 2 years.

    In the meantime, Albright's and Andersen's teams are using the hospital magnets to test the relation of fMR images to neural activity in monkey brains and look for new brain areas to explore with electrodes. Eventually, Albright says, the researchers will need the better resolution—and the additional research time—available with the dedicated machines. But for now, he says, “the important thing was to show we could do it.”

  3. DRUG DEVELOPMENT

    Small Molecule Fills Hormone's Shoes

    1. Marcia Barinaga

    As diabetics who must inject themselves daily with insulin know only too well, a major disadvantage of protein drugs is that they can't be taken orally. Drug companies would love to find small compounds that mimic the effects of these drugs yet evade breakdown in the digestive tract. But for many protein drugs—including insulin and other hormones known as cytokines—that has seemed a forlorn hope. These proteins stick snugly to large surfaces on their receptors, and small chemicals—which might be just 1/50 of a protein's size—seemed too puny to turn on such receptors. “People didn't feel that a small molecule would have enough contact points,” says James Ihle, who studies cytokine receptors at St. Jude Children's Research Hospital in Memphis, Tennessee.

    But on page 257, researchers at Ligand Pharmaceuticals in San Diego and SmithKline Beecham Pharmaceuticals in Collegeville and King of Prussia, Pennsylvania, prove the skeptics wrong. They report their discovery of a small molecule that activates the receptor for granulocyte-colony-stimulating factor (G-CSF), a cytokine that triggers the growth of white blood cells. G-CSF is commonly used to boost patients' immune systems after chemotherapy, which kills off immune cells. Although the new compound works in mice but not humans, it is being heralded as evidence that the right small molecule can indeed fill a protein hormone's shoes.

    “A lot of people have been trying to do this,” says Alan D'Andrea, who studies cytokine receptors at the Dana-Farber Cancer Institute in Boston, “and this is proof that it really can happen. It has generated a lot of excitement.” The discovery will also spur the pharmaceutical industry's search for protein-mimicking drugs, says Mark Goldsmith, who studies cytokine receptors at the University of California, San Francisco. It suggests, he adds, that with small molecules, “we can mimic many interactions that just a few years ago would have been expected to be much more difficult.”

    The Ligand-SmithKline Beecham team had some grounds for optimism when they began their search for a G-CSF mimic. When cytokines bind to their receptors, they drag the receptors together into pairs called dimers. That aggregation seems to be what turns on a receptor, as researchers have shown by using antibodies to bind receptors together or mutating the receptors so that they stick together of their own accord. And linking receptors was a job that a small molecule might be able to do.

    In the past 2 years, moreover, two research teams showed that peptides—small fragments of proteins—can activate the receptors for the blood cell cytokines thrombopoietin and erythropoietin. Peptides can't be given orally, but they are considerably smaller than proteins. That peptide finding was “encouraging,” says Ligand researcher Peter Lamb, who with his teammates was already at work to find a small molecule that could link G-CSF receptors.

    To speed the search, the team had engineered a line of mouse cells as a biological test for G-CSF-like activity. When G-CSF activates its receptor, a cascade of cellular events begins that ultimately activates proteins called STATs, which turn genes on and off to stimulate growth or cause other changes in the cell. To monitor STAT activity in their test cells, the team added a gene for the light-generating protein luciferase, engineered to be turned on by STATs. They then treated the cells with thousands of compounds and picked out the ones that made the cells literally light up.
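
    Conceptually, the screening step is a filter over plate readings: keep the compounds whose luminescence rises well above that of untreated control wells. The sketch below is a hypothetical illustration only; the readings, compound names, and cutoff are invented, and the article does not describe Ligand's actual thresholds.

    ```python
    # Hypothetical plate readings (arbitrary luminescence units) from the reporter cells
    control = 120                                # untreated wells
    readings = {
        "compound_A": 135,
        "compound_B": 2400,                      # lights up strongly
        "compound_C": 98,
        "compound_D": 3100,
    }

    FOLD_CUTOFF = 5                              # assumed hit threshold: 5x over control

    hits = {name: value for name, value in readings.items()
            if value >= FOLD_CUTOFF * control}
    print(hits)                                  # {'compound_B': 2400, 'compound_D': 3100}
    ```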

    Togetherness.

    Two receptors for the cytokine growth hormone (blue and green) are held together by the growth hormone protein (red). Small molecules may produce the same effect.

    A. M. DE VOS, M. ULTSCH, A. A. KOSSIAKOFF, SCIENCE

    After further tests to make sure that the candidate compounds really were activating STATs and that the G-CSF receptor was playing some role, the group was left with a small organic compound called SB 247464. This compound actually stimulated the growth of mouse white blood cells in culture and in living mice. “As far as we know, this is the first example of a [nonpeptide] cytokine mimic,” says Lamb. The data suggest that the compound binds and dimerizes the G-CSF receptor, but more tests are needed to prove that is actually how it works.

    The researchers admit that they were disappointed to find that the compound works in mouse but not human cells. It remains to be seen whether chemical alterations will make the compound capable of acting on the human receptor. “That is a lesson about the importance of using human [receptor] molecules for screening for human compounds,” says Goldsmith. But despite that weakness, he says, the paper has taken the field “a quantum leap” along the path toward developing oral substitutes for protein drugs.

    Ligand plans to use luciferase-gene assays to search for substitutes for other cytokines that work through STATs, Lamb says. And the drug industry as a whole is likely to mount a broader search. “The insulin receptor is fundamentally no different” from the G-CSF receptor, Goldsmith notes, pointing out that it too is activated by dimerization. “Why not consider [searching for] a synthetic compound that can be taken as a pill and will activate your insulin receptors?” If drug companies aren't looking already, it certainly won't be long before they start.

  4. BIOMEDICAL POLICY

    NIH Urged to Involve the Public in Policy-Making

    1. Eliot Marshall

    With more than 100 standing committees already providing tons of advice, the National Institutes of Health (NIH) wouldn't seem to need more advisers. But that's what the doctor has ordered. To cure a “major weakness” in communication between NIH's leaders and the public, a group of experts at the Institute of Medicine (IOM) chaired by molecular biologist Leon Rosenberg of Princeton University has recommended that NIH create a new committee and staff that would enable public representatives to communicate more directly with NIH's brass about research policy.*

    The 18 to 25 people named to the new panel, the 8 July IOM report** says, should represent “a broad range of public constituencies,” including “disease specific interest groups, ethnic groups, public health advocates, and health care providers.” Chosen by NIH for 3-year terms, these tribunes would sit on a Council of Public Representatives in the NIH director's office. Others would join councils in the offices of the directors of each of NIH's 21 institutes and centers. They would be supported by a new permanent staff of “public liaison” agents, who would also solicit information and help citizens understand NIH. Acknowledging one possible undesirable outcome of this scheme, the IOM report warns that the council “is not intended to serve as a forum for advocacy groups to lobby the NIH director for research dollars” for their special interest. Instead, it says, members would set aside the targeted politics that got them to the table and offer “valuable and thoughtful perspectives on [NIH's] research programs.”

    These are the most weighty recommendations in a list of a dozen issued last week by IOM, part of a report commissioned by Congress at the request of Republican Senators William Frist (TN) and Dan Coats (IN). Congress asked IOM to carry out this 6-month review of how NIH goes about ranking its funding priorities to help clarify its own decision-making. The assignment, as the IOM report notes, grew out of a contentious debate in recent years over whether AIDS research ought to get the large set-aside it has been receiving—currently about $1.6 billion of the total $13.6 billion NIH budget. The debate heated up when breast cancer activists copied the AIDS lobby and also began to win big funding set-asides from Congress. Next, the traditional groups for research on heart disease and diabetes appealed for attention, arguing that NIH spends less per patient on their diseases than on AIDS. Advocates for Parkinson's and Alzheimer's patients pushed for a bigger share. Congress held some hearings (Science, 18 April 1997, p. 344) and in 1997 tossed the problem to IOM.

    After holding a couple of public meetings at which NIH officials and disease advocacy groups gave their views, the IOM panel apparently decided—like Congress—to finesse the debate on how best to rank biomedical research needs. Rosenberg says that the panel studied NIH's priority-setting methods, based chiefly on scientific opportunity, and found them “sound.” However, the panel felt that NIH does not clearly explain how it uses these criteria. In particular, the Rosenberg panel concluded, NIH is lax in the way it collects and provides data on “disease burden”—indicating the relative impact of different illnesses—and on the amount of money it spends in various disease categories. NIH leaders may not like using such data, Rosenberg says, but “those numbers are used in Congress … so they should be better.” The report makes several recommendations for strengthening the data.

    The “hardest part” of preparing the report, Rosenberg says, was figuring out how relations between the public and NIH leaders could be improved. The NIH director's office “currently does not have any mechanism for regular exchange with the public at large,” says Rosenberg, adding “we heard quite a lot about that in our public meeting.” Asked whether the proposed new advisory councils might not become outposts for lobby groups, Rosenberg said he takes an optimistic view: “If you offer people a special responsibility, they generally rise to the occasion. … I believe they would educate each other and elevate the entire debate about priority setting and earmarking.”

    NIH director Harold Varmus, who was being briefed by IOM panel members at press time, could not be reached for comment.

    • * This is a corrected version of the story published in the print edition of Science, which misreported that the IOM panel recommended that new committees be established in each institute. The report called for a Council of Public Representatives only in the Office of the NIH director, together with new, NIH-staffed public liaison offices in each institute.

    • ** Scientific Opportunities and Public Needs, Institute of Medicine, 8 July.

  5. ASTRONOMY

    Hints of a Nearby Solar System?

    1. Govert Schilling
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    A ring of dust, probably kicked up by a swarm of comets, has been spotted around Epsilon Eridani, the nearest sunlike star, just 10 light-years away. “What we see looks just like the [dusty] comet belt on the outskirts of our own solar system,” says Jane Greaves of the Joint Astronomy Center in Hawaii. The appearance of the dust ring also suggests that planets are orbiting nearby, says Greaves, who announced the discovery this week at the Protostars and Planets Conference in Santa Barbara, California.

    Greaves and her colleagues imaged the ring with the Submillimeter Common User Bolometer Array (SCUBA), a sensitive camera built by the Royal Observatory in Edinburgh and mounted on the 15-meter British-Dutch-Canadian James Clerk Maxwell Telescope at Mauna Kea, Hawaii. They had already used SCUBA, which is sensitive to the short radio wavelengths at which dust radiates strongly, to detect similar disks around the hotter and brighter stars Vega, Fomalhaut, and Beta Pictoris, and other astronomers have detected disks as well (Science, 24 April, p. 523 and this issue, p. 182). But Epsilon Eridani is cooler and more sunlike than the other stars with disks, although it is just a tenth of the sun's age.

    Jewel in the crown?

    The bright knot at the 8 o'clock position in Epsilon Eridani's dust disk might signal a planet.

    GREAVES ET AL.

    Because the star is so close, the dust ring—which can be seen face-on—shows unprecedented detail. The dusty doughnut is about the size of our solar system's Kuiper belt, a flattened disk of comets outside Neptune's orbit. But the strength of the submillimeter waves implies that the dust is far denser than it is in the Kuiper belt. If it really is cometary debris, the number of comets orbiting the star must be 1000 times larger than in our solar system.

    The inner region of the disk, comparable in size to our own planetary system, contains little material, perhaps because it has been swept clean by planets forming from the dust. A bright spot in the ring is probably “either dust trapped around a planet or dust perturbed by a planet orbiting just inside the ring,” says Greaves.

    It is “good evidence but not convincing proof” of a planet, agrees theorist Jack Lissauer of the NASA Ames Research Center.

    Any planets around Epsilon Eridani are likely to be either relatively small or far from the star, says Geoff Marcy of San Francisco State University. Marcy has observed Epsilon Eridani for the past 11 years, looking for the wobbles that might betray the presence of a massive planet. The absence of detectable wobbles implies, he says, that “no companion having a mass greater than three Jupiter masses is likely to exist” within five times the Earth-sun distance. That, of course, leaves a comfortable margin for planets like our own.
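
    Marcy's limit can be put in rough numbers with the standard radial-velocity relation for a circular orbit, K ≈ 28.4 m/s × (Mp/MJup) × (M*/Msun)^(-2/3) × (P/yr)^(-1/3). The sketch below assumes a stellar mass of about 0.85 solar masses for Epsilon Eridani (an assumption, not a figure from the article) and shows why an 11-year survey should have revealed a 3-Jupiter-mass companion at five times the Earth-sun distance.

    ```python
    import math

    def rv_semi_amplitude(m_planet_mjup, m_star_msun, a_au):
        """Approximate stellar wobble (m/s) for a circular, edge-on orbit.
        Kepler's third law gives the orbital period from the semimajor axis."""
        period_yr = math.sqrt(a_au**3 / m_star_msun)
        return 28.4 * m_planet_mjup * m_star_msun**(-2/3) * period_yr**(-1/3)

    # Assumed: Epsilon Eridani at ~0.85 solar masses; a 3-Jupiter-mass planet at 5 AU
    k = rv_semi_amplitude(3.0, 0.85, 5.0)
    p = math.sqrt(5.0**3 / 0.85)
    print(f"wobble of ~{k:.0f} m/s over a ~{p:.0f}-year orbit")   # ~41 m/s, ~12 years
    ```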

  6. MICROBIOLOGY

    Bacteria to Blame for Kidney Stones?

    1. Gretchen Vogel

    Tiny bacteria have been fingered as possible culprits behind kidney stones and abnormal calcium deposits in other tissues. The bacteria, described in the 7 July Proceedings of the National Academy of Sciences, are among the smallest ever found, barely bigger than some viruses.

    Physician Olavi Kajander of the University of Kuopio in Finland first noticed the bacteria more than 10 years ago as a white film in his mammalian cell cultures. From the film, he was able to culture the slow-growing bugs, which he dubbed nanobacteria. At 200 to 500 nanometers wide, they are one-tenth the diameter of a typical Escherichia coli. So far, Kajander and his colleagues have found the nanobacteria in cattle blood, in 80% of samples of commercial cow serum in which mammalian cells are grown in the lab, and in the blood of nearly 6% of more than 1000 Finnish adults tested. The organisms had not been implicated in any diseases, however—until now. Kajander and clinical microbiologist Neva Çiftçioglu report that they have cultured nanobacteria from all 30 human kidney stones they examined.

    Seeds of kidney stones?

    Tiny bacteria form calcium shells that may trigger larger deposits. (Scale bar is 1 μm.)

    PNAS 95, 9274 (1998)

    Kajander and his colleagues suspected that the bacteria may play a role in the formation of kidney stones because, under certain growing conditions, they build calcium-rich spherical shells around themselves. Now the team has found that the structures are made of apatite, a primary component of kidney stones and other calcified deposits in tissue but different from the calcium compound in teeth and bones. Blood contains several proteins that inhibit the formation of apatite crystals, but Kajander speculates that the bacteria might be free to form shells if they leave the bloodstream and take up residence in tissues. The small spheres, he says, may be seeds for larger calcium deposits, such as kidney stones or the abnormal calcifications found in patients with scleroderma or some cancers.

    The hard shelters protect the bacteria from most assaults, including high heat and many antibiotics. However, says rheumatologist Dennis Carson of the University of California, San Diego, tetracycline is known to accumulate on apatite crystals and so might be a promising candidate for attacking nanobacterial infections.

    The link between bacteria and kidney stone disease is far from proven, however. “They may have something here,” says microbiologist Mitchell Cohen of the Centers for Disease Control and Prevention in Atlanta. “But I'd like to see broader studies looking at different types of stones in different parts of the world.” Nevertheless, the find is “one of the most intriguing and fascinating additions to this area of research that I can imagine,” says nephrologist and kidney stone specialist Fredric Coe of the University of Chicago. Coe notes that at least four teams have reported tiny spherical deposits in or near the calcified plaques often found in the kidneys of patients who suffer from kidney stones. “I don't know that it's their bacteria,” he says, “but it sure looks suspicious.”

  7. PARTICLE PHYSICS

    First Glimpse of the Last Neutrino?

    1. Meher Antia
    1. Meher Antia is a writer in Vancouver, British Columbia.

    Of the 12 elementary particles thought to make up all of the matter of the universe, physicists have spotted 11. Now the last holdout, the tau neutrino, may finally be in the bag. At least three of the exotic particles appear to have left their tracks in a detector at the Fermi National Accelerator Laboratory (Fermilab), in Batavia, Illinois. But researchers on the experiment, called DONUT, want to find another seven or so before popping the champagne.

    Neutrinos are notoriously difficult to detect—make one in the lab, and it is likely to slip insouciantly through the table, the planet, and much of the universe. These runaways come in three flavors: the electron neutrino, the muon neutrino, and the tau neutrino, each named for the particle it creates when it does happen to interact with matter. Over the years, physicists have become adept at spotting electron and muon neutrinos, but no one has ever seen a tau neutrino.

    Physicists, however, are pretty sure that the tau neutrino exists. Experiments at the European accelerator laboratory CERN near Geneva have shown that a heavy particle called the Z decays into exactly three types of neutrinos, for instance. The unseen tau neutrino also made headlines recently when an experiment in Japan suggested that muon neutrinos raining down from the upper atmosphere might be changing into tau neutrinos and eluding detection—a transformation that would imply that neutrinos have mass, contrary to a long-standing assumption (Science, 12 June, p. 1689). To confirm the identity shift, future experiments will try to detect the tau neutrinos directly. For that reason “it's very important to confirm that one can actually see a tau neutrino first,” says Carl Albright of Fermilab and Northern Illinois University in DeKalb.

    That's a challenge because, unlike the relatively pedestrian electron and muon neutrinos, tau neutrinos are very difficult to make in the laboratory. At DONUT, a dense stream of protons from Fermilab's Tevatron accelerator is smashed into a tungsten target, but fewer than one collision in 10,000 produces a tau neutrino, says Byron Lundberg, a spokesperson for the group.

    To detect these rare particles, the team built a stack of sheets coated with silver bromide emulsion next to the collision area. When a tau neutrino plows through these sheets, it has a slight chance of bumping into an atom and creating a tau particle. The tau would leave a millimeter-long track in the emulsion—which operates like a photographic plate—before decaying to other particles, which would also leave tracks.

    After 4 months of taking data, the team found what looked like the tracks of three tau neutrinos. The group expects to find more tau tracks when they analyze the rest of the emulsion. Until then they're not making any definite claims. “I think it is very likely that [their observations] are correct,” says Hywel White, a neutrino expert from Los Alamos National Laboratory in New Mexico. “Everybody believes that there is a tau neutrino, but you have to have experimental proof of it.”

  8. SEISMOLOGY

    A Quieter Forecast for Southern California

    1. Richard A. Kerr

    Four years ago, Southern Californians—already resigned to the risk of quakes, fires, and floods—got a bad jolt. An official group of seismologists had concluded that despite the powerful earthquakes of 1992 and 1994, destructive temblors were likely to strike the region even more frequently in coming decades. The reason: the strain built up in the crust by an “earthquake deficit” over the past 150 years (Science, 28 January 1994, p. 460). Earthquake insurance rates quadrupled after the announcement. But now an independent analysis and a review by seismologists, including one of the original panel members, both conclude that the deficit was due to accounting errors—and that the region is right on schedule for quakes.

    In the current issue of the Bulletin of the Seismological Society of America, seismologists Ross Stein and Thomas Hanks of the U.S. Geological Survey in Menlo Park, California, explain how gaps in the historical record led the original panel to believe that too few quakes had occurred. “Based on this analysis, I'm sold on there being no indication of a major earthquake deficit,” says seismologist Duncan Agnew of the University of California (UC), San Diego.

    The Working Group on Southern California Earthquake Probabilities reached its original, ominous conclusion by combining, for the first time, every type of observation that could indicate how often big quakes should strike. The group compiled geological data on active faults, the historical record of past quakes, and geophysical measurements of accumulating crustal strain. In the end, the combined geological and geophysical constraints seemed to call for twice as many quakes of magnitude 6 and above as had occurred. The strain would be released in an increased rate of moderate-to-large quakes, or, as working group member David Jackson of UC Los Angeles suggested on his own, in a single huge quake many times more powerful than the Big One—estimated to be of magnitude 7.9—that rocked Southern California in 1857.

    But Stein and Hanks came to a different conclusion after scrutinizing the historical record more closely. They compiled their own record of Southern California seismicity since 1903 and found the number of quakes to be in line with that expected from the geological and geophysical data. The quake numbers dropped off before 1903—but that's just when quake records would be expected to get spotty in Southern California, they say.

    In the 19th century, the primary seismic recording device was the newspaper. Seismologists today map the location and size of old quakes from newspaper accounts of how hard the ground shook in a given quake. The 1849 gold rush drew enough people—and newspapers—into northern and central California to ensure that no magnitude 6 or larger quakes would be missed there, Stein and Hanks found, but in the south the population was sparse (see map). “It's a rotten record in southern California until the turn of the century,” says Stein. “It would be impossible to have a complete catalog until about 1900.”

    Paper trail.

    In the 1860s, Northern California sported plenty of newspapers to record earthquakes, but there were few in the southern part of the state.

    BULLETIN OF SSA 88, 635 (1998)

    In response to this work, seismologists Edward Field and James Dolan of the University of Southern California in Los Angeles, together with Jackson, completely redid the working group's analysis. They found numerous small mistakes, from the missed 19th century quakes to a rounding error, and they agree that the quake deficit is no more. That means a lowered risk from future quakes—and perhaps lower earthquake insurance rates. But Southern Californians can't relax: Even without a deficit to make up, they are still due for four or five magnitude 6 and larger quakes per decade.
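
    As a rough illustration of what four or five magnitude 6 quakes per decade means year to year, a simple Poisson model (an assumption made here for illustration, not part of the working group's or Stein and Hanks's analysis) gives the odds of at least one such quake in a given window:

    ```python
    import math

    RATE_PER_DECADE = 4.5                        # quoted long-term rate of M >= 6 quakes
    rate_per_year = RATE_PER_DECADE / 10

    def p_at_least_one(rate, years):
        """Poisson probability of one or more events in the given window."""
        return 1 - math.exp(-rate * years)

    print(f"any given year:   {p_at_least_one(rate_per_year, 1):.0%}")    # ~36%
    print(f"any given decade: {p_at_least_one(rate_per_year, 10):.0%}")   # ~99%
    ```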

  9. CLIMATE CHANGE

    Warming's Unpleasant Surprise: Shivering in the Greenhouse?

    1. Richard A. Kerr

    Global warming could shut down an ocean current that warms the northern latitudes. The prospect underscores the oceans' power over climate, also featured in the Special Section beginning on page 189

    Wallace Broecker is worried about the world's health. Not so much about the fever of global warming but about a sudden chill. For more than a decade, the marine geochemist has been fretting over the possibility that a world warming in a strengthening greenhouse might suffer a heart attack, of sorts: a sudden failure to pump vital heat-carrying fluids to remote corners of Earth. If greenhouse warming shut down the globe-girdling current that sweeps heat into the northern North Atlantic Ocean, he fears, much of Eurasia could within years be plunged into a deep chill. In the mid-1980s, that prospect and its alarming consequences prompted Broecker, a longtime researcher at Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, to start urging his colleagues to examine records of the climate system's past behavior for clues to its future health.

    The data pouring in from ice cores and marine sediments are only fueling Broecker's fears—and worrying many of his colleagues too. At a conference* last month in Snowbird, Utah, researchers heard overwhelming evidence that the so-called “conveyor belt” current that warms northern Europe and adjacent Asia has repeatedly slackened and at times even shut off during the past 100,000 years, in concert with dramatic climate shifts around the hemisphere. And computer models suggest that, ironically, the greenhouse world's moister air could also squelch the conveyor belt.

    At first, Broecker's colleagues weren't much concerned about the relevance of past climate shifts to the greenhouse future. But given the powerful new evidence that shifts in ocean circulation have jolted climate, many now wonder about the unpredictable course that climate change may take. “Wally is occasionally wrong … but for the most part he's remarkably prescient,” says paleoceanographer Jerry F. McManus of the Woods Hole Oceanographic Institution in Massachusetts, who calls his colleague's disquietude “scientifically justified.”

    A sudden cooling of the Northern Hemisphere, even by only a few degrees on average, could throw a monkey wrench into the societal works. Farmers may have to contend with unseasonable cold, drought, and floods—an unsettled climate that could jerk one way and the other from year to year. Says glaciologist Richard Alley of Pennsylvania State University in University Park, “Humanity would continue, but a lot of us would be very unhappy.”

    A hair-trigger climate

    With climate-change debates polarized by hyperbole from both ends of the political spectrum, mainstream scientists have tended to avoid raising alarms. But recent findings show that global climate patterns flip-flopped every few thousand years—sometimes violently—during the last ice age. Climate indicators such as dust and isotope ratios in ice cores from Greenland reveal 24 so-called Dansgaard-Oeschger events, marked by 10° temperature swings that sometimes occurred in just a few years. North Atlantic sediments reveal that great flotillas of icebergs have sailed southward no fewer than six times in the past 100,000 years, each followed by plunging surface temperatures in the Atlantic.

    Just as the last ice age was giving way to the thaw of the current Holocene Epoch 10,000 to 15,000 years ago, a prolonged climate swing sent temperatures plummeting again in Greenland and Europe in a matter of decades. The 5° to 10°C temperature drop replaced Europe's newly emerged forests with glacial tundra for 1000 years before the Holocene brought the forests back for good, and the cooling had a noticeable effect on parts of the rest of the hemisphere—from the dustiness of the Gobi Desert to the strength of the Arabian monsoon.

    No one knows for sure what's at the root of these climate swings, but shifts in ocean circulation are a leading candidate. In the Atlantic, climate records preserved in bottom sediments suggest that surface water temperatures changed in step with changes in the ocean's conveyor belt, a term Broecker coined for the current whose upper loop carries warm, shallow waters from the North Pacific across the Indian Ocean, around Africa, and up the Atlantic. With a flow equaling that of 100 Amazon Rivers, the conveyor delivers enough heat to the North Atlantic's northern half to equal 25% of the solar energy reaching the surface there. Off Labrador and north of Iceland, frigid winds absorb that heat and carry it downwind, easing the chill on Europe and adjacent lands by as much as 10°C. Winds not only steal the North Atlantic's warmth but also boost its saltiness by evaporating freshwater, making surface waters dense enough to sink. The colder and saltier deep water flows southward, completing the conveyor belt loop.
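
    The article's two measures of the conveyor, a flow equal to 100 Amazons and a heat delivery rivaling a quarter of the local sunshine, can be linked by a back-of-envelope calculation. The values below (an Amazon discharge of about 0.2 sverdrup and water cooling by roughly 12°C as it gives up its heat) are illustrative assumptions, not figures from the article:

    ```python
    # Order-of-magnitude heat transport of the conveyor's warm upper limb
    AMAZON_SV = 0.2                    # assumed Amazon discharge, sverdrups (1 Sv = 1e6 m^3/s)
    flow = 100 * AMAZON_SV * 1e6       # "100 Amazon Rivers" expressed in m^3/s

    RHO = 1025                         # seawater density, kg/m^3
    CP = 3990                          # specific heat of seawater, J/(kg K)
    DELTA_T = 12                       # assumed cooling as the water releases its heat, K

    heat = flow * RHO * CP * DELTA_T   # watts
    print(f"~{heat / 1e15:.1f} petawatts carried toward the far North Atlantic")  # ~1 PW
    ```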

    Scientists have long suspected that this crucial artery can be totally shut down; now they have persuasive evidence, from an analysis of carbon-14 preserved in sediments of the southern Caribbean's Cariaco basin. Carbon-14, a radioactive isotope, is produced from nitrogen by cosmic rays striking the atmosphere. Sinking conveyor waters in the North Atlantic carry carbon-14, in the form of carbon dioxide, into the deep sea. So, if the conveyor were shut down, carbon-14 that would have entered the deep sea should build up in the atmosphere. Such an increase would leave a trace in ocean sediments.

    Paleoceanographer Konrad Hughen of Harvard University and his colleagues have now found a carbon-14 buildup in Cariaco sediments deposited during the first 200 years of the Younger Dryas, the cold snap at the end of the last ice age 13,000 years ago. The rise was so rapid that the conveyor must have been shut down for that 200 years, the group concluded. For the remaining 1100 years of the Younger Dryas, a less efficient conveyor kicked in before regaining its full steam about 12,000 years ago.

    Researchers say that the ultimate cause of the conveyor's shutdown 13,000 years ago was the gradual warming of the Arctic at the end of the last ice age due to Earth's changing tilt toward the sun. Freshwater gushing from melting glaciers, they reason, would have spread a layer of less salty and therefore less dense water across the far northern North Atlantic, eliminating the extra density that drives the conveyor. A sudden diversion of conveyor-halting meltwaters to the St. Lawrence River and the North Atlantic might account for the abruptness of the Younger Dryas.

    Freshwater messed with the conveyor at other times during the ice age, too. During six Heinrich events, ice sheets collapsed and sent armadas of melting icebergs across the North Atlantic (Science, 6 January 1995, p. 27). And in Dansgaard-Oeschger cycles, Broecker has suggested, the freshwater flushing of the North Atlantic could have been part of an endless oscillation of freshening that slows the conveyor and stagnation that lets salinity build back up and accelerate the conveyor.

    In at least some of these events, freshwater jammed the conveyor completely, argued climate modeler Thomas Stocker of the University of Bern and his colleagues at Snowbird. They determined the timing of climate shifts around Antarctica and Greenland relative to one another, using as a benchmark the fluctuations of atmospheric methane trapped in ancient ice of both poles. Stocker's group found that during two or three of the biggest Dansgaard-Oeschger events, as well as during the Younger Dryas, the cooling in Greenland came at the same time as a warming in Antarctica. Later, when Greenland warmed a bit, Antarctica was cooling.

    Opposite responses at opposite poles are just the pattern expected from a complete shutdown of the conveyor, because while a strong conveyor carries heat to the Northern Hemisphere, it draws heat away from the Southern Hemisphere. Turn off the conveyor, on the other hand, and the heat-starved north will cool while heat piles up in the south. The absence of an obvious out-of-phase relation during smaller, shorter Dansgaard-Oeschger climate swings suggests to Stocker and his colleagues that the conveyor slowed during those events or was restricted to the North Atlantic but did not shut down.

    Although the conveyor may have faltered and even stopped during the last ice age, today it is almost rock steady. Climate records from ice and sediments show that subtle oscillations lasting roughly 1600 years seem to have persisted through the Holocene (Science, 27 February, p. 1304). But these oscillations can't compare with the ice age climate swings; the latest may have been the Little Ice Age of the 17th and 18th centuries, but it brought cooling of only about 1°C. “All the evidence we have for large flips comes from times when we had ice melting, dumping freshwater into the North Atlantic,” says modeler Thomas Crowley of Texas A&M University in College Station. “It hasn't flipped in 8000 years, so modest perturbations aren't enough. You have to kick it harder.”

    Calculating cooling's consequences

    Broecker's worry is that greenhouse warming will kick the Holocene conveyor into its next big flip. This time, however, the villain is not expected to be melting glaciers alone. Instead, the warming would deliver more freshwater to high latitudes by increasing the load of water vapor that the atmosphere can carry from lower latitudes. The far North Atlantic would then receive a torrent of freshwater flowing into it from rain, snow, and swollen rivers.

    In a sophisticated computer simulation of this kick, climate modelers Syukuro Manabe of the Institute for Global Change Research in Tokyo and Ronald Stouffer of the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey, raised carbon dioxide concentration in a model atmosphere over 140 years until it plateaued at four times the preindustrial levels, which might be reached in the real world by the 22nd century. They found that the high-latitude North Atlantic freshened and the conveyor slowed to a near standstill about 60 years after the quadrupling. In a recent extension of that simulation to more than 5000 years from now, Manabe and Stouffer were startled to find that over the next couple of thousand years of continuing warmth, the conveyor recovered completely. “I don't understand why,” says Stouffer. “It's not exactly satisfying.” Others share that discontent, adds Woods Hole's McManus: “We're learning that the climate system often responds in an unpredictable fashion.”
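
    For scale, quadrupling CO2 over 140 years corresponds to compound growth of roughly 1% per year (a back-of-envelope check, not a number quoted in the article):

    ```python
    # Annual compound growth rate that quadruples CO2 in 140 years
    years, factor = 140, 4.0
    annual = factor ** (1 / years) - 1
    print(f"~{annual:.2%} per year")   # roughly 1% per year, compounded
    ```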

    Predicting the effects of the next conveyor shutdown is equally problematic. If it were like the Younger Dryas, Reykjavik would be bulldozed into the sea by the Iceland Ice Cap, Ireland's green fields would be blighted into the barrenness of present-day Spitsbergen, and Scandinavian forests would be replaced by treeless tundra. But Broecker notes that the Younger Dryas cooling started in a world cooler than today's, and a greenhouse-driven shutdown would come after perhaps several degrees of warming.

    Broecker's best guess is that around the North Atlantic a shutdown would trigger a climate shift as large as the Younger Dryas's—reversing a century's worth of greenhouse warming and driving areas like Europe several degrees colder than present. And the cooling could take as little as several decades. If that weren't disruptive enough, the transition could be peppered with climate “flickers” lasting 5 to 10 years during which climate would whipsaw between extremes. Around the rest of the hemisphere, the cooling might just be a temporary relief from the greenhouse, notes Broecker, but the altered rainfall patterns could be just as disruptive as the temperature changes.

    The capricious nature of past climate changes is exactly why Broecker says he has sounded the alarm. Given current knowledge, “there's nothing to do but guess” at the chances the conveyor will flip again soon, he says. To make those guesses more informed, says Broecker, “we should be doing much more in the polar regions” to understand why water sinks there and to track the conveyor's behavior. For his part, Penn State's Alley hopes to learn more about past climate flips by more finely analyzing the Greenland ice core records. And McManus wants to check older marine sediments from times even warmer than the present to see if extreme warmth really can destabilize the conveyor. Perhaps soon Broecker will be able to relax—if not cool off.

  10. CLIMATE CHANGE

    As the Oceans Switch, Climate Shifts

    1. Richard A. Kerr

    While climate scientists ponder the ocean's possible wildcard role in the next couple of centuries of greenhouse warming (see main text), other researchers are considering how the ocean and atmosphere shift climate in our lifetime. Last winter's El Niño—an unusual warming of the tropical Pacific that altered weather patterns worldwide—was a prime example of ocean and atmosphere conspiring to bring climate change. But the tropical Pacific is likely not the only puppeteer pulling the strings. Oscillations in the North Pacific and the North Atlantic have long been suspected of shifting climate from decade to decade, and others are on the table.

    It's one thing to indict ocean-atmosphere interplay as a suspect in these “decadal” climate shifts, however, and quite another to convict it. “Research into ocean climate is in its infancy,” says oceanographer Lewis Rothstein of the University of Rhode Island, Narragansett. “It's a fully nonlinear system in which the human mind trying to figure out cause and effect doesn't work very well. Everything interacts with everything else.”

    In sorting out the complex interactions driving decadal climate change, researchers look to the ocean because only it can set such a leisurely pace. During an El Niño, for instance, tropical winds blowing from the east that draw deep, cold water to the surface may weaken, allowing surface waters to warm; the ocean warming further weakens the easterly winds. This ocean-atmosphere interaction can bring on a full-blown El Niño in a matter of months, but swings from El Niño to its opposite number, the cool La Niña, and back take between 3 and 7 years. For that kind of pacing, researchers must invoke the ocean's tremendous inertia.

    While an ocean like the tropical Pacific may set the pace, the atmosphere can extend an ocean's reach. For example, El Niño may be just part of a near-global daisy chain that loosely connects three oceans. Oceanographer James Carton of the University of Maryland, College Park, and his colleagues have used a computer model to merge 50 years of temperature and salinity observations into a complete and consistent picture of an evolving world ocean. By analyzing this motion picture, they can pick out oscillations in water temperature—and their possible connections—throughout the ocean.

    This picture showed that before El Niño revs up in the Pacific, the tropical Indian Ocean warms. Then, like spectators in a stadium doing “the wave,” warming appears in the tropical Pacific, setting off a full-blown El Niño. About 9 months after El Niño's wind shifts have leaped over South America, they change the circulation of the tropical Atlantic to warm the ocean there, sometimes bringing drought to the Sahel of Africa and the coffee-growing region of northeast Brazil. “There's definitely a tropical connection,” says Carton. “It's through the atmosphere from one basin to the next.” The nearly global link stops there, after about 4 years and more than 30,000 kilometers.

    The newly emerging ocean-atmosphere oscillations aren't confined to the tropics. The tropical Atlantic's own ocean-atmosphere-driven oscillation, a pair of researchers believes, extends beyond the tropics to include more than 11,000 kilometers of the Atlantic, from at least as far south as the mid-South Atlantic to beyond Iceland.

    The proposed “pan-Atlantic decadal oscillation” or PADO, described in the 15 June issue of Geophysical Research Letters (GRL) by meteorologists Shang-Ping Xie of Hokkaido University in Sapporo and Youichi Tanimoto of Tokyo Metropolitan University, fluctuates on a 10- to 15-year time scale. At one extreme, east-west bands of alternately warmer or cooler water span the length of the Atlantic accompanied by changes in atmospheric circulation; at the other extreme, the temperature anomalies reverse. Guided by computer modeling, Xie and Tanimoto suspect that the ultimate driver for the PADO may be decadal variations of the North Atlantic Oscillation, a seesaw of atmospheric pressure between Iceland and the Azores (Science, 7 February 1997, p. 754).

    The chain of oscillations doesn't stop there. According to an analysis in the 1 May GRL by meteorologists David Thompson and J. Michael Wallace of the University of Washington, Seattle, the strength of a swirling vortex of winds circling the Arctic in the wintertime stratosphere influences the lower atmosphere right down to the surface, including the far northern North Atlantic. Varying from day to day and decade to decade, the oscillation is a natural response of the atmosphere to the random jostlings of the ocean-atmosphere system, the way a drum has a natural tone when struck. Depending on where it is in its cycle, the Arctic Oscillation warms or cools northern Europe and Siberia.

    The Arctic gong also reverberates in the far North Pacific, where oceanographer David Enfield of the National Oceanic and Atmospheric Administration in Miami is finding that on a decadal scale, the far northern reaches of the Pacific and the Atlantic seem to be varying in lockstep. The connection must be through the atmosphere, he says, and the Arctic Oscillation is a likely agent.

    Another high-latitude oscillation may bring the global interplay full circle to the tropics. In 1977, the North Pacific seemed to flip to a whole new mode of operation (Science, 20 March 1992, p. 1508). It cooled and the atmospheric low-pressure center off the Aleutian Islands intensified and shifted eastward, bringing more storminess to the West Coast, warming to Alaska, and periodic winter freezes to Florida, among a raft of environmental changes.

    Last year, meteorologist Nathan Mantua of the University of Washington, quantitative biologist Steven Hare of the International Pacific Halibut Commission in Seattle, and their colleagues dubbed this climate shift the Pacific Decadal Oscillation, or PDO, after finding evidence that shifts also occurred around 1947 and 1925. In the spirit of “everything interacts with everything else,” Mantua and his colleagues note that the PDO appears to have close ties to El Niño—which seems to have shifted to another, warmer mode in 1977. Whether El Niño triggered the shift in the PDO or vice versa, they can't say. Determining just how many climate oscillations exist and how tightly they are linked should keep meteorologists and oceanographers busy well into the greenhouse world.

  11. AIDS RESEARCH

    International AIDS Meeting Injects a Dose of Realism

    1. Michael Balter,
    2. Jon Cohen

    The upbeat mood of 2 years ago has given way to acknowledgment that even the best therapies have flaws—and they are not reaching millions who need them

    Geneva—Every 2 years, researchers, policy-makers, activists, clinicians, drug developers, and journalists gather at an international AIDS conference for what amounts to a giant group therapy session that, in the end, pronounces to the world the state of the field. At the politically charged 5-day meeting * held here last week, the mood was distinctly downbeat compared with the triumphalism at the last such gathering in Vancouver, British Columbia. Then, meeting goers rejoiced over the cocktails of anti-HIV drugs that were miraculously yanking people back from the brink of death, and speculations about curing the infection grabbed headlines. Last week, the nearly 14,000 participants heard that researchers are now scrambling to find new treatment strategies as the first-line drugs begin to fail in some patients, that drug side effects are mounting, and that new drug-resistant strains of HIV are emerging. “Remission,” not cure, is the new buzzword.

    To make matters worse, some untreated people who once appeared invulnerable to HIV are now seeing their immune systems begin to decline. And although the meeting's official theme was “Bridging the Gap” between wealthy and poor countries, participants heard of the formidable obstacles to making anti-HIV therapies available to people in poor countries. “The biggest AIDS gap of all is the gap between what we know we can do today and what we are actually doing,” said Peter Piot, chief of UNAIDS, the United Nations AIDS program, at the opening session of the meeting—which, for the first time, was organized by both scientists and HIV-infected people.

    The news was not all bad, however. New evidence that it might be possible to stimulate the immune system to fight off HIV provided some glimmers of hope. And, although much of the meeting rehashed findings that have surfaced at smaller meetings during the past year, the abundance of presentations, posters, and schmoozing between sessions offered many novel insights. Indeed, the session titled “Are World AIDS Conferences Worth It?” was canceled for lack of interest. Piot set the overall mood in the meeting's opening session: “Now is the time for us to embrace a new realism and a new urgency in our efforts,” he told participants.

    Losing control

    Part of the new realism comes from accumulating evidence that while powerful combination therapies can chase HIV from the bloodstreams of infected patients, the virus continues to lurk in a dormant form in a small number of CD4 T lymphocytes, HIV's primary target (Science, 14 November 1997, p. 1227). Many AIDS researchers had hoped that this reservoir of latently infected cells would eventually die out, taking the virus with them into oblivion. But new work presented by David Ho, director of the Aaron Diamond AIDS Research Center in New York City, indicates that such hopes were premature. Ho reported that HIV continues to replicate at a low level even in patients with undetectable levels of virus in their blood. Moreover, the progeny viruses are wild-type rather than drug-resistant strains, meaning that even viruses sensitive to the drugs are not completely suppressed.

    While Ho's data present a more realistic view of treatment success, new studies by Diane Havlir and Douglas Richman from the University of California, San Diego, challenge the common wisdom about treatment failure, when HIV rebounds and becomes detectable. Havlir and Richman examined the virus in patients whose HIV rebounded while they were taking a cocktail of two drugs that attack HIV's reverse transcriptase enzyme and one that blocks its protease enzyme. They thought that it would have become resistant to all three compounds, but to their surprise, the virus was not resistant to the protease inhibitor, the most potent of the drugs. So, they conclude, the routine practice of switching rebound patients to an entirely new drug regimen may do more harm than good: They may be discarding an effective drug along with those that the virus has overcome.

    That wasn't the only surprise in the Havlir-Richman study: The patients who had the biggest jumps in CD4 counts were the ones most likely to have viral rebounds. This seemed counterintuitive, because CD4s are the very cells that HIV selectively destroys, and boosting their numbers has become a central aim of AIDS treatments. But these data suggest that when CD4 levels increase after treatment, they set up many new targets for HIV, generating “fuel for the embers” of viral infection, said Richman. “Greater increases in CD4 are in general good, but they require greater vigilance in terms of lost control.”

    Help on the way

    While HIV continues to thwart efforts to defeat it directly with antiviral drugs, some immunologists have been making steady headway by trying to enlist the immune system in the battle. Bruce Walker at Massachusetts General Hospital in Boston presented the latest chapter in a compelling story he has told at a number of recent meetings. Walker has accumulated evidence that many so-called long-term nonprogressors—people who have controlled their HIV infections for more than a decade without taking drugs—have high levels of CD4s that specifically recognize the virus (Science, 8 May, p. 825). The CD4s, also called T helper cells, serve as battle commanders, orchestrating much of the immune system's assault on foreign invaders. Most infected people quickly lose HIV-specific T helpers, which led Walker to wonder what it would take to preserve these cells.

    In collaboration with colleagues in the United States and Italy, Walker has aggressively treated people who had been recently infected with HIV but had yet to produce antibodies to the virus. Every one of the seven patients the team has followed over several months developed increasingly powerful anti-HIV T helper responses.

    “This is beautiful,” says virologist Andreas Meyerhans at the University of the Saarland in Homburg, Germany. Based on these new results, Walker argued that physicians should set up early warning networks to identify acutely infected people—some develop rashes and fevers—and offer them potent drugs while there is still time to protect HIV-specific T helper responses. And in a late breaker session, Fred Valentine at the New York University Medical Center in New York City presented data suggesting that a “therapeutic” HIV vaccine can rev up the immune response against the virus in patients who are further along in their infections. Valentine and his colleagues gave 43 HIV-positive patients the standard cocktail of two reverse transcriptase inhibitors and a protease inhibitor. Four weeks later, when their virus was undetectable, the researchers immunized one group with an inactivated HIV that had been stripped of a protein called gp120, which studs the virus's outer coat. After 20 weeks, the group that received anti-HIV drugs alone scored very poorly in a key test of their T lymphocytes' ability to mount an immune response against HIV. But the immunized group demonstrated what Valentine called “huge” responses in the same test. These results are “quite striking,” comments Walker, who adds that “the immune system can clearly contribute to controlling the virus.”

    Live vaccines: a mortal blow?

    Despite these promising results, little progress was reported on one key immunological front: the development of preventive vaccines. Indeed, the controversial idea of developing a vaccine based on a weakened strain of live HIV received a setback with the latest news about a group of Australians who had been accidentally infected with a viral strain that had seemed benign. Everyone in the group, which now consists of a blood donor and five transfusion recipients infected by his blood, remains free of AIDS, despite having been infected for between 13 and 17 years. Their virus has a defective version of the HIV gene called nef—the very gene deleted from a live but weakened AIDS virus that has worked better in monkey experiments than any other vaccine strategy.

    In Geneva, however, Jenny Learmont, a nurse with the Australian Red Cross Blood Service who first recognized the significance of the cohort, reported that the donor and two recipients have recently seen their CD4 counts drop. Their counts are still in the normal range, stressed Learmont, but the finding has given her pause. “It would have to change your thinking that it's not causing damage,” said Learmont. “Obviously this now needs to be examined very hard.”

    A more encouraging development on the vaccine front came from an unlikely source: a study of babies in Nairobi, Kenya, born to HIV-infected mothers. Kelly MacDonald of the University of Toronto examined 141 mother-baby pairs to try to determine why some babies became infected and others did not. She reported that babies who had different genetic markers on their immune cells—so-called major histocompatibility complexes, or MHC—from their mothers were significantly less likely to become infected. MacDonald and others think this difference is protective because when HIV buds from a cell, it carries a piece of the host cell membrane—including MHC molecules—on its surface. So babies who build an immune response against their mothers' discordant MHC, explained MacDonald, are also building a response against HIV from their mothers. Monkey and chimp experiments with AIDS vaccines have also shown that immune responses can develop against the MHC on the viral envelope. “The data on discordance are very intriguing,” said pediatrician Arthur Ammann, head of the American Foundation for AIDS Research.

    MacDonald and her colleagues also found that babies seemed to gain protection if they had a specific MHC known as a “supertype”—which is present in 40% of the world's population. When HIV infects a cell, MHC molecules present pieces of the virus to the immune system to indicate that the cell should be destroyed. Presumably, this supertype of MHC is better at presenting viral pieces than other MHCs. Putting the discordance and the supertype gene together, “this would suggest the protective effect is as strong as any intervention we've identified” to prevent maternal-infant transmission, said MacDonald, who is planning to develop vaccines that exploit these findings.

    The gap not bridged

    Many speakers emphasized, however, that few such discoveries have benefited the developing world, where 90% of HIV-infected people live. “Why is it that despite our efforts the level of care is still so grossly inadequate for most people with HIV?” asked Piot. The short answer is the daunting cost of providing expensive therapies to the developing world. Robert Hogg of the BC Centre for Excellence in HIV/AIDS in Vancouver, British Columbia, presented an estimate of the cost of providing triple combination therapy to the more than 30 million HIV-infected people across the globe: $36.5 billion annually. Faced with those figures, UNAIDS has been trying to convince drug companies to lower their prices in developing countries. UNAIDS official Joseph Saba, who is coordinating this project, says five companies have officially joined the initiative, including Roche, Bristol-Myers Squibb, and Glaxo Wellcome, which has begun a pilot program to provide its anti-HIV drug AZT to 30,000 pregnant women in 11 countries at a 60% to 75% discount. With nearly 600,000 infants worldwide acquiring HIV from their mothers during 1997, the potential market is large enough that Glaxo Wellcome will make at least a modest profit on AZT even at reduced prices. Says Glaxo Wellcome spokesperson Benedict Plumley, “it is important to be clear that we are not a charity.” Eventually, the participating drug companies will negotiate separate prices in each country, based on what the market can bear.
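
    A rough back-of-envelope check, using only the figures Hogg presented ($36.5 billion per year for the more than 30 million people infected), gives the implied per-patient cost:

    $$\frac{\$36.5 \times 10^{9}\ \text{per year}}{3.0 \times 10^{7}\ \text{people}} \approx \$1200\ \text{per person per year, or roughly } \$100\ \text{per month}.$$

    Even at that level, as Elly Katabira of Mulago Hospital notes below, the cost remains out of reach for many patients in the poorest countries.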

    Conspicuously absent from the agreement so far is Merck, which makes the best-selling protease inhibitor indinavir. Indeed, its absence prompted a demonstration, led by the AIDS activist group ACT UP, in which protesters spray-painted the company's exhibit at the meeting. Merck spokesperson Jeffrey Sturchio says Merck is reluctant to join the club because inadequate health care infrastructure in many developing countries makes it “impractical and potentially ineffective” to administer these drugs properly, whatever their price. He adds that indinavir is already priced 25% to 30% lower than competing protease inhibitors. Rather than join the UNAIDS initiative, says Sturchio, Merck has concentrated on creating training programs for physicians in developing countries who might eventually prescribe combination therapies.

    But even if drug companies cut their profits to a minimum, the price of therapy may still not be low enough to make a difference to millions of HIV-infected people. Says physician Elly Katabira of Mulago Hospital in Kampala, Uganda: “Even if the price [for combination therapy] was only $200 per month, which is peanuts in Western countries, this is more than many Africans earn in their lifetimes.”

    As if to underscore the magnitude of what remains to be done, a digital counter placed on the stage for the closing ceremony ticked off, in bright red numbers, the number of people around the world currently estimated to be infected with HIV. As the meeting ended and participants filed out of the conference hall, the counter reached 33,535,780 and kept on ticking.

    • *12th World AIDS Conference, Geneva, Switzerland, 28 June to 3 July.

  12. EDUCATION REFORM

    U.S. Tries Variations on High School Curriculum

    1. Jeffrey Mervis

    The traditional sequence of biology, chemistry, and physics in U.S. high schools, say reformers, may be at the root of poor student performance

    Gary Freebury has participated in an alphabet soup's worth of federally funded attempts to reform the U.S. high school science curriculum in the past decade. The Kalispell, Montana, high school chemistry teacher wrote a preliminary version of what later became Scope, Sequence, and Coordination (SS&C), a national effort to “teach every student, every subject, every year,” and later crisscrossed the country helping teachers work with the new course material. He also helped add a science component to the state's $10 million Systemic Initiative for Montana Mathematics and Science.

    Last month, the 62-year-old Freebury retired after 35 years at Flathead High School—and took on yet another project to improve U.S. science education. He's hoping to convince state officials to adopt a reform called American Renaissance in Science Education (ARISE), which would reverse the traditional order of teaching the four core disciplines, starting high school students out with physics rather than biology (see Policy Forum, p. 178). “I think it's the right way to go because it's so logical,” says Freebury, who believes that a national effort will help him in Montana.

    Teachers like Freebury are in the vanguard of efforts by governments, organizations, and even individual schools to change a century-old system of teaching high school science. The United States stands alone among industrialized countries in offering its high school students a series of yearlong courses each consisting of a single science subject. As a rule, the sequence begins with biology and proceeds through earth sciences, chemistry, and physics, with enrollment dropping off at each step until fewer than 20% of U.S. students ever take physics. That approach may work fine for producing Nobel laureates, say educators, but it's not effective for the average student. Bill Schmidt, a science educator at Michigan State University in East Lansing, calls it “the plop-fizz approach: We drop students into a class and expect them to learn everything there is to know about a subject in 1 year.” The result, he says, is a mile-wide, inch-deep curriculum that frustrates students. A dismal performance by U.S. seniors on the recent Third International Mathematics and Science Study (TIMSS), for which Schmidt was the U.S. coordinator, is seen as the latest evidence of the flaws in such an approach (Science, 27 February, p. 1297).

    Reformers say that by teaching biology first, the present curriculum also fails to do justice to the increasingly quantitative science it has become. “Science has a hierarchical nature and right now, although I hate to admit it, molecular biology is at the top of the heap,” says Leon Lederman, the Nobel Prize-winning physicist and education activist who put together ARISE. Concepts from physics and chemistry are crucial in biology, he says. “Unfortunately, our curriculum doesn't reflect those connections.”

    Attempts to reorder science teaching come in a variety of flavors: Integrated, inverted, and coordinated science are the most common labels. But regardless of their differences, all try to entice more students into science by offering a different sequence of subjects and emphasizing labs, group projects, and other hands-on activities instead of lectures. They also try to encourage teachers to erase the boundaries between disciplines. The idea behind many of these efforts is to emulate Europe and Asia, where middle school students begin a cyclical curriculum that covers all the sciences each year in progressively greater detail and depth. “The top-achieving countries [on TIMSS], like the Scandinavian countries, are teaching physics every year since the sixth grade,” says Schmidt.

    But such changes are tough to implement, especially given the pluralistic nature of U.S. education across some 16,000 school districts. Ask Thomas Palma, head of the science department at North Hunterdon High School in New Jersey and a 34-year classroom veteran. Palma anticipated ARISE by nearly a decade when he lobbied the powers that be to invert the science curriculum and make ninth-grade physics mandatory. “People ask me why more schools haven't done this,” says Palma. “Well, you have to be a lunatic. I took a well-established program at a relatively affluent school district where most kids go to college and turned it upside down, with no guarantee that it would work. I had two school board members, Ph.D. physicists, who told me it wouldn't work. And 3 years ago we got a new school superintendent who said he planned to get rid of the program.”

    Palma and others emphasize that teaching physics earlier requires more than simply reshuffling the order of classes. It means tailoring the course to the math that students have taken, either algebra or geometry, instead of more advanced topics like trigonometry or calculus. “It's not the same physics that was traditionally taught,” adds Arthur Eisenkraft, who has promoted similar reforms as science coordinator for Bedford Public Schools in Westchester County, New York.

    In spite of the obstacles, Palma won over his critics and today points proudly to data showing that many more students are taking science, at more advanced levels, and doing as well or better on college-oriented national achievement tests. Moreover, last year the district's second high school, which had initially balked at the change, began to implement a similar reform.

    Advocates say that such changes also can open physics to a wider audience. “Before, the only kids that took physics were our top-notch seniors,” says Maureen Daschel of St. Mary's Academy, a college-prep, Catholic girls school in Portland, Oregon, which inverted its curriculum 6 years ago. “Now a lot more take a second year of physics, and the kids say they feel better prepared to do science in college.” Indeed, says Eisenkraft, a fundamental aim of such reforms is to reach students with limited math skills, who otherwise would not be exposed to rigorous science courses, and give them the tools to continue on. “Our goal is to reach all students,” says Eisenkraft, who teaches physics at Fox Lane High School and has also been involved in SS&C.

    Even with the best material, reformers agree, well-trained and knowledgeable teachers are essential for successful implementation. For schools adopting an inverted curriculum, the biggest problem may be finding additional physics teachers—or retraining current staff—to handle the increased student load, as well as acclimating staff to a younger batch of students. Conversely, there's also the problem of how to cope with a temporary surplus of biology teachers, including some not certified to teach other subjects, as biology becomes an upper level course. “Professional development is the key, both for current and future teachers,” says Rodger Bybee, head of the Center for Science and Math Education at the National Academy of Sciences and an adviser to ARISE. “And that costs money.” Then there's the issue of elitism. Lederman remembers the reaction of 60 physics teachers during a workshop in which he outlined his proposal. “They gave me an ice-cold stare, as if to say, ‘We don't do freshmen.’”

    Instead of simply restacking the layers in the science cake, the SS&C project—spearheaded by former National Science Teachers Association (NSTA) executive director Bill Aldridge and separated into middle school and high school projects—set out to teach each of the disciplines every year with materials prepared ahead of time by the teachers themselves. But its fate illustrates the difficulties such reform efforts face. In 1996, officials at the National Science Foundation (NSF) pulled the plug on the high school portion of SS&C, which operated at 13 sites, after expressing concern about the quality of the materials. The project was halfway through its expected 4-year life. (Existing units are available online at no charge from NSTA at www.gsh.org/nsta/default.htm.)

    “What's so good about SS&C, in theory, is that it tried to break away from labels and create a genuine, spiraled approach,” says NSTA's current executive director, Gerald Wheeler. “As a teacher, it meant I don't have to wait a whole year, while I'm doing physics, to bring up a concept in chemistry. That's closer to the real world.” But Wheeler admits that SS&C failed to overcome enormous “logistical hurdles,” from developing the material on time to retraining the staff to preparing students for year-end achievement tests. “You needed teachers certified in all four areas, which we didn't have at Fox Lane,” says Eisenkraft. Although some schools used a rotating team of teachers to compensate for that lack of individual expertise, others say this approach disrupted the usual ties between students and teachers. And several schools have avoided integrating courses because of the risk that some students may not be adequately prepared for discipline-based tests.

    Regardless of the content, any reform effort also must overcome the problem of assessing its impact on a complex and dynamic environment—what some evaluators compare to “changing the tires on a car as you're driving down the road.” Aldridge has complained bitterly that NSF demanded a finished product after too short a time, and NSF program manager Wayne Sukow, who was closely involved in SS&C, admits that the impact of any major reforms is hard to gauge, at least in the short term. “You really need at least a generation of students—12 to 15 years—to study the impact of curriculum reform,” says Sukow. “But it's tough to sustain interest for that long.” Frances Lawrenz of the University of Minnesota, Minneapolis, concluded after a $400,000 evaluation of the SS&C's first 2 years that “it is certainly no worse than traditional science teaching.” But she found “little evidence” that students had learned more or changed their attitude about science.

    Old hands of school reform know how hard it is to bring about change. Shirley Malcom, head of education programs at the American Association for the Advancement of Science (which publishes Science) and an adviser to ARISE, thinks the project is promising “not because it's the truth and the light … but because Leon's questioning the canon, and that's always healthy. I wish him luck, because he'll surely need all the help he can get.”

  13. KOREA

    Major Reforms Proposed to Improve Science Payoffs

    1. Michael Baker
    1. Michael Baker is a writer based in Seoul.

    Korea's current economic crisis highlights the need for changes to boost the return on its large investment in research

    Seoul, South Korea—The new government of President Kim Dae Jung has begun a comprehensive reform of science and technology (S&T) policy aimed at creating what officials call a “technology-based advanced economy.” The reforms are an effort to repair a system that, both scientists and government officials agree, suffers from bureaucratic infighting, a lack of incentives for quality research, and poor links between the academic and industrial sectors. “There are serious weaknesses” in the present policy, admits Kang Chang-Hee, a legislator who was appointed this spring as minister of science and technology.

    The new policies are part of a broader effort by an opposition party that has finally taken power to restructure Korea's inefficient economy. The S&T changes include a new top-level body to coordinate R&D policy, a reorganization and possible streamlining of the government's 34 research institutes, a plan to create 10 or more science and technology universities, and a decision to extend dual citizenship to scientists and engineers as an inducement to return home after studying and working abroad. Efforts are also under way to reform hiring and promotion practices within institutes and to foster innovation with large grants for high-risk, high-payoff research.

    The reforms are aimed at correcting a situation in which a heavy investment in R&D—sixth largest in the world in 1995—has yielded relatively low dividends. A recent international survey combining statistical indicators with the responses of global business executives, for example, places Korea 28th in terms of S&T competitiveness. The previous government aggravated the problem by a continual churning of top officials, including five science ministers in 5 years.

    The new government has promised to remedy this situation, preaching efficiency. In line with that approach, planners at the Ministry of Science and Technology (MOST), which Kang oversees, want to divide South Korea's 34 publicly funded research institutes and related projects into three categories, representing basic, applied, and social science research. (MOST currently operates 20 of the institutes, which range from the Korea Institute of Science and Technology, founded in 1965, to the 2-year-old Korea Institute for Advanced Study.) A proposal to replace each institute's board of directors and president with one board for each category is under “heated discussion,” says Joon Eui-Jin, director-general of the ministry's Science and Technology Cooperation Bureau. Some see the reorganization as the first step in a weeding-out process.

    MOST controls just 35% of the government's $2.5 billion R&D budget. (Industry funds the major part of the country's R&D.) The rest of the publicly funded research is divided among the other 15 ministries, which are forever squabbling about jurisdiction. The ministries of information and trade, for example, each want to control research on high-capacity computers. MOST vies with the defense ministry to control research on spy satellites. Another ministry spars with MOST over aeronautics. “Somebody should have an overall responsibility,” says Joon.

    That somebody will be President Kim, who will chair a new National Science and Technology Council to be created after the National Assembly opens in September. The council is seen as a forum for debate on overall science and technology policy, with the president as arbiter and MOST as administrator.

    However, many observers are skeptical. “They're always rediscovering the wheel,” says Han Moo Young, editor of the Internet-based Korean-American Science and Technology News (www.phy.duke.edu/∼myhan/B_KASTN.html) and a physicist at Duke University. But “everything turns to dust” when it comes to implementation, he adds.

    Internally, the institutes also need overhauling. Bad researchers aren't fired and good ones aren't rewarded, say Korean and Korean-American scientists who requested anonymity. Cronies are often chosen over qualified candidates for top positions, they add, and research proposals are sometimes judged by scientists from other fields. “Koreans never say ‘I don't know.’ You're supposed to know everything,” says one scientist. The Science and Technology Policy Institute, a government-supported think tank, has recently adopted changes in hiring and promotion practices that are seen as a possible model for the country.

    In the meantime, efforts are under way to strengthen the scientific infrastructure. Last year, for example, Korea began a $25 million Creative Research Initiative with large grants for what the government calls “very imaginative but highly uncertain” projects in fields ranging from genetics to nanofabrication.

    To complement this investment in basic research, officials are considering a proposal to establish 10 to 15 new S&T universities. The 4-year programs would emulate Germany's dual system of awarding academic and professional degrees and England's “sandwich system” by allowing students to spend their sophomore year getting credit while working in industry. “The problem is the quality, not the quantity, of our graduates,” says Kang. “It's very important to shorten the [skills] gap between what universities produce and what industry needs.”

    The government also plans to offer dual citizenship to ethnic Korean scientists now holding foreign citizenship. Lack of citizenship is a barrier for those seeking top positions at national institutes and universities. Kang hopes the new rules also will be a signal to domestic students planning to go overseas that they are always welcome to return.

    Ironically, the economic crisis itself may be the biggest spur to reform. Although the government is increasing R&D spending relative to other programs, this year's overall budget was cut by about 13%, and industrial R&D is expected to drop by 16%. Even bigger cuts are in store if the banks need a massive recapitalization. If that happens, greater efficiency—the reformers' rallying cry—could also become the most important measure of success in trying to remake Korean science.

  14. STRUCTURAL BIOLOGY

    Race for Stronger Magnets Turns Into Marathon

    Researchers are pushing magnet technology to develop a new generation of nuclear magnetic resonance (NMR) machines

    Tokyo—U.S. spectroscopist Paul Ellis has developed a strong attachment to magnets. In 1994 his group at the Pacific Northwest National Laboratory (PNNL) in Richland, Washington, ordered a superconducting magnet from Oxford Instruments in Britain for what will be the most powerful NMR spectrometer ever built. The magnet will generate a field of 21 tesla (T), more than 400,000 times stronger than the magnetic field of Earth and 20% more powerful than the field in the best commercially available NMR machines. The machine is now more than a year overdue, however, and won't be delivered for several more months. The delay reflects the challenges facing magnetmakers as scientists demand ever more powerful tools to explore molecular structures in unprecedented detail.

    Magnets are the key element behind NMR spectroscopy, which makes use of the fact that a magnetic field can set some atomic nuclei wobbling like a spinning top. Changes in the wobbles caused by a second oscillating magnetic field can be used to determine the characteristics of the nuclei. “The higher the [magnetic] field, the better the resolution and sensitivity,” says Robert Griffin, director of the Massachusetts Institute of Technology's (MIT's) magnet lab. But a magnet's strength is limited by the material used for the magnet's coils, in particular its current-carrying capacity and its ability to withstand the electromagnetic forces being generated.

    Ellis's group, which studies the molecular structure of proteins produced by chemically damaged DNA, knew that their demands would require “a new level of magnet.” But they also knew that the quest for greater resolution has fostered a steady upward march in magnet strength and operating frequency. The top commercially available NMR machines have frequencies of 750 megahertz (MHz) and 17.6-T magnets, and several other companies are vying with Oxford to develop the 21-T magnets needed for 900-MHz machines.
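
    For readers keeping track of the numbers, the field strengths and frequencies quoted here are tied together by the standard Larmor relation for hydrogen nuclei, a textbook result rather than anything specific to these machines: the resonance frequency is simply proportional to the field.

    $$\nu = \frac{\gamma}{2\pi} B_{0}, \qquad \frac{\gamma}{2\pi} \approx 42.58\ \mathrm{MHz/T} \;\Rightarrow\; B_{0}(750\ \mathrm{MHz}) \approx 17.6\ \mathrm{T}, \quad B_{0}(900\ \mathrm{MHz}) \approx 21.1\ \mathrm{T}.$$

    The same relation accounts for the roughly 23.5-T and 25-T fields cited below for 1-GHz and 1.066-GHz machines.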

    Getting to 900 MHz poses several technical challenges, however. The wire for the magnet's coils is made of an alloy of niobium and tin, in a ratio of 3 to 1 (Nb3Sn), that provides the homogeneity and stable fields required by NMRs. The strength of the field depends on the current carried by the wires and the size of the coils. But the current-carrying capacity of a given material can itself be degraded by high magnetic fields. Although wiremakers have gradually increased the current capacity of Nb3Sn, 900 MHz “is right on the edge of the ability of niobium-tin to carry current,” says Steven Van Sciver, director of magnet science and technology at the U.S. National High Magnetic Field Laboratory (NHMFL) at Florida State University, in Tallahassee.

    A second limiting factor is the electromagnetic force, which pushes outward on the coil. That puts a tensile load on the individual strands, which increases with field strength, current, and the radius of curvature of the coil. Because 21-T magnets create a force far beyond what the Nb3Sn alloy can resist, magnetmakers must reinforce the subcoils, typically with bands of stainless steel wires.

    Then there are a host of engineering details that must be resolved. It's a slow process, says Van Sciver, involving gradual improvements in such things as winding techniques and the epoxy resins used to fill the voids in the coils. Oxford Instruments spokesperson John Kearns agrees, although he says proprietary concerns prevent him from providing details: “It's just par for the course when you're pushing the forefront of a technology.”

    Both the Florida lab and the National Research Institute for Metals (NRIM) in Tsukuba, Japan, are working on 900-MHz NMRs, but as a steppingstone to more powerful machines. Rather than one coil, NMR magnets use a series of coils nested within coils, “like juice cans nested within coffee cans,” says Van Sciver. Lab scientists hope to use stronger coils made from new materials within or in place of the innermost coils of the magnets in their 900-MHz machines to reach the next level. NRIM has set its sights on a 1000-MHz, or 1-gigahertz (GHz), machine, which would require a field of 23.5 T, while NHMFL is aiming for a 25-T magnet to power a 1.066-GHz machine. NRIM is also planning to wrap wires with the Nb3Sn conductor around a tantalum core for strength. “It will be a more compact magnet,” says Tsukasa Kiyoshi, who heads the NRIM group. The technique is also expected to improve the efficiency of the coil-winding operation and lower costs.

    Oxford officials say the company expects to deliver the magnet to PNNL by the end of the year, while the NHMFL group hopes its 900-MHz machine will be ready by the end of 1999. Next year NRIM also hopes to produce a 920-MHz machine using an improved Nb3Sn wire for the innermost coil. A test coil made of the new wire recently set a world-record field of 21.7 T for niobium-tin alloys. The national labs are making their machines for in-house use, but both also have corporate partners.

    From there it will be on to 1 GHz. Both labs plan to replace the innermost Nb3Sn coil of the magnets with another material. For this inner coil, the NHMFL group is relying on a bismuth oxide, a high-temperature superconductor already proven capable of carrying sufficient current under a high magnetic field. The outer Nb3Sn coils will generate 20 T, with another 5 T coming from the inner bismuth coil. Unfortunately, the bismuth “is more challenging [to work with] than niobium-tin by a significant factor,” Van Sciver says.

    The Japanese are also working with the bismuth compound. But they are hedging their bets by developing a niobium-aluminum material that also promises to come close to carrying sufficient current under a high field for use in a 1-GHz NMR. “We'll develop [both alternatives] and then evaluate them,” says Kiyoshi.

    Van Sciver says his lab hasn't yet committed to a target date, but Kiyoshi hopes to have a working 1-GHz, 23.5-T machine by early in 2002. Ellis describes that as “indeed optimistic.” And that 1-GHz milestone is far from marking the end of the road. “I'm looking forward to 2 GHz,” says Griffin.

  15. ARCHAEOLOGY

    Geological Analysis Damps Ancient Chinese Fires

    Studies of sediments at Zhoukoudian, China—long considered the site of the first use of fire—suggest that any flames there were not kindled by human hands

    When and where did our human ancestors stop running from fire and start guarding and preserving it as a vital tool for survival? For the last half-century, nearly every archaeology textbook has offered a simple answer to that question: 500,000 years ago, in Zhoukoudian, China, where Peking man—Homo erectus—huddled around a hearth tending kindling and roasting deer.

    But that answer is now up for revision, according to a reanalysis of the Zhoukoudian site presented on page 251 of this issue by an international team. “In a sense, we spoil the story,” says the lead author, structural biologist Steve Weiner of the Weizmann Institute of Science in Rehovot, Israel. Applying a battery of techniques, Weiner and his colleagues did confirm that there are some burnt bones at the site about 30 miles southwest of Beijing, but those might have been burned naturally. And they found no evidence of controlled use of fire: no hearths, no ashes, and none of the unique chemical signatures expected from fires.

    The signs of fire at Zhoukoudian are now no clearer than at dozens of older sites around the world. “The bones are burnt, but we don't have the smoking gun: the fireplaces which people assumed to have been there,” says co-author geoarchaeologist Paul Goldberg of Boston University. That means there's no strong evidence of fire use until about 300,000 years ago and none definitively associated with H. erectus, the hominid that began to spread through Asia and into cold northern latitudes starting about 1.8 million years ago. Researchers must now consider that this colonization may have happened without fire. “We now have to reconsider H. erectus, their migrations, and their capability,” says Huang Weiwen, an archaeologist at the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) in Beijing, who has worked extensively at Zhoukoudian.

    The site was first excavated in the 1920s and '30s, when researchers found hominid fossils, stone tools, burnt bones, and what they described as ancient hearths preserved as layers of ash up to several meters thick. It all seemed to add up to solid evidence of human control of fire; some researchers even concluded that the thick ash layers represented continuous occupation over thousands of years.

    The new study is the “first really systematic investigation since the early excavations,” notes anthropologist Rick Potts at the National Museum of Natural History in Washington, D.C. In 1996 and 1997, Weiner's team revisited the site, a sheer cliff cut into the hillside, perching on a 10-story-high scaffolding for their analysis. They focused on layers 10 and 4, previously noted for putative king-sized hearths. They cleaned the exposure, studied the sediments microscopically, and used infrared spectrometry onsite to analyze the chemical constituents of sediments and fossil bone. In the lab, they confirmed that a small number of bones were burned. But the sediments contained no ash or siliceous aggregates, soil-derived minerals that are cemented together in trees and stay intact after burning—and should be present at the site of almost any wood fire. The thick layers aren't ash at all, but accumulations of organic material, much of it laid down under water, says Weiner.

    The team did find stone tools closely associated with burnt mammal bones. And more of these bones came from large animals than small ones, a proportion considered consistent with human activity, because people are more likely to roast horses than mice for dinner. But although this clearly indicates the presence of fire somewhere nearby, it doesn't convince most researchers that humans rather than nature sparked the flames. That's part of the reason why even older purported evidence of fire—up to 1.8 million years old—from sites in Africa and Asia has been considered “dubious,” says paleoanthropologist Philip Rightmire at the State University of New York, Binghamton. “The whole thing is [now] ambiguous, and that's the normal situation,” adds anthropologist Lewis Binford of Southern Methodist University in Dallas, who visited Zhoukoudian briefly in the 1980s and first challenged the interpretation of hearths.

    The paper also raises questions about whether humans actually lived at the site, because the researchers describe it not as a traditional cave but as the enlargement of a vertical fault, open to the sky. “This is an important reinterpretation,” says Potts. “It means that, who knows, maybe it wasn't a home.” Anthropologist Alison Brooks at George Washington University in Washington, D.C., who has also worked at the site, goes further: “It wouldn't have been a shelter, it would have been a trap.” Taken together, the evidence “brings Zhoukoudian a good deal more in line with sites from around the world, with a low fingerprint of human activity,” says anthropologist Chris Stringer of the Natural History Museum in London.

    The first strong evidence of purposeful use of fire is now associated with much younger humans. “This puts it forward at least to H. heidelbergensis and may push it forward to Neandertal,” says Brooks. A leading candidate may be Vértesszöllös, Hungary, an H. heidelbergensis site between 400,000 and 200,000 years old, where burned bone is arranged in a radial pattern as if around a campfire. “That spatial evidence is missing for Zhoukoudian,” says Potts.

    Still, some scientists advise against drawing sweeping conclusions from this single study. “The researchers were limited by the area they sampled,” far from the center of the cave, points out Huang. “Therefore, it is not an ideal place to detect the evidence of controlled fire use,” adds Gao Xing, an archaeologist formerly with the IVPP and now at the University of Arizona, Tucson.

    Nonetheless, ambiguity at Zhoukoudian raises questions about whether H. erectus anywhere used fire, Stringer says. Yet the species somehow survived in Zhoukoudian's temperate climate and colonized lands even farther north. The absence of fire suggests that H. erectus was much less advanced, argues Brooks. But other recent discoveries have suggested that the species was a sophisticated toolmaker, points out Huang, and perhaps even traveled by boat (Science, 13 March, p. 1635). For now, the dampened flame at Zhoukoudian has thrown these ancient humans into deeper shadow. “This work is another new beginning, but it is not enough to answer all the questions we are curious to know,” says Huang.

  16. SOCIETY FOR DEVELOPMENTAL BIOLOGY MEETING

    How Embryos Shape Up

    1. Evelyn Strauss
    1. Evelyn Strauss is a free-lance writer in San Francisco.

    About 800 biologists gathered at Stanford University from 20 to 25 June for the 57th annual meeting of the Society for Developmental Biology. Study organisms ranged from flies to mice to plants, but there was plenty of common ground, including a new pathway by which signaling molecules can shape the early embryo and a new gene that helps specify right from left.

    WNT Takes a New Path

    In development, as in so much of biology these days, the gene's the thing: Researchers probe which genes turn on and off as embryos develop and which signaling molecules push the genetic switches. But surprising results presented at the meeting show that at least one classic signal, the wingless (WNT) protein, can guide development without touching those switches. At a crucial moment in development, WNT triggers an early cell to divide asymmetrically into two daughter cells, which later give rise to different sets of tissues. The new results, reported in a plenary session by developmental geneticist Bruce Bowerman of the University of Oregon, Eugene, and his colleagues, suggest that WNT does so by bypassing the genes and acting directly on the cell's internal skeleton.

    The result establishes a new modus operandi in developmental biology signaling pathways. WNT, which is perhaps best known for helping to create pattern in insect appendages, manages this feat at least in part by sending a signal down a chain of molecules to the nucleus of its target cell, where it activates specific genes. Now, says Norbert Perrimon, a developmental geneticist at Harvard Medical School in Boston, “Bruce has provided some really convincing data that proteins in the WNT pathway directly control the cytoskeleton without [turning on genes].” The finding also makes developmental researchers reconsider the cytoskeleton. “The cytoskeleton [as] a direct signaling target has not been on people's radar screens,” says William Talbot, a developmental geneticist at New York University's Skirball Institute of Biomolecular Medicine. “Normally you get a signal, figure out how it gets to the nucleus, and then you think you're done. Certainly we have to think about the cytoskeleton now.”

    Division with a difference.

    Only after getting a message from a nearby P2 cell can the EMS cell divide so that one daughter can become endoderm.

    SOURCE: B. BOWERMAN

    Bowerman made his discovery in the roundworm Caenorhabditis elegans, where researchers had already shown that at the four-cell stage of development, one cell, called P2, delivers an important message via the WNT pathway to its neighbor cell, called EMS because it gives rise to both endoderm and mesoderm. According to the current model, P2 orders EMS to distribute its contents so that when the cell divides, it yields two distinct offspring. The daughter cell closest to P2 (named E, for endoderm) makes tissue that becomes gut, and the other daughter (named MS, for mesoderm) makes tissue that becomes muscle. Without P2 next door, EMS gives rise to two MS cells.

    Just how the WNT signal skews EMS division wasn't clear. But P2 was known to control the orientation of EMS's mitotic spindle—the array of skeletal fibers that pulls apart the chromosomes as the cell divides. This might be how P2 forces EMS to generate its distinct daughters, reasoned developmental biologist Bob Goldstein, currently at the University of California, Berkeley, who did the original P2 signaling work. If the mitotic spindle is oriented correctly, then one daughter might get a different batch of cytoplasmic material from the other.

    To find out whether the WNT pathway dictates the axis of spindle formation, Ann Schlesinger, a graduate student in Bowerman's lab, did experiments using a variety of EMS and P2 cells, some normal and some having mutations in the WNT pathway. When she put mutant P2 next to normal EMS or vice versa, the spindle formed at random angles relative to P2, showing that spindle orientation does require the WNT pathway.

    Next, Schlesinger blocked all transcription—the activation of genes by transcribing their DNA into messenger RNA—in both EMS and P2, using a chemical called actinomycin D. She saw “perfectly normal spindle orientation” relative to P2, showing that the WNT signal was getting through even though the cell couldn't turn on any new genes. “You don't have to go through the nucleus and activate genes,” says Bowerman. Instead, WNT seems to act on the spindle directly, targeting molecules that must already be in the cell.

    The next step will be to identify those molecules, says Stuart Kim, a developmental biologist at Stanford University. “It's very exciting,” he says. “Bruce has several genes that seem to play similar roles [in affecting EMS division] as the WNT genes, but they don't appear to be known WNT pathway genes. Maybe these will be involved in directing the cytoskeletal events.”

    Even though the WNT signal seems to bypass the nucleus in directing EMS division, it may ultimately circle back to the genes in later cell generations. The daughter cells presumably have different fates because they apportion the cytoplasm in such a way that one of the cells has what it takes to develop into endoderm. The still-mysterious components of this cytoplasm might then direct different patterns of gene expression in the daughters. Says Bowerman: “Instead of going to the cytoskeleton through the nucleus, we're suggesting that, at least in some cases, you go to the nucleus through the cytoskeleton.”

    Putting a Heart in the Right Place

    Hearts must learn left from right early in their development. Like many other organs, a normal heart is asymmetric, located on the left side of the body, with veins and arteries hooked up so that blood flows in on one side and out on the other.

    Researchers had already identified some of the steps in a genetic signaling pathway that helps define right and left in an embryo long before anyone looking at it can see a difference. And at the meeting, two independent presentations—a talk and a poster—announced the first candidate for a molecule that may actually translate this signal into an asymmetric heart. Pitx-2 (also known as Ptx2), the gene that produces the molecule, is activated only on the left side of frog, mouse, and chick embryos, persists there as the organs develop, and controls the position of the heart and gut.

    Heart shaped.

    Normal development of a frog heart involves an asymmetric shape, driven in part by lopsided expression of the Pitx-2 gene.

    ILLUSTRATION: P. MORRIGHAN; SOURCE: BLUM ET AL.

    “This gene is very important, because it's not just a marker but actually has a function,” says Leonard Zon, a geneticist and hematologist at Children's Hospital in Boston. “Left-right asymmetry is fundamentally related to heart formation, and people are racing to try to understand how it works,” in part because it may help explain congenital birth defects in which organs are reversed.

    Biologists already knew that a gene called nodal appears to direct the developing heart and other organs to their proper left-right locations. But nodal is turned on—in some cases by the patterning molecule Sonic hedgehog (Shh)—and then off before any visible asymmetry appears, so scientists reasoned that it must signal another gene or genes.

    The two groups represented at the meeting weren't looking for genes that direct heart asymmetry when they found Pitx-2, but it attracted their attention because it's expressed only on the left side of the embryo. Cliff Tabin's lab at Harvard Medical School in Boston, working on chicks, and Martin Blum's lab at the Institute of Genetics of the Forschungszentrum Karlsruhe in Germany, working on frogs and mice, independently presented work showing that Pitx-2's leftward bias is what skews the heart and gut.

    A heart starts out as a straight tube; the first visibly asymmetric step in its development occurs when that tube curls or “loops” to the right into an S-shaped structure (see diagram); the direction of looping helps specify where the heart will end up in the body. Both Tabin and Blum found that early in development, Pitx-2 appears on the left side of the tube. Tabin also found that in chicks, the portion of the heart derived from the left side of the tube expressed Pitx-2. The findings imply that in chicks, frogs, and mice, Pitx-2 stays on long enough and in the right places to shape the heart.

    The researchers next showed that Pitx-2 responds to the signals that control left-right patterning, the Shh/nodal pathway. They introduced nodal or its frog relative into the right side of young chick or frog embryos and found that Pitx-2 was expressed not only on the left but also on the right. Tabin's group next introduced antibodies that inactivate Shh into the left side of the embryo and found no Pitx-2. These results showed that the nodal signaling pathway can turn Pitx-2 both on and off.

    But can the gene actually control organ formation? To find out, Tabin blocked normal Pitx-2 expression with antibodies against Shh, while artificially producing Pitx-2 on the right side of the embryo with a virus that carries the gene. Some of the resulting embryos grew heart tubes that looped in the wrong direction. “Ptx2 by itself is sufficient, in [the] absence of other signaling, to drive the looping to the left,” Tabin said. Blum confirmed Tabin's results in frogs: He injected mouse Pitx-2 into cells on the right side of a frog embryo and later saw heart and gut tubes that looped incorrectly.

    Although it's possible that Pitx-2 turns on yet another gene, “these results give you the feeling that there might be a direct connection between Ptx2 and organ development,” says Kathryn Anderson, a developmental geneticist at the Sloan Kettering Institute in New York. “You don't need to invoke five more steps between Ptx2 and the ability to set up asymmetry.” Pitx-2, it seems, lies at the heart of hearts.

  17. COASTAL ECOLOGY

    Death by Suffocation in the Gulf of Mexico

    1. David Malakoff
    1. David Malakoff is a writer in Bar Harbor, Maine.

    Scientists have traced the origins of a vast hypoxic region in the gulf to inland fertilizer use; now officials must decide what to do

    Lafitte, Louisiana—Albert Darda, captain of the shrimp trawler Misty Morn, has never met Iowa farmer Bryan Sievers, who lives more than 1600 kilometers up the Mississippi River from this marsh-fringed Gulf of Mexico fishing port. But the two men have a common enemy: a huge swath of oxygen-starved, nearly lifeless ocean that spreads off the Louisiana coast every summer. The dead zone—as it is popularly called—drives away shrimp and fish, leaving a graveyard of strangled clams, starfish, and marine worms. Darda fears it will eventually smother his livelihood by damaging one of the richest fishing grounds in the United States, which pumped more than $3 billion into Louisiana's economy last year. Sievers is troubled for a different reason: He worries that regulators will soon ask him and other farmers to take costly but unproven steps to shrink the dead zone—a menace that many scientists say is primarily caused by nitrogen fertilizer washing into the gulf from millions of farm fields across the massive Mississippi Basin.

    Both men's fears are at the heart of an increasingly rancorous debate over the severity of the dead zone's ecological threat—and the practicality of trying to shrink it. Environmentalists and some scientists are convinced that the zone, which has doubled in size since 1993 and now ranks as the Western Hemisphere's largest, heralds the imminent collapse of Louisiana's valuable coastal fisheries. They are urging a special White House panel, which will issue a long-awaited report on the problem later this year, to recommend quick action to curb the flow of farm nutrients into the Mississippi River. But farm groups say a push to change agricultural practices in the Mississippi Basin—which covers 41% of the lower 48 states and holds more than half the nation's farmland—would be premature. Scientists can't show that the dead zone has harmed fisheries, they say, nor is it certain that new techniques applied to distant fields will help the gulf. “People seem to be pointing the finger at agriculture before all the facts are in,” says Sievers, who raises corn, soybeans, and cattle on 400 hectares near New Liberty, Iowa.

    Life-taking waters.

    Oxygen-starved coastal “dead zones” spawned by human activity, shown above as red dots, have tripled in number worldwide in the last 30 years.

    SOURCE: DIAZ AND ROSENBERG, OCEANOGRAPHY AND MARINE BIOLOGY: AN ANNUAL REVIEW 33, 245 (1995)

    Any steps taken to heal the Gulf of Mexico will be monitored intently across the globe: The dead zone is one of more than 50 similar oxygen-starved coastal regions worldwide (see map), most of which have formed in the last 50 years. Moreover, concern is rising that smaller oxygen-depleted regions off the U.S. coast could also grow—a topic that will be addressed later this year by the National Oceanic and Atmospheric Administration (NOAA) in a report on the status of U.S. hypoxic zones. Says marine biologist Robert Diaz of the Virginia Institute of Marine Science in Gloucester Point: “How policy-makers address the gulf's hypoxic zone is going to be a model for dealing with other serious coastal problems that have their origin on land.”

    Origin of death

    That the dead zone is being debated at all is a testament to the doggedness of a few marine biologists and the wake-up call provided by the record floods that surged down the Mississippi in 1993. Researchers first documented hypoxic, or oxygen-poor, water pockets off the Mississippi's mouth in the early 1970s. But it wasn't until 1985 that a meagerly funded group of researchers, including Nancy Rabalais of the Louisiana Universities Marine Consortium in Chauvin and Eugene Turner of Louisiana State University in Baton Rouge, began to document the hypoxic zone by taking oxygen readings at dozens of offshore stations, often piggybacking the measurements onto other projects.

    Rabalais and her colleagues showed that the zone is a creature of the Mississippi. It gets its start early each year when melting snow and spring rains wash nutrients—including nitrogen and phosphorus—off the landscape into the rising river. Except during drought years, the warmer, lighter river plume spills dozens of kilometers outward into the gulf, sliding over the heavier, saltier ocean water, forming a lidlike layer. Fueled by sunlight and dissolved nitrogen, massive algae blooms thrive near the surface, attracting tiny crustaceans called copepods and other organisms that graze on plankton. Dead algae and the grazers' fecal pellets sink to the bottom, where they are devoured by oxygen-consuming bacteria. Hypoxia sets in when oxygen levels in the isolated bottom water drop below 2 milligrams per liter, too little to support most marine life; anoxia occurs when the bacteria use up the rest of the oxygen, suffocating even themselves.

    Although the dead zone can form as early as February and persist as late as October, it generally stays from May to September in coastal waters up to 60 meters deep. Contrary to popular perception, it doesn't just hug the bottom: In shallower areas, 80% of the water column can be hypoxic, and the smothering effects can extend to within several meters of the surface. “Hypoxia has the potential to affect organisms living far off the bottom,” Rabalais says. Organisms that can swim, such as fish and shrimp, flee the area. But less mobile creatures, such as starfish and clams, desperately seek oxygen by abandoning the security of their sea-floor burrows or climbing to high points that might penetrate oxygenated waters. Some brittle stars even stand on their points, stretching to catch some oxygen.

    In the summer of 1989, Rabalais and Turner found that the gulf's hypoxic zone covered a tongue-shaped area of up to 9000 square kilometers, west of the Mississippi's bird's-foot delta. But then came the 1993 deluge, when the river dumped into the gulf almost twice its usual annual flow of 580 cubic kilometers. Within a few months, the dead zone doubled in size, stretching westward with the prevailing currents into Texas waters. Since then, despite hopes that the zone would shrink to its pre-1993 girth, it has receded only slightly, to about 16,000 square kilometers in 1997.

    Call to arms

    Alarmed by the dead zone's unyielding territorial grip, 18 fishing and environmental groups—quoting chapter and verse from clean water laws—in January 1995 petitioned federal and Louisiana officials to take steps to cut off the nutrients that spur the zone's growth. Last year, the Clinton Administration responded by appointing six committees to assess hypoxia science and decide, by the end of this year, how to tackle the problem. “There is little disagreement that nitrogen is the nutrient that drives hypoxia,” says Don Scavia of NOAA, who is coordinating the White House Committee on the Environment and Natural Resources Research's (CENR's) Gulf of Mexico Hypoxia Assessment. “The controversy,” he says, is “trying to figure out where it is coming from and how to reduce it.”

    According to Scavia, the CENR effort “boils down to figuring out whose nitrogen it is that is feeding the hypoxic zone.” Scientists do know that an awesome mass of nitrogen moves down the Mississippi: About 1.82 million metric tons of it reaches the gulf each year, almost triple the amount 40 years ago, estimates Cornell University biogeochemist Robert Howarth. The nutrient comes from a variety of sources, says Don Goolsby, a hydrologist at the U.S. Geological Survey (USGS) in Denver, who leads the CENR team assessing the river basin's nitrogen discharge hot spots. By far the leading culprit, Goolsby and others believe, is nitrogen fertilizer leaching from the basin's croplands—particularly those planted with corn. Other sources include livestock manure, nitrogen-fixing legumes such as soybeans, sewage treatment plants, nitrogen-rich soils in drained wetlands, and airborne nitrogen oxides from fossil-fuel burning.

    More than half the 11 million metric tons of nitrogen added to the basin each year comes from fertilizer, according to the USGS. Iowa farmers, for instance, applied on average 130 kilograms of nitrogen per hectare of corn in 1996, according to state records. Up to half this fertilizer may find its way into ground and surface waters as nitrate, or into the air as nitrogen oxides, estimates a team led by Stanford University's Peter Vitousek. USGS studies suggest that over 60% of the Mississippi's nitrate originates in the Corn Belt north of the Ohio River. Such calculations, which Goolsby cautions are based on sketchy data, raise hackles in farm country. “The agriculture community is being forced to take a defensive posture, even before the actual source of the hypoxia problem is determined,” says C. David Kelly of the American Farm Bureau Federation in Chicago.

    But other experts say that farmers don't have to wait for further studies to justify actions that could benefit both the gulf and their bottom line. “Have we traced a nitrogen atom from a farm field to the gulf? No,” admits ecologist John Downing of Iowa State University in Ames. “But if you cost out the excess nitrogen we know is flowing down the Mississippi each year, it is worth something like $750 million. On economic grounds alone, I think you can convince farmers to take steps to keep nitrogen on their fields.” Downing knows, however, that educating the region's farmers about nutrient management—which he hopes to do through a new $4 million grassroots outreach program—is an uphill battle: Just 11% of about 500 rural Mississippi Basin residents recently surveyed by his group were even aware the dead zone exists. Laments Downing, “We realized we were asking them how they would help solve a problem that they did not know they had!”
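
    One way to see how the numbers quoted above hang together is a rough back-of-envelope calculation. The Python sketch below is purely illustrative; the implied dollar value per kilogram is derived here from the quoted figures and is not a number given by any of the researchers.

        # Figures quoted above: Howarth's estimate of nitrogen reaching the
        # gulf each year and Downing's rough valuation of that excess nitrogen.
        nitrogen_to_gulf_kg = 1.82e6 * 1000      # 1.82 million metric tons, in kg
        downing_value_usd = 750e6                # "something like $750 million"

        implied_usd_per_kg = downing_value_usd / nitrogen_to_gulf_kg
        print(f"implied value of the lost nitrogen: about ${implied_usd_per_kg:.2f} per kg")

        # Iowa corn example: 130 kg of nitrogen applied per hectare in 1996,
        # with up to half potentially lost to water or air (Vitousek team).
        applied_kg_per_ha = 130
        max_loss_fraction = 0.5
        print(f"potential loss: up to {applied_kg_per_ha * max_loss_fraction:.0f} kg N per hectare")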

    Another headache for those who want to curtail the dead zone is how to assess its economic threat. It's hard to weigh the benefit of protecting the gulf's $3 billion fishing industry, which supports at least 40,000 jobs, against costs incurred by the $98 billion Mississippi Basin farm economy, which supports almost a million farmers. “Trying to prove the economic benefits of reducing the hypoxic zone is going to be extremely difficult: We don't have the data,” says Walter Keithley Jr. of Louisiana State.

    Part of the problem is that biologists so far have been unable to link serious declines in a gulf fishery to the transient hypoxic zone. There are, however, some warning signs: Evidence suggests that the dead zone may corral fish and shrimp into areas near shore, blocking them from spawning areas. And some studies show that sea-floor communities in hypoxic areas tend to be impoverished—younger, sparser, and home to fewer species. Ironically, increased nutrients may be nourishing some of the gulf's fisheries. Annual catches of menhaden, a fish used for its oil and for aquaculture feed, have skyrocketed since the 1930s, to almost 550,000 metric tons a year in the 1990s. Although the link is fuzzy, the boom may be fueled by the nitrogen-fed algae blooms, which are a major food source for young menhaden and other fish, says Churchill Grimes of NOAA's fisheries research lab in Santa Cruz, California. To assess these conflicting signals, this summer the Louisiana Department of Wildlife and Fisheries is beginning a 3-year study of hypoxia's impact on commercial fisheries.

    Before that study can reach any conclusions, however, it appears likely that government officials will forge ahead with programs to slash the Mississippi's nitrogen loads. The Clinton Administration, for instance, has proposed spending $322 million over the next 5 years on reducing hypoxia. The money, which would be funneled through several agencies, would go to research and to implementing so-called “win-win, no regrets” measures unlikely to anger the powerful farm lobby. Indeed, even the Fertilizer Institute, the industry's Washington, D.C.-based trade group, supports one such measure: voluntary incentives for creating 2 million miles of nitrogen-trapping vegetation buffers along farm fields. Administration officials say other initiatives could include building wetlands and helping farmers buy computers and equipment that would enable them to apply fertilizer more efficiently.

    But even if all these steps are taken, hypoxia researchers say it could be decades before they yield meaningful results—and in the short term, the gulf's ecological health could even worsen. Studies in watersheds around the Chesapeake Bay and elsewhere, for instance, have shown that after years of effort to reduce nitrogen leaching, levels of the nutrient in streams and rivers can remain stable or even rise slightly. The problem, says Goolsby, is that “these systems have been saturated with so much nitrogen for so long that there is a very long lag time in cycling it out”—a discouraging prospect for lawmakers who may need to show quick results for expensive and controversial restoration projects. And if remediation efforts do succeed in reducing nitrogen levels, a shifting nutrient balance could shake up the gulf's phytoplankton community in unpredictable ways that abet hypoxia or even lead to blooms of toxic dinoflagellates, such as those found in red tide, warns Quay Dortch of the Louisiana Universities Marine Consortium. “We could see some unintended consequences in getting to where we want to be,” she says.

    As scientists hash out the best strategy for battling the dead zone, gulf fishers like Darda and midwestern farmers like Sievers are beginning to appreciate how tightly bound their livelihoods are to the mighty Mississippi and the dissolved nitrogen it sweeps into the sea. “Never before,” says Downing, “has the interconnectedness of life in distant rural communities been so apparent.”

  18. FISHERIES BIOLOGY

    Ecology's Catch of the Day

    1. Karen Schmidt
    1. Karen Schmidt is a writer in Washington, D.C.

    A massive federal undertaking to learn where fish live in the ocean could lead to major changes in the way the United States manages its fisheries

    Dave Packer probably knows more about the mid-Atlantic summer flounder than any other person on the planet. The marine ecologist has spent a year scouring journals, digging up graduate theses, and tracking down survey data to assemble a picture of exactly where off the U.S. eastern seaboard the fish spend their lives. Packer's daunting task—as he describes it, to find out “everything you've always wanted to know” about the flounder's habitat, from Maine to Georgia—involved sorting through more than 250 papers and sometimes conflicting accounts on, among other things, the fish's preferred water temperatures, bottom types, and food. The grueling exercise has yielded an “incredible overview of the species,” says Packer, who works for the National Marine Fisheries Service (NMFS) in Highlands, New Jersey.

    Packer's labors are part of a sea change in the way the U.S. government hopes to manage increasingly fragile ocean fisheries. His flounder opus is one of hundreds of similar studies spawned by a federal effort to force fishery managers to take into account the health of a fish's habitat—and not simply its population size—in setting restrictions on fishing. This move to give ecologists a greater voice in the management of commercial fish stocks comes courtesy of the 1996 Magnuson-Stevens Fishery Conservation and Management Act. “For the first time, people who manage fisheries must consider habitat, and that is an overdue and giant leap forward,” says Elliott Norse, president of the Marine Conservation Biology Institute in Redmond, Washington. The law mandates that by 11 October, eight fishery management councils around the country must have finished mapping out “essential fish habitat” for more than 600 species, from groupers off Florida to salmon off Alaska.

    Industries that operate along the coast are anxiously awaiting the results: Areas designated as essential habitat should gain protection under the act, and harmful commercial activities—including some fishing practices—could be curtailed. But specific repercussions are still unknown. For instance, East Coast councils haven't tipped their hands yet on whether Packer's findings might help trigger restrictions on activities such as dredging in estuaries or waste discharges. That the law might wrap a protective cocoon around coastal nurseries delights the fishing industry. “We hope [implementing the law] will unlock fish stocks that we currently cannot use because they are contaminated by pollution or harmful algae blooms,” says Richard Gutting of the National Fisheries Institute, an industry lobby group.

    Deep impact.

    Sea-floor snapshots from Georges Bank, off New England: undisturbed (left), after scallop dredging (middle), and recovery after a disturbance (right).

    USGS

    In a rare show of solidarity, scientists, conservationists, and fishing industry officials agree that safeguarding habitat is a key to restoring beleaguered fish stocks. But the effort faces a big obstacle: Scientists know so little about the habitat needs of many fish species that they are starting virtually from scratch. Indeed, some observers argue that, in the face of this yawning knowledge gap, the federal effort to define essential habitats is badly underfunded. Others worry that the program could be politically vulnerable if it leads to tighter regulations. Still, scientists are thrilled about the prospect that ecology could soon help underpin fishing regulations. The act and its “far-reaching implications,” predicts Jim Murray, a fisheries biologist at North Carolina State University in Raleigh, “will be a cornerstone of fishery management for years to come.”

    Essentially black boxes. The effort to define essential habitats is aimed at remedying a key shortcoming of the Magnuson Fishery Conservation and Management Act of 1976, landmark legislation that laid the groundwork for regulating where, when, and how many fish can be caught in a season. “Right now, the fishery management models assume that habitat is stable, but that's not the case,” says Jim Burgess, director of the National Oceanic and Atmospheric Administration's (NOAA's) habitat restoration center, who until May directed NMFS's habitat conservation office. Habitat for commercial species is being degraded by a host of factors, Burgess says. They include pollution from oil spills, urban and agricultural runoff (see story on p. 190), dredging, damming coastal rivers needed for spawning, and filling or draining salt marshes. Fishing methods that disturb the seabed, such as trawling, also alter fish habitat, although the damage they inflict is still hotly debated.

    But, although experts say it's clear that human activities are degrading fish habitat, it's still uncertain how much the intrusions contribute to declines in fish stocks. That's why environmentalists and commercial fishers joined forces and lobbied Senator Ted Stevens (R-AK) 2 years ago to include a provision in a new version of the law—the Magnuson-Stevens Act—requiring that each of the eight regional fishery management councils identify essential habitats, and threats to those habitats, for each species in its jurisdiction. The act defines essential habitat as “waters and substrate necessary to fish for spawning, breeding, feeding or growth to maturity.” NMFS has instructed the councils to use survey data on a species at a given location to map habitats used by fish, from birth to death. For overfished species that now occupy reduced ranges, the councils can consider historical data.

    For most species, there are major knowledge gaps. “We tend to know where the adults are but not the juveniles,” explains Burgess, who says the missing data present “a major obstacle” to implementing the law. In many cases, fishery managers infer likely habitat based on notions about a species' needs at various stages in its life cycle. But that requires knowledge—such as where kelp forests and rocky bottoms are located—that too is often lacking, Burgess says.

    Even when there's good information about a species' geographic range, it's not easy to determine which areas are truly essential. That will require data on a species' reproductive and growth rates in different geographic areas—information that is scarce for any fish, says Paul Brouha, executive director of the American Fisheries Society (AFS). “We don't know enough by orders of magnitude” for even one of the best studied species, the summer flounder, says Brouha. “We know where they spawn, where the adults and juveniles go, but we don't know how much habitat needs to be maintained in order to maintain productivity.”

    This limitation leaves some experts doubting that firm links can be drawn anytime soon between a fish population's health and its essential habitat. “I'm not sure we'll ever get to the connection between habitat and productivity,” says Roger Pugliese, habitat coordinator for the South Atlantic Fishery Management Council in Charleston, South Carolina. Instead of focusing on individual species, Pugliese's council is mapping the location of habitats—such as coral reefs, sargassum algae, oyster beds, salt marshes, and mangroves—and using data on fish distribution to verify locations of important habitat types.

    Faced with an incomplete picture, NMFS has sent mixed messages to the councils. In January, the agency told the councils to err on the side of caution and broadly designate essential habitat. For the summer flounder, says Packer, a swath of estuaries along the East Coast—including Long Island Sound, Chesapeake Bay, the Hudson-Raritan estuary, Delaware Bay, and Pamlico Sound—will likely be proposed as essential habitat because they serve as nursery areas. According to Burgess, the first designations may be too large, but they will be pared down as more data are collected. But in a letter last May, NMFS director Rollie Schmitten urged the councils to define essential habitats as narrowly as possible. “When we got the letter, we wondered if he was changing the rules,” says one council analyst. According to NMFS spokesperson Scott Smullen, the rules are the same; Schmitten, he says, was only asking the councils to refrain from going overboard in defining habitats to ensure political support and steady funding for the program.

    Regulatory sea change? Pinpointing essential habitat is hard enough, but it's just the first step. And the next steps—determining the threats to critical areas and how to deal with them—could lead regulators into choppy waters. Take the potential threat posed by some fishing practices. NMFS and AFS asked two scientists—Peter Auster, science director of the National Undersea Research Center at the University of Connecticut, Avery Point, and Richard Langton of the Maine Department of Marine Resources—to review 95 studies on how fishing affects marine habitats. The pair concluded that trawling and other methods that disrupt the sea floor can harm ecosystems, with long-lived species such as sponges and corals suffering the steepest population declines. “There's a strong inference that fishing is a widespread factor in changes to sea-floor communities,” says Auster. Still, he says, no one has yet quantified how specific fishing gears and the frequency and intensity of their use relate to the severity of disturbance: “How many passes of a trawl is equal to a single sea scallop dredge? That kind of exercise has not been done.” What's sorely needed, he and others argue, are controlled experiments to see how various fishing methods alter different marine habitats. “This is basic stuff—what foresters and terrestrial scientists have known for decades,” says Norse of the Marine Conservation Biology Institute. “When we get those answers, then we can say trawl here, dredge there, and not here.”

    In the meantime, Burgess predicts, the councils are unlikely to clamp down on commercial fishing practices. With that in mind, Auster says he's encouraging the councils to establish more no-take zones—where all fishing is banned—while researchers study how these areas recover. But that could antagonize industry. “If this law is used to attack fishermen, then the political support to implement the law will not be there,” warns Gutting of the National Fisheries Institute. He contends that enough marine sanctuaries already exist for study.

    The councils are holding hearings through the summer to help them hash out whether to propose new marine reserves and new fishing regulations in their management plans. Some environmental groups are downbeat about the prospect of reform. “We've seen few signs that councils will actually enact habitat protection measures—the real teeth behind the essential fish habitat provision—by the October deadline,” says Tanya Dobrzynski of the American Oceans Campaign.

    Efforts to curb other threats to essential habitats could prove equally contentious. Starting this fall, the Secretary of Commerce (whose department houses NMFS) must be notified of any federally regulated projects—for example, logging in national forests, mining on the continental shelf, or development of coastal wetlands—that pose a threat to essential fish habitat. In each case, NMFS will make recommendations for minimizing habitat damage. Although NMFS cannot shut down projects, the Magnuson-Stevens Act gives it the authority to consult with agencies that have regulatory power, such as the Environmental Protection Agency. NMFS's new consultant role may well lead to court battles over the act's interpretation, predicts AFS marine affairs specialist Lee Benaka.

    Already, several forest industry groups have asserted that NMFS is misinterpreting the law. They worry about added red tape, as NMFS could have a say in how forestry is practiced as far inland as Idaho—part of the spawning habitat of Pacific salmon, which migrate from coastal rivers to the sea. “NMFS is trying to extend its jurisdiction into land management,” charges Greg Schildwachter of the Intermountain Forest Industry Association in Missoula, Montana. He contends that NMFS should have no jurisdiction inland because the Endangered Species Act, U.S. Forest Service policies, and other laws already give salmon ample protection. NMFS has not ignored its critics: In his letter to the councils last May, Schmitten asked them to include in their deliberations industries besides fishing that may be crimped by the essential habitat designations.

    Other observers are more concerned that NMFS lacks the staff and funding to fulfill its consulting duties. The agency now employs 50 people to handle more than 10,000 requests for consultation—“totally inadequate” resources, says AFS's Brouha. NMFS has requested $2.85 million in 1999 to deal with essential habitat issues, a hefty boost over this year's $2 million budget, but AFS is lobbying Congress to add much more. Contends Brouha, “Essential fish habitat needs to become a $50 million program in the next 3 or 4 years if it's to respond to the Magnuson-Stevens Act in a meaningful way.”

    Political and budgetary obstacles suggest that translating the act's good intentions into meaningful practice is a long way off. But many conservationists believe the new emphasis on habitat is at least a step in the right direction. “We've known that habitat questions are crucial to conservation on land,” says Norse. “Now we can no longer ignore them in the sea.”

  19. OCEANOGRAPHY

    Instruments Cast Fresh Eyes on the Sea

    1. Robert Irion
    1. Robert Irion is a science writer in Santa Cruz, California.

    From roving sensors to fixed offshore observatories, a raft of innovations is opening up new horizons in ocean research

    Monterey Bay, California—Below deck on the research vessel Point Lobos, pilot Stuart Stratton fires hydraulic thrusters to nudge Ventana, a tethered robot, within arm's length of a steep canyon wall some 1000 meters below the surface. Colleague Paul Tucker twists a joystick to guide Ventana's arm to the rock face—a jagged gray cliff on the shipboard monitor—then gently plucks a salami-sized seismometer from its rocky cleft to retrieve its data recorder. For an encore, the team inserts a new seismometer in the same borehole, near a hazardous fault that cuts offshore between Monterey and Santa Cruz.

    Scientists from the Monterey Bay Aquarium Research Institute (MBARI) perform such ballets nearly every day with the remotely operated vehicle (ROV) Ventana. The former oil-exploration craft has opened new research vistas within Monterey Canyon, where shallow coastal waters plunge into deep ocean. Ventana's sensors probe canyon features that were inaccessible just a decade ago: cold hydrocarbon seeps, a sea floor contorted from centuries of earthquakes, and bizarre creatures such as the tadpolelike larvaceans—filter feeders enshrouded by nets of mucus.

    But Ventana, MBARI researchers say, is just a foretaste of the coming era of robotic oceanography. Last March, the institute celebrated the first expedition of a $32 million tandem: the research vessel Western Flyer and its state-of-the-art ROV, Tiburon. The twin-hulled, 36-meter Western Flyer gives a more stable platform for deploying an ROV than the round-bottomed Point Lobos, which churns even the hardiest stomachs. And Tiburon's all-electric design, far quieter than Ventana's hydraulic systems, lets the robot “sneak up on things” with exquisite control at depths of up to 4000 meters, says marine geologist Marcia McNutt, president of MBARI. “We're getting to the point where there is little justification for manned submersibles,” McNutt says. “ROVs and autonomous underwater vehicles [AUVs] are the wave of the future in ocean science.”

    Roving recorders.

    Gliders can spend months in the open ocean, tracing circulation patterns.

    CLAYTON JONES, WRC

    Other oceanographic institutions around the world are catching that same wave. Fleets of robotic sensors drift in the oceans' midwaters, periodically bobbing up to beam their data to satellites. AUVs, set loose for hours to months at a time, crawl on the sea floor or skim through the water along preprogrammed paths. And fixed observatories monitor currents, salinity, and other tracers of the ebbs and flows of coastal ecosystems. Although some researchers say that federal funding falls short of the long-term buy-in that new oceanographic tools require, “the technology today unquestionably is driving the science,” says Michael Reeve of the National Science Foundation (NSF) in Arlington, Virginia. “There is incredible complexity and variability in the ocean that we have only recently begun to see.”

    Sea legs for new devices. To open that view, instrument designers are drawing on advances in areas such as microelectronics. New chip and sensor technologies have shrunk instrument packages while increasing their capacity to store huge amounts of data—creating, in McNutt's words, “high-powered brains with low-power requirements.” And materials such as Teflon-coated titanium, rugged glass spheres, and dense plastics have enabled engineers to devise better ways to cope with the sea's corrosiveness and crushing pressures. As a result, many oceanographers no longer need to wait a year to go out on a ship or schedule a costly manned submersible dive. The sea, while still daunting, is now much more accessible.

    Sea lion.

    The LEO-15 ocean observatory, ready to be lifted overboard off the New Jersey coast.

    WOODS HOLE OCEANOGRAPHIC INSTITUTION

    One futuristic vision, imagined in 1989 by the late oceanographer Henry Stommel of the Woods Hole Oceanographic Institution (WHOI) in Massachusetts, already is coming to pass. Stommel dreamed of a fleet of 1000 undersea craft that would wander the globe for years, steering with adjustable wings and ballast. The devices would upload data to satellites several times a day and receive new instructions. Many of those attributes now exist in drifting sensors called PALACEs (Profiling Autonomous Lagrangian Circulation Explorers), first deployed in 1990 by oceanographer Russ Davis of The Scripps Institution of Oceanography in La Jolla, California. Resembling small compressed-gas cylinders, PALACEs regularly record temperature and salinity in the upper 2000 meters of the ocean. They ascend and descend through the water on timed cycles, when pumps adjust the volume of oil between internal reservoirs and inflatable bladders.

    “These floats allow us to get a large number of [sensors] into the ocean relatively inexpensively,” says Davis. Hundreds of PALACEs are tracing circulation patterns in the Atlantic, Pacific, and Indian Oceans, relaying data to satellites when they pop to the surface. They also track basin-scale events, such as the annual sinking of cold water in the North Atlantic—part of a “conveyor belt” of heat exchange between the tropics and high latitudes (see story on p. 156). PALACEs have even revealed a new current in the Labrador Sea.

    Although today's floats drift with currents, teams led by Davis and two others—Charles Eriksen of the University of Washington, Seattle, and Douglas Webb of Webb Research Corp. in East Falmouth, Massachusetts—are designing steerable gliders for coastal and open-ocean research. Some of the gliders propel themselves by tapping energy from the temperature differential between shallow and deep waters. Scientists could send such devices to specific regions to monitor hydrothermal vents, plankton blooms, and other phenomena, thus fulfilling Stommel's prophecy a decade or two ahead of time.

    Gliders could roam the waters indefinitely, but the robotic subs called AUVs are already making shorter forays, exploring the sea floor and water column without the expense and potential danger of putting scientists into heavy submersibles. However, the going price of AUVs—about $50,000 for small units to more than 10 times that for fully loaded, Cadillac-class vehicles—means their inventors have hesitated to send them on unsupervised journeys where they might vanish or get damaged. “We're still like a father who has given his son his first car,” says WHOI engineer Chris von Alt. “We're willing to let them drive around the block but not much farther.”

    Cost is not the only factor holding back AUVs. Accurate navigation is the “Achilles heel” for today's vehicles, especially in deep water, says James Bellingham of the Massachusetts Institute of Technology, a leader in AUV development. “Full autonomy is the exciting research frontier, but we're not there yet,” he says. Docking systems, still being refined, would extend the range of AUVs by providing fresh charges of power away from the ship. Even so, the Odyssey IIb AUVs developed by Bellingham's group have collected data during 18 field deployments in the last several years, including a recent study of how cold surface waters mix with deeper water in the Labrador Sea (Science, 17 April, p. 375). This spring, the vehicles also measured how sound scatters off the Mediterranean sea floor—a U.S. Navy-funded project for locating mines and other buried objects.

    Oceanographers can also gather data using observatories that stay put and study what the currents bring. Beneath the swells some 9 kilometers off the coast of Tuckerton, New Jersey, is the Long-Term Ecosystem Observatory, also known as LEO-15. According to project director Fred Grassle of Rutgers University in New Brunswick, New Jersey, the remotely controlled observatory—a truncated stainless-steel pyramid that sits in water 15 meters deep—is the first of its kind. A cable from shore powers a suite of biological, chemical, and physical sensors, as well as a winch to ferry instruments up and down. Within a year, small AUVs called REMUS (remote environmental monitoring units), created at WHOI, will dock at LEO-15 and periodically scout the nearby sea floor. “We want to know everything about a 100-kilometer-square area of generic continental shelf,” Grassle says. “This observatory is doing the kind of research we haven't been able to do until now,” he adds, because moving platforms and randomly timed surveys miss fine-scale details.

    Deployed in August 1996, LEO-15 has tracked seasonal upwellings near the coast. Cold, nutrient-rich water rises during the summer every time sustained winds blow from the southwest and push warm surface water offshore. Gyres of upwelled nutrients form on the lee sides of sea-floor mounds; LEO-15 watches how and under what conditions these gyres trigger plankton blooms. During storms that stir this nutrient pot even more, LEO-15 continues to make observations—a valuable resource when the seas are too risky for boat surveys, Grassle says.

    Another pioneering nearshore observatory is Aquarius, an undersea lab in the Florida Keys National Marine Sanctuary. Teams of scientists spend up to a week inside the lab at a depth of 20 meters, studying nearby coral reefs. “Some of the most exciting research has come from staying in place and making long-term observations,” says G. Michael Purdy, director of NSF's Division of Ocean Sciences.

    Funding tide is turning. Despite the promising new instruments, a disquieting undercurrent is rippling through the oceanographic community. Researchers see a waning of long-term commitments by federal agencies to fund instrumentation, says Scripps Institution of Oceanography director Charles Kennel. A federal infusion of $24 million, announced in Monterey last month at the National Ocean Conference by Vice President Al Gore, will help a few projects (including LEO-15 and Aquarius) during the next several years but won't meet the field's larger needs, researchers at the conference agreed.

    Although oceans pose research challenges similar to those of space, Kennel maintains, federal agencies approach the two frontiers far differently. “At NASA, there is a close understanding of the relationship between technology development and the ability to do science in the future,” says Kennel, a former associate director at NASA. “I don't see a similar strategy in ocean technology.” Marine geophysicist Fred Spiess of Scripps, who has developed instrumentation since the early 1950s, goes further: There is “no question,” he contends, that funding is being squeezed to the point where ocean science suffers.

    “Funding ocean instrumentation is a really tough job,” says Mel Briscoe, director of the Processes and Prediction Division at the Office of Naval Research (ONR). “A normal 2- or 3-year proposal will barely get you started, and both the funder and the scientist have to be prepared for a long series of failures.” Of the agencies that fund most instrumentation development—ONR, NSF, and the National Oceanic and Atmospheric Administration—the Navy has traditionally been the most willing to provide long-term support, says MBARI's McNutt. ONR, Briscoe maintains, is “perhaps singular” in its willingness to fund risky proposals and to encourage technology transfer from military applications to the research community.

    Since the end of the Cold War, however, ONR's support for new instrumentation has dwindled, McNutt and others contend. Last year ONR spent $102 million on basic ocean science, compared to $98 million in 1990. “In inflated dollars, our budgets are smaller,” Briscoe acknowledges. As a result, he says, “I don't think we are seeing as much instrumentation development.” Briscoe adds that ONR's funding emphasis shifted away from the open ocean to the coastal zone as the Navy's priorities changed. Although ONR still supports AUV programs, nearshore observatories, and other select technologies, says Robert Gagosian, director of WHOI, researchers in fields such as acoustics and deep-sea science may feel left high and dry.

    Despite its own flat budget for instrumentation, NSF is trying to fill the gap, says H. Lawrence Clark, who directs the agency's Oceanographic Technology and Interdisciplinary Coordination Program. Clark's program provides $4.5 million a year—about one-third of the agency's budget for ocean technology—as dashes of “venture capital” for projects that might otherwise fall through the cracks. And MBARI thrives because of its founder, the late electronics pioneer David Packard, whose gifts of more than $200 million endowed big-ticket items such as the Western Flyer and Tiburon.

    The notion that private patronage is becoming more important in keeping ocean technology afloat makes Gagosian, for one, uncomfortable. “If we don't succeed in funding the instrumentation we need now with federal support, we will miss a window of opportunity unique to our generation,” he says. “We have learned the scientific questions to ask, but we may lose the technical experts. They'll leave oceanography and go where the challenges are.”

  20. OCEANOGRAPHY

    Sensing the Sea Without Breaking the Bank

    1. Robert Irion

    NASA may have pioneered the trend toward “faster, cheaper, better” with its Discovery space missions, but ocean scientists faced with budget cuts for new instrumentation (see main text) have adopted that mantra, too. Innovative tools at relatively low cost are surfacing throughout the United States, as researchers develop technology that their colleagues elsewhere can adapt and use.

    One example of oceanography on the cheap is the $5000 “OsmoAnalyzer,” a compact device that, from a buoy, measures concentrations of dissolved nitrate for up to 3 months. The instrument requires little power, as osmotic pressure pumps seawater droplets into a detection chamber every 15 minutes. Newer models will test for phosphate, iron, and other nutrients. “Everyone is clamoring for in situ chemical instrumentation, because samples change quickly when removed from their environment,” says OsmoAnalyzer developer Hans Jannasch of the Monterey Bay Aquarium Research Institute (MBARI). A companion tool, the “OsmoSampler,” slowly draws a year's worth of water samples into a single tube a millimeter wide and up to 2 kilometers long, allowing researchers to gauge chemical changes at one site over time without returning to collect many separate batches. Ultraslow collection rates prevent the samples from diffusing into each other.

    Also in the pennies-per-kilobyte category is a sensor for spotting harmful algae blooms in their earliest stages. MBARI molecular biologist Chris Scholin and colleagues at Saigene Corp. in Redmond, Washington, have devised a $7500 “dipstick” test that identifies RNA sequences unique to each toxic species. In a small water sample, detergents and heat break open cells. Fluorescent molecules latch onto particular phytoplankton RNA strands to identify killer species. Last May the probe spotted a nascent diatom bloom near Santa Cruz, California. The same diatom had killed hundreds of Monterey Bay seabirds in 1991; this time health officials were able to track the bloom down the coast, where it apparently killed birds and sickened sea lions. Although researchers still don't know what drives these blooms, says Scholin, “early warnings can go a long way toward mitigating potential problems.”

    Two new devices across the continent aren't quite as low budget, but they've opened research windows as effectively as high-priced ROVs. University of Maryland scientists use the $150,000 “ScanFish,” a towed, batlike fin crammed with instruments, to monitor the Chesapeake Bay's ecology (see p. 196). And a team at Johns Hopkins University and the University of Rhode Island (URI), Narragansett, has developed a submersible “holocamera” that uses holography to image all particles in a cylinder of water about the size of a can of spray paint. The camera yields precise three-dimensional positions and velocities for hundreds of thousands of particles. “Holography is the only tool that gives us both of those measures at scales from centimeters to microns,” says URI's Percy Donaghay. A better grasp of how plankton and other particles move in response to small-scale turbulence, he says, will help researchers understand the base of the sea's vast food web.

  21. ESTUARINE ECOLOGY

    Stirring Up the Chesapeake's Cradle of Life

    1. Jocelyn Kaiser

    Scientists are teasing out how the bay's currents and crannies create lavish nurseries for fish

    Some say the best beachcombing spot along the Chesapeake Bay lies at the tip of its Eastern Shore, just north of where the bay spills into the Atlantic Ocean. Now that corner of the Chesapeake is drawing notice for a reason other than the fishing lures and driftwood that litter the sand there. Last year, Raleigh Hood, a biological modeler at the University of Maryland's (UMD's) Horn Point lab in Cambridge, proposed that everything from flotsam to plankton should be pulled together, like soap scum in a draining bathtub, by a 16-kilometer-wide eddy near the bay's mouth. At first Hood's colleagues were skeptical—his prediction, after all, relied on an error-prone mathematical model. Then last year, Hood says, scientists began tracking scads of jellyfish and anchovy larvae “right smack dab in the middle” of that part of the bay.

    Hood and others now suspect that the eddy creates oases of tiny plants and animals that nourish fish and crustaceans. “It took some observations to wake us up,” says Bill Boicourt, a UMD physical oceanographer. The area has joined a list of probable ecological hot spots in the Chesapeake now being studied in a 6-year, $3 million National Science Foundation (NSF) project called Trophic Interactions in Estuarine Systems (TIES). The project is testing the idea that an estuary's physics—its bumps and crevices and complex flows of fresh and salt water—largely explain why fishery yields in the Chesapeake and other bays are so much higher than in lakes and the open ocean. “People talk about it all the time,” says estuarine ecologist John Day of Louisiana State University in Baton Rouge. “But careful documentation is lacking.”

    Mixing it up.

    At several spots in the Chesapeake Bay, swirling currents and unique topography create unusually rich ecosystems.

    SOURCE: UMD

    TIES has already changed how scientists view estuaries. So far, the group has documented the forces that sustain one of the bay's nutrient mixing areas, termed convergence zones, and even discovered the unexpected new zone near the bay's mouth. Figuring out why the Chesapeake teems with seafood may have a practical payoff, too. It could help agencies make better fishery-management decisions, such as where to dredge channels and how to clamp down on the surfeit of nitrogen and phosphorus running into the bay—nutrients that, at high levels, can fuel phytoplankton blooms that deplete the water's oxygen and kill fish.

    The project, part of NSF's Land Margin Ecosystems Research program (see sidebar), traces its roots to studies in the 1960s that hinted at the importance of mixing zones such as turbidity maxima, where fresh river water collides with tide-driven brackish water, whipping up sediments chock-full of zooplankton and detritus. “There's 10 times, 100 times the ambient concentration of food, and that's where you're able to outgrow the competitors,” UMD's Mike Roman says. But studies on turbidity maxima in the San Francisco Bay and the St. Lawrence River, for instance, focused mainly on the food pyramid's base, such as plankton and fish larvae, says microbial ecologist Tim Hollibaugh of the University of Georgia, Athens. TIES, launched in 1995, is moving the science up the food chain by exploring how local flow patterns caused by winds, tides, and topography affect higher trophic levels—from tiny crustaceans called copepods on up to striped bass and blue crabs. “TIES is taking on what we want to find out next,” says Hollibaugh.

    TIES is using high-tech muscle to achieve its aims. Researchers are mapping the bay's physics and biota using an array of data-gathering tools, including algae-tracking airplanes and fish-finding ships. One key instrument is the ScanFish, a flat, sensor-packed device that, lowered from a boat “like a yo-yo,” says project co-leader Walter Boynton of UMD, measures temperature, depth, salinity, turbidity, oxygen, and plankton abundance. On average, the ScanFish alone generates 480,000 data points per day during three cruises totaling 45 days a year.
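
    Taken at face value, those figures add up to a sizable annual haul of measurements. The quick Python calculation below is purely illustrative and assumes the quoted per-day average holds across all 45 cruise days.

        # ScanFish output, from the averages quoted above.
        points_per_day = 480_000        # average data points per cruise day
        cruise_days_per_year = 45       # three cruises totaling 45 days

        points_per_year = points_per_day * cruise_days_per_year
        print(f"roughly {points_per_year / 1e6:.1f} million data points per year")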

    Halfway through its term, the project has already yielded insights into how organisms benefit from the Chesapeake's plumbing. The team has found, for example, that light penetrates only a few centimeters into the turbid water of a roughly 50-square-kilometer area below the Susquehanna River. Despite such hostile conditions for plant life, this turbidity maximum supports a thriving food web, Boynton says, because currents carry in phytoplankton and other foods for higher organisms. “There were outrageous numbers of larger zooplankton and fish” in this zone in 1996, he says. Such findings carry “important news” for the U.S. Army Corps of Engineers and other agencies that groom the bay for navigation, says UMD fisheries biologist Ed Houde: Dredging near such zones could alter salt levels or topography and end up harming commercial species such as striped bass.

    TIES researchers are studying other types of convergence zones, including “persistent lateral fronts” in the southern bay—essentially short-lived turbidity maxima that form and dissipate as the tides bring cold and warm, or salty and fresh, water masses together. They've also unleashed ScanFish on a region in the midbay called the “hydraulic control point,” a turbulent area where the shallow bay suddenly plunges about 15 meters. Its violent currents, UMD's Boicourt says, should stir up pulses of nutrients and small organisms that spur plankton and fish growth. The team plans to set out by boat later this month to sample plankton and fish abundance there.

    Although much of the TIES effort is aimed at sharpening a murky view of the bay whose broad outlines are well known, the team has come up with a few surprises-including the convergence zone in the lower bay where the beachcombers like to roam. Hood realized that nutrients might accumulate there after running computer simulations of how particles are washed through the bay. Verifying that this nutrient-rich zone supports a thriving community of fish and other organisms will likely require more passes with the ScanFish and other instruments, says Hood, whose group reports on the zone in a paper submitted to the Journal of Geophysical Research.

    Experts say they're impressed by TIES so far. Findings on how fish larvae exploit turbidity maxima to hide from predators and eat in peace “caught me by surprise,” says University of Rhode Island, Narragansett, estuarine ecologist Candace Oviatt. “It was completely new to me.” Others say the jury is still out on whether the Chesapeake's physics accounts for the bay's booming wildlife. Still, “the detail of spatial variability they're getting is unprecedented,” says ecologist Wim Kimmerer of San Francisco State University. “This is interesting stuff, well done, and it's going to produce a lot of good results.”

  22. ESTUARINE ECOLOGY

    Bringing Ocean 'Fringe' Research Into the Mainstream

    1. Jocelyn Kaiser

    Broad coastal studies such as the effort to map links between biology and physics in the Chesapeake Bay (see main text) are all too rare. Topping a list of hurdles impeding ambitious projects, experts say, is that much of the federal spending on coastal research—more than $200 million a year—gets frittered away on rote data collection. Congressionally mandated monitoring programs churn out statistics on everything from water oxygen levels to fish population size that tend to gather dust on agency shelves, says ecologist John Hobbie of the Marine Biological Laboratory in Woods Hole, Massachusetts. Very little money, he contends, goes for “anything that results in a publication.”

    That's bad news in light of the ecological threats menacing coastal waters. “We've learned in the last decade how important [coastal ecology] is to us,” says Jim Cloern of the U.S. Geological Survey in Menlo Park, California, citing everything from dead zones in the Gulf of Mexico that suffocate marine life (see p. 190) to the possibility of vanishing coastlines if sea levels were to rise as a result of global warming. Potential perils such as these are underscored by the numbers of people living near coasts: According to an estimate by Harvard University's Andrew Mellinger, 38% of the U.S. population lived within 100 kilometers of a sea coastline in 1994.

    Only marginal value?

    Plum Island Sound is the sole land-margin ecosystems research site added to NSF's LTER network so far.

    LMER COORDINATING OFFICE

    Although officials acknowledge that coastal research dollars must be spent more wisely, a federal drive to instill science into monitoring programs has been halting at best. A White House-organized interagency project begun 3 years ago to coordinate ecological monitoring—including coastal studies—across agencies to develop a snapshot of the country's environmental health hasn't gotten off the ground, asserts marine ecologist Robert Huggett, research vice president at Michigan State University in East Lansing. Huggett calls the floundering effort “the greatest disappointment” of his recent 3-year stint as science chief at the Environmental Protection Agency.

    Some agencies are forging ahead on their own to give monitoring a better scientific underpinning. Earlier this year, the National Oceanic and Atmospheric Administration (NOAA) set up a committee of outside scientists that will be “directly involved in designing and carrying out” the coastal component of the agency's U.S. Global Ocean Observing System, a program designed to beef up data collection from satellites, buoys, and boats, says Don Scavia, senior scientist at NOAA's National Ocean Service. The point, he says, is to convince both academics and staffers that “monitoring is another form of research. We're trying to change the perspective.”

    One of the few programs to which experts give high marks for integrating monitoring and research is NSF's Land Margin Ecosystems Research (LMER) network, which for the past decade has funded multiyear projects (this year at Chesapeake Bay and three other sites). But NSF is now phasing out the LMERs, and the program tasked with continuing this sort of research—the Long-Term Ecological Research (LTER) network—has been slow to pick up the slack. In a coastal competition last year, only one proposal—from Plum Island Sound, Massachusetts, an LMER—made the cut to become an LTER. Although NSF plans to hold another competition this year, some worry that the end result could be fewer U.S. land-margin projects. And that would be another setback for a discipline that, in Hobbie's view, languishes as “almost an immature science.”
