News this Week

Science, 13 Feb 1998: Vol. 279, Issue 5353, p. 981
  1. COSMOLOGY

    The Universe Shows Its Age

    Andrew Watson
    Andrew Watson is a writer in Norwich, U.K.

    A cosmic embarrassment is fading. By some new measures, the oldest stars no longer appear to be older than the universe as a whole.

    Four years ago, a nagging problem in cosmology looked set to erupt into a full-scale crisis. A team of astronomers led by Wendy Freedman of the Carnegie Observatories in Pasadena, California, published a long-awaited measurement of the universe's expansion rate, determined from Hubble Space Telescope (HST) observations of pulsating stars in a far-off cluster of galaxies. The result unnerved astronomers. The measured expansion rate was so fast that it implied that the universe has been slowing down for a mere 8 billion years since the big bang. Some earlier measurements of cosmic expansion had already pointed to worrisomely young ages for the universe, but this new result made it billions of years younger than its oldest stars appeared to be.

    Cosmic yardsticks.

    The Hubble Space Telescope picks out Cepheid variable stars in galaxy M100 (top) in the Virgo cluster to fix its distance, while pulsating RR Lyrae stars help give a fix on the globular cluster M15 (bottom).

    HST/NASA

    The crisis intensified the next year, when Craig Hogan of the University of Washington, Seattle, and Michael Bolte of the Lick Observatory in Santa Cruz, California, published a careful study of the nests of old stars called globular clusters, which reconfirmed earlier age estimates of about 16 billion years. The universe, it seemed, was just half the age of its oldest inhabitants. Something appeared to be drastically wrong with the observations, or with cosmologists' basic picture of the universe.

    The discrepancy spurred a burst of activity on both sides of the age divide. Now, 3 years on, the crisis is abating. Improved theoretical models of stars and new, highly accurate data from the European Space Agency's Hipparcos star-mapping satellite have wiped billions of years off the ages of globular clusters, pushing them down to perhaps 12 billion years. And further Hubble observations, together with new techniques for measuring cosmic distances, have nudged the expansion age upward, with some figures now approaching 12 billion years as well.


    “For the first time, what we are seeing are many different methods converging, and their error bars are overlapping,” says Freedman. “Things have changed a lot in the last 6 months,” says astrophysicist Brian Chaboyer of the Steward Observatory in Tucson, Arizona. What's more, a potential escape from the conflict has emerged: new indications that the overall mass density of the universe is much lower than many theorists had expected. If those results hold up, the age conflict could simply evaporate, because a lighter universe would be substantially older for a given expansion rate.

    Although consensus is not a word in common use in the age debate, there is definitely talk of convergence. Chaboyer is bullish: “I now believe that the oldest stars are younger than the age of the universe, and that no crisis exists in cosmology regarding stellar ages.” Others are optimistic but do not think the fight is over yet. “It is still true, as it was then, that some cosmological models predict an age [for the universe that] is too short,” says Hogan. “The degree of the conflict for these models is not as bad as it was, but is still there.”

    The trouble with Hubble

    It is still too early to speak of a resolution, after all, when the various groups measuring the cosmic expansion rate still do not agree among themselves. The figure in dispute is known as the Hubble constant, which is the expansion rate measured in kilometers per second per megaparsec of distance (3.26 million light-years). To determine the Hubble constant, astronomers divide the speed at which the expansion is carrying a distant star away from Earth by the star's distance. The recession speed is easy to measure from the degree to which the distant object's light is redshifted—displaced toward the red end of the spectrum. The tough part is the distance.
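
    To make these numbers concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the textbook relation for a flat, matter-dominated universe, in which the expansion age is two-thirds of 1/H0; the unit conversions are standard.

        # Expansion age from a Hubble constant, assuming a flat,
        # matter-dominated universe (age = (2/3) / H0).
        KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
        SECONDS_PER_GYR = 3.156e16   # seconds in a billion years

        def expansion_age_gyr(hubble_constant):
            """Age in billions of years, for H0 in km/s per Mpc."""
            hubble_time_s = KM_PER_MPC / hubble_constant  # 1/H0 in seconds
            return (2.0 / 3.0) * hubble_time_s / SECONDS_PER_GYR

        print(expansion_age_gyr(80))  # ~8.1 Gyr: the 1994 result's "mere 8 billion years"
        print(expansion_age_gyr(73))  # ~8.9 Gyr: the team's current best estimate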

    Astronomers estimate distance by comparing the apparent brightness of the star with its true brightness. Judging a star's true brightness is tricky, too, but a set of unusual stars called Cepheid variables has seemed to offer an answer. Instabilities in the structure of these stars cause them to flicker in a regular way, and the period of the flickering is related to the star's true brightness. Knowing a Cepheid's apparent brightness, its flicker rate, and its redshift, astronomers in principle have all they need to measure the Hubble constant.
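
    In outline, the distance step uses the standard "distance modulus" relation between apparent magnitude m and absolute magnitude M. A minimal sketch, with an invented Cepheid whose period-luminosity relation is assumed to give M = -4.5 (an illustrative value, not any team's actual calibration):

        def cepheid_distance_mpc(apparent_mag, absolute_mag):
            """Distance from the distance modulus m - M = 5 * log10(d / 10 pc)."""
            d_parsecs = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
            return d_parsecs / 1e6

        # Hypothetical star: its flicker period implies M = -4.5; observed at m = 26.
        d = cepheid_distance_mpc(26.0, -4.5)
        print(d)  # ~12.6 Mpc; dividing a recession speed by d then yields H0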

    A further problem is that stars in our own galaxy or even nearby ones cannot give a true reading of the Hubble constant, because the gravitational pull of surrounding stars and galaxies generates motions that are hard to disentangle from cosmic expansion. “One of the primary motivations for building the [HST],” says Freedman, “was to allow the discovery of Cepheids out to a distance of the Virgo cluster,” a cluster of galaxies about 50 million light-years from Earth. Her team's first analysis of these Cepheids yielded the 1994 paper, with its high Hubble constant of 80 ± 17.

    Since then, Freedman's 27-strong team has continued to gather data, and they have moderated their claims slightly. “We have now measured distances to about a dozen galaxies,” says Freedman. The team's current best estimate, based on their Cepheid measurements tied to more distant cosmological rulers, puts the Hubble constant at 73 ± 11.

    That figure could still imply a disturbingly young universe, but other groups have marshaled HST data to argue that the universe is expanding much more slowly, which would make it older. Allan Sandage of the Carnegie Observatories and his collaborators at the Space Telescope Science Institute in Baltimore and at the University of Basel in Switzerland have combined Cepheids with another kind of beacon, the exploding white dwarf stars called type Ia supernovae. By observing Cepheids with the HST, Sandage and his collaborators were able to determine the distance to six galaxies, containing seven supernovae, including one in the Virgo cluster. From records of the supernovae's brightnesses as seen from Earth, the group could then determine their absolute brightness.

    “The Hubble Space Telescope is required because type Ia supernovae are rare, and none is near enough that the Cepheids in the same galaxy could be observed from the ground,” says Gustav Tammann of Basel. All seven supernovae had very close to the same maximum brightness, implying that supernovae of this kind can serve as “standard candles”—objects whose distance can be inferred from their apparent brightness alone. Sandage's team went on to analyze about 30 other supernovae in galaxies far beyond the Virgo cluster, out of reach of the Cepheid yardstick. These supernovae gave an averaged value for the Hubble constant, published last year, of 58 +7/-8. “This is the most direct and most secure way to determine [the Hubble constant],” says Tammann.

    The gulf between Freedman's result and Sandage and Tammann's may seem wide, but other methods are emerging that could break the impasse. For example, supernovae of a different cast, type II, can also serve as cosmic yardsticks. These explosions mark the death of giant stars when they collapse into neutron stars, hurling their outer layer of hydrogen and helium out into space in the process. They “glow like giant light bulbs,” says Brian Schmidt of the Mount Stromlo and Siding Spring Observatories in Australia. Astronomers can measure the speed at which this envelope flies outward, and by cranking those observations into theoretical models, they can determine the absolute brightness of a type II explosion, and hence the all-important distance. Schmidt's recent best estimate of the Hubble constant with this “expanding photosphere” technique is 73 ± 7.

    The optical illusions called gravitational lenses could also clinch a value for the Hubble constant. When light from a distant object passes near a massive galaxy or cluster of galaxies on its way to Earth, gravity can bend the light so that it follows several different paths. The result, as seen from Earth, is multiple images of the original object—and a celestial geometry that allows astronomers to infer the absolute distance of the object.

    Tomislav Kundic of the California Institute of Technology in Pasadena, for example, has studied a lensing system in which a large elliptical galaxy in the center of a galaxy cluster creates a double image of a quasar beyond. “Because the light travel time is different along the two paths, variations in quasar brightness first appear in one of the images, and then, after a time delay, repeat in the second image,” he says. This translates, via simple geometry and a model of the gravitational lens, into a Hubble constant of 64 ± 13, a result Kundic published last year. Emilio Falco of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, studied a different system to come up with a matching value, 62 ± 7.
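
    The final step of the lens method rests on a simple scaling: every distance in the lens geometry is proportional to 1/H0, so a lens model's predicted time delay is too. A sketch of that rescaling; the delay values below are illustrative, not the numbers from either published analysis.

        def hubble_from_delay(observed_days, model_days, h0_fiducial=100.0):
            """Delays scale as 1/H0: compare the observed delay with the delay
            a lens model predicts when built with a fiducial Hubble constant."""
            return h0_fiducial * model_days / observed_days

        # Illustrative numbers: a model assuming H0 = 100 predicts a 266-day
        # delay; 417 days are observed between the two quasar images.
        print(hubble_from_delay(417.0, 266.0))  # ~64 km/s per Mpc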

    Clusters getting younger

    If all these techniques do converge—as some astronomers think they might—on a Hubble constant of between 60 and 70, the age problem would still be acute, unless the oldest stars give ground as well. And that is just what has happened in the last year. The sticking point had been globular clusters—containing what are, by common consent, some of the oldest stars associated with our galaxy. Roughly spherical groupings of about a million stars, they lie above and below the galactic disk.

    To calculate the age of a cluster, astronomers mark the positions of all its stars on a chart of brightness versus temperature. All the stars in the cluster are roughly the same age but have a variety of masses. “Higher mass stars live fast and die young,” says Chaboyer; hence, they take up positions in the hot, bright corner of the chart. Most of the rest of the stars form a diagonal line across the chart, from hot and bright stars to cool and dim. As the cluster ages, the large bright stars are the first to exhaust their primary nuclear fuel of hydrogen, and their surfaces begin to cool. Hence they begin to move away from the hot, bright corner, and the diagonal line develops a kink, called the turnoff point. Then slightly less massive stars also begin to run out of hydrogen and cool, and the kink moves down the line. The position of the kink on a brightness-temperature plot signals a cluster's age, but to determine it, astronomers need to know the stars' true brightness—and hence their distance. Therefore, Cepheids and other variable stars have again been their standard yardsticks.
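
    A rough scaling argument shows why the kink's position tracks age. A star's fuel supply grows with its mass M, while its luminosity grows much faster (roughly as M to the 3.5 power), so main-sequence lifetime falls steeply with mass. A sketch using that textbook scaling, not the detailed stellar models these groups actually fit:

        # Main-sequence lifetime: t ~ 10 Gyr * (M / Msun)**-2.5,
        # from fuel ~ M and luminosity ~ M**3.5. Stars above the
        # turnoff mass have already left the diagonal line.
        def turnoff_mass_solar(cluster_age_gyr):
            return (cluster_age_gyr / 10.0) ** (-1.0 / 2.5)

        print(turnoff_mass_solar(16.0))  # ~0.83 solar masses for a 16-Gyr cluster
        print(turnoff_mass_solar(12.0))  # ~0.93 solar masses for a 12-Gyr cluster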

    Up until the mid-1990s, these kinds of studies had convinced most astronomers that globular clusters were between 16 billion and 18 billion years old, according to Francesca D'Antona of the Rome Astronomical Observatory. The situation was little improved by Hogan and Bolte's 1995 paper, putting them just short of 16 billion years, or by an almost identical result published the next year by Don Vandenberg of the University of Victoria in Canada. A few astronomers, including D'Antona and her colleagues, have recently begun holding out for younger ages based on a reappraisal of globular cluster theory. But their arguments would have remained on the fringe had it not been for the Hipparcos satellite.

    Hipparcos mapped the positions of 120,000 stars 100 times more accurately than ever before, yielding data that were released to astronomers last June (Science, 21 February 1997, p. 1064). The Hipparcos results enabled Michael Feast of the University of Cape Town in South Africa and Robin Catchpole of the Royal Greenwich Observatory in Cambridge, U.K., to measure the distance to nearby Cepheid variable stars by parallax: tracking changes in the apparent position of a star relative to the background carpet of stars as Earth moves in its orbit around the sun. The farther off the star, the less it will appear to move. This distance measurement, which is independent of the brightness-flicker relationship, showed that the Cepheids in the Large Magellanic Cloud are about 10% farther away than was previously thought, according to Feast. In other words, Cepheids in general are brighter—and hence farther away—than astronomers had realized.
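
    The geometry reduces to a reciprocal relation: a star's distance in parsecs is one over its parallax in arcseconds. A sketch showing how a modest parallax revision propagates into brightness (the parallax values are illustrative, not Hipparcos measurements):

        def distance_pc(parallax_arcsec):
            """Distance in parsecs is the reciprocal of parallax in arcseconds."""
            return 1.0 / parallax_arcsec

        d_old = distance_pc(0.0022)  # pre-revision parallax (illustrative)
        d_new = distance_pc(0.0020)  # a ~10% smaller measured parallax

        print(d_new / d_old)         # 1.10: the star is 10% farther away...
        print((d_new / d_old) ** 2)  # 1.21: ...so, at fixed apparent brightness,
                                     # it must be ~21% intrinsically brighter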

    The ripples from Hipparcos are now spreading through the age debate. The result pushes up the distances to globular clusters, which means their stars' intrinsic brightnesses must be greater, and shifts the brightness-temperature charts toward younger ages. “My current best estimate for the age of the oldest globular clusters is now 11.5 ± 1.3 billion years,” says Chaboyer—a dramatic downward revision. The new Cepheid scale also affects measurements of the Hubble constant, says Feast. Of Freedman's value for the Hubble constant of 73, Feast says, “I would bring that down to 66 with my Cepheid scale.” The revision does not blunt the conflict between Freedman's result and Sandage's, however, because Sandage's constant also comes down—to 54 or less.

    Bruce Peterson, also at Mount Stromlo, and his colleagues on the MACHO Project have found that Hipparcos data support a revision in another distance scale, this one based on a different set of pulsating stars called RR Lyrae stars. They have been studying RR Lyrae stars in the Large Magellanic Cloud, relying on a quirk in pulsations of some of the stars that allows the team to tie down the actual star brightnesses very accurately. When compared with the observed brightnesses, this yields an accurate distance for the Large Magellanic Cloud that matches the new Cepheid distance. Applying the same calibration scheme to RR Lyraes in the globular cluster M15 “reduces the globular cluster ages by about 30%,” says Peterson. His best estimate of the cluster age is 12.6 ± 1.5 billion years.

    One dissenting voice comes from John Fernley, of Britain's University of Sussex, and colleagues, who used Hipparcos parallax measurements to check RR Lyrae distances and found that the traditional distance scale held up well. But the lower cluster ages are consistent with another set of stellar ages, from the ancient stars called white dwarfs, which Terry Oswalt of the Florida Institute of Technology in Melbourne describes as “basically the ‘dead’ cores of stars.” Oswalt explains that white dwarfs “are slowly cooling. They shine only because they were initially very hot, and the cooling process takes billions of years.”

    Because of their faintness, astronomers can only see white dwarfs in our galactic neighborhood, but in that area they see none cooler than about 4000 kelvin. The implication of this abrupt cutoff is that even the oldest white dwarfs have not yet had time to chill out completely. Based on the cutoff and the estimated cooling rate, Oswalt and his colleagues will soon publish a best guess for the age of the galactic disk of 9.6 billion years. Add to that figure 2 billion years for the galaxy to collapse from the big bang and the disk to form, and “we get an absolute lower limit to the age of the entire universe of about 11 billion to 12 billion years,” he says. A new white dwarf age result from Sandy Leggett of the Joint Astronomy Center in Hilo, Hawaii, and his colleagues, to appear in April's Astrophysical Journal, puts the age of the oldest dwarfs at a younger 8 ± 1.5 billion years.

    Open-and-shut case

    With this new batch of ages for the oldest stars, the battle lines in the age debate have shifted. If not for high Hubble constant readings like Freedman's, astronomers could be forgiven for heading to a betting shop and putting a sizable bet on 12 billion years as the age of the universe. But one surprise factor could make the entire debate moot. The young expansion ages are all based on the assumption that the universe is “flat”—that it contains just enough mass to prevent it from expanding forever. Many recent observations, however, indicate that the universe may actually be “open,” its mass density low enough that it will expand forever rather than stopping or even collapsing again (Science, 4 April 1997, p. 37; 31 October 1997, p. 799; 21 November 1997, p. 1402).

    The lower the mass of the universe, the less gravitational pull there is to slow its expansion: For an open universe, a given Hubble constant implies an older universe. A 12-billion-year open universe could easily have a Hubble constant as high as the one Freedman measured. “If it's as high as 73, it makes it look more like an open universe,” says Feast. Whatever vintage the universe is, it now looks certain to last long enough for astronomers to figure out its true age.
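
    For a matter-only universe, the age follows from a closed-form expression in the density parameter Omega (Omega = 1 is the flat case; smaller values mean an open universe). A sketch of that calculation; the Omega values below are assumptions chosen for illustration, not measurements quoted here.

        import math

        def age_gyr(hubble_constant, omega):
            """Age of a matter-only universe; omega < 1 means 'open'."""
            hubble_time_gyr = 978.0 / hubble_constant  # 1/H0 in Gyr
            if omega == 1.0:
                factor = 2.0 / 3.0
            else:
                x = 1.0 - omega
                factor = (1.0 / x
                          - omega / (2.0 * x ** 1.5) * math.acosh(2.0 / omega - 1.0))
            return factor * hubble_time_gyr

        print(age_gyr(73.0, 1.0))  # ~8.9 Gyr: flat universe, uncomfortably young
        print(age_gyr(73.0, 0.1))  # ~12.0 Gyr: open universe, matching stellar ages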

  2. INFECTIOUS DISEASES

    A Method in Ebola's Madness

    Ingrid Wickelgren

    It's a demon suitable for a horror flick—a quick and gruesome killer. When the Ebola virus struck Zaire 3 years ago, it felled more than 160 people with symptoms that included raging fevers and widespread hemorrhaging—even from the eyes. From 50% to 90% of those infected in that outbreak and others died within 2 weeks, typically from shock. Now, researchers have a new clue about just what makes the Ebola virus so dangerous.

    Demon virus.

    Researchers are getting a handle on Ebola virus's high pathogenicity.

    ANTHONY SANCHEZ

    On page 1034, a team led by molecular virologist Gary Nabel of the University of Michigan Medical Center in Ann Arbor reports results suggesting that the virus uses different versions of the same glycoprotein—a protein with sugar groups attached—to wage a two-pronged attack on the body. One glycoprotein, secreted by the virus, seems to paralyze the inflammatory response that should fight it off, while the other, which stays bound to Ebola, homes in on the endothelial cells lining the blood vessels, helping the virus infect and damage them. “It's a remarkable paper,” says immunologist Barry Bloom of Albert Einstein College of Medicine in New York City. It shows that these glycoproteins “can account for the two major aspects of the disease—failure of the immune response to kill the virus and damage to endothelial cells.”

    If confirmed in infected animals and humans, the findings suggest that these glycoproteins could be targets for anti-Ebola vaccines as well as for drugs that treat Ebola infections. And, in an ironic twist, some of this work could yield a new way to treat common ailments such as heart disease and cancer with gene therapies. The Ebola glycoprotein that homes in on endothelial cells could be attached to a harmless viral vehicle that delivers therapeutic genes to these cells, either spurring the growth of new blood vessels that bypass blocked coronary arteries or closing down the blood vessels that feed tumors.

    Ever since Ebola was isolated 22 years ago, after the first outbreaks in Zaire and Sudan, virologists have sought the molecular weapons it uses to produce its deadly hemorrhagic fever. In 1979, Michael Kiley and his colleagues at what is now the Centers for Disease Control and Prevention (CDC) in Atlanta found a clue when they plucked from the viral surface a glycoprotein that looks like a molecular tool for gaining entry into animal cells. But the cellular targets of this molecule remained unknown.

    To try to pin down this protein's role, the Michigan team, in collaboration with Anthony Sanchez at the CDC, induced cells to make an avian retrovirus containing the Ebola glycoprotein. They then used fluorescent antibodies that bind specifically to the glycoprotein to trace the virus's interactions with various kinds of cultured cells. The glycoprotein, they found, preferentially binds to human endothelial cells, allowing the retrovirus, which would not normally infect human cells, to enter them. Presumably, this protein also helps the Ebola virus infect endothelial cells, making them fragile and leading to hemorrhaging.

    Antibodies also helped the group work out the role of the secreted version of the glycoprotein. Sanchez's team had discovered this protein in the late 1980s in the blood of infected patients. It is a truncated version of the protein found on the virus, and researchers had thought that this similarity might allow the secreted protein to serve as a decoy, sopping up immune cells and antibodies that might otherwise attack the membrane glycoprotein on the virus.

    But the Nabel team found that the secreted glycoprotein attaches not to the immune cells that might specifically attack the virus, but to neutrophils, which trigger inflammation, an early general assault in which scavenger cells clear the body of foreign bodies. These results suggest, says Nabel, that rather than serving as a decoy, the secreted glycoprotein actively blocks an inflammatory response that might otherwise stamp out the virus. “It's as if the virus is throwing darts at the neutrophil,” he says.

    No one has shown that the Ebola virus proteins behave the same way in animals as they do in cell cultures, although experiments to find out are under way in monkeys. And even if the cell-culture results hold up, nagging questions may remain. One strain of Ebola, for instance, kills with unusually severe symptoms but makes relatively little soluble glycoprotein, raising doubts that it is always critical for causing the disease. Another unknown is the identity of the endothelial cell receptor to which the membrane glycoprotein binds—information crucial to devising therapies that might block Ebola's binding to these cells.

    But even as researchers work to answer those questions, Nabel's team is already exploring another possibility: adding the Ebola virus membrane glycoprotein to a harmless virus that could then carry therapeutic genes specifically to endothelial cells. With such a tool, notes Bloom, doctors might someday treat cardiovascular disease.

    Gene-carrying viruses equipped with the Ebola glycoprotein might be used, for example, to deliver growth-factor genes that could trigger the growth of new blood vessels to circumvent damaged ones. Because cardiovascular disease alone afflicts hundreds of millions of people worldwide, that would be an astounding achievement, and it could even give a lift to Ebola's macabre reputation. Says Bloom: “As I read the paper, I started cheering.”

  3. ECOLOGY

    Of Mice and Moths--and Lyme Disease?

    Jocelyn Kaiser

    Charles Darwin once speculated that English cat lovers might unwittingly be setting off an ecological domino chain that leads to prettier gardens. Cats eat the mice that normally pillage the nests of bumblebees, so Darwin reasoned that more cats would mean more bees—and more of the red clover and purple-and-gold pansies, called heartsease, that the bees pollinate. “It is quite credible,” Darwin playfully digressed in his 1859 treatise, The Origin of Species, “that the presence of a feline animal in large numbers in a district might determine … the frequency of certain flowers in that district!”

    Sic 'em.

    More mice munch more gypsy moth pupae (left) but may mean more Lyme disease.

    IES

    Neither Darwin nor anybody else apparently tested this idea, but ecologists have now unraveled an equally intriguing, albeit less picturesque, skein of interactions that may govern upsurges of Lyme disease and tree-ravaging gypsy moths. In a 3-year study described on page 1023, a team led by Clive Jones and Richard Ostfeld of the Institute of Ecosystem Studies (IES) in Millbrook, New York, traced the links between several forest species to show that bumper crops of acorns lead to an explosion of mice. The mice in turn protect the oak trees by eating gypsy moths, but they also host ticks that can spread Lyme disease, a sometimes disabling human infection.

    To tease out these links, Jones and Ostfeld had to manipulate large forest patches by trapping mice or adding acorns—an effort other ecologists are applauding. “It's a wonderful example of how perturbing the system produces results that you wouldn't have expected to be there, unless you'd done the experiments,” says Princeton University ecologist Andrew Dobson. But some epidemiologists say too many other factors determine Lyme disease outbreaks for the work to have much predictive value for now. “People are talking about the acorn connection with Lyme disease risk, and it's not established,” cautions Yale Lyme disease expert Durland Fish.

    Jones and Ostfeld's team, collaborating with Jerry Wolf of Oregon State University in Corvallis, initially set out to learn what controls populations of gypsy moths, a European invader that plunders eastern U.S. forests every decade or so. The researchers knew that white-footed mice are important predators of gypsy moth pupae. The mice also eat acorns, and their population booms after “masting”—the term for an abundant acorn season that occurs naturally every 2 to 5 years. To the ecologists, it seemed plausible that masting would check moth populations, which would take off only a few years after mouse populations crashed.

    Jones's team tested this idea in upstate New York in the summer of 1995, 1 year after a masting, when mice were abundant. They removed most mice from three unfenced 2.7-hectare forest patches. Next, they compared the survival rate of moth pupae in the study areas with that in control plots. They found about 45 times more surviving pupae and moth egg masses in the mouse-depleted plots than in the controls. Using freeze-dried pupae attached to wax, which picked up tooth marks, they were able to confirm that mice were eating the pupae in the control plots.

    After firming up that link, the researchers simulated a masting. With help from local Girl Scouts, they spread 3500 kilograms (nearly 4 tons) of acorns on their experimental plots. Mouse populations skyrocketed. “Our data show that the key trigger [for moth outbreaks] is this relationship between the acorns and the mice,” Jones says.

    Along the way, the researchers kept an eye on ticks, because white-footed mice are a reservoir of the Lyme disease spirochete, which they transmit to tick larvae. More mice wouldn't necessarily mean more ticks: Ticks of reproductive age live on deer, not mice. But the summer after the masting, the team found far more tick larvae—an eightfold rise—in the acorn-rich plots compared to other plots. The acorns had apparently attracted tick-bearing deer and boosted mouse numbers as well, Jones says. And the adult ticks, in turn, had spawned more offspring, which infested more mice: The mice in the acorn-rich plots bore 40% more tick larvae than those in other plots.

    More acorns, more mice, more deer, more ticks: It adds up to a larger Lyme disease risk, the researchers argue. “It suggests that you may be able to warn people when the risk of Lyme disease is high,” Ostfeld says. But the study also highlights the challenge of managing ecosystems, because in this case trying to cut down on Lyme disease by, say, chemically suppressing acorn production could send gypsy moth numbers soaring. “Once we start tinkering with nature, we could get in a wonderful mess,” Dobson says.

    Fish and other epidemiologists, however, interpret the study more cautiously. They point out that a high larval tick count the summer after a masting may not necessarily mean more infected juvenile ticks a year later. Indeed, Jones's group members didn't measure infection rates on their plots last summer. “The question is still open,” says Joseph Piesman, who heads the Lyme disease vector branch at the Centers for Disease Control and Prevention in Fort Collins, Colorado. Many other factors, such as rainfall and competing parasites, also affect the abundance of ticks carrying the Lyme disease spirochete, Piesman says. So nailing down any acorn link, he adds, may take at least a decade of observing mastings and tick outbreaks.

    Most experts agree, however, that the work underscores ecology's importance in studying vector-borne diseases. “The entire genetic sequence of the organism wouldn't tell you this,” Dobson says.

  4. SEISMOLOGY

    A Slow Start for Earthquakes

    Richard A. Kerr

    If seismologists had their way, every earthquake would have a prelude—days or weeks of preparations along the fault that was about to break. Quake prediction would then be a matter of watching for the right signals. Theoretical and lab studies have suggested that faults should give off such warning signs as they edge toward rupture, but no one has yet found them. Now, researchers using a seemingly roundabout method—testing for the effects of tides on quake timing—offer the strongest evidence yet that some faults do start to slip, rapidly concentrating stress, for hours or days before the full-blown rupture. “The good news is that something must be happening before earthquakes,” says seismologist Thomas Heaton of the California Institute of Technology in Pasadena. However, there's no guarantee of successful prediction, he cautions. “The bad news is that it may be so small it's useless.”

    Tidal test-bed.

    Earth tides don't affect the timing of small quakes on the San Andreas fault, buried beneath these hills near Parkfield.

    R. A. KERR

    Seismologist John Vidale of the University of California, Los Angeles, and his colleagues coaxed this bit of good news from the timing of more than 13,000 small to moderate quakes on California's San Andreas fault, near the town of Parkfield, and on one of the great fault's branches, the Calaveras. The ultimate driver of earthquakes on these faults is the slow march of the tectonic plates to either side, which continually adds stress at about 0.1 millibar per hour, building over the years toward the 1 bar to 100 bars needed to rupture a fault. But the gravitational tugs of the moon and sun, which raise tides in the earth just as they do in the ocean, also vary the stress—much faster than tectonics does. As the tides wax and wane, they alternately increase and relieve stress on faults at a rate of several millibars per hour.

    If the steady buildup of stress from tectonics and the rapid variations from tides were the only factors involved, Vidale's team reasoned, the tides should sometimes trigger quakes on faults already close to the breaking point. The effect would be subtle—most seismologists long ago rejected schemes to predict earthquakes from tides. Nevertheless, the seismic events should be more common when the tidal pull is strongest, for example during full and new moons. They could occur randomly with respect to tides only if some third process rapidly loads stress onto faults just before quakes, overwhelming the tidal effects.

    To test for any trace of tidal influence, Vidale and colleagues included only quakes on straight segments of the two faults, which amounted to 13,042 earthquakes of magnitude 1 to 6. The large number of events together with the known fault orientations—which allowed the tidal stresses to be calculated reliably—gave the study unprecedented sensitivity. Still, they found no correlation between the two phenomena. Of the 13,042 quakes, only 95 more occurred when tidal stresses favored fault failure than when they discouraged failure—not a significant result. Some other short-term stress source must have overwhelmed the tidal effects. “The lack of a tidal correlation argues that there is some preparation process over the days before an earthquake,” says Vidale. During the final hours before such an event, stress must build many times faster than it does during a tidal cycle—at least 150 millibars per hour, he adds.
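
    The statistical logic is straightforward. Under the null hypothesis, each quake falls in an encouraging or a discouraging tidal phase with equal probability, so the excess behaves like the heads-minus-tails count of a fair coin: its expected one-sigma fluctuation is the square root of the sample size. A minimal check of the quoted numbers:

        import math

        n_quakes = 13042
        excess = 95  # quakes during favorable minus unfavorable tidal stress

        sigma = math.sqrt(n_quakes)  # std. deviation of a fair-coin difference
        print(sigma)                 # ~114
        print(excess / sigma)        # ~0.83 standard deviations: not significant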

    Can seismologists catch this preparatory movement in action and so predict earthquakes? No one knows yet. In theory and lab experiments, the stress-inducing process is a slow but accelerating slip on a small patch of fault. That slip causes stress to build up faster and faster around the edge of the patch, until a larger area of the surrounding fault ruptures in an earthquake. “What we don't know is the size” of the patch, says theoretician and experimentalist James Dieterich of the U.S. Geological Survey in Menlo Park, California. Estimates range from a patch a few hundred meters across before a magnitude 5 quake, which might be detectable by strainmeters buried near the surface, to one only a few meters in size, in which case detection would be hopeless.

    Researchers may get an answer from the next magnitude 6 quake to hit the heavily instrumented Parkfield area (Science, 19 February 1993, p. 1120) or from a proposed project to monitor a small patch of the fault at Parkfield that regularly fails in frequent magnitude 1 quakes. The answer could make or break earthquake prediction forever.

  5. PHYSICS

    Atom Laser Shows That It Is Worthy of the Name

    Alexander Hellemans
    Alexander Hellemans is a writer in Naples, Italy.

    Just over a year ago, a team of researchers at the Massachusetts Institute of Technology (MIT) announced the creation of what they called an atom laser. Purists, however, challenged the name. True, the device does produce pulses of atoms in which all the quantum-mechanical atom “waves” are coherent: They have identical wavelengths and travel in step, peak-to-peak and trough-to-trough, just like the beam of light waves from a conventional laser. One essential laser attribute seemed to be missing, however: amplification. In a conventional laser, each wave triggers another one—a process called stimulated emission—rapidly building up a powerful, coherent beam. In this issue of Science, the atom laser team makes good on its claim.

    Up the amp.

    Lasers amplify by enticing photons from atoms; atom lasers, by enticing other atoms.

    L. CARROLL

    On page 1005, the researchers, led by Wolfgang Ketterle, report evidence that the “light” in the atom laser—atoms held in a quantum state known as a Bose-Einstein condensate (BEC)—forms through a process analogous to stimulated emission, as atoms already in the condensate help coax additional atoms into quantum lockstep. The evidence is now good enough for one skeptic. “I believe the term [laser] is appropriate,” says Keith Burnett of Oxford University, adding, “As Shakespeare said: ‘A rose by any other name would smell as sweet.’” Burnett and others are also impressed by the experimental finesse that this new demonstration entailed. Creating a BEC is difficult in the first place, but watching one evolve without destroying it is “fantastically challenging,” says Burnett. “Ketterle's lot are just exquisitely good experimentalists.”

    To create a BEC—a feat first achieved in 1995 by a group in Colorado (Science, 14 July 1995, pp. 152, 182, and 198)—researchers cool a gas to less than a millionth of a degree above absolute zero and trap it in magnetic fields. At such temperatures, the atoms coalesce into a single quantum state and cease to be distinguishable, behaving as a single entity. Last year, Ketterle's team turned a BEC into a pulsed beam by periodically flipping the spin of the atoms with radio waves, allowing some to escape the magnetic trap. They demonstrated that the atom beams were coherent, fulfilling one laser criterion, by allowing two of them to overlap and interfere. Just like two optical laser beams, these atom beams produced an interference pattern of dark and light fringes (Science, 31 January 1997, p. 617).

    But the term “laser,” which stands for light amplification by stimulated emission of radiation, implies more. A conventional laser extracts light from a population of atoms, which are continually being pumped up into an excited, higher energy state. Photons of the laser beam bounce back and forth between two mirrors and through the excited atoms. A passing photon can stimulate an excited atom to shed energy by emitting another photon, in phase with the first and with an identical wavelength. Now Ketterle and his colleagues say that the rate at which a BEC forms in a supercold, trapped gas implies that a similar stimulated process, dubbed stimulated scattering, is at work. Although it is not exactly an amplification process, stimulated scattering can be viewed as analogous to the stimulated emission process in an optical laser.

    To trace BEC formation, the team cooled a gas of sodium atoms to the threshold of BEC formation. Once the sample of about a million atoms had reached the required ultralow temperature, the researchers switched off all the cooling. They illuminated the atoms with a very faint laser beam and watched them with a high-speed charge-coupled device camera. “We saw that for a while nothing happened, and then slowly the action started and speeded up,” says Ketterle. Many phase changes in nature happen by “relaxation,” in which a system out of thermal equilibrium drops toward equilibrium almost en masse. But relaxation starts out very fast and then slows down. The accelerating pace of BEC formation implies that something different is happening, says Ketterle.

    The team members believe that the BEC forms through a scattering process that feeds on itself. Atoms in the cooled gas want to join the condensate because it is in a lower energy state, but they need to shed some excess energy to do so. They do this by colliding with another atom outside the BEC and dumping their excess energy and momentum onto it. “One particle takes over the energy and momentum, while the other particle jumps into the condensate,” says Ketterle. And, for reasons that are not yet clear, the more atoms there are in the BEC, the more other atoms want to join, causing the process to speed up rapidly. The same principle governs stimulated emission of photons: “The probability of a photon going in a direction in which there are already photons is proportional to the number of coherent photons that are already there,” explains Burnett.
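
    A toy model makes the contrast with relaxation visible. The sketch below is purely an illustration, not the MIT team's analysis: it compares a relaxation process, whose rate is largest at the start, with a stimulated process, whose rate is proportional to the condensate population already present.

        # Toy comparison: n is the condensate fraction, growing from ~0 toward 1.
        def grow(rate_fn, n=0.01, dt=0.5, steps=5):
            history = [round(n, 3)]
            for _ in range(steps):
                n += rate_fn(n) * dt
                history.append(round(n, 3))
            return history

        relaxation = grow(lambda n: 1.0 - n)        # fast at first, then slows
        stimulated = grow(lambda n: n * (1.0 - n))  # accelerates as n builds up

        print(relaxation)  # successive steps shrink
        print(stimulated)  # successive steps grow, until the gas is used up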

    Condensed-matter theorists, such as Gora Shlyapnikov of the Kurchatov Institute in Moscow and Peter Zoller of the University of Innsbruck in Austria, say that the condensate growth pattern observed by the MIT team corresponds closely to some existing formation models for BECs. “The results are very convincing,” says Zoller. Kris Helmerson of the National Institute of Standards and Technology in Gaithersburg, Maryland, says the study should have a practical effect, too. “By studying this process, you can find better strategies for the development of more intense sources for atom lasers,” he says.

    Whatever the long-term implications of the research, physicists are happy, for the time being, to watch in wonder. “Now when Ketterle presents their ‘movie’ of what is going on, you can sit back and enjoy the show,” says Burnett. But he adds: “If you have seen all the elements that have gone into it to get to that place, it is really formidable.”

  6. PHYSICS

    Laser Trap Gives Clearer View of Condensates

    Alexander Hellemans
    Alexander Hellemans is a writer in Naples, Italy.

    Since researchers created the first Bose-Einstein condensates (BECs) 3 years ago, they have been eagerly exploring this new state of matter and seeing what can be done with it—turning it into an atom laser, for example (see main text). But the magnetic traps used to confine the condensates put some kinds of studies off limits, because the magnetic fields freeze the spins of the trapped atoms in one orientation. Now Wolfgang Ketterle and his team at the Massachusetts Institute of Technology have removed the barrier with a new kind of trap.

    As they report in a forthcoming paper in Physical Review Letters, they have developed a trap that confines a BEC with nothing but light beams. Now, says Ketterle, investigators will be able to play with the spins of the atoms that merge into the condensate and see what happens: “We can study ‘spin waves,’ we can study spin dynamics … all of a sudden we have rich physics.”

    To create their optically trapped condensate, the team first cooled sodium atoms in the usual way—by “evaporating” hot atoms from a gas caught in a magnetic trap, leaving a cold residue. Then they focused an extremely fine near-infrared laser beam about 6 micrometers wide into the center of the magnetic trap and switched off the fields. While previous attempts to confine a condensate in a pure optical trap failed because the lasers heated the atoms, Ketterle was able to use a very weak beam because the atoms were already precooled in the magnetic trap.

    The laser field electrically polarizes the atoms, separating their positive and negative charges slightly and turning them into dipoles. The intensity of the laser beam is highest in the center of the trap, drawing one end of the dipoles toward the center and pinning the atoms in place.

    In such a trap, “we can have arbitrary orientations of the atoms, while in a magnetic trap, you have ‘sold off’ your spin because you need stable trapping,” says Ketterle. Another advantage, he says, is that the trap expels any atoms that do not belong to the condensate. The laser frequency is tuned so that any atoms with a slightly higher energy than the rest absorb radiation and get kicked out of the trap, while the atoms in the ground state are left alone.

    Keith Burnett of Oxford University adds that lasers allow much more control over the atoms than magnetic fields do: “You can guide them—the laser beam is just like a tube—and you can move them around.” The laser trap can also hold a greater variety of atoms, including ones that don't have a spin, says Burnett. “It is more universal as opposed to a magnetic field.”

  7. ASTROPHYSICS

    Sunquakes May Power Solar Symphony

    Erik Stokstad

    Like a quivering gong, the sun vibrates with millions of different overtones. Although astrophysicists have long exploited this ringing to glean insights into the sun's structure, its exact cause has been a mystery. Now a paper to appear in the 1 March issue of the Astrophysical Journal proposes that solar tremors called “sunquakes” power this resonance with bursts of sound.

    The researchers—from the New Jersey Institute of Technology (NJIT); the University of Colorado, Boulder; and the National Solar Observatory (NSO)—analyzed detailed measurements of the solar atmosphere for telltale motions caused by sound. Their findings strengthen a theory that gas plunges noisily below the solar surface. Although it's unclear how these downdrafts might generate the noise, the team detected this acoustic energy feeding one type of solar oscillation. The quakes themselves could also provide a new probe for peering beneath the sun's surface.

    For decades, astrophysicists have simply attributed the sun's ringing to turbulent convection near its surface, where 1000-kilometer-wide patches of hot gas called granules bubble up from the sun's interior and slam into the solar atmosphere. A twist on this theory emerged in the mid-1980s when computer models predicted that after gas cools at the sun's surface, it forms narrow plumes that plunge at supersonic speeds toward the interior—presumably unleashing sonic booms. The plunging gas might also trigger sunquakes by sucking together neighboring granules.

    The first glimpse of these powerful internal downdrafts came in 1995, when Philip Goode of NJIT in Newark, Thomas Rimmele and Louis Strous, now of the NSO in Sunspot, New Mexico, and Robin Stebbins of Colorado observed a darkening of narrow gas lanes between granules, indicating cooling. If so, the chilled gas must fall “like a bowling ball in a swimming pool,” says Stebbins.

    But separating the resulting quakes from background noise in the sun's turbulent atmosphere is tricky. To do so, the team clocked the speed and direction of gas flowing at two altitudes above the sun's surface—150 and 330 kilometers—by measuring the Doppler shift of light absorbed by iron ions swirling in the sun's atmosphere. The acceleration of first the lower gas layers, then the overlying ones, suggests they were shoved by a sound wave, the researchers say. To see what might be triggering the bursts, they superimposed images of the surface beneath more than 2000 sound bursts—before, during, and after an event. The composites revealed that bursts consistently emanated from the dark lanes. Immediately after a sound's intensity peaked, the lanes would narrow, suggesting that neighboring granules were filling a void.
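
    The velocity clocking itself rests on the classical Doppler relation: the fractional wavelength shift equals the line-of-sight speed divided by the speed of light. A sketch with illustrative wavelengths (not the particular iron line the team observed):

        C_KM_S = 299792.458  # speed of light in km/s

        def line_of_sight_velocity(observed_nm, rest_nm):
            """v = c * (observed - rest) / rest; positive means receding."""
            return C_KM_S * (observed_nm - rest_nm) / rest_nm

        # A hypothetical line at rest wavelength 557.6099 nm, observed
        # shifted 0.0019 nm toward the red:
        print(line_of_sight_velocity(557.6118, 557.6099))  # ~1.0 km/s away from us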

    Just as blowing into a trumpet won't necessarily produce a resonant tone, sound bursts on the sun's surface don't necessarily amplify solar vibrations; they have to have the right wavelength and velocity to excite the sun's natural oscillation frequencies. But the team has evidence that the quakes do excite at least one kind of oscillation, the f-mode. About 800 kilometers away from a sunquake's epicenter, “we can literally see the conversion of sound at many frequencies to sound that just belongs to the f-mode,” says Goode. The team calculates that the estimated total number of quakes would have more than enough power to drive all the sun's oscillations.

    The findings are “a nice illustration of a mechanism actually operating,” says Peter Goldreich, an astrophysicist at the California Institute of Technology. But others are skeptical that sound bursts come from the shrieking downdrafts and thundering granules. In the composite images the team produced, the bursts aren't centered squarely on the dark lanes. That “makes me suspicious,” says Tom Duvall of NASA Goddard Space Flight Center in Greenbelt, Maryland. “We don't have an explanation for that yet,” acknowledges Rimmele. “We still have to investigate.”

    But if sunquakes are real, they might be used to probe granule structure and the local magnetic fields set up in the dark lanes, says Goode—much as seismologists on Earth detonate a charge, then study where waves resurface. Mapping this structure more finely is important because astronomers who now use oscillations to probe the sun's interior must assume that waves passing through the uppermost regions aren't much distorted. Says Phil Scherrer of Stanford University, “We're much better off if we can actually understand the surface and the sources.”

  8. ECOLOGY

    Global Nitrogen Overload Problem Grows Critical

    Anne Simon Moffat

    Like a satiated gourmand, the biosphere is becoming glutted with nitrogen compounds. As early as the 1960s, researchers knew that some lakes and rivers were suffering because they were being overdosed with synthetic nitrogen fertilizers and nitrogen oxides discharged by cars and factories. But now, ecologists say, a surfeit of fixed nitrogen, by which they mean compounds such as ammonia and nitrogen oxides, is overwhelming entire ecosystems ranging from forests to coastal waters.

    Overloaded.

    High nitrogen (bottom) promotes the growth of tropical marine plants, but species are few compared to the normal situation (above).

    ROBERT W. HOWARTH

    A new study shows that much land can no longer absorb or break down the increasing amounts of fixed nitrogen, so growing quantities of the compounds end up in rivers, lakes, estuaries, and oceans. The resulting nitrogen influx has touched off oxygen-consuming, coastal algal blooms, including the notorious red and brown tides, and has impaired fisheries. Other work has detailed the toll the excess nitrogen is taking on land plants. Nitrogen compounds are displacing valuable nutrients from forest soils, causing mineral deficiencies that decrease forest vitality and perhaps even harming biodiversity.

    “Fixed nitrogen is essential for all life, but the added nitrogen is literally too much of a good thing,” says Stanford University ecologist Peter Vitousek. Indeed, last year, both the Ecological Society of America and the international Scientific Committee on Problems of the Environment named nitrogen pollution as a “preeminent problem” that is not being given enough public recognition.*

    Until the turn of the century, almost all fixed nitrogen came from Mother Nature, produced from atmospheric nitrogen (N2) by soil microbes or lightning. It didn't accumulate, because other “denitrifying” microbes converted it back to N2. But the widespread use of synthetic nitrogen fertilizers that began in the middle of this century, coupled with the huge increase in the burning of fossil fuels, especially by cars, shifted that balance—a shift that has accelerated in the past dozen years or so.

    “The situation is changing incredibly rapidly,” says Cornell University biogeochemist Robert Howarth. “In recent years, the worldwide rate of fertilizer applications has risen exponentially and, in the northeastern United States, the nitrates produced from fossil fuel emissions have increased about 20% in just the last decade.” From data on current fertilizer production, fossil fuel emissions, and production of nitrogen-fixing crops like soybeans, Duke University biogeochemist William Schlesinger calculates that today, human activities produce 60% of all the fixed nitrogen deposited on land each year—far more than can be used productively in crops and other land plants or denitrified.

    One sign of the nitrogen glut comes from a survey of the nitrogen input into several hundred North and South American and European rivers that Howarth and a multinational team of about 50 colleagues are now conducting. Although ecologists have known about the problems posed by nitrogen from agricultural runoff since the 1960s, even they were not prepared for what the survey is showing. By analyzing data on human sources of nitrogen in the landscape and on nitrogen fluxes in river water over parts of four continents, the researchers estimate that about 20% of the nitrogen that humans are putting into watersheds is consistently getting into the rivers.

    “We found a simple pattern,” says Howarth. “Over a 20-fold range of nitrogen inputs, there is a linear function between the amount of nitrogen that humans put into a region and the amount that gets exported in rivers to the coast.” The constancy of this nitrogen leakage was a “huge surprise,” he adds, given the large differences in climate, vegetation, and human activity in the areas surveyed.

    All this nitrogen runoff has caused a marked uptick in eutrophication, which occurs when excessive nitrogen concentrations lead to abundant growth of algae in the surface waters of estuaries and coastal oceans as well as lakes and rivers. Then, when these plants die, they sink to lower depths and decay, depleting the water's oxygen supply and killing deep-dwelling fish. University of Stockholm aquatic ecologist Ragnar Elmgren attributes the collapse of the Baltic Sea cod fishery in the early 1990s, for example, to nitrogen pollution. He believes that plant matter sinking from algae blooms near the surface has depleted oxygen in deep waters, interfering with cod reproduction. Elmgren attributes those blooms to nitrogen because they occur mainly in spring, when farmers apply fertilizer to their fields, and end when the nitrogen is gone. “The Baltic's nitrogen load had increased at least fourfold during this century, causing massive increases in the nitrogen-limited, spring blooms of algae,” he says.

    And in the Gulf of Mexico, oceanographer Nancy Rabalais of the Louisiana Universities Marine Consortium and coastal ecologist Eugene Turner of Louisiana State University in Baton Rouge have found a far-reaching “dead zone” at depths of one-half to 20 meters. “There has been a significant increase in hypoxia [oxygen depletion] in the last 20 years,” says Turner. He adds that the dead zone, now the size of the state of New Jersey, is expanding westward from the coast of Louisiana into Texas waters. Rabalais and Turner have linked the dead zones to algae blooms caused by nitrogen fertilizer poured into the gulf by the Mississippi River.

    In addition to polluting the world's waterways, excess nitrogen is also adversely affecting terrestrial systems. Until recently, ecologists did not know why nitrogen, which they expected to be beneficial to plants, was harmful, damaging forests in Germany and elsewhere, for example. But a 1994 study in Bavaria by Ernst-Detlef Schulze, a plant ecologist at Bayreuth University and the recently appointed director of the Max Planck Institute for Biogeochemistry in Jena, and his colleagues pointed to one possible mechanism: Surplus nitrogen oxides from burning fossil fuels, deposited as nitrates in acid rain, are impoverishing forest soils. The researchers found that the negatively charged nitrate ions leach positively charged minerals, such as magnesium, calcium, and potassium ions, out of topsoils, leading to mineral deficiencies in forest trees.

    More recent studies by Schulze suggest that nitrogen oxides and ammonia released from fertilizers, animal wastes, and power stations can pass directly from the air into leaves and bark, without being carried from the soil to plant roots. The researchers came to this conclusion by measuring the nitrates and enzyme activities in samples of xylem fluid from beech trees. This fluid, which carries nutrients including nitrogen-containing amino acids up from the roots, normally contains no nitrates.

    But the Schulze team found that in areas of heavy nitrogen pollution, xylem fluid carries significant nitrate concentrations, which presumably entered above ground. He estimates that, in northern Europe, such aboveground uptake now accounts for 60% of the nitrogen found in broad-leaved trees, a dramatic change from earlier years. “Plants have evolved to take in nitrogen via their roots,” says Schulze. “They can't effectively regulate nitrogen from their leaves.” This excess nitrogen causes rapid tree growth, he says, but because the trees are deficient in the nutrients that have been leached from the soil, they are weak and vulnerable to insects and mildews.

    All these changes could impair biological diversity by fostering luxuriant growth of a few species that can thrive at high nitrogen levels at the expense of others. “We could be inadvertently reducing the number of species globally by increasing nitrogen,” says Duke's Schlesinger. Indeed, he adds, this has already happened in many estuaries, where a few phytoplankton species have flourished, choking out other species. A field study by ecologists David Wedin of the University of Toronto and David Tilman of the University of Minnesota, St. Paul, also showed that grasslands receiving abundant nitrogen can lose their diversity as invasive species, which are less efficient at photosynthesis, move in (Science, 6 December 1996, p. 1720).

    While correcting these problems will not be easy, Schlesinger says, “there are several points of optimism.” One possibility is to use fertilizers more judiciously, in much the same way that pesticides are applied selectively in integrated pest management. Interplanting corn with nitrogen-fixing legumes, such as soybeans, can also reduce the need for synthetic fertilizers. Smaller cars, with reduced nitrogen oxide emissions, would help, and better protection of wetlands with their denitrifying bacteria might reduce the fixed nitrogen in the environment. But, cautions Turner, “The problem has kind of snuck up on us, and it is going to take quite a few decades to back out of it.”

    * Also see P. Vitousek et al., “Human Alterations of the Global Nitrogen Cycle: Sources and Consequences,” Ecological Applications 7(3), 737 (1997).

  9. REEF BIOLOGY

    New Threat Seen From Carbon Dioxide

    Elizabeth Pennisi

    Fifteen years ago, the world's reefs began turning white, helping to galvanize concern about global climate change as reef specialists attributed this bleaching to warming seas (Science, 19 July 1991, p. 258). Since then, researchers have identified other problems, including disease and damage inflicted by humans, that seemed to pose a more immediate threat to reef survival. Now, new findings suggest that in decades to come, yet another threat may come to the fore: the increasing amount of carbon dioxide in the air.

    Stunted corals.

    Biosphere II corals shrank as carbonate levels dropped.

    G. ARCILA/1991 SPACE BIOSPHERES VENTURES

    The results, reported last month at a special symposium organized by the Scientific Committee on Oceanic Research and other organizations, show that the amount of carbonate dissolved in seawater has a much greater effect on coral reef growth than had been thought. When it drops, corals and other reef-building organisms have a harder time depositing their limestone skeletons. And increases in atmospheric carbon dioxide should have exactly that effect, because carbon dioxide dissolved in seawater boosts its acidity and decreases the amount of carbonate it can carry. “This [carbon dioxide]-induced weakening will make reefs more susceptible to the other pressures they face and compound their problems,” says Bradley Opdyke, a marine geologist at the Australian National University in Canberra.

    Several studies in the 1960s and 1970s had implied that carbonate fluctuations could affect reef growth, but because seawater is glutted—supersaturated—with carbonate, most researchers thought the fluctuations would have only minor effects. But that's not what biological oceanographer Chris Langdon of the Lamont-Doherty Earth Observatory in Palisades, New York, and his colleagues found by studying a somewhat unusual system: a coral reef established 8 years ago in the “ocean” of Biosphere II, the enclosed, self-supporting ecosystem located outside Tucson, Arizona.

    In 1995, the Langdon team began examining how the growth rates of the Biosphere II reef corals varied when the researchers changed the water's carbonate concentration, either dumping in 45-kilogram bags of sodium carbonate or sodium bicarbonate to increase the concentrations, or withholding those additives for long periods of time to cause the concentrations to decline. Many reef experts expected that even if reef calcification rates dropped with the carbonate concentrations, reefs would continue to grow unless concentrations fell below saturation.

    But the Langdon team found that although the Biosphere II reef grew by about 35 millimoles of calcium carbonate per square meter per day at a carbonate-ion concentration equal to 320% of the saturation level, it lost about 6 millimoles per square meter per day at 170% of saturation. For as yet unknown reasons, the organisms apparently have a harder time converting carbonate ions into limestone at these lower concentrations.
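
    A crude linear interpolation between those two reported points (an illustrative calculation, not one the team published) suggests where net reef growth would cross zero:

        # Two reported Biosphere II points: (percent of saturation, net
        # calcification in millimoles CaCO3 per square meter per day).
        sat1, rate1 = 320.0, 35.0
        sat2, rate2 = 170.0, -6.0

        slope = (rate1 - rate2) / (sat1 - sat2)  # ~0.27 mmol per percentage point
        zero_crossing = sat2 - rate2 / slope     # saturation where net growth = 0

        print(round(slope, 3))       # 0.273
        print(round(zero_crossing))  # ~192% of saturation, on this straight line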

    Jean-Pierre Gattuso, a biological oceanographer at the Oceanographic Observatory in Villefranche-sur-mer in France, and his colleagues saw similar trends when they studied a single coral, Stylophora pistillata, in the lab. They found that as the calcium carbonate concentration went from 390% of the saturation level—the current concentration in seawater—to 98%, the coral's calcification rate decreased threefold, although it did not drop as low as it did in the Biosphere II reef.

    “I think we have just now hit the point where there is evidence to indicate that [carbonate concentration] is really important,” says Robert Buddemeier, a hydrogeochemist at the Kansas Geological Survey in Lawrence. “This is a controlling environmental variable that has simply not been factored into reef biology at all.”

    It is unclear how much this variable has affected reef health to date, and as carbon dioxide rises in the future, other factors the studies don't take into account, such as warmer water, might help counter its effects. But if not, the atmospheric carbon dioxide increases expected over the next century could lead to serious problems. Langdon calculates, for example, that reef formation will decline by as much as 40% if the carbon dioxide doubles as expected in the next 70 years, halving the carbonate concentrations, and by as much as 75% if carbon dioxide doubles again. And, he adds, “it's going to be an absolutely global effect.”
