News this Week

Science  26 Jun 1998:
Vol. 280, Issue 5372, p. 2045



    Teaching the Brain to Take Drugs

    1. Ingrid Wickelgren

    Activation of the brain's glutamate circuitry may contribute to the learning of addictive behavior. If so, drugs that block glutamate activity may help addicts kick their habit

    To the brain, an addictive drug is an evil tutor. Its lesson: The brain should want more of the drug and should direct the body to get it—whatever the costs. That lesson rarely gets forgotten, as every relapsed addict knows. Now, recent work is implicating a new player in the addictive learning experience: the neurotransmitter glutamate, which is already thought to be key to more normal learning.

    Until the past few years, neuroscientists studying addiction had focused mainly on another neurotransmitter, dopamine, partly because they found that all addictive drugs cause a surge of dopamine in the brain's reward center, the nucleus accumbens. By capturing the brain's attention, this surge seems to reinforce drug-seeking behaviors (Science, 3 October 1997, p. 35). Recently, though, several lines of evidence have pointed to glutamate as a different kind of teacher's helper for addictive drugs. While bursts of dopamine elicited by the drugs attract the brain's attention, modifications in glutamate signaling seem to produce more stable changes in the brain that lead to compulsive drug-seeking.

    Researchers have found, for example, that blocking glutamate transmission prevents behavioral sensitization in rats. In sensitization, repeated doses of amphetamine or cocaine make the animals more and more frantic, causing them to run faster around their cages or engage in purposeless motions like turning their heads back and forth. Such behavior, thought to reflect changes in the brain, may parallel the increasing anxiety and drug craving that humans feel after repeated hits of amphetamine or cocaine. Scientists have also identified lasting cellular and molecular changes that seem to increase activity in the brain's glutamate circuitry in animals given cocaine. Some of these glutamate circuits may be reactivated during drug cravings—a theory buttressed by brain imaging studies in humans.

    The notion that glutamate plays such a critical role in drug addiction “is a really exciting, new idea,” says behavioral neuroscientist Ann Kelley of the University of Wisconsin School of Medicine in Madison. Adds neuroscientist Peter Kalivas of Washington State University in Pullman: “People are pretty high on glutamate right now.”

    They are euphoric partly because the work suggests that compounds that interfere with glutamate signaling could block the intense drug cravings addicts feel during withdrawal or when they see drug-related objects—cravings that often lead to relapse. “A key problem in addiction is relapse,” says Barbara Herman of the National Institute on Drug Abuse in Rockville, Maryland, who chaired a meeting on glutamate last month at the National Institutes of Health.* For example, one study showed that individuals had “about an 80% chance of slipping up within the first year” after leaving an opiate treatment program, she says. But “with glutamate antagonists,” Herman adds, “a drug-related cue might have less of a potential to elicit drug-seeking behavior”—a hope that is already being tested in some human trials (see sidebar).

    Stopping the frenzy

    For decades, neuroscientists have noticed anatomical clues that point to glutamate as one of dopamine's partners in teaching the sinister lessons of addiction. They found, for example, that glutamate-releasing neurons originating in the brain's thinking areas, including the cerebral cortex, the hippocampus, and the amygdala, release glutamate onto neurons in the nucleus accumbens.

    But the first direct evidence that glutamate plays a role in addiction didn't come until the late 1980s, when Ralph Karler and his colleagues at the University of Utah School of Medicine in Salt Lake City showed that a drug called MK-801, which prevents glutamate from acting through one of its key receptors, the N-methyl-D-aspartate (NMDA) receptor, prevents rats and mice from becoming sensitized to cocaine and amphetamine.

    In the early and mid-1990s, a team led by Marina Wolf at The Chicago Medical School in North Chicago confirmed and extended Karler's findings. These researchers discovered that drugs that block various types of glutamate receptors prevent sensitization to amphetamine in rats, as do lesions of the prefrontal cortex, the home of many neurons that discharge glutamate into parts of the reward circuit. “You need glutamate to get sensitization,” Wolf concludes, “and that means sensitization has a lot in common with other forms of learning.”

    Kalivas agrees. When he and his colleagues measured concentrations of the neurotransmitter in the nucleus accumbens of rats that had been sensitized to cocaine, they saw a large jump in glutamate levels, to 50% to 100% above normal. “Only the animals that developed sensitization showed changes in glutamate transmission,” Kalivas says. “So glutamate is what gets recruited for the long-term changes” that lead to sensitization. If sensitization is a good model of drug craving, enhanced glutamate activity in the brain might be at the root of drug-seeking behavior.

    Indeed, that idea is supported by as-yet-unpublished results from Kalivas's team. In these experiments, the researchers taught rats to press a lever for cocaine and then after 3 weeks replaced the cocaine with a plain salt solution. That caused the animals to stop pressing the lever—until they were given a compound that mimics glutamate action at another of its receptors, called the AMPA receptor. Then the rats began pounding on the lever that once produced cocaine as if they were craving the drug. “We believe that repeated exposure to a drug activates glutamate transmission,” says Kalivas. When the glutamate system is reactivated—as it might be by some reminder of the drug or the drug itself—“this contributes to feelings of craving and paranoia.”

    The same may happen in humans. Two years ago, for example, neuroscientists Steven Grant, Edythe London, and their colleagues at the National Institute on Drug Abuse in Baltimore, Maryland, and Yale University School of Medicine in New Haven, Connecticut, rated the cocaine cravings reported by 13 addicts and five controls who watched films of people using both neutral objects and drug-associated items such as glass pipes and razor blades. At the same time, the researchers used positron emission tomography to scan the subjects' brains.

    The researchers found that the degree of craving in the addicts paralleled the intensity of neural activity in the frontal cortex and the amygdala, brain regions that release glutamate in the nucleus accumbens and serve learning and memory functions. Thus, cognitive brain structures that rely heavily on glutamate seem to play a role in the cravings elicited by external cues.

    Changing the brain

    The cellular and molecular changes in the brain's glutamate system that might underlie these changes in activity are now coming to light. At the glutamate meeting, Luigi Pulvirenti of The Scripps Research Institute and the Claude Bernard Neuroscience Foundation, both in La Jolla, California, described experiments in which his team first taught rats to press a lever for either cocaine or food and then measured how much neural activity in the nucleus accumbens they could elicit by stimulating fibers from the hippocampus that feed it glutamate.

    As the rats were first learning to work for cocaine, Pulvirenti says, he and his colleagues found “an incredibly enhanced” neuronal response in the accumbens compared to rats that either worked for food or received cocaine passively. The result is what would be expected if glutamate mediates the animals' learning that they can get cocaine by pressing the lever. “These changes in synaptic efficacy may be part of the early neuronal events that later lead to drug-seeking behavior,” Pulvirenti says.

    Part of the enhanced responsiveness of the nucleus accumbens neurons might be due to the large jolts of glutamate discharged there when the animals were given cocaine. But Kalivas's team has seen evidence of a more permanent change in cocaine-treated rats that might also contribute to the effect: an increase in the number of protein components for glutamate receptors in neurons of the nucleus accumbens. The implication is that the brain builds more glutamate receptors in that region as cocaine addiction takes hold.

    Whether the same happens in amphetamine addiction is less clear, because Wolf and her colleagues found large decreases in the proteins forming the AMPA glutamate receptor in nucleus accumbens neurons in rats sensitized to amphetamine. Still, says Wolf, the data do show “that there are adaptations in glutamate transmission in response to chronic exposure to [both] drugs.”

    Sensitized animals are thought to be especially valuable models of addiction to cocaine and amphetamine—stimulants whose rewarding properties seem to be the main impetus for cravings and repeated drug use. But for other drugs, such as opiates, avoiding withdrawal symptoms is thought to be at least as strong a driver for continued use as is seeking a high. Inhibitor studies have implicated glutamate's brain-sculpting effects in this kind of addiction as well. It seems to play a role in both opiate dependence, in which withdrawal symptoms develop when the drugs are taken away, and tolerance, in which an individual needs more of the drug with continued use to experience the desired effects.

    In 1991, for instance, Keith Trujillo and Huda Akil at the University of Michigan, Ann Arbor, showed that the NMDA antagonist MK-801 could prevent rats from becoming either tolerant to morphine or dependent on it. And in 1993, neuropharmacologist Charles Inturrisi of Cornell University Medical College in New York City, with then-postdoc Paul Tiseo, discovered that another NMDA antagonist, called LY274614, could even reverse tolerance to morphine in rats. This suggested that such antagonists might help addicts or people with chronic pain who have developed opiate dependence or tolerance.

    Although many of the molecular and cellular details of glutamate's influence on addiction remain to be worked out, it's now clear that glutamate does mediate many of the lessons taught by drugs. In doing so, it creates lasting memories by changing the nature of the conversations between neurons—a phenomenon neuroscientists call neuronal plasticity. Says Pulvirenti: “The plasticity that occurs during drug addiction most likely depends on glutamate transmission.”

    Of course, plasticity is also the basis of everyday learning and memory. Thus, the same neurotransmitter may hold the key to both preserving the good memories and erasing those planted by the tutors of addiction.


    Pills to Help Keep You Clean

    1. Ingrid Wickelgren

    As neuroscientists link changes in the brain's glutamate system to the learning and maintenance of addictive behavior (see main text), they are finding hints that drugs that interfere with glutamate transmission might be used to treat addiction. “There are a number of promising compounds on the horizon,” says Charles Inturrisi of Cornell University Medical College in New York City, whose lab is among those doing the work.

    So far, a drug called acamprosate, which has been approved for treating alcoholism in Europe and is in clinical trials in the United States, has shown the greatest promise. Originally designed to treat epilepsy, acamprosate was being tested for its ability to quell alcohol-induced seizures in the early 1980s when researchers noticed that trial subjects seemed to relapse less often than controls. Pilot studies in France bolstered this idea, but nobody knew how the compound worked until neuropharmacologist Walter Zieglgänsberger and his colleagues at the Max Planck Institute of Psychiatry in Munich, Germany, showed in the late 1980s and early 1990s that acamprosate blocks the ability of glutamate to stimulate electrical activity in both rat cortical neurons and in the cortexes of anesthetized rats.

    Meanwhile, investigators in 10 European countries began a large-scale clinical trial of acamprosate as a treatment for alcoholic relapse. In the trial, 4000 patients who had been weaned from alcohol at local clinics took either a placebo pill or acamprosate for 1 year and then went without treatment for a second year. By the end of 1996, the results, which were similar in all 10 countries, were in. In Germany, for example, 39% of patients who had received acamprosate were still abstinent after a year of follow-up, compared with 17% of controls.

    For at least some subjects, the drug, which seems to have no serious side effects, diminishes craving for alcohol: “With acamprosate, patients tell you they don't even think about alcohol,” Zieglgänsberger says. What's more, the drug's effects apparently last. Two-year follow-up data from the German portion of the trial still show a lower relapse rate among the patients who took acamprosate.

    Scientists don't know how acamprosate combats alcoholism, but recent work indicates that a dampening of glutamate-triggered activity in cortical neurons—which might reduce craving—is just part of the story. This spring, the Munich team along with George Koob, George Siggins, and their colleagues at The Scripps Research Institute in La Jolla, California, showed that acamprosate actually increases glutamate's activation of neurons in two other rat brain areas, the hippocampus and nucleus accumbens. It's possible, Zieglgänsberger suggests, that during withdrawal, neurons in these areas are abnormally quiet and acamprosate helps enliven them, thereby reducing withdrawal symptoms that can trigger drug cravings.

    Researchers discovered acamprosate's promise in treating addiction accidentally, but they are also looking more systematically for glutamate inhibitors. One hurdle is that widespread blockage of brain glutamate receptors can impair learning and memory as well as produce hallucinations; the drug PCP is a powerful glutamate inhibitor. But researchers have identified substances that loosely bind to the NMDA glutamate receptor without completely crippling it.

    One such compound is dextromethorphan, a medication approved for use in over-the-counter cough syrups. Luigi Pulvirenti's team at Scripps and the Claude Bernard Neuroscience Foundation, also in La Jolla, has shown that giving rats dextromethorphan after they'd been trained to self-administer cocaine can curtail cocaine-seeking behavior. And Inturrisi and his colleagues have found another clue that dextromethorphan blunts addiction: In mice, it blocks the development of, and even reverses, morphine tolerance, in which ever-larger doses are needed to feel the drug's effects.

    In addition, researchers are studying other NMDA-receptor blockers as potential antidotes to morphine tolerance and perhaps opiate dependence. If these efforts pan out, research on glutamate's role in addiction may someday help hundreds of thousands of people kick their drug habits.


    Seeking the Sun's Deepest Notes

    1. Alexander Hellemans
    1. Alexander Hellemans is a science writer in Naples.

    The sun plays a silent symphony. It reverberates with oscillations that shake its surface and cause subtle frequency shifts in light from the glowing gases. These oscillations carry clues to the sun's interior, and astronomers have been watching them avidly with a network of telescopes called the Global Oscillation Network Group (GONG) and a space-based observatory called SOHO (Science, 31 May 1996, p. 1264). But so far, the sun's deepest notes—slow pulsations that stir its very core—have eluded them. At a workshop early this month in Boston, researchers discussed new strategies for identifying these deep undulations and weighed one claim of a candidate detection.

    The solar oscillations studied so far are acoustic modes, which resemble sound waves. Generated by turbulence near the surface of the sun, they penetrate the interior and are deflected back toward the surface by the increase in density with depth. These so-called p-modes, which cause patches of the sun's surface to rise and fall over periods of three to several dozen minutes, have helped solar physicists map the sun's density structure and interior flows (Science, 5 September 1997, p. 1438).

    But p-modes don't penetrate to the core, where the sun's fusion power plant seethes. To probe those depths, astronomers need to pick up gravity or g-modes, in which large masses of gas heave up and down, driven by buoyancy. Such waves should have longer periods than the p-modes. “Thirty-six minutes … divides g-modes from p-modes,” says Richard Bogart of Stanford University.

    The exact frequencies and patterns of g-modes would help astronomers answer such questions as how fast the sun's core rotates, which would affect deep mixing and the sun's nuclear processes. But the g-modes are thought to be weak and hard to pick out of the noise. “There are so many peaks in the power spectrum, and to tell which peak is real or just noise is almost impossible at this stage,” says Jørgen Christensen-Dalsgaard of Århus University in Denmark.

    Alan Gabriel of France's Institute of Space Astrophysics near Paris reported at the meeting, however, that he may have caught a glimpse of g-modes in SOHO data. “I daringly announced two possible g-mode candidate frequencies. Emphasis is on the word candidate,” he says. Most astronomers Science talked to are not convinced that the reported signal—two peaks with periods of 66 and 75 minutes—is the real thing. “It's far too early to say whether those are really detections or just random peaks,” says Bogart, who adds that it would take at least three or four peaks satisfying the expected frequency relationships to convince him.

    Thierry Appourchaux of the European Space Research and Technology Centre in Noordwijk, the Netherlands, thinks he has a way to find stronger, more convincing peaks: Look at the limb of the sun—its visible edge—rather than the center of the disk. “At the solar limb, the same perturbation can give a three to four times stronger signal,” he says, explaining that at the limb one can track the motions of higher layers of gas, where the density falls and the amplitude of the waves grows.

    Because of instrument limitations, SOHO can't track motions at the solar limb. But Appourchaux is part of a group that will try this strategy for picking up the sun's low notes with PICARD, a small solar observatory that France will launch in 2002.


    A Tangled Tale of E. coli Virulence

    1. Kristin Weidenbach
    1. Kristin Weidenbach is a science writer in Boston. She received the highest dose of the bfpF strain.

    Most people would do anything to avoid a bout of diarrhea. But along with 59 other brave souls at Stanford University, I lined up last year to drink a microbial cocktail the very thought of which seems guaranteed to turn the bowels to water. Sequestered in the clinical research center for 3 days, we anxiously awaited the first signs of the bacterial onslaught and clutched at our last remaining vestiges of dignity as nurses and scientists monitored the outcome. Our sacrifice for science—for which we were paid $300—was not in vain: It turns out that our diarrheal responses revealed a subtle feature of the mechanism that helps explain the virulence of enteropathogenic Escherichia coli (EPEC), a common cause of diarrhea in children in developing countries.

    On page 2114, Stanford microbiologist Gary Schoolnik and his colleagues confirm expectations that hairlike appendages on the surface of EPEC known as bundle-forming pili (BFP) are critical to the full virulence of these bacteria. The pili bundle together into ropelike filaments that interweave among bacteria, binding them into large aggregates. But the tests also suggest that another key to EPEC's virulence is the ability of the pili to disentangle themselves so the bacteria can go on to infect new intestinal cells. “The results are very surprising,” says microbiologist Michael Donnenberg of the University of Maryland, Baltimore.

    Schoolnik and members of his team have been studying EPEC's pili since they discovered them in 1991 (Science, 1 November 1991, p. 710). A strong hint that pili are important for virulence came when they and others mutated some of the 14 genes known to control pili formation and produced strains of bacteria lacking these appendages: The organisms couldn't attach to epithelial cells in the test tube or form bacterial aggregates. As the group now reports, the bugs proved relatively benign when fed to human volunteers. However, another mutant strain—dubbed the bfpF mutant—has produced more unexpected results.


    Mutant bacteria stick tightly together—perhaps too tightly.

    BIEBER et al.

    At first glance the bfpF mutant seemed to be an overachiever. Schoolnik and Donnenberg independently showed that these bacteria aggregate into clusters and stick to epithelial cells in greater numbers than the wild-type EPEC. They also seemed to produce more pili and adhere more closely to human cells than their wild-type cousins. But when the Stanford group used time-lapse photomicrography to take a look at their behavior, they found a subtle difference: Wild-type bacteria form aggregates that disperse over time, but the bfpF mutants remain clumped together in a mass.

    “We had a bet in the lab at that point,” says Schoolnik. “Some bet that it [the F mutant] would have increased virulence, some bet that it would have no virulence.” So they recruited more volunteers and returned to the clinical research center. The results were significant. Only four of 13 volunteers developed diarrhea, succumbing to doses of 2 × 1010 or 1 × 1011 mutant bacteria, compared to 11 of 13 in another set of volunteers who received doses of wild-type EPEC ranging from 5 × 108 to 2 × 1010. The EPEC mutant “does almost everything better in vitro, yet causes so much less disease when given to volunteers,” says Donnenberg.
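    The paper's own statistical analysis isn't quoted here, but the headline numbers can be sanity-checked with a standard two-sided Fisher exact test on the 2-by-2 outcome table (4 of 13 mutant-dosed volunteers with diarrhea versus 11 of 13 wild-type-dosed). The sketch below is a hypothetical recomputation, not the authors' analysis, built from Python's `math.comb`:

    ```python
    from math import comb

    def fisher_exact_two_sided(a, b, c, d):
        """Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]].

        Sums the hypergeometric probabilities of every table with the same
        row and column totals that is no more likely than the observed one.
        """
        row1, row2 = a + b, c + d
        col1 = a + c
        total = comb(row1 + row2, col1)

        def prob(k):
            # probability that the top-left cell equals k, margins fixed
            return comb(row1, k) * comb(row2, col1 - k) / total

        p_obs = prob(a)
        lo, hi = max(0, col1 - row2), min(row1, col1)
        # the tiny slack guards against floating-point ties
        return sum(prob(k) for k in range(lo, hi + 1)
                   if prob(k) <= p_obs * (1 + 1e-9))

    # bfpF mutant: 4 of 13 volunteers got diarrhea; wild-type EPEC: 11 of 13
    p = fisher_exact_two_sided(4, 9, 11, 2)
    print(f"p = {p:.4f}")  # p = 0.0154, below the conventional 0.05 threshold
    ```

    So the gap between the two groups is indeed unlikely to be a chance fluctuation, consistent with the article's description of the results as significant.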

    Schoolnik's team concludes that the bfpF mutant can infect and colonize the human gastrointestinal tract but fails to disperse, which severely reduces its power to cause diarrhea. “It all links in together very nicely and very logically,” says Alan Phillips, a pediatric gastroenterologist from London's Royal Free Hospital. “If you can't aggregate and disaggregate, then you're not going to colonize very effectively.”

    Microbiologist James Kaper of the University of Maryland, Baltimore, agrees that the study provides “conclusive evidence that BFP are required for full virulence of this organism.” But he is not yet convinced that the human tests prove that aggregate dispersal is critical for virulence. “The dispersal phase is a reasonable hypothesis,” he says, but the mutant bacteria may colonize different sites in the intestine, or other unknown factors may contribute to reduced virulence.

    Donnenberg agrees that human tests can't answer all the questions. “We consider the human model the gold standard, but there are big limitations,” he says. “The volunteer is a big black box. We put bugs in one end and measure diarrhea out the other end, and what happens inside we really have no clue.”


    El Niño Drives Spectacular Flower Show

    1. Richard A. Lovett
    1. Richard A. Lovett is a writer in Portland, Oregon.

    TUCSON, ARIZONA—The serrated Tucson Mountains that rise above their namesake city are usually hospitable only to cacti and other hardy desert life-forms. This spring, however, the Tucsons' volcanic soil—along with much of the rest of the Southwestern desert—erupted in wildflowers, from golden poppies to velvet-red ocotillo and the sunflower blooms of brittlebush. Experts have proclaimed the display one of the desert's most dazzling blooms of the century.

    The rainbow-hued outburst came courtesy of El Niño, the Pacific Ocean warm-up blamed for the rainstorms last winter that drenched much of the Southwest and triggered fluky weather elsewhere in the Americas. Although the capricious climate was a bane to many a mud-encrusted homeowner, it has been a boon to scientists hoping to learn more about how wildflowers survive in harsh climates—and why not every flower blooms in every rainy year.

    Good wildflower blooms hit the Southwest about once a decade, says botanist Mark Dimmitt of the Arizona-Sonora Desert Museum near Tucson. But the last banner year, he says—when “all you have to do is go out in the desert and see flowers everywhere”—was 1979. Nineteen relatively flower-poor years followed, until last winter's El Niño, combined with a late-September hurricane that swept inland from the Baja Peninsula.


    Ocotillo (left) and brittlebush helped fuel El Niño's floral fireworks.


    Throughout the winter and early spring, El Niño-driven storms dumped two to four times the normal rainfall in the Arizona-California desert, where precipitation ranges from 7 to 30 centimeters a year. The rule of thumb, Dimmitt says, is that fall rains and cool, wet winters—the classic El Niño pattern—favor lupines, daisies, and other winter-germinating annuals. But total moisture is “less than half the story,” he adds. During some wet years, Dimmitt says, grasses—not flowers—come out in force, along with other plants that paint the desert green but neglect the rest of the palette. Even in this year's prime wildflower conditions, some desert pockets responded only modestly.

    One reason for this pattern, says Dimmitt, is that the flowers engage in “temporal niche separation,” in which species occupy the same tract by each thriving in years with different climates. In 1979, for instance, owl clover reigned. But this year the fuzzy purple flower “was almost nonexistent,” he says, perhaps because early rains last fall favored other species. Dimmitt is compiling photos of past blooms to compare with weather data to better understand which wildflowers thrive under which conditions.

    Another reason why a wet winter might not draw a full response from some flowers is “bet hedging,” says University of Arizona, Tucson, ecologist D. Lawrence Venable. He compares the tactic to investors who avoid buying a single stock: They might make a heap of money, or they might go broke. The same can happen to a flower population that bets everything on a wet year—in this case, going broke means extinction. That's because sprouts that germinate after the first rainfall stand a risk of getting parched in a later drought.

    Bet-hedging models predict that the drier the environment, the more important it is to a flower population's survival that only a small fraction of its seeds germinate in any given year. A just-completed study on Indian wheat seeds suggests that prediction may not hold in Arizona, says Venable grad student Maria Clauss. After receiving comparable levels of precipitation, seeds from scorching southwestern Arizona actually appeared to have higher germination fractions than seeds from regions near Tucson that usually get up to three times as much rain. Clauss thinks she knows why: In the harsher desert areas, many winters bring no measurable rain, and then El Niño rolls in with repeated drenchings. This means that if it rains at all, it's likely to rain a lot, and the best survival strategy may be for seeds to bet heavily on El Niño: Germinate early and in large numbers.
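    The logic behind that baseline prediction can be sketched with a toy model in the spirit of Cohen's classic germination analysis; all parameters below (seed yield, soil-bank survival, the 1% search grid) are illustrative assumptions, not field estimates. Each winter is wet with some probability: germinated seeds reproduce only in wet years and die in droughts, dormant seeds persist in the soil either way, and the lineage's long-run growth is the expected log of the yearly multiplier:

    ```python
    import math

    def long_run_growth(g, p_wet, seed_yield=10.0, soil_survival=0.9):
        """Expected log growth of a lineage germinating fraction g each year.

        Wet year: each germinated seed produces seed_yield new seeds.
        Drought year: germinated seeds die. Dormant seeds persist in the
        soil bank with probability soil_survival either way.
        """
        wet = g * seed_yield + (1 - g) * soil_survival
        dry = (1 - g) * soil_survival
        return p_wet * math.log(wet) + (1 - p_wet) * math.log(dry)

    def best_fraction(p_wet):
        """Germination fraction (on a 1% grid) maximizing long-run growth."""
        return max((i / 100 for i in range(1, 100)),
                   key=lambda g: long_run_growth(g, p_wet))

    # A site that is wet 80% of winters favors much heavier germination
    # than one that is wet only 30% of winters.
    print(best_fraction(0.8), best_fraction(0.3))  # → 0.78 0.23
    ```

    With these made-up numbers, the optimal germination fraction falls from 0.78 at the wetter site to 0.23 at the drier one, which is the baseline pattern the models predict and that the Indian wheat data appear to run against.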

    Still, just how temperature, rainfall timing and amount, and plants' own strategies interact to cue an El Niño bloom remains a mystery. “When it starts to rain, nobody can predict if it's going to be a good wildflower year,” says Dimmitt. “I don't make predictions.”


    Cosmic Web Captures Lost Matter

    1. James Glanz

    SAN DIEGO—Like the parents of acquisitive toddlers, astronomers have gotten used to losing sight of items they know must be around somewhere. At least 90% of the matter in the universe, for instance, must be in some exotic “dark” form that has not yet been seen directly. More irritating to some astronomers is that they cannot even see most of the universe's ordinary, or baryonic, matter. Billions of years ago—in the era visible in the most distant reaches of space—the missing baryons (protons and neutrons) formed sprawling complexes of hydrogen clouds. Then they vanished.

    Or did they? In a talk at an American Astronomical Society meeting here earlier this month, Jeremiah P. Ostriker, a cosmologist who is also provost of Princeton University, predicted that observers will soon find most of the ordinary matter in the universe right under their noses—resting on the kitchen table, so to speak. Computer simulations by Ostriker and Princeton's Renyue Cen show that the primordial clouds condensed over time into a vast, filamentary network of fully ionized gas, or plasma—a cosmic cobweb that now links galaxies and galaxy clusters.

    In the process, the simulations say, the plasma heated up to about a million degrees kelvin, so it now gives off a faint x-ray glow that is extremely difficult to detect with present satellites. If Cen and Ostriker are right, “most of the baryons could live in this warm intergalactic medium and be nearly invisible,” says August Evrard, a cosmologist at the University of Michigan, Ann Arbor. Soon-to-be-launched satellites, like NASA's Advanced X-ray Astrophysics Facility (AXAF), ought to be able to see the plasma, said Ostriker: These missions “have a fair chance of finding most of the baryons in the universe.” And at the end of Ostriker's talk, one x-ray observer announced that he may already have seen a hint of these hot filaments.

    Gravity's gossamer.

    A simulation shows how primordial clouds might have collapsed into a plasma web over hundreds of millions of light-years; an x-ray observation (bottom) may show a hot spot in the network.


    Cen and Ostriker's simulations take their lead from what has become the standard picture of structure formation in the universe. In this view, the dark matter's gravity slowly amplifies slight ripples in the primordial universe. As the waves steepen, the baryons collect like froth at the top of the waves, where the density is greatest.

    That froth gave rise to filaments and clusters of galaxies and, when the universe was less than half its present age of roughly 13 billion years, to vast gas clouds. Neutral—non-ionized—hydrogen in the clouds absorbed the light of even more distant beacons called quasars, making the gas visible to observers today. The densely packed spikes of absorption in the quasars' spectra give these distant clouds their name—the Lyman-α forest—and let observers gauge how much baryonic matter they contained.

    Astronomers can then compare the mass of the clouds with the total amount of baryonic matter that must have emerged from the big bang, which can be gauged from chemical tracers such as the amount of deuterium that survived the explosion (Science, 7 June 1996, p. 1429; 10 January 1997, p. 158). Although the comparison is fraught with uncertainties, the two numbers suggest that 80% to 90% of the baryons made in the big bang can be found in the ancient forest. “We have it tied pretty solid,” says Michael Rauch of the California Institute of Technology in Pasadena.

    In the billions of years since then, some of the gas has been consumed by star formation and sucked into the cores of great clusters of galaxies, where it is heated to hundreds of millions of degrees and gives off bright, easily detected x-rays. But all of those reservoirs account for less than half of the original baryons. “The question is, where has the other stuff gone?” says Rauch.

    Cen and Ostriker's simulations suggest that most of it is still wafting out there in intergalactic space. As gravity continued its work, the simulations show, the clouds collapsed into a network of filaments. “The metaphor of waves breaking is right,” says Ostriker: The collapse generated turbulence and shock waves, which heated the baryonic “froth” from less than 100,000 kelvin to more than 1,000,000 kelvin. That's hot enough to ionize all of the neutral hydrogen but not so hot that the plasma would outshine the obscuring gases in our own galaxy by very much.

    Evrard is reserving judgment on some of the conclusions until he sees the details of the physics code, which Cen and Ostriker will report later in a paper they say is now under review at Science. But just after Ostriker finished his presentation, he got the kind of response every theorist dreams of. Q. Daniel Wang, an astronomer at Northwestern University, raised his hand and said that he may already have found a faint gleam from the filamentary plasma in measurements from the Roentgen x-ray satellite (ROSAT). In one observation, Wang may have seen the warmest part of the network directly (see graphic). In another, he attempted to subtract out the foreground glare of our galaxy, revealing what could be the general background glow of the network.

    Wang says he has already been awarded observing time on AXAF—scheduled for launch later this year—which has better sensitivity and spatial resolution than ROSAT can muster. More evidence for the misplaced baryonic matter, if it's there, could come quickly.


    Possible New Weapon for Insect Control

    1. Evelyn Strauss
    1. Evelyn Strauss is a free-lance writer in San Francisco.

    In the past 15 years, genetic engineers have created new strains of crop plants with their own built-in insecticides: bacterial toxins that can kill insect pests that munch on the plants. It's an elegant scheme, promising to keep harmful insects in check without exposing other organisms to insecticidal sprays. But so far the plant genetic engineers have had to rely on only a few types of toxins, from the bacterium Bacillus thuringiensis (Bt). This has raised concerns that the target insects will become resistant to the Bt toxins, leaving the plants defenseless. Now, scientists have identified a promising new group of toxins that might eventually be used instead of Bt, or in combination with it.

    On page 2129, a team led by Richard ffrench-Constant, an insect toxicologist at the University of Wisconsin, Madison, reports the discovery of proteins that the bacterium Photorhabdus luminescens uses to kill a wide variety of insects, including common pests. “This provides evidence for a category of insecticides that we didn't know about before,” says David Fischhoff, president of Cereon Genomics, a Monsanto subsidiary, in Cambridge, Massachusetts.

    The researchers also showed that the toxins work when eaten by an insect pest—a prerequisite for use in genetically altered plants. They now hope that toxin genes from P. luminescens can eventually be used, like the Bt toxin genes, to produce new strains of insect-resistant plants. Combining the new toxins with Bt could ease the resistance problem, they say, in part because an insect is very unlikely to become resistant simultaneously to two toxins, as long as they kill by different mechanisms.

    This quest for new insecticide genes has led researchers to some odd places, including the gut of certain roundworms, the heterorhabditid nematodes, where P. luminescens resides. Some gardeners use this nematode as a natural form of insect control. It wriggles into the circulatory system of an insect and releases the bacteria, which produce something that “turns the insect into soup,” as ffrench-Constant puts it. The nematode reproduces in the carcass, producing tens of thousands of young, each of which swallows a dose of bacteria before emerging and seeking new victims. “It makes Aliens look like a picnic,” says ffrench-Constant.

    Although scientists have known about this phenomenon for some time, no one had chased down the P. luminescens toxin. David Bowen, Michael Blackburn, and Thomas Rocheleau in ffrench-Constant's lab have now done so by growing P. luminescens in lab cultures and tracking the toxic activity to proteins purified from the broth in which the bacteria were grown. They found it in a high-molecular-weight protein fraction, composed of four complexes, designated A through D, each of which weighs about a million daltons.

    Because P. luminescens normally enters the insect circulatory system, ffrench-Constant and his colleagues didn't know whether the toxins could kill when taken by mouth. But when they fed complex A to tobacco hornworms, the insects died. The team has since found that complex D also has high oral toxicity, while B and C have little effect on the tobacco hornworm. The researchers don't yet know exactly how the toxins work.


    The bacterium P. luminescens lights up in—and kills—tobacco hornworms.


    To find the toxin genes, the Wisconsin team raised antibodies to the complexes and used them to probe a library of P. luminescens genes expressed in cells of the bacterium Escherichia coli. Once the researchers identified the genes, they went on to prove that the isolated proteins, and not a minor component in their test mixtures, were indeed the toxins. They did this by engineering P. luminescens bacteria that couldn't produce the proteins.

    As predicted, bacteria missing either complex A or D were less toxic to insects than the wild-type strain, while a strain lacking both complexes was harmless. “The nematodes have been used for some time in insect control as an organic gardening tool, but it's only now that we're beginning to understand the molecular basis for how the bacteria kill the insect,” says ffrench-Constant. “In doing that, we've discovered a potential biopesticide.”

    Many challenges remain before the newly discovered molecules might be used in crops, however. When the researchers introduced the toxin genes into E. coli, for example, the bacterial cells made the proteins but did not secrete them, and they were not toxic. This result suggests it might be difficult to coax plants to produce the proteins in active form. The researchers are currently trying to identify other molecules that P. luminescens requires to export the toxins from the cell and activate them. In addition, tests are just getting under way to see if the toxins are safe for humans and wildlife, as the Bt toxins are. Still, scientists are encouraged.

    “It's always good to find a novel way to control insects,” says Bruce Tabashnik, an entomologist at the University of Arizona, Tucson. And, he adds, because many insect-killing nematodes harbor bacteria, there may be other, related insecticidal toxins. “It's a whole new realm, waiting to be explored,” Tabashnik says.


    Dinosaur Fossils, in Fine Feather, Show Link to Birds

    1. Ann Gibbons

    Paleontologists seeking to trace the evolution of reptiles into birds have long been hunting a strange beast: a feathered dinosaur. Such a creature, they say, would prove once and for all what many researchers already believe: that the dinosaurs didn't die out completely but instead took wing and evolved into what we now call birds. Two years ago a chicken-sized dinosaur with featherlike filaments bristling along its back raised hopes, but researchers couldn't agree that the bristles were really feathers, and the debate simmered on (Science, 1 November 1996, p. 720, and 14 November 1997, pp. 1229 and 1267).

    Now, a team of Chinese, Canadian, and American paleontologists claims that in rich fossil beds in China they have discovered the real thing—dinosaurs with feathers. In reports in this week's issue of Nature and the July issue of National Geographic, they describe two species of turkey-sized theropod (meat-eating) dinosaurs that have unmistakable feathers fanning out from their forearms and tails. “You can't get around the fact that these are feathers on dinosaurs,” says Philip J. Currie, a paleontologist at the Royal Tyrrell Museum of Palaeontology in Drumheller, Alberta, a co-author of the papers. These fossils “provide the best evidence to date that birds were derived from theropod dinosaurs,” says Peter Wellnhofer, a paleontologist at the Bavarian State Collection of Paleontology and Historical Geology in Munich. The plumage on these flightless creatures also suggests that feathers did not evolve for flight but were first used for insulation, display, or some other purpose.

    Such claims ruffle the feathers of a small band of doubters, who believe the ancestors of birds branched off from the reptiles before dinosaurs appeared. The feathers are real enough, says Larry D. Martin, a paleontologist at the University of Kansas, Lawrence—but they just show that the creatures are birds, not dinosaurs. “I think they've found a group of flightless birds,” he says.

    The new finds come from a site in rural Liaoning Province of northeast China—a place being called a paleo-Pompeii, where millions of birds, dinosaurs, fish, insects, and plants died suddenly, sometime between 120 million and 135 million years ago (Science, 13 March, p. 1626). Over the past year and a half, local collectors brought three new specimens with distinct feathers to paleontologist Ji Qiang, director of China's National Geological Museum in Beijing and lead author of the Nature report. At first, says Ji, he and co-author Ji Shu-an, also of the National Geological Museum, were struck by the resemblance of the fossils to Archaeopteryx, the 150-million-year-old Bavarian fossil long considered the first bird. They thought all of the fossils were a single new species, Protoarchaeopteryx. But as they examined the specimens with Currie last fall, they realized that two of the fossils had much shorter arms and longer, sharper teeth with deeper roots than the others. They named this second creature Caudipteryx zoui: Caudipteryx, or “tail feather,” for its long tail plumes, and zoui after Zou Jiahua, vice premier of China, an avid research supporter.

    Unlike the first purported dinosaur with feathers—a species called Sinosauropteryx from the same beds—both new species have “unambiguous” feathers, says Mark Norell, a paleontologist at the American Museum of Natural History and another co-author. “These are just like feathers in modern birds,” he says. The fossils clearly reveal both downy feathers and more advanced modern feathers with visible internal structure, including the vane, the weblike portion of the feather formed by barbs and barbules on either side of the shaft.

    Although the feathers link these creatures to birds, their bodies tie them to theropods, the researchers say. Caudipteryx, in particular, resembles the vicious Velociraptor of Jurassic Park fame. Norell rattles off several features that make the link—short arms, serrated teeth, a theropodlike pelvis, a bony bar behind the eye. The researchers scored the new fossils for 90 characters, he says, and found that Caudipteryx and Protoarchaeopteryx are most closely related to theropod dinosaurs, sitting on a branch of the dinosaur family tree between Velociraptor and Archaeopteryx.

    Most paleontologists who have seen photos of the fossils agree with that ancestry. “There's no doubt that they are more primitive than Archaeopteryx,” says Wellnhofer, an Archaeopteryx expert who has seen a fossil of Protoarchaeopteryx. “Now, we have a fundamental problem of how can we tell a bird apart from a dinosaur?” Feathers are no longer a defining trait of birds, points out Kevin Padian, a paleobiologist at the University of California, Berkeley, who wrote a commentary on the fossils for Nature.

    The dinosaurian origin of feathers is “the most amazing thing about the discovery,” says Padian. “Now we can chart the evolution of feathers and how the flight apparatus was assembled and refined.” Because the feathered arms of these animals were too short to work as wings, it seems that feathers did not evolve for flight. “The first feathers were used for warmth, balance, or courtship display,” Ji suggests. “The ability for flight came later.”

    But Martin and other skeptics still think that where there are feathers, there are birds. Martin cites features such as a shortened tail and an ossified sternum to argue that both new species are more birdlike and advanced than Archaeopteryx. In his avian family tree, the Chinese fossils are the flightless descendants of earlier birds that could fly—the same ancestral birds that also gave rise to Archaeopteryx and modern birds. He notes that the timing is right for this, as the new fossils are some tens of millions of years younger than Archaeopteryx.

    Norell and Currie counter that it's not unusual for a more primitive animal to survive as a sort of “living fossil” for many millions of years. And an overwhelming number of characters place these specimens with theropods, not birds, they say. In fact, similar analyses had already convinced most paleontologists that dinosaurs gave rise to birds. “The question was settled for me a while ago,” says Wellnhofer. “But it's always nice to have another piece of evidence.”
