News this Week

Science  24 Apr 1998:
Vol. 280, Issue 5363, p. 517

    Tracking Insulin to the Mind

    1. Ingrid Wickelgren


    When one thinks of the hormone insulin, what comes to mind is not … the mind. Insulin has long been known as the signal that tells every muscle, liver, and fat cell to pull the sugar glucose in from the blood so it can be used to generate the energy the body needs to survive. But the hormone is supposed to hold no sway over the brain—or so the endocrinology textbooks say. Now, growing, although controversial, evidence is beginning to contradict this dictum, suggesting not only that insulin is vital in the brain but that the hormone may influence the brain's most precious functions: learning and memory.

    Several lines of work in both lab animals and humans suggest that when neurons in cognitive brain areas such as the hippocampus and cerebral cortex don't get enough insulin or can't respond to it properly, everything from very mild memory loss to Alzheimer's disease can result. “Insulin is active in the brain in more significant ways than people have assumed,” says behavioral neuroscientist Claude Messier of the University of Ottawa in Ontario, Canada, whose own work is contributing to that conclusion. “It's a hot topic,” adds Mony de Leon of New York University (NYU) School of Medicine, who is one of the researchers newly attracted to the field. Exploring insulin's role in cognition, experts say, might one day point the way to drugs that could reduce memory loss in Alzheimer's disease and normal aging.

    Other researchers aren't so sure. “There simply isn't enough information to say that insulin improves memory,” says psychologist Paul Gold at the University of Virginia, Charlottesville. One major problem with the insulin hypothesis is that even its proponents can't agree on how the hormone might influence cognition.

    Some experts suggest that insulin works in the brain much as it works elsewhere in the body—by chaperoning glucose into brain neurons, thereby helping them maintain their energy production. In that case, memory loss might result when brain cells lack insulin or become resistant to it, starving them of glucose—a condition that would amount to diabetes of the brain. But there are also hints that insulin has other beneficial roles, such as spurring neuronal growth and inhibiting the formation of brain lesions called neurofibrillary tangles that characterize Alzheimer's disease.

    Lighting up.

    Staining with radioactive insulin shows that the rat brain is well supplied with insulin receptors. White indicates the greatest receptor density and purple the least, with yellow in between.


    Early inklings that insulin might play a role in cognition came in the mid-1980s when a team led by diabetes expert Jesse Roth and neuroscientist Candace Pert, who then were both at the National Institutes of Health, discovered that parts of the rat brain important to learning and memory, including the hippocampus and parts of the cerebral cortex, are densely peppered with the receptors through which insulin exerts its effects on cells. Nobody knows just what the receptors are doing there. But neuroscientist Siegfried Hoyer at the University of Heidelberg in Germany began contemplating the heretical notion that they could help neurons metabolize glucose. At the time, virtually all experts believed that this does not require insulin, primarily because no one had found glucose-transporting molecules that respond to insulin in neurons.

    Early links to insulin

    But Hoyer's team soon found a hint that defective glucose metabolism could contribute to Alzheimer's disease. They showed that patients with early-stage disease have much more unmetabolized glucose in their cerebral blood than controls have. Because the brains of the patients showed no corresponding decrease in oxygen consumption, Hoyer concluded that they were keeping up their metabolic rates abnormally, by oxidizing chemicals other than glucose. Indeed, he suggested that the neurons, like starving people, might be devouring parts of themselves and thus contributing to the cell damage and death that occurs in Alzheimer's disease.

    Hoyer also reasoned that a defect in the ability of the patients' brain cells to respond to insulin might be what was keeping the glucose levels high in the blood coming from their brains, just as patients with type II diabetes have high levels of blood glucose because their liver, muscle, and fat cells are resistant to insulin. To test the idea, he decided to study the effect of disarming the insulin receptor in the brains of rats, making them insensitive to insulin.

    When his team injected streptozotocin, a chemical that damages the insulin receptor, into the brains of 18 rats, the researchers found that it seriously impaired the rats' ability to remember a compartment in which they had received an electric shock. And as yet unpublished work by the Heidelberg group now demonstrates that the memory loss that results from impaired insulin signaling in rats is progressive, like the cognitive decline seen in Alzheimer's patients. Concludes Hoyer: “We believe that some cases of Alzheimer's disease are like diabetes mellitus.”

    By the early 1990s, other lines of research also began suggesting a role for insulin—or at least glucose metabolism—in memory. Glucose had been shown to enhance memory in rats, and Gold and his colleagues found that temporary and modest increases in blood levels of glucose can improve memory in people as well, including both Alzheimer's patients and normal elderly adults. Because glucose injections into the brains of rats enhanced their memory, Gold concluded that glucose exerts its effects by acting directly on neurons. “Insulin cannot explain much of what we know about glucose enhancement of memory,” he maintains. But neuroscientist Suzanne Craft of the Seattle Veterans Administration Medical Center and the University of Washington and her colleagues thought that insulin might be behind effects such as those Gold saw.

    She and her colleagues set out to separate the effects of insulin from those of glucose alone in Alzheimer's patients. In an initial experiment, the researchers found that both insulin and glucose infusions produced striking improvements in verbal memory in both early-stage Alzheimer's patients and controls. For example, the patients' scores went from “borderline” dementia to “low average.” But because glucose infusions normally produce a rise in insulin, Craft and her colleagues repeated the experiment in another group of Alzheimer's patients, this time raising blood glucose to a level that previously improved memory while preventing an insulin rise. And they saw no memory improvement. Together, the two sets of experiments show that insulin does indeed mediate the cognitive enhancements originally seen with glucose, Craft reported at the 1996 meeting of the Society for Neuroscience.

    More recently, her team has found hints that something has gone wrong with the hormone in the brains of people with Alzheimer's. In the January 1998 issue of Neurology, her team reports finding both significantly higher plasma insulin levels and lower insulin levels in the cerebrospinal fluid (CSF) of Alzheimer's patients as compared to controls. The researchers also found a correlation between the ratio of CSF insulin to plasma insulin and severity of dementia in the 25 patients they studied, with the more severely afflicted patients displaying the lowest ratios.

    The imbalance might result because insulin isn't working effectively in the brains of these patients, Craft says. The pancreas might then churn out more insulin to compensate, which in turn might cause cells at the blood-brain barrier to produce fewer insulin transporter molecules, reducing the amount of the hormone that slips into the brain. Alternatively, abnormally fast breakdown of insulin in the brain could produce a deficit of CSF insulin and then send a signal to the periphery to rev up insulin production.

    Hoyer's team has also found hints of some kind of insulin defect in the brains of Alzheimer's patients. In a study to appear in the Journal of Neural Transmission, they found unusually high numbers of insulin receptors in the cortical areas of brains from 17 patients who died of Alzheimer's disease. At the same time, these receptors seemed unable to convey the insulin signal properly, because an enzyme that comprises part of them was less active than normal. The scientists interpret the proliferation of receptors as the brain's attempt to compensate for a lack of insulin—a deficit that, they speculate, is compounded by a defect in the receptor itself.

    Hoyer and others don't propose that insulin resistance is the primary cause of Alzheimer's disease, but they believe it could be one of several contributing factors, which include the accumulation of the small protein β amyloid into so-called plaques, a hallmark feature of the disease. Exactly how various factors might interact to produce dementia is not yet clear. However, some experts theorize that milder forms of insulin resistance, or perhaps insulin resistance in the absence of other factors linked to dementia, could lead to lesser memory deficits such as those that appear in normal aging and with type II diabetes, a disease that is often accompanied by memory problems.

    Unanswered questions

    Even if other experiments confirm that problems in insulin signaling can create cognitive deficits and contribute to dementia, researchers will still need to explain how. Hoyer, for example, has some evidence to support his idea that the insulin signaling problems could create a memory-sapping energy deficit by impairing the ability of neurons to take up and metabolize glucose. He has shown, for instance, that treatment with streptozotocin—the drug that inactivates the insulin receptor—interferes with glucose metabolism in rat brains. Other researchers suggest that inadequate glucose metabolism might also create a deficit of the memory-enhancing neurotransmitter acetylcholine, which requires acetyl-CoA, a product of glucose breakdown, for its synthesis.

    However, so far no one has shown conclusively that insulin promotes glucose uptake by neurons. Indeed, only now have researchers found insulin-sensitive glucose transporters in the mature mammalian brain. But even biochemist Ian Simpson, who, with his colleagues at the National Institute of Diabetes and Digestive and Kidney Diseases, demonstrated that such transporters exist in adult rodent brains, won't speculate about their role, which he calls simply “intriguing.”

    There is, however, evidence that insulin benefits neurons in other ways. Last August, for example, Ming Hong and Virginia Lee at the University of Pennsylvania School of Medicine in Philadelphia showed in neuronal cell cultures that insulin inhibits a key event in the formation of one of the pathological hallmarks of Alzheimer's disease, the neurofibrillary tangles. Researchers have previously linked the formation of the tangles to the addition of excess phosphate groups to a protein called tau, the principal tangle protein. Lee's team has now shown that insulin prevents this hyperphosphorylation, apparently by dampening the activity of one of the key tau-phosphorylating enzymes, glycogen synthase kinase-3.

    In addition, a pair of papers published last December in Molecular Brain Research by neurobiologist William Wallace of the National Institute on Aging in Baltimore and his colleagues suggests that insulin may also act as a neuronal growth factor. The researchers were investigating how amyloid precursor protein (APP), β amyloid's parent molecule, affects certain cultured rat cells. The Wallace team found not only that APP treatment caused these cells to send out neuronlike extensions but that APP promotes this growth by activating the same molecular signaling pathway that insulin does. The finding suggests that insulin too may promote neuron outgrowth—and may thus help maintain neuron health. “In addition to or instead of insulin's effect on glucose and metabolism, insulin may be acting along this pathway to promote growth,” Wallace says.

    But before the larger community of neuroscientists is convinced that insulin is doing anything to affect cognition in the adult mammalian brain, researchers face two challenges. They will have to identify the molecular ripples insulin sends out when it contacts neurons in living animals. And they will then have to pinpoint problems along those insulin-sensitive molecular pathways in the brains of animals and humans with signs of memory loss. “There's evidence suggesting that insulin ought to play a role in cognitive function, but it doesn't add up to a complete story,” cautions NYU's de Leon.

    If the gaps in this story are filled in, experts might design drugs that augment specific effects of insulin in order to counteract memory loss. They could also test their hunch that insulin resistance is contributing to the problem by trying to correct it with the next wave of diabetes drugs, such as the recently approved compound Rezulin, and seeing if memory improvements follow. Craft is optimistic: “In my mind, one of the best predictors of how you age cognitively is your glucoregulatory status,” she says.

    But the jury is still out. Says Ottawa's Messier: “A key regulating hormone in our body—that is, insulin—seems to have a profound effect on our brain, but we don't know what it's doing.”



    From a Turbulent Maelstrom, Order

    1. James Glanz

    Daniel Dubin and Dezhe Jin didn't set out to introduce a Zen koan—a paradoxical statement that stimulates the intuition—into physics. But the team at the University of California, San Diego (UCSD), is offering a notion that sounds very like a Zen paradox to explain a bizarre phenomenon seen 4 years ago in turbulent gases of electrons. The vortices that develop in these fluids arrange themselves in neat, long-lasting “crystals,” looking like a phalanx of tornadoes marching in perfect formation. Dubin and Jin have now shown that this kind of order can be the natural consequence of an increase in disorder.

    Theory as well as common sense had rebelled at the finding, because stable patterns should be anathema to the high entropy, or randomness, of turbulent flows. But in a flash of insight, the team realized that each large vortex in these electron fluids acts as a Mixmaster, stirring and randomizing the background flow. That entropy increase opens the way for the vortices to gel into predictable, orderly, crystalline patterns. “The moral,” says Dubin with the cryptic laugh of a Zen master, “is that entropy is maximized except where it isn't.”

    Other physicists are suitably bemused. “It's very hard to get your head around,” says Michael Brown, a physicist at Swarthmore College in Pennsylvania, of the theory. But the theory, which has been accepted for publication in Physical Review Letters, accurately predicts not only the crystals but also the distribution of vorticity or “swirliness” seen in the random sloshing of the background. “It is very much in quantitative agreement with the experiment,” says UCSD's Fred Driscoll, whose own group discovered the crystals (Science, 9 December 1994, p. 1638). The new understanding should help the teams search for the behavior in other laboratory systems and ultimately in nature.

    In the original experiments, Driscoll and his colleagues, including Kevin Fine, Ann Cass, and others, caged about a billion electrons at a time in a vacuum chamber using strong magnetic fields. Electrons trapped in this way bounce back and forth so rapidly between charged plates capping the field lines that they smooth out any structures that might form in that direction. The experimenters focus on two-dimensional (2D) patterns that develop across the field lines, like the eddies in a spinning bucket of water or the swirls in a suspended soap film. The big difference between the electrons and other turbulent fluids is that Driscoll's magnetically caged plasma has almost no viscosity or friction with the walls, so it offers a purer picture of turbulence and any structures that may emerge from it.

    Driscoll generates turbulent initial states by injecting pulses of electrons from filaments mounted on one of the end caps. He then “photographs” the plasma after various time delays by dumping it onto a phosphorescent screen at the other end, which glows brighter where the electrons are concentrated. These snapshots showed that one or several strong vortices grew as the smaller vortices present in the stormy initial state merged. The vortices, embedded in weaker background eddies, eventually stopped rattling around and “chilled” into crystalline patterns.

    Dubin and Jin tried to understand the crystals by drawing on a venerable tradition dating back to work by David Montgomery, now at Dartmouth College in Hanover, New Hampshire, and others in the 1970s. The pair treated the 2D plasma vortices and eddies statistically. They analyzed the ways in which vorticity could be distributed in the fluid to find the most likely patterns, regardless of how they came about. It's essentially the same approach that says roughly equal numbers of heads and tails will come up when you toss 100 pennies, no matter how individual pennies spin and fall. Because the most likely patterns are also the most disordered ones, theorists call this the maximum-entropy approach.
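    The penny-toss analogy can be made concrete with a quick simulation (an illustration of the statistical reasoning only, not code from the UCSD work): tossing 100 pennies over and over, nearly every trial lands close to an even split, because the near-even configurations vastly outnumber the lopsided ones—the same counting logic that makes the maximum-entropy vorticity distribution the overwhelmingly likely one.

    ```python
    import random

    # Toss 100 pennies in each of many trials and tally how often the
    # head count lands within 10 of an even 50/50 split. The outcomes
    # pile up near the split because far more coin arrangements produce
    # a near-even count than a lopsided one -- the maximum-entropy idea.
    random.seed(0)
    TOSSES, TRIALS = 100, 10_000

    near_even = 0
    for _ in range(TRIALS):
        heads = sum(random.random() < 0.5 for _ in range(TOSSES))
        if abs(heads - TOSSES // 2) <= 10:
            near_even += 1

    print(f"{near_even / TRIALS:.0%} of trials fall within 10 heads of an even split")
    ```

    Run with any seed, the fraction stays above 90%: the "most likely" pattern is not one outcome but a narrow band of near-even outcomes that dominates the count.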

    If the large vortices break up over time, maximum-entropy theory would predict a vorticity distribution something like sand thrown randomly into a pile, with a single, broad peak. But the two theorists recognized, says Dubin, “that the vorticity in the strong vortices is trapped and cannot be mixed. Those vortices are so strong that nothing can get to them.” These persistent, large vortices stir up the background, increasing its entropy and losing energy. And because the total entropy increases, the vortices can settle into an orderly pattern—a crystal.

    Bizarre as the outcome may seem, it's not the first time physicists have recognized that entropy can create paradoxical patches of order (Science, 20 March, p. 1849). And the theory accurately predicts the details of the vortex crystals Driscoll's group observed. “The amazing thing to me, as an experimentalist, is that the theory actually works,” says Driscoll.

    It may also work in systems far from the frictionless gas of electrons. For example, the energy of vortices in a large and nearly 2D system like the film of atmosphere on Jupiter might overwhelm viscosity, allowing such strange effects to emerge. They could only do so, however, if crystalline patterns can take shape from vortices that can spin in both directions, unlike those in the electron gas, which are forced to spin in the same direction by the interaction of the electrons and the magnetic field lines.

    So far Dubin and Jin's theory doesn't say whether that is possible. But both the theorists and the experimentalists are hoping to find out. Adding the electrons' positively charged, antimatter counterparts—positrons—to the laboratory systems would produce both left- and right-handed vortices. To paraphrase a famous koan, that should reveal whether the crystals amount to more than the sound of one hand clapping.


    Genes May Link Ancient Eurasians, Native Americans

    1. Virginia Morell

    Anthropologists have long assumed that the first Americans, who crossed into North America by way of the Bering Strait, were originally of Asian stock. But recently they have been puzzled by surprising features on a handful of ancient American skeletons, including the controversial one known as Kennewick Man—features that resemble those of Europeans rather than Asians (Science, 10 April, p. 190). Now a new genetic study may link Native Americans and people of Europe and the Middle East, offering tantalizing support to a controversial theory that a band of people who originally lived in Europe or Asia Minor were among the continent's first settlers.

    The new data, from a genetic marker appropriately called Lineage X, suggest a “definite—if ancient—link between Eurasians and Native Americans,” says Theodore Schurr, a molecular anthropologist from Emory University in Atlanta, who presented the findings earlier this month at the annual meeting of the American Association of Physical Anthropologists in Salt Lake City.

    Researchers studying unusual “Caucasoid-like” physical features in early American skeletons were immediately excited by the results. “It's an intriguing study,” says Richard Jantz, a biological anthropologist at the University of Tennessee, Knoxville. Because European peoples presumably must have passed through Asia to reach North America, “it suggests that there might have been a distribution of people 10,000 or more years ago throughout Asia who looked more European than [Asians] do now.” But “it's far too early and far more data are needed” before researchers can be sure of such history, cautions evolutionary geneticist Emoke Szathmary, who is president of the University of Manitoba in Winnipeg.

    The team, led by Emory researchers Michael Brown and Douglas Wallace, and including Antonio Torroni from the University of Rome and Hans-Jurgen Bandelt from the University of Hamburg in Germany, was searching for the source population of a puzzling marker known as X. This marker is found at low frequencies among modern Native American populations and has also turned up in the remains of ancient Americans. Identified as a unique suite of genetic variations, X is found on the DNA in the cellular organelle called the mitochondrion, which is inherited only from the mother.

    Researchers had already identified four common genetic variants, called haplogroups A, B, C, and D, in the mitochondrial DNA (mtDNA) of living Native Americans (Science, 4 October 1996, p. 31). These haplogroups turned up in various Asian populations, lending genetic support for the leading theory that Native Americans descended primarily from these peoples. But researchers also found a handful of other less common variants, one of which was later identified as X.

    Haplogroup X was different: It was spotted by Torroni in a small number of European populations. So the Emory group set out to explore the marker's source. They analyzed blood samples from Native American, European, and Asian populations and reviewed published studies. “We fully expected to find it in Asia,” like the other four Native American markers, says Brown.

    To their surprise, however, haplogroup X was confirmed only in the genes of a smattering of living people in Europe and Asia Minor, including Italians, Finns, and certain Israelis. The team's review of published mtDNA sequences suggests that it may also be in Turks, Bulgarians, and Spaniards. But Brown's search has yet to find haplogroup X in any Asian population. “It's not in Tibet, Mongolia, Southeast Asia, or Northeast Asia,” Schurr told the meeting. “The only time you pick it up is when you move west into Eurasia.”

    Researchers are continuing to check for the marker in Asia, but if it never appears there, “then we have a big gap to explain,” says Schurr. It's possible that the source X population began in Asia and then spread to both the Americas and Europe, but left no descendants in Asia. It may be somewhat more likely, however, “that a small Caucasian band with females migrated from Europe right across Asia and into North America,” says Brown. This group might have left no genetic traces of the journey in Asia because of its small size, or because its Asian descendants went extinct—“which is not unlikely,” says Schurr, given the high turnover rate of different peoples.

    Physical anthropologists say that this connection between Eurasia and early Americans may explain the puzzling features they see in the remains of some of the earliest Americans, such as the 9300-year-old Kennewick Man and his contemporary, the Spirit Cave Mummy from Nevada (Science, 10 April, p. 191). The Spirit Cave skeleton, for example, “doesn't look like anyone from any modern human population,” says Jantz, but rather has a mix of features, such as a long, narrow skull and moderately high but not widely flaring cheekbones. He and others think these features may reflect a more “generalized” human stock that spread across Europe and Asia and into North America more than 10,000 years ago.

    Other geneticists are impressed by the finding, but they urge caution in interpreting the data at this stage. “The connection [between Europe and North America] looks pretty good,” says David Glenn Smith, a molecular anthropologist at the University of California, Davis, whose team has also found X in some paleo-American remains. “But the Asian data, particularly those from Southwest Asia, need to be looked at very closely” before researchers can be sure X isn't present there.

    Indeed, a few teams have identified roughly similar variants in certain Asian populations. But the Brown team says those variants are distinct from the true haplogroup X, which they have defined rigorously for the first time as including both certain sequence mutations and mutations found by slicing the DNA with certain enzymes. By those criteria, there's no sign of an Asian X in their own data or in any other published results, says Brown.

    The team hopes its work will spur other labs to check their data for signs of X. “When that happens, we'll be able to see the true global distribution of X,” says Brown—and perhaps get a clearer picture of the first Americans.


    Versatile Gene Uptake System Found in Cholera Bacterium

    1. Elizabeth Pennisi

    Bacteria are promiscuous gene swappers. Their ability to pass genes for antibiotic resistance from one strain to another is legendary (Science, 15 April 1994, p. 375). Now, with more and more microbial genome sequences pouring out, researchers are stumbling across unexpected resemblances between the DNAs of evolutionarily distant species—some of which can best be explained by the transfer of other kinds of genes as well. Just how bacteria sustain this traffic in genes has been a puzzle. But in this issue of Science, a research team led by molecular microbiologists Didier Mazel of the Pasteur Institute in Paris and Julian Davies of the University of British Columbia in Vancouver may provide an answer.

    On page 605, the researchers describe new evidence showing that the cholera bacterium Vibrio cholerae has a versatile acquisition system—called an integron—that may capture many different types of genes. Until now, integrons, strips of DNA containing repetitive sequences that allow genes from one organism to be used by another, had been associated only with genes conferring antibiotic resistance.

    Genetic acquisitions.

    The integron of the cholera pathogen uses an integrase enzyme and DNA repeats to pick up genes.

    The Davies team's proof that Vibrio has a functioning integron system includes the cloning of a Vibrio gene for an enzyme that splices genes into the microbe's integron. In other bacteria, this enzyme, called an integrase, is needed to move antibiotic resistance genes into and out of their genomes, and the cloning of the Vibrio enzyme is the best evidence yet that this pathogen can acquire genes from its fellow microbes.

    But it's also a sign that the integrase/integron system could have a much broader role than people had thought, because Vibrio has an integron-like stretch of DNA that is studded with genes for everything from adhesion proteins to toxins—all possibly acquired from other bacteria. “[The work] is the first clear demonstration that the integron system is [used] for the dissemination of other types of genes [besides those for antibiotic resistance],” says integron discoverer Hatch Stokes of Macquarie University in Sydney, Australia.

    Other examples may soon follow, for researchers are also finding evidence that pathogens such as Escherichia coli, which has been linked to several deadly episodes of food contamination, have picked up genes from other organisms that make them more virulent. By boosting researchers' understanding of how bacteria use each other's genes to both enhance their virulence and counteract antibiotics, these findings should ultimately “lead to more effective therapeutics and new vaccines,” says James Musser, a microbiologist and pathologist at Baylor College of Medicine in Houston.

    Until now, most researchers would not have suspected that integrons play a part in the pathogenicity of microorganisms like the deadly E. coli, because all the known integrons were associated with antibiotic resistance. But Mazel, Davies, and a few other people thought they saw evidence in V. cholerae that the integron system might be capable of ferrying more genes than had been thought. Vibrio has repeated sequences that look like those in known integrons, and the sequences flank not only antibiotic resistance genes but also genes coding for toxic proteins and enzymes that put methyl groups on DNA as well as several genes of unknown function, says molecular microbiologist Paul Manning of the University of Adelaide in Australia. Manning discovered Vibrio's repeating sequences, which occur roughly 80 times in one region of the microbe's chromosome, in the 1980s, and in the December 1997 issue of Molecular Microbiology he suggested that the repeats are part of a giant integron.

    In the current work, Mazel and Davies tested whether the V. cholerae repeat is in fact part of an integron. In integrons, a repetitive sequence serves to signal the presence of a gene that can be incorporated into the genome. The researchers made a gene “cassette” that contained a Vibrio gene and a marker gene—which would enable them to tell whether these genes were being expressed—between two copies of the repeat. They then inserted the cassette into a circular piece of DNA called a plasmid and allowed the plasmid to be taken up by another bacterium, E. coli. The E. coli started using the foreign genes. “We showed we could move [the genes] using the Vibrio cholerae repeats,” says Davies. The E. coli cells used the integrase in their own integron, which carries antibiotic resistance genes, to splice the cassette into that integron where the genes can be expressed.

    But complete proof that Vibrio can both take up foreign genes and donate its own required the identification of the pathogen's integrase, and in their initial search, Mazel and Davies were unable to track down the enzyme. Then Mazel spotted a DNA segment in the incomplete Vibrio genome that seemed to have the right sequence to encode a portion of the integrase. He, Davies, and their colleagues went on to clone that gene and have now shown in E. coli that it can splice out gene cassettes. “The assumption is that if it can take [the gene] out, it can put it back in,” he says. “They've done the experimental work” to show that the Vibrio sequence is an integron, comments Milton Saier, a microbiologist at the University of California, San Diego.

    Davies and his colleagues also found integrons in Vibrio samples stored since 1888, which shows that the gene uptake system predates antibiotics. This finding, combined with the integron's large size—it takes up 5% of the chromosome and has 10 times as many genes as any known integron—has led Davies to suggest that it may even be the predecessor to integrons in other pathogens, which may have adapted them to acquire antibiotic resistance genes.

    Whether or not that's the case, this integron could bode ill for new vaccines being developed against cholera, Manning warns. He worries that a live vaccine that uses Vibrio strains whose virulence genes have been removed may still be capable of getting new virulence genes through its integron. “One would need to knock out the integrase,” he says.

    Researchers don't yet know whether integrons have also enabled other microbes to acquire virulence genes. Some genes with an inherent ability to be expressed may have gotten into bacteria and thus wouldn't require integration through an integron. But there is plenty of evidence that, somehow, such gene transfers do take place. For example, at the Conference on Microbial Genomes, which was held in February in Hilton Head, South Carolina, geneticist Fred Blattner reported that his team at the University of Wisconsin, Madison, has found that the pathogenic E. coli strain O157:H7 has a million extra base pairs of DNA compared to a laboratory strain. This extra DNA includes a few genes that are quite similar to genes that code for toxins produced by Yersinia, the flea-borne pathogen that causes bubonic plague.

    And Dieter Söll, Michael Ibba, and their colleagues at Yale University have discovered the gene for an enzyme that seems to have escaped from a microbe that lives in hot environments and taken up residence in the spirochetes that cause Lyme disease and syphilis. Although the gene is not directly related to virulence, its enzyme product might still be a good target for therapy because it is not found in most bacteria. This could lead to a spirochete-specific antibiotic, Söll says.

    As researchers decipher more and more microbial genomes, the transfer of virulence genes by integrons may become a common theme, says Stokes. His prediction: “What we're seeing is the tip of the iceberg.”


    Models Win Big in Forecasting El Niño

    1. Richard A. Kerr

    Predictions of the most recent El Niño were widely regarded as a stunning success: Forecasters warned of torrential rain in California this winter and drought in Indonesia, and they were right. But if meteorologists had dared to rely more heavily on their computer models, those predictions could have been even better—and next time, they may be. That's because this year's El Niño, one of the strongest in a century, was a proving ground for the models, showing which types do the best job at predicting this warming of the tropical Pacific and its effects on global weather patterns. When it comes to models, forecasters learned this year, bigger is better.

    In a recent ranking of predictive efforts, the most ambitious models—which chew up hours of supercomputer time simulating how winds, water, and heat shuttle among land, ocean, and atmosphere—all came in near the top, while less sophisticated models often faltered. “For the first time, the big models got it right,” says tropical meteorologist Peter Webster of the University of Colorado, Boulder. As meteorologist Eugene Rasmusson of the University of Maryland, College Park, puts it, “the more bells and whistles, the better.” Knowing which models to trust, says Jagadish Shukla of the Institute of Global Environment and Society in Calverton, Maryland, “is a big breakthrough. We now have confidence in 6-month forecasts based solely on a model.”

    Even veteran El Niño forecasters who have seen the models fail in the past now say that this year's success has won the models a larger role in forecasting. “My guess is that next time we will rely much more heavily on the [big computer] models than we did this time,” says Ants Leetmaa, director of the Climate Prediction Center (CPC) at the U.S. Weather Service's National Centers for Environmental Prediction (NCEP) in Camp Springs, Maryland, and co-developer of one of the most sophisticated models. That next test—predicting whether El Niño will linger through the fall or switch to its mirror image, La Niña—is already looming. By Christmas, this more difficult test will provide even more convincing proof of the models' mettle—or expose their weaknesses.

    The value of the more complex El Niño models emerged when climate forecaster and statistician Anthony Barnston of the CPC rated a dozen different methods on how well they predicted the Pacific warming, which peaked at the end of last year. As he describes in a paper in the proceedings of the October 1997 Climate Diagnostics and Prediction Workshop, Barnston looked at the predictions each model was offering in February and March of 1997 for the coming fall. Six of the models were so-called empirical models, which make no attempt to simulate the real-world interplay of winds and currents that actually leads to an El Niño. Instead, these models are in essence automated rules of thumb, doing what human forecasters do but in a more objective way. They compare current observations of the tropical Pacific Ocean and atmosphere with comparable data for the periods leading up to El Niños of the past 40 years and issue predictions based on the resemblance. But as a group, these models did poorly this time, as they often have in the past. Three of the six called for only a moderate El Niño by the fall, while three predicted weak warmth or normal conditions.
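
    The empirical schemes described above are, at heart, analog methods: match today's conditions to the most similar past precursor and predict what followed. Below is a minimal sketch of the idea, using hypothetical observation vectors (real schemes rely on regression and more elaborate statistics across many fields):

```python
import math

def analog_forecast(current_obs, past_obs, past_outcomes):
    """Analog (empirical) forecast: find the past precursor pattern most
    resembling current observations and predict the outcome that followed it."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(range(len(past_obs)), key=lambda i: distance(current_obs, past_obs[i]))
    return past_outcomes[best]

# Hypothetical sea-surface-temperature anomaly "patterns" (degrees C)
past_obs = [(0.1, -0.2, 0.0), (1.5, 2.0, 1.8), (-1.0, -1.2, -0.8)]
past_outcomes = ["normal", "El Nino", "La Nina"]
print(analog_forecast((1.3, 1.9, 2.0), past_obs, past_outcomes))
```

    Such rules of thumb are objective and cheap to run, but as the rankings showed, they can fail badly when the developing event does not resemble anything in the 40-year record.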

    Even a more complex model, which won fame in 1986 by being the first to successfully predict an El Niño (Science, 13 February 1987, p. 744), “fell flat on its face” this time, observes Barnston. This so-called dynamical model, run by Mark Cane and Stephen Zebiak of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, does simulate ocean-atmosphere interactions, although only in the tropical Pacific. This time the model predicted only a gradual warming to near-normal conditions rather than intense warming. Cane can't say exactly why the model failed so spectacularly, but it seems to have something to do with the wind observations used to get the model started, which are sparse in the southeast Pacific.

    In contrast, the most sophisticated modeling efforts rated by Barnston scored an impressive success. These more complex models also couple ocean and atmosphere but do so worldwide, like the large-scale models that scientists have developed over several decades to forecast global warming. Researchers have been struggling to construct these “coupled” models for much of this decade by cobbling together parts of weather and climate models; their creations perform millions of calculations and have insatiable appetites for computing time.

    In early 1997, all four of the bigger coupled models Barnston rated called for at least moderate warming in the tropical Pacific by the fall of 1997. The NCEP model, perhaps the most sophisticated of the group, predicted the strongest warming of any model, empirical or dynamical, albeit still only half the strength of the real-world event. A fifth sophisticated coupled model, run by the European Center for Medium-Range Weather Forecasts (ECMWF) in Reading, England, didn't fit Barnston's scoring because it offers predictions only 6 months ahead. But it did exceptionally well in calling for a rapid warming relatively early in 1997—a hallmark of this El Niño—and by June the ECMWF model had correctly predicted the eventual end-of-year peak temperature.

    Not only did the big coupled models successfully predict El Niño's timing, they helped human forecasters do their best job ever of predicting its dramatic effects on regional weather patterns. Using their coupled model as a starting point and adding their experience with past El Niños, CPC forecasters predicted in November that in December, January, and February precipitation would be heavy coast to coast in the southern United States and light in the Ohio Valley and Montana—and generally they were right. They also predicted unusual warmth across the northern third of the United States, and again they were right. On a scale of forecasting skill that runs from 0 (no better than chance) to 100 (perfection), the precipitation forecast scored 36. That's a major accomplishment, for precipitation forecasts by the current long-range forecasting program have been stuck at 0 since they began in 1995 (Science, 23 December 1994, p. 1940).
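
    The 0-to-100 skill scale can be read as a chance-corrected hit rate. A minimal sketch of one such score follows (the article does not specify the CPC's exact formula; this version, in the spirit of the Heidke skill score, assumes equally likely forecast categories):

```python
def skill_score(hits, total, n_categories):
    """Chance-corrected percent-correct score:
    0 = no better than random guessing, 100 = every forecast verified."""
    expected = total / n_categories  # hits expected from guessing at random
    return 100.0 * (hits - expected) / (total - expected)

# With three equally likely classes (wet / normal / dry), being right
# half the time earns a score of 25, since a third of the hits come free.
print(skill_score(15, 30, 3))
```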

    Worldwide, the models also helped weather forecasters get it largely right. Dry weather struck Indonesia, northern South America, and southern Africa, and heavy rains hit East Africa, Peru, and northern Argentina. The only major failed predictions were those of a weak Indian monsoon and drought in northeast Australia.

    Forecasters concede that the overwhelming power of this El Niño—one of the two strongest of the past 120 years—probably accounts for much of their success at predicting how it would alter regional weather; effects that might have been lost in the noise during a milder event stood out clearly. But they also point to signs of surprising predictive power in the coupled models. Tim Stockdale of the ECMWF notes that their model successfully predicted heavy summer rains in southern Europe and a mild winter across Europe, even though El Niño's effects there were thought to be subtle and unreliable. “The model seems to give us not just the standard El Niño,” he says, “but also the difference between this event and others.”

    Forecasters will soon have the chance to test their models again. El Niño is only one-half of the climate cycle in the equatorial Pacific; its less famous sibling is the unusual cooling of tropical waters dubbed La Niña. It too has effects on weather in the tropics and around the world, although they are the opposite of El Niño's and are therefore less dramatic; parching South America's coastal deserts is not as devastating as is drowning them with torrential rains.

    “We've got a test ahead of us: exactly when La Niña may start,” says meteorologist Kevin Trenberth of the National Center for Atmospheric Research in Boulder, Colorado. In the past, La Niña has proved even more difficult to predict than El Niño, notes Trenberth, and moderate events, which the next one is expected to be, are harder to predict than big ones.

    So far, it looks like the NCEP coupled model will either win big or suffer an embarrassing defeat. All the other models—both empirical and coupled—are predicting a return to normal ocean temperatures by late summer and a continued chilling into a full-fledged La Niña by the end of the year. But the NCEP model calls for the current tropical Pacific warmth to decline but linger through the fall. The models may have turned a corner in prediction, but their creators are still anxious about their performance. “There's still a lot of nail biting going on,” says Leetmaa, noting the unexpected collapse of the Lamont model. “There are still some unknowns out there.”


    Spying on Solar Systems in the Making

    1. Govert Schilling
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Want to see how our solar system formed? Take a ride on one of the sensitive new cameras that astronomers have been pointing at young stars. These instruments are giving scientists unprecedented views of swirling disks of dust surrounding the stars—probably the nurseries of planets like our own.

    A crop of new images unveiled this week shows several disks with mysterious bulges—perhaps dust-cloaked giant planets—and others with holes torn in them, apparently by the gravitation of planets. “What we see is almost exactly what astronomers orbiting nearby stars would have seen if they had pointed a … telescope at our own sun a few billion years ago,” says Jane Greaves of the Joint Astronomy Center (JAC) in Hawaii. In one case—the youngest disk ever seen around a full-grown star—astronomers may be spying on the very moment of planet birth.

    These views come courtesy of a new generation of electronic detectors, sensitive to the midinfrared and submillimeter wavelengths in which the disks are brightest. Greaves and her colleagues on a British-American team led by Wayne Holland of JAC and Benjamin Zuckerman of the University of California, Los Angeles (UCLA), used a camera called SCUBA, mounted on the 15-meter James Clerk Maxwell submillimeter telescope at Mauna Kea, Hawaii, to observe the stars Vega, Fomalhaut, and Beta Pictoris. Fifteen years ago, the American-Dutch Infrared Astronomical Satellite had shown that these three stars (and a handful of others) emit more infrared radiation than expected, probably because they are ringed with disks of warm dust.

    The Beta Pictoris disk has already been photographed in visible light. But the SCUBA images, which appear in this week's Nature, offer the first direct views of the dust disks around Vega and Fomalhaut. The images of Vega and Beta Pictoris also show the mysterious bright blobs, at distances from the star several times greater than Pluto's distance from the sun. The view of Fomalhaut shows that the disk peters out close to the star. The missing dust, says Holland, “might have formed rocky planets like the Earth.”

    The stars that host those disks are all several hundred million years old—well past prime planet-forming age. But two other groups of astronomers have now made a similar finding around a much younger star, HR4796A, in the southern constellation Centaurus. Last month, using a midinfrared camera called OSCIR on the 4-meter telescope at the Cerro Tololo Inter-American Observatory in Chile, Ray Jayawardhana of the Harvard-Smithsonian Center for Astrophysics and his colleagues from the University of Florida, Gainesville, spotted a flattened dust disk around the star. At a mere 10 million years old, the star is a “perfect [age] for planets to be forming in its disk,” says Jayawardhana. A second team, from the California Institute of Technology and Franklin and Marshall College in Lancaster, Pennsylvania, independently photographed the disk with the 10-meter Keck II Telescope on Mauna Kea. Both groups announced their findings at a press briefing last Tuesday.

    Earlier measurements by Michael Jura of UCLA had suggested that HR4796A has a dust shroud in which planets could be coalescing. Jura couldn't see the disk directly, but by measuring the infrared brightness of the star at various wavelengths, he calculated that it is surrounded by dust with an average temperature of about 110 kelvin. Because dust close to the star would be much warmer, Jura concluded that the dust must be sparse in the inner regions of the disk, perhaps because planets are forming there. Astronomers are also intrigued by the discovery because, like more than half of the stars in our galaxy, HR4796A is part of a binary star system. It has a faint companion star orbiting it at a distance of some 75 billion kilometers. Some theorists had thought that the gravitational effects of a companion might prevent a star from sprouting a protoplanetary disk.
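
    Jura's inference, that 110-kelvin dust must lie far from the star, follows from the standard blackbody equilibrium-temperature relation. A simplified sketch (real grains are not perfect blackbodies, and HR4796A is more luminous than the sun, so the numbers here are illustrative only):

```python
T_1AU = 278.0  # K: blackbody equilibrium temperature 1 AU from a 1-solar-luminosity star

def dust_temperature(luminosity_lsun, distance_au):
    """Equilibrium temperature (K) of blackbody dust orbiting a star."""
    return T_1AU * luminosity_lsun ** 0.25 / distance_au ** 0.5

# For a sun-like star, 110 K dust would sit at roughly (278/110)^2 ~ 6.4 AU;
# dust much closer in would be far warmer than observed.
print(dust_temperature(1.0, (T_1AU / 110.0) ** 2))
```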

    The HR4796A disk implies that this restriction doesn't hold, at least for binaries as widely separated as this one. Together with the holes and bulges seen in the other stars' disks, it suggests that once a star has a dust disk, planets are likely to follow, says Rens Waters of the University of Amsterdam. “Apparently, it's not hard to make planets,” he says. “As soon as a star is surrounded by a disk of gas and dust with the right density and composition, you end up with a solar system.”


    Catalytic Explanation for Natural Gas

    1. Robert F. Service

    Dallas—Frank Mango believes that the textbook version of how natural gas forms in Earth's crust is all wrong. As geology books tell the story, natural gas deposits occur at or near hot spots where high temperatures break down the long hydrocarbon chains in petroleum to the short hydrocarbons found in natural gas: methane, ethane, propane, and butane. But in recent years, Mango, a geochemist at Rice University in Houston, has argued that it's not heat that breaks down petroleum but catalytically active metals in the ground. At a meeting of the American Chemical Society here earlier this month, Mango offered new evidence to support this view: laboratory results showing that the catalytic breakdown of petroleum produces component gases with the exact same mixture of heavy and light carbon isotopes as is found in natural gas deposits.

    “I think he's on to something,” says Everett Shock, a geochemist at Washington University in St. Louis, of Mango's latest work. But not all geochemists are won over to Mango's ideas. Martin Schoell, a geochemist and natural gas expert with the Chevron oil company in La Habra, California, says he finds Mango's experiments “elegant and very interesting.” But, he adds, “I feel his mechanism does not explain what we observe in nature.” In particular, it has trouble explaining the relative amounts of the four component gases of natural gas in certain types of rock formations.

    Ironically, it was the distribution of the component gases that first pushed Mango toward the notion that catalytic metals must be involved in the creation of natural gas. In most natural deposits, methane comprises at least 80% of the total gas present, with the other light hydrocarbons accounting for the rest. Yet, when elevated temperatures are used in the lab to break down petroleum into lighter hydrocarbons, the result is a very different mix, with methane making up between 10% and 60% of the total. Other lab studies suggest that at temperatures at which petroleum is thought to break down in the Earth—between 150 and 200 degrees Celsius—the heavy hydrocarbons are so stable that this mechanism cannot account for the formation of natural gas even over the eons of geologic time.

    In 1992, Mango and his Rice University colleagues suggested that transition metals such as nickel and vanadium, which are invariably found in petroleum, may act as catalysts to speed the reactions along. Since then they've also shown that the types of rock where natural gas is commonly found carry transition-metal compounds that are catalytically active and that passing a stream of petroleum through these rocks liberates methane and other light hydrocarbons in the same proportions that are commonly found in natural gas reservoirs deep in the Earth.

    At the Dallas meeting, Mango described new lab experiments that further bolster his hypothesis. He looked at the isotopic composition of the gases that were formed as petroleum was catalytically broken down into lighter hydrocarbons. In typical natural gas deposits, the components not only follow a standard distribution pattern, but each gas typically has a distinctive ratio of heavy to light carbon isotopes—carbon-13 to carbon-12. Methane, for example, is richer in carbon-12, while the heavier gases contain progressively more carbon-13. And when Mango catalytically broke down petroleum using nickel and cobalt catalysts, he found that the product gases came out with the most common isotope mixes found in natural deposits. This supports but doesn't nail down the theory, says Mango, as isotopic measurements from some heat-driven petroleum breakdown experiments produce similar results.
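
    Geochemists report such ratios as delta-13C values, in parts per thousand relative to a reference standard. A minimal sketch of the convention (the PDB standard ratio is the usual reference; the specific values Mango measured are not given in the article):

```python
PDB_RATIO = 0.0112372  # 13C/12C ratio of the Pee Dee Belemnite reference standard

def delta_13c(sample_ratio):
    """delta-13C in per mil; negative values mean the sample is richer in carbon-12."""
    return (sample_ratio / PDB_RATIO - 1.0) * 1000.0

# Methane's enrichment in carbon-12 shows up as a more negative delta-13C
# than that of ethane, propane, or butane from the same deposit.
print(delta_13c(0.0108))
```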

    Schoell argues that the catalytic mechanism doesn't explain everything. Three years ago, for example, he and his colleagues published a study of a natural gas deposit in North Dakota, known as the Bakken formation. It is thought that most natural gas arises in source rocks that are rich in organic matter and then migrates through porous rocks to a reservoir where it's confined. In the Bakken formation, however, the natural gas did not filter to a new home but has remained locked in the source rock. And when Schoell and his colleagues looked at the distribution of component gases and their isotopic concentrations, they found that they closely matched the results of pyrolysis experiments, lab tests that simply use high temperatures to transform petroleum into natural gas. “If we look into the kitchen of natural gas formation, we find that they are not methane-rich gases but the gases we see in pyrolysis experiments,” says Schoell.

    Schoell suggests that natural gas deposits found in reservoir rocks end up rich in methane because as the gases flow through the porous rocks, the higher hydrocarbons are filtered out. Regions like the Bakken formation, he adds, represent just the first step in this process, where the petroleum is originally broken down. But Mango doesn't buy this explanation, arguing that this extra filtering step that Schoell proposes should leave heavier hydrocarbons behind, which he says is not observed in nature.

    For now, that leaves both sides with a little explaining to do, says Washington University's Shock. Mango needs to be able to explain why deposits such as the Bakken formation don't have elevated levels of methane like other deposits—especially as the oil there is rich in transition metals. And Schoell and his colleagues need to explain why the filtering mechanism hasn't been detected.

    Whether one or both of these mechanisms turn out to rise to the top could have practical implications, says Shock. In particular, if the catalytic formation theory of natural gas proves correct, it may give oil companies new insight into where to find rich gas deposits. For now, says Shock, “I would say that [the debate] is still not resolved. Our whole society depends on fossil fuels. Yet we still understand so little about how they form. It's astonishing.”


    Will New Catalyst Finally Tame Methane?

    1. Robert F. Service

    The bright orange flares of natural gas burning near oil wells are a dramatic sight. But they illuminate a paradox: Natural gas is a vast and valuable natural resource, but it's often cheaper to burn than to use. Unless pipelines are already in place, carting natural gas from remote sites such as the North Slope of Alaska or Siberia costs more than the gas is worth. As a result, the gas is either flared off or pumped back into the ground. Now, on page 560, a team of researchers in California reports developing a new catalyst that may change all that.

    The compound efficiently converts methane, the primary component of natural gas, to a derivative of methanol, a liquid fuel that can easily be transported in trucks and tankers, much like petroleum. As such, if used in plants near remote wellheads, the new catalyst could make use of vast remote natural gas reserves around the world. “This is a major breakthrough in terms of doing something with methane,” says Jay Labinger, a chemist at the California Institute of Technology in Pasadena who has worked on similar catalysts.

    The oil industry has for years been grappling with the problem of methane. Occasionally, companies use energy-intensive schemes to either liquefy natural gas or convert methane to methanol with high-temperature steam. But in the mid-1980s, researchers first discovered that some metal-containing organic compounds could catalyze the conversion of methane to methanol without adding extra energy. The problem was that less than 2% of the methane was converted, making such catalysts commercially worthless. The new catalysts, developed by researchers at Catalytica Advanced Technologies in Mountain View, California, transform 70% of methane to a final compound known as methyl bisulfate, which itself is easily transformed to methanol. And that yield is “a big deal,” says Labinger.

    Converting methane to methanol is actually extremely easy: All it takes is a match. At about 625 degrees Celsius, molecules of methane, each made up of a carbon atom bound to four hydrogens (CH4), begin to burn, with oxygen displacing the hydrogens. Methanol (CH3OH) is one of the first byproducts. But the trouble is that once methane begins to burn, “you can't stop that reaction,” says Roy Periana, who led the Catalytica team. In no time, oxygen atoms oust all the hydrogens, leaving only everyday carbon dioxide. The trick is to stop the process in midburn.

    Periana likens the challenge to rolling a ball down a hill and getting it to stop in the shallow valley at the bottom rather than continuing up and over the small incline on the other side and then off a cliff. What's needed, says Periana, is a way to lower the height of the first hill—the amount of energy needed to start methane burning—or raise the second hill—the barrier to methanol or any other product reacting. Periana and his colleagues have managed to do both.

    They first accomplished these tasks in 1993 with a catalyst of mercury-based salts. In a bath of sulfuric acid, this catalyst converts 43% of methane to methanol in a single pass (Science, 15 January 1993, p. 340). But because mercury is toxic, the researchers kept searching for a better alternative. Now, Periana and his colleagues have developed a new catalyst based on platinum—an expensive but nontoxic metal—that also contains a small, nitrogen-rich organic group called a bipyrimidine, which helps control the metal's reactivity. When the catalyst is dissolved in a bath of sulfuric acid, it encourages methane to shake loose one hydrogen—transforming methane to methyl—and form a bond with the platinum. But it does so at just 200 degrees Celsius, which means that the ball in this case starts atop a much smaller hill. Next, the sulfuric acid solvent swipes a pair of electrons from the platinum. That frees the methyl to grab a bisulfate group (OSO3H) from the surrounding solvent, creating the final methyl bisulfate (CH3OSO3H), which then drops off the catalyst.

    The reaction stops at that point. “The catalyst doesn't like the bisulfate,” says Periana, so rather than pulling another hydrogen off the methyl bisulfate, the catalyst grabs a fresh methane molecule and works on that. In effect the second hill—the barrier to the methyl bisulfate reacting—is pushed higher. The outcome is that more than two-thirds of the methane molecules that pass through the catalyst-loaded solution are converted to methyl bisulfate and then left alone.

    The hitch is that the end product is methyl bisulfate rather than methanol. To finish the job, the researchers must separate out the sulfuric acid and add water to convert the methyl bisulfate to methanol. Those extra steps currently make it difficult to say whether the process will be economical on an industrial scale, says Labinger. Consequently, Periana and his Catalytica colleagues plan to continue their efforts to find catalysts capable of making methanol directly at high yields, as well as operating in more benign solvents than sulfuric acid. But even without these advances, the current catalyst may open the door to making use of some of the world's vast untapped reserves of natural gas.
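
    The steps above can be summarized as two net reactions (a simplified sketch based on the stoichiometry commonly quoted for this system; the actual mechanism proceeds through several platinum intermediates, and the sulfuric acid reduced to SO2 must be regenerated separately):

```latex
\begin{align*}
&\text{Catalytic oxidation (in hot sulfuric acid, over the Pt--bipyrimidine catalyst):}\\
&\qquad \mathrm{CH_4} + 2\,\mathrm{H_2SO_4} \longrightarrow \mathrm{CH_3OSO_3H} + 2\,\mathrm{H_2O} + \mathrm{SO_2}\\
&\text{Hydrolysis to methanol:}\\
&\qquad \mathrm{CH_3OSO_3H} + \mathrm{H_2O} \longrightarrow \mathrm{CH_3OH} + \mathrm{H_2SO_4}
\end{align*}
```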
