News this Week

Science  21 Nov 1997:
Vol. 278, Issue 5342, pp. 1397

    A Womb With a View

    1. Wade Roush


    For researchers studying how embryos develop, model organisms such as sea urchins, nematodes, and zebra fish have a clear advantage: transparent embryos that develop outside their mothers' bodies, making it easy to observe their development under a microscope. By contrast, scientists' picture of mammalian embryos, which grow deep in a darkened womb, has long been relatively opaque. Not until ultrasound imaging came into widespread clinical use in the 1980s did they get their first live—albeit grainy and cryptic—pictures of fetuses romping in the womb. Now, however, mammalian embryology's dark ages may finally be coming to an end.

    Pristine perspective.

    MRI gives a noninvasive view of a mouse embryo's innards.


    By adapting established technologies such as ultrasound imaging, confocal microscopy, and magnetic resonance imaging (MRI), researchers are developing new ways to make images of once-hidden embryos. They are also harnessing computers to represent existing molecular data on development in its anatomical context. As a result, they are getting noninvasive views of mouse embryos and preserved human embryos that are sharper, deeper, more dynamic, and—most important—more informative than ever before.

    Flawless fetus.

    3D ultrasound image reveals no facial defects in this fetus.


    The new technologies are adding a third and a fourth dimension—depth and time—to the two-dimensional (2D) still images found in most scientific reports in embryology. “When we studied [embryogenesis] in school, all we had to look at was a series of tissue sections, and it would take weeks to figure out how they were connected,” says developmental biologist Steven Klein of the National Institute of Child Health and Human Development (NICHD) in Bethesda, Maryland. “Here's the chance to take all those sections, reconstruct them into a computer model, and do this for different stages. The result is an interactive movie: You watch it from the front, from the top, and from the side, and you understand development completely.”

    These new imaging capabilities couldn't have come at a better time, developmental biologists say. Over the past decade, advances in molecular biology have allowed researchers to identify many of the genes that drive mammalian embryogenesis. But as Sally Moody, a developmental neurobiologist at the George Washington University Medical Center in Washington, D.C., points out, “Gene function is extremely difficult to ascertain if you don't know the morphology of what's actually going on in the embryo.” Now, she says, the high-resolution, three-dimensional (3D) movies of embryogenesis that can be created with the new tools are bringing about “a real melding of genetics and morphology. It's necessary, and it's wonderful.”

    Moody's comment captures the enthusiasm evident at a developmental imaging workshop held this September at NICHD.* Participants said the new techniques can help answer questions they couldn't even ask in the past. How do neural precursor cells fare, for example, when they are transplanted from one spot of the embryonic mouse brain to another? How do subtle malformations in an embryo's cardiac muscles disrupt blood flow through the heart? And how do the membranes and subcellular components of individual cells throb and undulate as the cells migrate to their destined locations?

    Beyond such basic-science questions, the meeting even gave a few hints of the eventual practical benefits of the techniques. Physicians at the University of California, San Diego, for example, showed 3D ultrasonograms of fetal faces so clear and lifelike that some pregnant mothers have claimed to see a family resemblance—and have stopped smoking or drinking as a consequence.

    New dimensions

    The approaches to embryo imaging laid out at the NICHD workshop ranged from the dizzyingly high-tech—for example, miniaturized MRI machines reminiscent of Isaac Asimov's Fantastic Voyage—to methods not much more sophisticated than those used to make topographical maps. On the high-tech end, neurobiologist Russell Jacobs and colleagues at the California Institute of Technology in Pasadena are using MRI to add new dimensions to the atlases of development often consulted by researchers and students.

    Existing photographic atlases offer 2D images of selected slices of mouse, chick, frog, and other embryos at specific times in development. Jacobs's goal is to produce 3D versions of these 2D atlases, then bring in the missing time dimension, stringing together 3D snapshots like an animated cartoon. Magnetic resonance is well-suited to these tasks. Because it uses electromagnetic pulses, rather than harmful x-rays or radioactive dyes, it can be repeatedly applied without damaging an embryo. With the aid of a computer, MR images can then be rendered either as solid, 3D volumes or as 2D cross sections sliced in any direction.

    In his first effort, Jacobs has produced MR images of mouse embryos removed from the uterus that are so detailed it's possible to distinguish tissue layers only 50 micrometers thick. But ultimately, Jacobs would like to create such images without removing the embryo from its sanctuary. That's a challenge, he explains: clinical MRI machines typically produce images with “voxels,” or 3D pixels, of about 1 cubic millimeter, but high-resolution images of tiny mouse embryos require voxels 10 million times smaller. When the object is so small relative to an MRI machine's receiver coils, its signal tends to get swamped by background noise, especially if it's surrounded by a lot of other tissue—the mother.
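    Those two figures imply a very small voxel indeed. A quick back-of-the-envelope calculation, using only the numbers quoted above (this is illustrative arithmetic, not Jacobs's actual imaging parameters):

```python
# Back-of-the-envelope voxel arithmetic from the figures in the text.
clinical_voxel_um3 = 1.0e9                     # 1 mm^3 = 10^9 cubic micrometers
embryo_voxel_um3 = clinical_voxel_um3 / 1.0e7  # "10 million times smaller"
edge_um = embryo_voxel_um3 ** (1.0 / 3.0)      # edge length of a cubic voxel

print(f"embryo voxel volume: {embryo_voxel_um3:.0f} cubic micrometers")
print(f"voxel edge length:   {edge_um:.1f} micrometers")  # ~4.6 micrometers
```

    A cubic voxel under 5 micrometers on a side is more than fine enough to resolve the 50-micrometer tissue layers mentioned above.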

    But by experimenting with higher magnetic field strengths, different pulse frequencies, and other adjustments to conventional MR imaging machines, Jacobs and his colleagues are gradually increasing signal-to-noise ratios to acceptable levels. “Once you can do [imaging] in vivo, a lot of things open up that are hard to do in vitro,” he says. “For one thing, you can follow events over time in the same specimen.”

    At Duke University Medical Center's Center for In Vivo Microscopy in Durham, North Carolina, researcher Bradley Smith and colleagues are taking a different approach to the noise problem: shrinking the MR receivers down to embryo size, to reduce the size differential between receiver and subject. Smith, who started out as a medical illustrator and then earned a Ph.D. in anatomy so he could “get a peek inside these objects I was drawing,” uses the devices to flesh out collaborators' studies of mice or rats with developmental mutations. “They'll perform some manipulation, then turn a pregnant mouse specimen over to me and ask me to investigate the embryos to determine when and where changes are occurring,” Smith explains.

    Using MR microscopy, for example, Smith has helped researchers compare the vasculature of 12-day-old mouse embryos treated with retinoic acid, a compound that induces birth defects, with that of normal embryos. The treated embryos lacked blood vessels in their tails and lower limbs, indicating that retinoic acid interferes with developmental signals in the rear half of the embryo.

    Right now, getting a high-resolution image requires sacrificing the embryos in order to fix them inside the miniature MR coils. But like Jacobs, Smith says he and other Duke researchers will soon be using the devices to examine live embryos in utero. And in the meantime, other, more serendipitous, uses for the technology are emerging.

    One came from a team of physicists, who sought Smith's help in working out how much radiation human embryos are likely to absorb when, for example, female nuclear workers or scientists using radioactive chemicals are exposed before realizing they are pregnant. The physicists needed data about the size and volume of embryonic organs at different times in development in order to estimate how much of the radiation reaching the womb an embryo would actually absorb. Smith provided this data by doing MR studies of the human embryos in the historical Carnegie Collection at the Armed Forces Institute of Pathology in Washington, D.C., and then deriving 3D images of the embryos.

    A feeling for the organism

    At the Skirball Institute of Biomolecular Medicine at New York University School of Medicine in Manhattan, developmental neurobiologist Dan Turnbull is adapting another established imaging technique to carry out delicate microsurgical procedures on mouse embryos still in the womb. Turnbull studies the transformation of ectodermal tissue into brain cells in the mouse embryo. To learn exactly when neural progenitor cells in different regions of the primordial mouse brain become committed to their ultimate fates, he and co-workers Martin Olsson and Kenneth Campbell are using high-frequency, high-resolution ultrasound imaging to guide the needles used in cell transplantation experiments.

    The researchers grafted marked cells from the forebrains of 13-day-old mouse embryos to specific spots in the mid-hindbrain and vice versa. As they reported in the October issue of Neuron, the dislocated forebrain cells metamorphosed into hindbrain cells, while the grafted hindbrain cells failed to adapt to their new location—indicating that the fates of hindbrain cells are set before those of forebrain cells.

    Just as important as this result was the precedent set by the experiment. Such cell transplants had never before been attempted on such young mouse embryos. But the technique allowed the researchers to distinguish the tiny structures—the entire embryonic mouse brain at that stage is only a few millimeters thick—sufficiently well to perform the transplants. “A lot of people who have come to our institute and seen us doing these procedures have been getting excited,” says Turnbull. “Everybody sees the future applications where we can start introducing labeled cells into mutant mice, to look at how the differentiation process is altered.”

    Perhaps the greatest excitement at the NICHD workshop, however, was sparked by another effort to image individual cells. Researchers at the University of Iowa's W. M. Keck Dynamic Image Analysis Facility in Iowa City have melded a confocal microscope with a 3D movie camera and a computer to create the world's only instrument for monitoring the full range of movements and shape changes that cells undergo during development. The microscope changes its focal plane 30 times per second, and the computer records the resulting “optical sections” for reconstruction into a 3D computer model that highlights cell membranes and internal surfaces such as those of the nucleus, mitochondria, and vesicles. The process is repeated every 2 seconds, and a QuickTime movie is the result.

    So far, the technique has been used only on cells that can crawl in a lab dish. David Soll, director of the Keck facility, handed out red-and-blue glasses at the workshop and treated viewers to a 3D movie of the sluglike colonies that the normally unicellular slime mold Dictyostelium forms when it needs to reproduce. Like all amoeboid creatures, the Dictyostelium colony moves by continuously assembling and disassembling its internal skeleton, made of the protein actin. Soll's movie showed how the colonies pulsate as their actin-filled pseudopods appear and disappear, dragging along the entire mass.

    Using the 3D motion analysis system, Soll and his collaborators have demonstrated that Dictyostelium strains engineered to lack certain of the proteins known to regulate actin display specific flaws in the way they create or absorb pseudopods. These flaws are a sign that the numerous cytoskeletal regulators aren't redundant, but specialized. According to George Washington's Moody, that result would probably elude a researcher viewing mutant Dictyostelium colonies under a conventional 2D microscope.

    And that, in the end, may be the strongest rationale behind the new surge in developmental imaging. Researchers using 2D still images of their subjects have to spend years acquiring an ethereal, intuitive “feeling for the organism” before they can understand its behavior in three dimensions over time, Moody argues. “But having these new technologies out there means that people will quickly be able to form a visual understanding they can rely on. … It's going to be terrific.”


    Information Displays Go 3D

    1. Wade Roush

    Some of the most intriguing technologies shedding new light on mammalian embryos don't produce images, but better ways of displaying and studying existing data. Biologists at the Jackson Laboratory in Bar Harbor, Maine, and the University of Edinburgh in the United Kingdom, for example, are attempting to meld three-dimensional (3D) images like those produced by Russell Jacobs of the California Institute of Technology in Pasadena (see main text) with information about the shifting networks of gene expression and protein activity that mold the embryo—information piling up in developmental biologists' lab notebooks and hard drives.

    Heart art.

    Computer model yields an image (top) used to sculpt a plastic mouse embryo heart (bottom).


    “Ninety-five percent of the data [developmental biologists] generate they are not able to store in appropriate ways. … We have to develop an infrastructure to store that data and make it accessible,” explains molecular biologist Martin Ringwald, leader of the Jackson Lab's team. The project's eventual result will be a kind of “virtual embryo” that shows how gene activity and the production of the corresponding proteins vary with time and location in the embryo. The database will be accessible to all researchers via the World Wide Web, and it could go a long way toward helping biologists understand how the signals that pulse through developing tissues, turning genes on and off or incrementally adjusting their expression, guide development.

    At Oregon Health Sciences University in Portland, developmental physiologist Kent Thornburg is pioneering a more tangible way to display information about embryonic development: producing actual 3D models of human fetal hearts. In collaboration with Adrianne Noe, director of the National Museum of Health and Medicine at the Armed Forces Institute of Pathology in Washington, D.C., Thornburg takes a relatively low-tech approach, first tracing the outlines of tissue layers in hearts in AFIP's embryo collection onto a computerized drawing tablet. The computer stacks these outlines into wire-frame simulations the researchers can examine from any angle, including from inside the heart's inner cavities. Using “morphing” technology, the researchers can also examine how different tissue layers expand and move as the heart develops.

    The same data can be fed into a stereolithography machine that sculpts a solid model from plastic. In this way, Thornburg has used computer maps of hearts that are only 0.8 millimeters across in the embryo to build 5-centimeter-wide models of hearts that he can hold in his hand. By forcing water through the lumen, or inner cavities, of normal and malformed model hearts, Thornburg can see how specific developmental flaws alter the flow of blood through the heart. “This wouldn't be possible in any other model,” says Thornburg. “It's a beautiful thing.”


    How Does HIV Overcome the Body's T Cell Bodyguards?

    1. Michael Balter

    Marnes-la-Coquette, France—In 1854, Emperor Napoleon III created an elite squadron called the “Cent Gardes” for his own protection. Thirty years later, Louis Pasteur turned one of the squadron's barracks in this small town just outside Paris into laboratories. Pasteur died here in 1895, but his disease-fighting tradition lives on: Today, the Cent Gardes building hosts one of the world's most prestigious AIDS meetings. At this year's gathering,* an elite squadron of researchers grappled with still-unsolved questions about how HIV destroys the immune system and how they can fend off its attacks.

    The Life and Times of T Cells

    It might be said that AIDS researchers have come to know the virus that causes the disease, HIV, inside and out. They have isolated its proteins, sequenced its genome, and identified the receptors it uses to dock onto the CD4 T lymphocytes that are the virus's primary target. Yet the central mystery of AIDS remains unresolved: How does the virus cause the severe loss of CD4 cells, which wrecks the immune system, that is the hallmark of the disease? This question has stimulated heated discussion in recent years, and new findings presented at the meeting by David Ho, director of the Aaron Diamond AIDS Research Center in New York City, and immunologist Paul Johnson of Harvard Medical School in Boston are fanning the flames of the debate.

    For many researchers, a major clue to the riddle was revealed in January 1995 with the publication of two papers in Nature indicating staggeringly high rates of HIV replication and CD4 cell turnover in a typical HIV-positive patient. The findings—by Ho and his collaborator Alan Perelson of the Los Alamos National Laboratory in New Mexico, and by George Shaw at the University of Alabama at Birmingham and his co-workers—which have since been refined in more recent papers, suggest that about 100 billion new viral particles are produced every day and 1 billion to 2 billion CD4 cells are dying and being regenerated each day as well.

    These extraordinarily high numbers led Ho to propose what has come to be known as the “sink model” for CD4 cell loss. In Ho's view, the high levels of HIV production keep both the sink's tap (the immune system's production of new CD4 cells) and its drain (their destruction by the virus) wide open. Because the body's ability to generate new cells can only be stretched so far, the sink slowly empties, until the CD4 cells are lost and the immune system is exhausted.
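    In equation form, the tap-and-drain metaphor is a simple balance: the CD4 count T changes as dT/dt = p − d·T, where p is the production rate (the tap) and d the per-cell destruction rate (the drain). The sketch below, with entirely hypothetical numbers, illustrates only that arithmetic—not the exhaustion of production capacity that Ho's full model invokes:

```python
# Minimal tap-and-drain sketch: dT/dt = p - d*T.
# All parameter values are hypothetical, chosen only to show the dynamics.

def steady_state_cd4(p, d, t0=1000.0, dt=0.01, days=2000):
    """Euler-integrate the balance equation and return the final CD4 count."""
    t = t0
    for _ in range(int(days / dt)):
        t += (p - d * t) * dt
    return t

normal = steady_state_cd4(p=10.0, d=0.01)    # settles near p/d = 1000
infected = steady_state_cd4(p=10.0, d=0.06)  # drain opened 6x: near p/d ~ 167

print(f"uninfected steady state: {normal:.0f}")
print(f"infected steady state:   {infected:.0f}")
```

    With the tap held fixed, opening the drain sixfold lowers the steady-state count sixfold; in the full sink model, the tap itself eventually falters and the sink empties.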

    Last year, however, the sink model was challenged by a team of researchers led by immunologist Frank Miedema of the Netherlands' Red Cross Blood Transfusion Service in Amsterdam (Science, 29 November 1996, p. 1543). Miedema and his co-workers made their own estimate of CD4 cell turnover by measuring changes in the length of the T cells' telomeres, the extreme ends of chromosomes, which shorten slightly each time a cell divides. The telomere length can provide an estimate of how many times a cell has divided during its lifetime, and thus an indication of overall turnover rate in a cell population. The Amsterdam team found that the telomeres in CD4 cells from HIV-infected people were not appreciably shorter than those of uninfected controls, and the team concluded that turnover rates in these two groups were essentially the same—a result that directly contradicted Ho's sink model. Miedema's team proposed that the loss of CD4 cells was not due to a major increase in their rate of destruction—an open drain—but rather that HIV was interfering with production of new cells, thus turning down the tap.
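    The telomere method is essentially division bookkeeping: if each division trims a roughly fixed amount of DNA from the chromosome ends (on the order of 50 to 100 base pairs per division in human somatic cells), cumulative shortening estimates how many divisions a cell population has undergone. A toy version, with illustrative numbers:

```python
# Toy telomere "division counter" (illustrative numbers only).
LOSS_PER_DIVISION_BP = 75  # assumed DNA lost from the telomere per division

def estimated_divisions(initial_bp, measured_bp, loss_bp=LOSS_PER_DIVISION_BP):
    """Number of divisions implied by the observed telomere shortening."""
    return (initial_bp - measured_bp) / loss_bp

# e.g. telomeres that shrank from 10,000 bp to 8,500 bp:
print(estimated_divisions(10_000, 8_500))  # -> 20.0
```

    The catch is that the inference from telomere length back to turnover rate is indirect and model-dependent, which is exactly where the dispute lies.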

    At the meeting, Ho delivered a riposte to Miedema, reporting new experiments in rhesus monkeys designed to measure directly the effects of virus infection on T cell turnover. Ho's group, working again with Perelson, used a compound called bromodeoxyuridine (BrdU)—which is taken up by actively dividing cells—to label the T cells of monkeys infected with SIV, the simian version of HIV, as well as those of uninfected control animals. After the monkeys had received BrdU in their drinking water for 3 weeks, the compound was withdrawn. The rate at which the BrdU label first increased and then disappeared from the T cell population provided a measure of the production and loss of those cells. Ho reported that the death rate of CD4 cells in the SIV-infected animals was about six times higher than in uninfected monkeys, which also provides an upper limit for the increase in turnover rate. In a separate talk, Johnson presented results from similar experiments, estimating that the overall CD4 cell turnover rate in SIV-infected monkeys was about two to three times as high as in control animals.
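    The logic of the labeling experiment can be sketched with a standard simple-turnover model (an illustration with invented numbers, not Ho and Perelson's actual analysis): while BrdU is given, the labeled fraction of a population turning over at rate r rises as 1 − e^(−rt); after withdrawal it decays exponentially at the same rate, so the slope of the decay recovers r.

```python
import math

# Sketch of turnover estimation from BrdU labeling kinetics.
# Hypothetical simple-turnover model with invented numbers; real analyses
# use more detailed models of cell division and label dilution.

def labeled_fraction(t, r, t_label=21.0):
    """Fraction of cells labeled at day t, for turnover rate r per day."""
    if t <= t_label:
        return 1.0 - math.exp(-r * t)           # uptake while BrdU is given
    peak = 1.0 - math.exp(-r * t_label)
    return peak * math.exp(-r * (t - t_label))  # decay after withdrawal

def turnover_from_decay(t1, f1, t2, f2):
    """Recover r from two post-withdrawal measurements via the log slope."""
    return math.log(f1 / f2) / (t2 - t1)

true_r = 0.03  # hypothetical: 3% of the population replaced per day
f30 = labeled_fraction(30.0, true_r)
f60 = labeled_fraction(60.0, true_r)
print(f"recovered turnover rate: {turnover_from_decay(30.0, f30, 60.0, f60):.3f}")
# -> recovered turnover rate: 0.030
```

    Comparing such recovered rates between infected and uninfected animals is what yields the six-fold and two- to three-fold figures quoted above.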

    “Our results directly contradict the estimates of Frank Miedema and his colleagues,” Ho told the meeting. “Our study is a direct measure of lymphocyte turnover, whereas the telomere method is very indirect.” Immunologist Bruce Walker, director of the Partners AIDS Research Center in Charlestown, Massachusetts, comments that “we now have two independent studies showing that turnover rates are clearly increased” in infected animals.

    But Miedema—who was not at the Cent Gardes meeting—told Science that the Ho and Johnson findings are roughly consistent with a new mathematical analysis of his telomere data performed in collaboration with theoretical biologist Rob de Boer at the University of Utrecht in the Netherlands. According to the analysis, CD4 cell turnover rates up to three times as high as normal would not give rise to overall telomere shortening in an HIV-infected cell population, because some cells would die before their telomeres could shorten. While Miedema concedes that this reworking of his data is consistent with higher HIV-induced turnover rates than suggested in his original Science paper, he argues that neither these rates nor those derived from the BrdU results are high enough to support the sink model. Indeed, both rates are much lower than those estimated in Ho's and Shaw's Nature papers. The new data from Ho and Johnson are “not compatible with the idea of the renewal capacity of the immune system becoming exhausted” by high CD4 cell production rates, Miedema says. “The drain might be slightly more open, but the tap must be running more slowly due to HIV infection.”

    Although the Ho-Miedema dispute sparked a lively discussion at the meeting, many participants were reluctant to take sides. For example, Norman Letvin, an immunologist at Harvard Medical School, says that neither Miedema's telomere data nor Ho and Johnson's BrdU results provide definitive support for their competing models of CD4 cell loss. “If you don't have enough CD4 cells, it's either due to decreased production or increased destruction,” Letvin says. “These studies are reasonable attempts to quantify T cell turnover rates, but they can't distinguish between these two possibilities.” Thus, in the minds of many AIDS researchers, the riddle of CD4 cell loss remains unresolved. “We are still very confused about the mechanisms that lead to CD4 depletion,” says Johnson. “But at least now we are confused at a higher level of understanding.”

    A Little Help From Friends

    In HIV-infected people, loss of CD4 T cells is a key marker for progression to full-blown AIDS. But the immune system suffers subtle dysfunctions even in earlier stages of infection, before symptoms appear. Long before they are lost, the CD4 T cells—also known as T helper cells—begin to lose their ability to help fight off infections. The job of the T helper cells is to recognize foreign antigens, such as proteins from viruses or bacteria, and alert other immune cells—particularly CD8 T cells, also known as cytotoxic T lymphocytes (CTLs)—so they can move in and kill the invaders. Unfortunately, the immune systems of most HIV-infected people mount at best a weak attack against HIV antigens, a finding that has led researchers to conclude that the specific subset of T helpers primed to identify these proteins is either lost very early in infection or never really produced in significant numbers.

    Lightening the load.

    Patients (solid dots) with stronger T-helper responses control HIV better.


    But new work presented at the meeting by immunologist Bruce Walker of the Partners AIDS Research Center in Charlestown, Massachusetts—and published on page 1447 of this issue—indicates that HIV-specific T helpers may be playing a key role in controlling the virus in HIV-infected people known as long-term nonprogressors, those who do not develop AIDS even after many years. Even better, Walker and other researchers believe the new results provide some hope that these T helpers could be boosted even in people who show little evidence of having them, possibly by combining powerful antiviral therapies with anti-HIV vaccines.

    Walker says that he and his colleagues began the study when they were contacted by a hemophiliac who had been infected with HIV for 18 years, but had normal CD4 counts and undetectable amounts of the virus in his blood. When the team exposed his blood cells to HIV proteins, HIV-specific T helpers rapidly proliferated. The team went on to study other HIV-infected individuals with a wide range of virus loads in their blood and found that the strength of their T helper responses directly correlated with how well they were controlling the virus. Finally, Walker and his co-workers tested patients who had been put on powerful antiviral therapies very early in their HIV infection, even before they began to form HIV antibodies, and found that this group was able to generate strong anti-HIV T helpers once their viral loads were reduced to undetectable levels.

    “From the standpoint of HIV-infected patients, Bruce's talk was the most exciting at the meeting,” says virologist Andreas Meyerhans of the University of Freiburg in Germany. “You can now think about taking patients whose viral loads are already being controlled by antiviral therapy and vaccinate them [with an anti-HIV vaccine] to try to reboost these anti-HIV helper responses.”

    Walker says his findings provide strong evidence that if antiviral drugs are given very early, these T helper responses may be preserved. And they may also help explain why patients ultimately fail to control the virus, even though they continue to show anti-HIV CTL responses throughout much of their infection. “The quirk of HIV is that it precisely targets immunologically activated cells,” he says. As a result, the very HIV-specific T helpers that march in to meet the viral challenge are among the first to become infected and die, “leaving those CTLs alone, without helpers to battle the virus over the course of infection.”

    But the good news, Walker says, is that AIDS researchers and clinicians can now start thinking about creative new ways, such as therapeutic vaccines, to get the lost helpers back. Says Meyerhans: “We might be able to turn HIV progressors into long-term survivors. That is a very attractive consequence of what Bruce is saying.”

    • * 11th Colloquium of the Cent Gardes, Marnes-la-Coquette, France, 27 to 29 October 1997.


    Will Fossil From Down Under Upend Mammal Evolution?

    1. Bernice Wuethrich
    1. Bernice Wuethrich is an exhibit writer at the Smithsonian's National Museum of Natural History in Washington, D.C.

    Eight months ago, Nicola Barton cracked open a rock on a beach in southern Australia and found a tiny tooth. Still embedded in the rock was the rest of the fossil: a total of four teeth in a 2-centimeter jaw. The tooth fairy could not have been kinder. The fossil fragment Barton discovered could cause researchers to rethink some long-held views about the early history of mammalian evolution. Says Richard Cifelli, curator of vertebrate paleontology at the Oklahoma Museum of Natural History in Norman: “It will have the scientific world at the edge of its seat.”

    Under southern lights.

    The newfound mammal—its jaw highlighted in this reconstruction—co-existed with dinosaurs at a time when Australia lay close to the South Pole.


    Paleontologist Thomas Rich of the Museum of Victoria in Melbourne, who oversees the dig where Barton was working as a volunteer, has spent 26 years looking for the extinct ancestors of Australia's fantastic mammalian fauna with his wife-colleague Patricia Vickers-Rich of Monash University in Clayton, Australia. Until the discovery of the 115-million-year-old jaw, practically all they had ever found were dinosaurs. “The hardest fossil to find is the first one,” says Rich. The fossil, which the Riches and their colleagues describe in this issue of Science (p. 1438), is a first in many ways. Called Ausktribosphenos nyktos, it is the oldest mammal fossil yet found in Australia. And if Rich's suspicions are correct, it is a most un-Australian mammal. Instead of being an ancestor to the continent's pouched marsupials or egg-laying monotremes, he believes it may be a placental mammal—one that nourishes its developing embryo within the mother's uterus. That would put placental mammals down under 110 million years earlier than believed, and it would upend paleontologists' ideas about mammal evolution. “All hell would break loose,” says paleontologist David Archibald of San Diego State University. “Both the time and the place of origin of placentals would be off.” But even if the jaw's shrew-sized owner isn't a placental, it could still rearrange mammals' family tree by altering the timing of its branching points or adding a new limb. “It's extremely important, whatever it is,” says Cifelli.

    The family tree of mammals is rooted more than 200 million years ago. Most paleontologists believe that monotremes arose early and that the higher mammals—placentals and marsupials—diverged from a common ancestor in the Early Cretaceous period, between 144 million and 98 million years ago. At that time, the continents were grouped into two large land masses, Laurasia in the north and Gondwana in the south. Based on the smattering of known Cretaceous fossils, paleontologists believe that placental mammals originated in Asia, then migrated to North America.

    Placentals and marsupials remained confined to the Northern Hemisphere until about 65 million years ago, when an island chain may have allowed both kinds of mammals to enter South America. Marsupials alone, however, continued south across Gondwana and into Australia. Bats followed, but terrestrial placentals are not thought to have entered Australia until about 5 million years ago, long after that continent had broken free of Gondwana. By then it had drifted close enough to Southeast Asia for island-hopping rodents to finally reach it.

    Rich thinks the teeth and jaw of A. nyktos may tell a dramatically different story. Like other placentals and marsupials, the fossil has tribosphenic molars, specialized for both cutting and crushing food. But because the jaw lacks other features seen in the lower jaw of marsupials, Rich puts it in the placental camp. He has also inferred the shape of the upper molars from the wear patterns of the teeth and deduced the creature's “dental formula,” the number of each type of tooth. Both the upper molars and the dental formula—five premolars and three molars—may link A. nyktos with placental mammals, Rich says.

    If a placental mammal was in Australia more than 100 million years ahead of schedule, says Cifelli, “all bets are off” as to where placentals originated: “If they were in Australia then, there's no reason they couldn't have also been in South America, Antarctica, and perhaps Africa. You could make an argument for any place of origin.” Rich agrees, noting that shrew-size fossils like A. nyktos might be waiting to be discovered elsewhere. “You're not looking at elephants. We could have easily missed them; they could be on every darn continent.” He and others add that the presence of early placental mammals in both Asia and Australia—far apart in the Early Cretaceous—could push back the time of origin for higher mammals. They might have diverged from their primitive forebears before the breakup of Pangaea some 180 million years ago, when it would have been relatively easy for them to disperse across the single unified land mass.

    But other researchers say that the evidence is ambiguous, noting that the jaw has an odd mixture of primitive and advanced traits. Rough patches inside the lower jaw, says Cifelli, could be marks left by a set of small postdentary bones, common in ancient mammal-like reptiles but absent in higher mammals. At the same time, certain patterns in the teeth seem to belong to an advanced placental mammal, says William Clemens, a curator at the Museum of Paleontology in Berkeley, California. Nor is the dental formula conclusive, says Guillermo Rougier of the American Museum of Natural History in New York City. “That [5/3] formula is known in animals more primitive than placentals,” he says. “It may be a very primitive formula that neither precludes nor supports A. nyktos as a placental.”

    Clemens believes that the fossil is not a placental at all, but a remarkable new animal that should spark new thinking about mammal evolution on the southern continents: “We've been thinking in terms of the old Sherwin-Williams paint ads—a globe and a can of paint being poured on the North Pole and dripping down.” Convergent evolution could explain why this southern creature has dental features that resemble those of placentals, Clemens adds. Paleontologist Rosendo Pascual at the La Plata Museum in Argentina agrees: “This mammal is something totally different.” Along with other recent Cretaceous finds on southern continents, it indicates that mammals had an extensive evolutionary history in Gondwana at that time, he says.

    Cifelli sees a third possibility: that A. nyktos may be an advanced member of the peramurans, an early group of mammals believed to have spawned placentals and marsupials. To him, the slender jawbone and molar structure also look reminiscent of monotremes, considered among the oldest and most primitive of all mammals. “This new thing appears to be intermediate between peramurans and monotremes, suggesting that monotremes share a much more recent ancestor with higher mammals than previously thought,” he says. If so, monotremes would jump up the family tree by about 50 million years, making these odd egg-laying mammals closer cousins to placentals and marsupials.

    Other paleontologists think the fossil is too fragmentary to start redrafting the family tree. Instead, says Rougier, it should spur paleontologists to sharpen all their thinking about the early evolution of mammals: “The discovery will force us to take another look at the evidence for an early origin of placentals and to evaluate what features diagnose the major groups of mammals.”

    As for Rich, he welcomes the discussion. “Science is a democracy,” he says. He hopes to advance the debate by finding more bones. A complete skull would help, as would other parts of the skeleton. “Now we know there is one hole in all of Australia where this kind of fossil occurs,” Rich says. “This is where Pat and I can spend the rest of our lives, God willing.”


    Clusters Point to Never-Ending Universe

    1. Andrew Watson
    1. Andrew Watson is a writer in Norwich, U.K.

    Until recently, the outlook for the expanding universe varied with cosmological fashion. At times, the consensus view held that the universe contains enough mass for gravity to slow its expansion to a stop; at other times, cosmologists have doomed it to expanding forever. Lately, however, evidence ranging from galaxy clusters to distant supernovae has favored an ever-expanding, “open” universe. Now add another vote for an open universe, from the number of images from light-bending “gravitational lenses” in the sky.

    A team of German and British researchers has simulated how the vast clusters of galaxies that bend the light of objects behind them should evolve over time in universes containing different densities of matter. They then compared the results to the number of gravitational lens images actually seen in the sky. The verdict: The simulation that best matches the observed lensing images is one that assumes a low-density, open universe. “[This] work may be the strongest single piece of evidence [for an open universe] if it holds up,” says Neil Turok of Cambridge University.

    According to the general theory of relativity, light from a distant galaxy that passes close to a massive galaxy cluster on its way to Earth will be bent, with the cluster acting like a lens and producing an arc-shaped image of the distant galaxy. “In order for this process to be effective, you need the cluster—that is the gravitational lens—at roughly half the distance from us to the sources,” says Matthias Bartelmann of the Max Planck Institute for Astrophysics near Munich, Germany, head of the modeling team. The clusters develop through the entire lifetime of the universe, at a rate that depends on the overall density of matter in the universe. So the number of dense clusters that have ended up in the right place at the right time to act as lenses is a powerful probe of the universe's overall mass density. “The lensing effect is very nonlinear—it reacts very strongly to changes in cluster evolution or in the compactness of the clusters,” says Bartelmann.

    In work that will be published in Astronomy and Astrophysics, he and Munich colleagues Andreas Huss and Jörg Colberg, along with Adrian Jenkins and Frazer Pearce of the University of Durham in the United Kingdom, began with two computer models that simulate the evolution of galaxy clusters. As input, the models required an estimate of the distribution of matter in the primordial universe, along with the universe's mean mass density and a parameter known as the cosmological constant—a hypothetical energy embedded in empty space, which may also influence the overall expansion of the universe. The researchers allowed the models to evolve until they reached the current age of the universe. The group then computed the lensing ability of the clusters that had emerged in each model and the number of arcs that would be seen from Earth.

    Both simulations showed that a universe with just one-third of the mass density needed to stop its expansion—along with a small or zero value for the cosmological constant—would have dense clusters in the right numbers and places to produce about 2500 arcs across the sky. That number is a good match to the observed number of arcs, which is somewhere between 2300 and 2700. When the researchers wound up the cosmological constant, “the number of arcs goes down to about 250,” says Bartelmann, “and when I then turn up the matter density … the number of arcs reduces by another factor of 10 to about 25 on the whole sky.” The dramatic differences in the outcome show, says Harvard University's Ramesh Narayan, that “this is quite a sensitive method to distinguish among models.”

    Narayan cautions that simulating cluster formation is “delicate.” Bartelmann adds that the density distribution of the primordial universe is the largest uncertainty in the modeling, one that won't be dispelled until the cosmic microwave background—which records primordial density fluctuations—is mapped in detail by NASA's forthcoming Microwave Anisotropy Probe mission and the planned European Planck satellite.

    Princeton University's Neta Bahcall notes, however, that Bartelmann's results on cosmic mass density are in “excellent agreement” with her own, based on direct counting of galaxy clusters. The findings are also in accord with direct measurements of how cosmic expansion has changed over time (Science, 31 October, p. 799). This forecast for the fate of the universe may be more than current fashion.


    Life's Winners Keep Their Poise in Tough Times

    1. Richard A. Kerr

    More than a century ago, paleontologists discerned three great waves of animals in the fossil record. First came the trilobites, the clamlike stalked brachiopods, and other “old life” of the Paleozoic era; next was the “middle life” of the Mesozoic, including reptiles and marine ammonites; and last was the “new life” of the Cenozoic, when mammals, clams, snails, ray-finned fish, and others rose to dominance. But what caused these great tides in evolutionary history, and what gave later organisms the edge over their predecessors? Now, a pair of paleontologists say they have identified a trait that may have helped decide winners in the fight for evolutionary dominance: the ability to buffer the body from environmental insults.


    Most brachiopods vanished in the Permo-Triassic extinction.


    At the annual meeting of the Geological Society of America last month in Salt Lake City, paleontologists Richard Bambach of the Virginia Polytechnic Institute and State University in Blacksburg and Andrew Knoll of Harvard University reported that at least in the sea, what they call “buffered physiology” played a central role in determining evolutionary winners and losers during the past half-billion years. Organisms that could buffer their internal organs from changes in ocean chemistry were less likely to be wiped out and more likely to rebound after a mass extinction than were their more sensitive neighbors, these researchers found. “I think they're onto something,” paleontologist David Bottjer of the University of Southern California in Los Angeles says of the team's work. “Why does the last 550 million years conveniently break out into three long intervals? Maybe Bambach and Knoll have the solution.”

    A winner.

    Mollusks' robust physiology may have helped them survive.


    If Bambach and Knoll have even the beginnings of an explanation for the history of life, it's because they had already tried to explain one brief chapter of it: the greatest of all the mass extinctions, which ended the Paleozoic 250 million years ago between the Permian and Triassic periods (Science, 7 November, p. 1017). They and their colleagues proposed that a stagnant deep sea belched a massive slug of toxic carbon dioxide into shallow waters, wiping out 90% of all the animal species in the ocean (Science, 1 December 1995, p. 1441). To support that contention, they sorted animals by their inferred sensitivity to high carbon dioxide levels. They found that sensitive animals suffered greatly, while those more tolerant of carbon dioxide suffered far less.

    Bambach and Knoll have now generalized this notion of physiological sensitivity to the past 550 million years. Using a database of all known marine genera prepared by John Sepkoski of the University of Chicago, the team sorted animals according to whether they have some control over the flow of gases and ions between their tissues and seawater; that ability might determine an animal's sensitivity to environmental stresses such as high carbon dioxide, low oxygen, toxic metals, or high acidity. For example, organisms with closed circulation systems, special gas-exchange organs like gills, or both—such as most mollusks, worms, and arthropods—tended to be buffered from the vagaries of seawater chemistry. But animals whose tissues were chemically open to the sea, such as most sponges, sea urchins, corals, brachiopods, and the flowerlike crinoids, lacked this physiological advantage.

    Plotted through time, the buffered and unbuffered animal genera behaved quite differently. Early in the Paleozoic, the unbuffered creatures quickly diversified to about 750 genera, twice as many as the buffered, although Bambach can't say exactly why this group was the early winner. After 200 million years of dominance by unbuffered animals, things changed at the Permo-Triassic extinction. Both groups suffered greatly, but the unbuffered organisms were hit harder, losing 90% of their genera versus 50% for the buffered. Then, although both groups recovered, the buffered group overtook the unbuffered, eventually attaining the level of diversity the unbuffered had enjoyed in the Paleozoic. The rules of the game changed again 65 million years ago at the Cretaceous-Tertiary mass extinction. Once again, the unbuffered suffered more, but this time the buffered animals diversified rapidly enough after the extinction to quickly gain a 2:1 edge in diversity; their continued dominance can be seen in the seas today.

    The long ascendancy of unbuffered animals in the Paleozoic, says David Jablonski of the University of Chicago, is “yet another validation of the idea that incumbency is important.” A species occupying a particular ecological niche may be no better adapted to it than a newcomer species, but the incumbent, which already has a lock on the available resources, usually wins out. “Paleozoic diversity was stuck,” says Bambach, with the unbuffered dominating the presumably more capable buffered animals, until the Permo-Triassic extinction broke down the system and allowed the buffered genera to proliferate into the emptied niches.

    This new perspective on the history of life also elevates the Cretaceous-Tertiary extinction to the status of a megaevent in the oceans as well as on land. When gauged solely by the number of species lost, the Permo-Triassic extinction towers over the Cretaceous-Tertiary and three other mass extinctions, at least among sea creatures. But the buffered-unbuffered distinction makes the transforming role of the Cretaceous-Tertiary clear. “The two era-ending extinction events are involved in altering the diversity of life in the oceans so completely that ecosystems had to reorganize the way they work,” says Bambach.

    Paleontologists who heard Bambach describe the work at the meeting are impressed. “The physiological explanation is a very seductive one,” says Jablonski. “It's going to be really worth digging into. My only hesitation is that correlation is not necessarily causation. It may not be physiology that actually determines the pattern,” but some other trait that happens to accompany it.

    Bambach and Knoll are open to the idea that something else may be the real cause of the pattern they see. Bambach notes one candidate: Buffered animals are typically also more ecologically and anatomically plastic, capable of taking up new modes of life when given the opportunity. “A coral will never have teeth or swim,” as he puts it. He and Knoll are seeking to identify such traits, says Bambach: “This is the beginning of the study, not the end.”


    Rat Model for Gulf War Syndrome?

    1. Ingrid Wickelgren

    From 25 to 30 October, New Orleans was host to 24,000 neuroscientists at the 27th annual meeting of the Society for Neuroscience. With more than 14,000 presentations, the offerings were as rich as a New Orleans gumbo, spiced with new tidbits about topics ranging from prion proteins to Gulf War syndrome.

    At the close of the 1991 Persian Gulf War, thousands of soldiers returned home apparently healthy, but reporting subtle problems ranging from muscle aches and fatigue to learning deficits or confusion. The U.S. government has officially attributed the collection of symptoms, known as Gulf War syndrome, to wartime stress, but that conclusion remains contentious. Many veterans, and a few researchers, point to another suspect: exposure to organophosphates (OPs), ingredients of chemical weapons and insecticides that were present in the Gulf.

    But while OPs in high concentrations are extremely toxic, there has been little evidence that the low doses to which the Gulf War veterans were presumably exposed are harmful. Now, neuropharmacologists Jerry Buccafusco, Mark Prendergast, and Alvin Terry of the Medical College of Georgia and the Veterans Administration Medical Center in Augusta, and their colleagues, have found the first hints in an animal model that low levels of OPs produce long-term problems in cognitive areas of the brain.

    Buccafusco reported at the meeting that in a study of rats, OP treatment reduced the numbers of a receptor that enables nerve cells to respond to the neurotransmitter acetylcholine in an area of the brain—the hippocampus—known to be involved in learning and memory. The animals also developed learning deficits. And both the receptor loss and the learning problems lasted for weeks after the chemicals were gone.

    While the result won't solve Gulf War syndrome, it is, says Philip Bushnell, a toxicologist at the Environmental Protection Agency in Research Triangle Park, North Carolina, “the first time anyone has seen cognitive deficits persisting this long after low-level exposure.” The “interesting thing” about the work, he adds, is the correlation between the persistence of those deficits and the reduction in the number of the acetylcholine-binding “nicotinic” receptors, which a large body of evidence has shown to be important in cognitive function.

    The idea that low OP doses might have subtle effects on the brain emerged in the 1960s when anecdotal reports suggested that agricultural and industrial workers experience memory and concentration problems after chronic, low-level exposure to organophosphates. Those reports attracted little attention, however, until similar symptoms cropped up among some Gulf War vets, and researchers began investigating the cognitive consequences of low-level OP exposures. But none of these efforts revealed deficits or brain changes that lasted beyond the period of exposure until this year, when Buccafusco's team did their study.

    The researchers injected each of 70 rats with either a salt solution or a low dose—one that would produce no acute symptoms—of an organophosphate called diisopropylphosphorofluoridate (DFP) every day for 2 weeks. When they then evaluated the animals in the Morris Water Maze—a test of spatial memory that involves finding a hidden platform in a pool of water—they found that the DFP-treated rats took up to 30% longer than the saline-injected controls did to learn to navigate the maze. This learning deficit persisted for 3 weeks after the last DFP injection. It also appeared to spare long-term memories: Rats that had learned the maze before the DFP exposures performed as well as control rats.

    In addition, the team has discovered a biochemical change in the brain that may explain the behavioral deficit. OPs block an enzyme that breaks down the neurotransmitter acetylcholine, causing the chemical to build up in the synapses between nerves and their targets. Because such increases may lead to compensatory decreases in the neurotransmitter receptors, the researchers looked at how DFP affects the brain concentrations of the two types of acetylcholine receptor: the muscarinic receptor, so called because it also binds the chemical muscarine, and the nicotine-binding nicotinic receptor.

    They found that DFP immediately depressed levels of both receptor types in brain areas involved in cognition, including the hippocampus, striatum, and parts of the cortex. The muscarinic receptor quickly rebounded in all areas after the DFP treatment was stopped. But the numbers of nicotinic receptors in the hippocampus stayed low, bottoming out at 30% of baseline levels about a week after the treatment and remaining significantly depressed until the experiment was ended another 2 weeks later.

    Those results hint that DFP might be causing learning problems by somehow depressing the numbers of nicotinic receptors. To test the idea, the Georgia researchers gave nicotine, which activates the receptor, to 12 of another 24 DFP-exposed rats just before they were plopped in the water maze. The results: The nicotine-treated rats performed just as well as rats that had received only saline injections.

    Buccafusco is the first to concede that it's too soon to draw any firm connection to Gulf War syndrome. He points out, for example, that neither his experiments nor anyone else's can duplicate the chemical exposures of Gulf War veterans because it's been impossible to determine exactly what they were. “We don't know whether the troops were exposed to the same levels of organophosphates in the same time course as our rats. Nor do we know what other chemicals they were exposed to,” Buccafusco says. But even if the findings are totally irrelevant to the Gulf War, he adds, with the widespread use of pesticides and the worldwide stockpiling of nerve agents, studying the effects of OPs will be highly relevant in the future.


    Protective Role for Prion Protein?

    1. Marcia Barinaga


    With the award of the Nobel Prize last month to University of California, San Francisco, neuroscientist Stanley Prusiner, prions have been in the news a lot lately. But in spite of the work by Prusiner and others implicating these rogue proteins in a variety of fatal brain diseases, including the United Kingdom's bovine spongiform encephalopathy (BSE or “mad cow disease”), the story has some gaping holes. One of the largest: No one knows the function of the normal prion protein, known as PrPc. Data presented at last month's neuroscience meeting by neurobiologist David Brown of the University of Cambridge in the United Kingdom may help remedy that.

    His new findings suggest, Brown says, that PrPc protects neurons by binding copper ions. When they run free, these reactive ions are highly toxic to cells. If the normal prion protein does help sequester the ions, the abnormal prion folding and clumping thought to cause BSE and other prion diseases could cause nerve cell death by interfering with this protective function.

    Researchers who study copper binding are all abuzz about Brown's results. “All my friends in neuroscience e-mailed me [after Brown's talk] to ask what is going on with this,” says Jonathan Gitlin, an expert on copper-binding proteins at Washington University in St. Louis. Gitlin himself says that PrPc could well play a role in protecting nerve cells from copper ions.

    But Brown's theory that a loss of the copper binding contributes to prion disease is more controversial. Many prion researchers believe that the destructive effects of the malformed prion protein, rather than loss of normal PrPc function, cause the diseases. As prion researcher Adriano Aguzzi of the University of Zurich in Switzerland points out, mice that make no PrPc because the gene has been knocked out do not get them.

    Because those mice don't show obvious ill effects from the loss of PrPc, Brown took a different route to identifying the role of the normal protein. Previous studies by other researchers had shown that a fragment of PrPc binds copper in the test tube. Working in collaboration with Hans Kretzschmar of the University of Göttingen in Germany, Brown went on to see what effect this copper binding might have on brain cells.

    In his first experiments, he cultured brain neurons from normal mice and from mice lacking a functional PrPc gene. He found that cells without PrPc are much more susceptible than the cells from the normal mice to poisoning by copper sulfate. Brown then went on to show that a peptide containing the copper-binding site of PrPc could protect the mutant cells from copper's toxicity.

    Further evidence that PrPc may protect cells from copper came in experiments indicating that the protein binds the metal in living brains. PrPc is normally found in the membranes of brain cells, and Brown reported that membranes prepared from the brains of normal mice “have a lot of copper”—almost 20 times more than those from the knockout mice. Because PrPc is concentrated in the membranes of synapses, the specialized gaps where neurons are chemically coupled to one another, Brown postulated that it may serve to sop up copper ions released into synapses when neurons fire. In support of that idea, he showed that copper diminishes electrical activity in cerebellar neurons from the knockout mice that lack PrPc, but not in cerebellar neurons from normal mice.

    Brown proposed that PrPc could have a more indirect role as well: It might pass the copper to other proteins, which could ferry it to enzymes inside the cell that need it for their activity. These include superoxide dismutase, which protects cells from damage caused by oxidizing chemical species that can form during normal cellular activities, such as energy metabolism. Indeed, he and Kretzschmar found that neurons from PrPc knockout mice were less resistant to oxidative damage than those from normal mice.

    Even though prion researchers generally blame the abnormal protein for the cell death seen in prion diseases, Aguzzi says Brown's findings raise the possibility that the loss of a normal function could also contribute in ways that have been missed or somehow compensated for in the knockout mice. If so, the normal prion protein may end up stealing some of the attention its malignant counterpart is now getting.


    Heat Shock Protein Linked to Stroke Protection

    1. Ingrid Wickelgren


    Just a few years ago, neurologists believed they could do little to treat strokes. But recent work has shown that if they act quickly to restore blood flow with a clot-busting drug, they can limit the brain damage from strokes—and the accompanying deficits in memory, speech, or mobility (Science, 3 May 1996, p. 664). Now, neuroscientists Midori Yenari, Sheri Fink, Robert Sapolsky, Gary Steinberg, and their colleagues at Stanford University have identified a molecule that could lead to a new kind of stroke therapy.


    Rat brain neurons given a gene for heat shock protein 72 (blue stain) survive an experimental stroke.


    As Yenari reported at the meeting, the researchers found that introducing the gene for a heat shock protein (HSP)—so-called because it is produced in response to increased temperatures or other stresses—into the brains of rats reduced the number of neurons that died after their blood supply was cut off. “It's a very exciting result,” says Dennis Choi, a neurologist at Washington University in St. Louis. “It provides good evidence that heat shock protein has its arms around neuroprotective levers in the cell.” While the technical obstacles of converting the finding into a therapy for stroke are formidable, adds Choi, “it's not beyond the pale.”

    The Stanford researchers undertook the study because earlier work had shown that HSP concentrations increase in the brain after experimentally induced strokes. Although the proteins are known to protect cells against damage caused by various stresses, “people didn't know whether they were doing anything useful” in stroke, Sapolsky says.

    To find out, the Stanford team inserted the gene for HSP 72, which is particularly abundant in times of stress, into a modified version of the genome of herpes simplex virus (HSV), which can infect nerve cells. The researchers then injected this vector into the brains of 11 rats in a region called the striatum. They also injected another eight rats with a control HSV vector that did not contain the HSP gene.

    After 12 hours, the team induced strokes in all the rats by temporarily choking off the artery that feeds the striatum on one side of the brain. Two days later, when they assessed the damage in the affected area, they found that only 60% of the striatal neurons carrying the control vector had survived, while 95% of those carrying the HSP-expressing vector were still alive. The researchers hypothesize that the HSP might be protecting neurons by binding to key cellular proteins and thus helping them keep their shape and function after a stroke.

    Experts caution that a comparable gene therapy for human stroke patients is a long way off, at best. Experimental neurologist Howard Federoff of the University of Rochester Medical Center in New York points out, for example, that no known viral vector can spread widely enough in the much bigger human brain to cover the region typically damaged during a stroke. In addition, Sapolsky's team has yet to show that they can protect neurons by injecting the HSP gene after a stroke occurs. Other experiments by the Stanford group have shown that genes inserted into the HSV vector become active about 2 hours after the injections, with peak activity some 10 hours later. It's not yet clear whether that is soon enough for the additional HSP to ward off cell death.

    Even if it isn't, it might be possible to use the gene therapy preventatively when the risk of stroke is high, as in certain surgical procedures. Alternatively, investigators could design small-molecule drugs that stimulate extra production of the HSP or mimic its cell-sparing effects.
