News this Week

Science  31 Oct 1997:
Vol. 278, Issue 5339, pp. 799



    New Light on Fate of the Universe

    1. James Glanz


    In the flash of stellar explosions seen halfway back to the big bang, two groups of astronomers have read clues to the future of the universe. With the orbiting Hubble Space Telescope and ground-based observatories, they have analyzed light from these remote cataclysms to estimate their distances and determine how fast the stars were rushing away from Earth billions of years ago when they exploded. Their goal is to learn how the universe's expansion rate has changed over time—whether it has been slowed by gravity, or perhaps boosted by large-scale repulsive forces. The groups, longtime rivals, have been working independently, but their results agree: The universe's expansion rate has slowed so little that gravity will never be able to stop it.

    The new results imply that the universe contains far less mass than many theorists had hoped: less than 80% of the amount that would be needed to slow its expansion to a halt, and perhaps far less than that. The results even leave open the possibility that a so-called cosmological constant—a hypothetical property of empty space that might generate repulsive forces—is at work, giving the universe an expansive antigravity boost. “The results are very exciting and the method is very promising,” says Neta Bahcall of Princeton University.

    Bahcall points out that the small numbers of stellar explosions, or supernovae, analyzed by the groups mean that the conclusions are not definitive. But the agreement between the two results, coming on the heels of other hints of a low-density universe, has many cosmologists taking them seriously. And because the supernova technique directly measures how the makeup of the universe is affecting its evolution, says astrophysicist Michael Bolte at the University of California, Santa Cruz, “I think this is the surest way to make some of these measurements.”

    Both groups stress that they need to analyze more supernovae to reduce the uncertainty in their results, reported in two just-completed papers. One of the papers, by the Supernova Cosmology Project led by Saul Perlmutter of Lawrence Berkeley National Laboratory and the University of California, Berkeley, is in press at Nature. The other, by the High-Z Supernova Search Team led by Brian Schmidt of Mount Stromlo and Siding Spring Observatory in Australia, was under review at Astrophysical Journal Letters as Science went to press but is publicly available on the Los Alamos National Lab electronic preprint server.

    If the results hold up as the groups add more supernovae to their samples, they could have a major impact on how theorists picture the universe's first few moments. Already, as word of these developments makes its way through the astrophysics community, the findings are adding to a growing sense that the simplest version of the reigning cosmic creation theory, known as inflation, may not work. Inflation traces key features of the universe to a burst of exponential growth in the first fraction of a second after the big bang, and its simplest version predicts a universe that contains just enough matter for gravity to stop the big-bang expansion after an infinite time—a mass density that would make the large-scale geometry of space-time “flat.”

    “What we are finding is that matter cannot be the only source of a flat universe,” says Peter Garnavich of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, lead author of the High-Z Supernova Team's paper. The results still leave an opening for some theories in which matter plus its equivalent in energy, supplied by the cosmological constant, add up to a flat universe. But that picture is far less palatable to most astronomers. “If I were a theorist, I'd be getting worried at this stage,” says Alexei Filippenko of Berkeley, a co-author on the Garnavich team's paper.

    Candles in the dark

    Both groups are looking for clues to the fate of the universe by extending a simple line, which plots the distance of far-off objects against their velocity as they are swept from Earth by cosmic expansion. Nearby, within a few hundred million light-years, astronomers already know that the line is straight. The recession velocity of galaxies increases steadily with distance from Earth, implying that space itself is expanding at the same rate everywhere. But objects seen at greater distances, billions of light-years away, emitted their light much earlier in cosmic history. The line should subtly bend at great distances, and the bending should reveal how gravity or a cosmological constant has changed the expansion rate over time.

    Measuring how fast an object in the distant universe is flying away from Earth is straightforward: Just determine the “redshift” of its light, a stretching of its wavelengths analogous to the drop in pitch of a receding train's whistle. Measuring distance is another matter, requiring objects that can be seen far out in the universe and have a roughly constant intrinsic brightness, so that their apparent brightness can be taken as a distance indicator. That's where the exploding stars called type Ia supernovae enter the picture.
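    The redshift-to-velocity conversion described here can be made concrete. The sketch below is purely illustrative (the function name and the choice of the relativistic Doppler formula are this example's, not the teams'); it shows how a measured redshift z maps to a recession velocity, reducing to the familiar v ≈ cz at small redshift.

```python
# Illustrative only: convert a redshift z to a recession velocity.
# For small z the classical approximation v = c*z holds; the
# relativistic Doppler formula is used here so larger redshifts
# stay below the speed of light.
C_KM_S = 299_792.458  # speed of light in km/s

def recession_velocity(z):
    """Relativistic Doppler velocity (km/s) for redshift z."""
    factor = (1.0 + z) ** 2
    return C_KM_S * (factor - 1.0) / (factor + 1.0)

for z in (0.01, 0.4, 0.83):
    print(f"z = {z}: v ≈ {recession_velocity(z):,.0f} km/s")
```

At z = 0.01 this agrees with v ≈ cz to better than a percent; at the redshifts in these surveys (0.4 to 0.97) the relativistic form matters.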

    Type Ia's are thought to be white dwarf stars that suck material from a companion star until they blow up as a star-sized hydrogen bomb. “They're very bright, so you can see them across the universe,” says Michael Turner of the University of Chicago. And because white dwarfs all have about the same mass, all type Ia's should have roughly the same intrinsic brightness, turning them into appealing “standard candles.”

    Both teams have now identified dozens of type Ia supernovae in the distant universe using an efficient discovery technique developed by the Perlmutter group. Researchers compare survey images of the same regions of sky, made weeks apart. A computer “subtracts” one image from the other, and any new point of light in the hundreds of galaxies in each image pair jumps out. Then the teams go to large ground-based observatories like the 10-meter Keck Telescope in Hawaii or, lately, to the Hubble Space Telescope. There they confirm that the bright spot really is a new type Ia, measure its redshift, and record its light curve as it brightens to a peak and then declines over the following months.
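    The subtraction step can be sketched in a few lines. This is a toy illustration with synthetic images, not the Perlmutter group's actual pipeline; a real search must first align the two images precisely and match their blurring before differencing.

```python
import numpy as np

# Toy sketch of the image-subtraction search: two aligned images of
# the same field, taken weeks apart, are differenced so that anything
# new stands out against the unchanged galaxies.
rng = np.random.default_rng(0)
reference = rng.normal(100.0, 2.0, size=(64, 64))            # first-epoch image
new_epoch = reference + rng.normal(0.0, 2.0, size=(64, 64))  # second epoch, noise only

# Inject a "supernova": a bright new point source in the second image.
new_epoch[40, 21] += 50.0

difference = new_epoch - reference
# Flag pixels that brightened by far more than the noise allows.
threshold = 5.0 * difference.std()
candidates = np.argwhere(difference > threshold)
print(candidates)
```

The injected source at pixel (40, 21) is the only feature bright enough to clear a 5-sigma cut; everything common to both epochs cancels.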

    Collecting those measurements is just the beginning. Because type Ia's don't reach exactly the same peak brightness, “they certainly are not ideal standard candles,” says Mario Hamuy of the University of Arizona, a co-author of the Garnavich paper. Fortunately, he adds, “we can correct for the variations.” Mark Phillips of the Cerro Tololo Inter-American Observatory in Chile, co-author Adam Riess of Berkeley, Hamuy, and others have shown that the “light curve” declines more slowly for intrinsically brighter supernovae. By studying about 30 nearby supernovae, Hamuy and Riess tightened up the relationship so that observers can use it to correct each new supernova's brightness.
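    The brightness correction works, in essence, like the sketch below. The slope and fiducial decline rate here are hypothetical placeholders, not the calibration published by Phillips, Hamuy, and Riess; the point is only the shape of the correction: a slower decline means an intrinsically brighter supernova, so observed peak magnitudes are shifted onto a common standard-candle scale.

```python
# Hedged sketch of a Phillips-style correction. Delta-m15 is the
# number of magnitudes a supernova fades in the 15 days after peak;
# fast decliners are intrinsically dimmer. The constants below are
# illustrative, not the published calibration.
FIDUCIAL_DECLINE = 1.1   # assumed fiducial Delta-m15
SLOPE = 0.8              # hypothetical magnitude correction per unit Delta-m15

def standardized_peak_mag(observed_peak_mag, delta_m15):
    """Shift an observed peak magnitude onto the standard-candle scale."""
    return observed_peak_mag - SLOPE * (delta_m15 - FIDUCIAL_DECLINE)

# A fast decliner (intrinsically dimmer, numerically larger magnitude)
# is corrected toward the fiducial value:
print(standardized_peak_mag(24.3, 1.5))  # ≈ 23.98
```

With each supernova standardized this way, its corrected apparent brightness can serve as the distance indicator the Hubble-diagram analysis needs.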

    Both groups have used these data to calibrate their supernovae. They also have had to beware of other factors that might prevent the explosions from serving as perfect standard candles—for example, the dimming of their light by interstellar dust between a supernova and Earth. “You spend a lot of time making sure you get [the corrections] right,” says Perlmutter of the Berkeley Lab.

    Last year, Perlmutter and his colleagues in the Supernova Cosmology Project finally worked through the corrections for a handful of supernovae observed from the ground. The supernovae had redshifts of up to about 0.4—a few billion light-years away. When the researchers plotted brightness against distance, the line had a slope that was broadly consistent with a flat universe, containing a full complement of matter (Science, 4 April, p. 37). But the game changed just after those results became public, when both teams were granted observing time on the Hubble.

    “The Hubble is a much more precise instrument than ground-based telescopes for measuring these light curves,” explains Gerson Goldhaber of Berkeley and the Berkeley Lab, a co-author of the Nature paper. Hubble observations of a supernova over its months-long fade aren't plagued by moonlight scattered in the atmosphere. And Hubble's high resolution makes it much better at separating the light of a supernova from the shine of its host galaxy.

    Universe without end

    When the Supernova Cosmology Project added just one Hubble supernova to its sample, at a redshift of 0.83 (a distance of roughly 7 billion light-years), the future of the universe began to look different. The data are now most consistent, says Perlmutter, with a universe containing far less than the critical density of matter. If the universe is flat, matter may account for only 40% to 80% of the critical density, with the cosmological constant making up the rest. If the universe lacks any cosmological constant, the supernovae imply that the mass density of the universe, known as omega-matter, is still lower, and the universe is destined to expand forever.

    Those conclusions match those of Garnavich and his colleagues, who analyzed Hubble observations of three type Ia's—the most distant at a redshift of 0.97—and one seen only from the ground. Their analysis suggests still lower mass densities for a flat universe (10% to 70%), but the error bars for the two sets of results easily overlap. “The fact that we're coming out with [omega-matter] numbers that are pretty consistent and low is very exciting,” says Filippenko of Berkeley.

    Even so, Perlmutter says, kernels of doubt remain. Although the spectrum from his most distant event is “strikingly similar” to those of nearby supernovae, he says, the mix of galactic environments may have been quite different at those remote times. Such differences might have an effect on the relationship between the light curve and the peak brightness. And then there are the small numbers of supernovae behind the conclusions. Four Hubble events, says Berkeley's Marc Davis, who was not involved in the work, “are not enough to define the solution to a long-standing cosmological puzzle.”

    But Davis adds: “I think it's fairly clear this is going to be the way to do it.” Chicago's Turner agrees: “For me, the exciting thing is what's to come. We have a very promising standard candle, and we have two groups [using] it; this is the tip of the iceberg.” The teams say they are seeking more Hubble time and now are analyzing many more ground-based observations.

    And it escapes no one's attention that these first conclusions fall broadly in line with an increasing number of other observational hints that the cosmic mass density may be low. One of the most recent appeared in the 20 August Astrophys. J. Lett., where Princeton's Bahcall and two colleagues showed that massive clusters of galaxies have changed little over recent cosmic history, implying that large-scale gravitational forces are feeble and pointing to a matter density of just 40% of the critical value. Other hints of a low-density universe emerge from computer simulations of how different mass densities would affect the formation of giant clusters of galaxies, and from searches for invisible dark matter in our cosmic neighborhood. “If you look at the observational data, they all suggest a low density,” says Bahcall.

    Inflation can be modified to cope with a low-mass universe, says Andrei Linde, a theorist at Stanford University who helped develop the theory. But “at some point you can't patch a theory too much before it gets too ugly to accept,” says Bolte of Santa Cruz. “That's what's going to come under fire, I think: whether inflation is the correct model or not for the early universe.”

    With those debates still to come, along with plenty more supernovae, “it's early times, my friend,” says Princeton University's Jim Peebles. “You shouldn't start paying off your bets.”


    Teeth and Bones Tell Their Stories at Chicago Meeting

    1. Anne Simon Moffat

    CHICAGO—About 900 fossil lovers gathered here from 8 to 11 October for the annual meeting of the Society of Vertebrate Paleontology. Highlights ranged in time from the dawn of land vertebrates more than 300 million years ago to the extinction of the dinosaurs and other creatures 65 million years ago.

    Shark and Ray Extinctions

    During the disastrous extinction that occurred about 65 million years ago, at the so-called Cretaceous-Tertiary (K-T) boundary, the world lost all of its dinosaurs, as well as many of its land plants and animals dwelling in shallow water. Now add sharks and rays to the list of casualties, say John Hoganson of the North Dakota Geological Survey in Bismarck and his colleagues.

    Long gone?

    Squalicorax pristodontus teeth were absent in a Tertiary sample.


    Smaller life-forms in the sea, such as plankton and the shellfish called ammonites, are known to have gone extinct in droves at the K-T boundary, when an asteroid is believed to have struck Earth. But extinctions of large marine animals have been tough to document, in part because the sedimentary rocks that may record them are mainly underwater and inaccessible. Now, by combining separate rock records, one dating from before the extinction and the other from after it, Hoganson and his colleagues say they can demonstrate large-scale extinctions of sharks and rays in the seaway that covered much of central North America before and after the K-T boundary.

    Because sharks and rays have skeletons of cartilage, which have long since decomposed, the researchers classified fossilized teeth from marine rocks in two areas, the Fox Hills and Cannonball Formations of North Dakota. They found 22 shark and ray species at the Fox Hills site, dated to the late Cretaceous by the presence of other fossils, and 15 at the Cannonball site, which belongs to the early Tertiary. None of the Cretaceous species at Fox Hills were present at Cannonball, indicating that they had all gone extinct, although other species belonging to some of the same families as the sharks and rays did survive from one era to the next.

    Not everyone is convinced that the fish went extinct, however. Says paleontologist David Archibald of San Diego State University, “They don't have the [complete] geological section; they're missing at least 1 million years, at the K-T boundary.” A change in the environment, such as receding seas, could have yielded the same fossil evidence by, for example, forcing fish to move to distant, more hospitable environments. Therefore, he says, “their work says nothing about K-T extinctions.”

    Others find the evidence more persuasive, noting that many extinctions are known only from isolated data points. “There are very few places in the world that have marine rocks that span the boundary,” says paleobotanist Kirk Johnson of the Denver Natural History Museum. But he adds, “Hoganson has stratigraphic units that come close.” For his part, Hoganson suggests that confirmatory evidence might be found in rocks along the Gulf Coast in Texas and Alabama, which have late Cretaceous and early Tertiary marine sediments that can be studied.

    Famous Dinos Misidentified

    Back in 1979, field paleontologist John Horner of the Museum of the Rockies in Bozeman and his colleagues found a rare deposit of 75-million-year-old dinosaur eggs in Montana. Because the egg clutches were surrounded by bones of the herbivorous dinosaur Orodromeus malekei, Horner and other paleontologists originally thought that Orodromeus had laid the eggs. But last year, he made a discovery that not only negates that conclusion but also has implications for understanding dino behavior: The eggs belonged not to Orodromeus, but to the carnivorous Troodon, Horner now says.

    This reversal implies that the Orodromeus bones are the remains of food brought back to the nest for the young by the parent Troodons. If so, dinosaurs may have been more nurturing than thought, lending further support to the still-controversial idea that today's birds are descendants of the dinosaurs. “There is increasing evidence that the behavioral features we associate today with birds were found also in dinosaurs,” says Horner.

    Horner's reanalysis was triggered by the discovery of another nest just 80 kilometers away, where the presence of Troodon bones on the eggs left little doubt that the carnivore was brooding its eggs. So Horner decided to reanalyze the supposed Orodromeus eggs and embryos. After removing more rock from the specimens to get a better look at the embryos, Horner discovered that all the eggs belong to Troodon. He found, for example, that the crests of the embryonic humeral bones were identical to those of Troodons studied elsewhere. Horner noted the error in a brief “Scientific Correspondence” in the 5 September 1996 issue of Nature. But, he says, not many paleontological researchers noticed, and at the meeting he offered a public mea culpa, telling everyone “we're correcting a mistake.”

    Dinosaur curator Mark Norell of the American Museum of Natural History in New York City is not surprised by the misidentification. Indeed, he notes that a fabled dino hunter from his own institution, Roy Chapman Andrews, made a similar error in 1923, classifying eggs from a Mongolian site as the herbivorous Protoceratops. In 1993, Norell found eggs that, based on both their gross appearance and chemical composition, were identical to those that Andrews had found. But the embryos at the new site turned out to be a carnivore called Oviraptor (Science, 4 November 1994, p. 779).

    Oviraptor (so named because it was originally thought to be an egg predator) is now thought to have brooded its eggs, but Troodon may have displayed even more advanced behavior. The Orodromeus remnants found around the clutches were not fragments of dead juveniles, Horner says, but were instead the remains of animals delivered to the nest by a Troodon parent to feed its young. While such advanced social behavior is common in birds, it is not often associated with dinosaurs. In addition, Horner says, Troodon shows other birdlike behaviors, including nesting in colonies, laying eggs at regular intervals in neat clutches, and producing oblong eggs.

    To Horner and Norell those findings strengthen the relationship between dinosaurs and birds, although not everyone will be convinced. Just last week, for example, Ann Burke and Alan Feduccia of the University of North Carolina, Chapel Hill, reported in Science (24 October, p. 666) an analysis of digit development in the avian hand, which they say supports a different conclusion.

    Early Land-Dweller Found

    If dinosaurs are the megastars of paleontology, the amphibian-like microsaurs barely rank as bit players. But these 12- to 15-centimeter-long creatures played a crucial behind-the-scenes role: They were among the first four-legged vertebrates to crawl onto land, more than 300 million years ago. As such, microsaurs have ardent fans among paleontologists who want to understand the adaptations that made this epochal change possible. Now these researchers have a new star to follow: a specimen that offers the earliest look yet at the kind of vertebrate that made the leap to land.

    At the meeting, paleontologist John Bolt of Chicago's Field Museum of Natural History reported that he and colleague R. Eric Lombard of the University of Chicago had identified microsaur remains in a hand-sized sample collected about 10 years ago in Mississippian rocks of southern Illinois by paleontologists from the University of Kansas. Until now, the oldest microsaur fossils had been dated to the early Pennsylvanian period, about 322 million years ago. But finding the new microsaur in Mississippian rock means that it lived roughly 10 million years earlier.

    It is “unquestionably the oldest known microsaur,” says paleontologist Robert Carroll of McGill University in Montreal. And while 10 million years isn't much on a geological timescale, it has brought microsaurs a big step closer to the critical time, about 360 million years ago, when four-legged vertebrates crept onto land, and is thus giving paleontologists a better view of what those pioneers may have looked like.

    This particular fossil shows classic microsaur features, Bolt says. These include uniquely shaped spine bones and a simplified skull, containing just one bone instead of three. But it also has a feature that sets it apart from later microsaurs: the proatlas, a pair of rodlike bones that attach between the first vertebra and the skull and can limit head rotation to up-and-down movements. The large, but more primitive, amphibians that predated microsaurs also had these bones.

    Bolt adds, however, that he was disappointed to find that this new form offers few clues to what its immediate predecessors might have been. “We would have liked to see features that were shared only between microsaurs and some other group of early amphibians,” he says. Bolt hasn't given up hope of uncovering such links. “We'll find them,” he predicts, “but we'll have to dig deeper into the first quarter or half of the Mississippian.”


    El Niño Slows Greenhouse Gas Buildup?

    1. Nigel Williams

    As anyone who tunes in to weather forecasts should know, the periodic warming of the eastern Pacific known as El Niño takes the rap for a lot of bad weather—everything from the hurricane that swept Acapulco earlier this month to the blizzard that dumped up to a meter of snow over the U.S. heartland last week. But El Niño also has an upside that may help researchers better understand global climate change. On page 870, earth scientist Rob Braswell of the University of New Hampshire, Durham, and his colleagues describe new results suggesting that, by warming global climate, an El Niño or any other warm period may help temporarily brake the ongoing rise in atmospheric carbon dioxide due to human activity. The mechanism: a delayed burst in plant growth worldwide that appears to sop up excess levels of the greenhouse gas.

    The findings implicate ecosystem processes—perhaps interactions between soil microbes and plants—as a middleman between warming and plant growth. “These results are a major step forward in providing evidence for mechanisms that explain terrestrial responses to climate change,” says ecologist Stuart Chapin of the University of California, Berkeley. Experts say it's unclear, however, whether such plant growth might restrain carbon dioxide buildup over the long haul.

    Atmospheric carbon dioxide concentrations have increased more or less steadily over the past 20 years, continuing a trend more than a century old that is attributed largely to rising consumption of fossil fuels and large-scale destruction of forests by slash-and-burn agriculture and logging. Braswell's team analyzed shorter-term fluctuations in carbon dioxide levels and—using powerful satellite-based techniques—global temperatures and plant growth after unusual warm spells, some of which are attributable to El Niño events. “We really didn't know what was going to happen, and we weren't confident we'd see anything conclusive,” says Braswell.

    But to their surprise, they found that the rate of increase of atmospheric carbon dioxide levels slowed significantly about 2 years after each of four warm spells that occurred between 1980 and 1991, including the major El Niño of 1982 to 1983. Global vegetation growth—as measured by light reflected from photosynthetically active leaves—also sped up after a comparable time lag, suggesting that the plants were removing the excess carbon dioxide. “It's a surprise to see such a clear delay given all the variables in global climate and plant growth,” he says.

    The 2-year gap between the warming events and the changes in vegetation and atmospheric carbon dioxide concentrations indicates that the responses weren't due simply to higher temperatures spurring plant growth. “Ecologists are familiar with lags from field experiments, but such a long delay is surprising,” says Braswell. Indeed, adds climate modeler Peter Cox of the Hadley Centre for Climate Research and Prediction at the Meteorological Office in Bracknell, United Kingdom, the lag “is difficult to understand, but is probably associated with processes in the soil involved with the availability of nutrients such as nitrogen.” He and others suggest that warming increases the activity of microbes that make fertilizers available in the soil, increasing plant growth after a delay. The hunt is now on for exactly which soil microbes or other factors dictate how ecosystems respond to warming.
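    The kind of lagged relationship the team found can be illustrated with a toy calculation on synthetic data. This is not the Braswell team's analysis; the series, the coefficient, and the built-in 2-step lag below are constructed for the example. The idea is simply to scan candidate lags and pick the one at which temperature anomalies best predict the carbon dioxide growth rate.

```python
import numpy as np

# Synthetic illustration: a warm-spell anomaly series and a CO2
# growth-rate series built to respond two time steps (years) later,
# with the delay then recovered by scanning candidate lags.
rng = np.random.default_rng(1)
years = 30
temp = rng.normal(0.0, 1.0, years)             # warm-spell anomalies
co2_growth = np.zeros(years)
co2_growth[2:] = -0.7 * temp[:-2]              # growth rate slows 2 yr after warmth
co2_growth += rng.normal(0.0, 0.2, years)      # measurement noise

def lag_correlation(x, y, lag):
    """Pearson correlation between x and y shifted forward by `lag` steps."""
    return np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]

best_lag = max(range(6), key=lambda k: abs(lag_correlation(temp, co2_growth, k)))
print(best_lag)
```

The scan recovers the 2-year delay because only at that lag do the two series line up; at every other offset the correlation is near zero.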


    Mutant Mice Mimic Human Sickle Cell Anemia

    1. Marcia Barinaga

    In this age of genome research, a new disease gene seems to be discovered almost every week—each one carrying its own promise of a cure. But even as researchers bag more and more of the genes at fault in hereditary diseases, sickle cell anemia stands as a sobering reminder that just identifying a mutant gene doesn't necessarily mean the problem is solved. The mutation that causes this painful, debilitating, and eventually fatal disease has been known for 50 years, yet there is no cure and few effective treatments for the hundreds of thousands of people around the world born with the condition each year. But new results described in this issue may help change that.


    The top micrograph shows the sickled red blood cells from a genetically altered mouse, while the bottom one shows the consequences of the sickling: The deformed red blood cells clog blood vessels and cause tissue damage in the kidneys of the mutant mice.


    On pages 873 and 876, two teams, one headed by Tim Townes of the University of Alabama, Birmingham, and the other by Eddy Rubin at Lawrence Berkeley National Laboratory in Berkeley, California, report that they have created mouse models of sickle cell anemia that closely mimic the human disease. As adults, these mice make only the disease-causing, mutant form of hemoglobin, which clumps together when it is not bound to oxygen, distorting red blood cells into a sickle shape. The animals also seem to have the same dismal array of symptoms as human sufferers. They develop severe anemia as a result of the premature death of the abnormal blood cells. Their blood vessels become clogged with sticky, sickle-shaped red cells. And their organs show damage from the resulting oxygen and nutrient starvation.

    As a result, says Mary Fabry, who works on other sickle cell mouse models at Albert Einstein College of Medicine in New York City, the new mice will be “indispensable” for testing drug and gene-therapy strategies for controlling the disease. Franklin Bunn of Harvard Medical School in Boston, an authority on hemoglobin diseases, agrees, calling the models “a critically important advance.”

    Until recently, researchers studying sickle-cell disease and looking for new therapies had to conduct their experiments on red blood cells from sickle-cell patients. But all the complex tissue interactions that figure in the disease couldn't be studied in the test tube, nor could the full range of possible treatments be tested. “We obviously needed an animal model,” says Townes.

    To try to create such models, several research teams in the late 1980s and early 1990s introduced the mutant gene that causes sickle-cell disease into mice, although their efforts weren't completely successful. The mutation underlying the condition is a change of one base in the gene that codes for β-globin, a protein that combines with its partner, α-globin, to make up the hemoglobin molecule that carries oxygen in our blood. If just one of an individual's two β-globin genes carries this mutation, the person doesn't get sick because the good hemoglobin keeps the red cells from sickling. Similarly, when researchers put various mutant forms of the human β-globin gene into mice, they found that the normal mouse globin blunts the sickling effect of the mutant human gene.

    Some groups partially overcame that problem by using a “supersickling” version of the human β-globin gene with several mutations. It produces a hemoglobin that induces sickling even in the presence of mouse globins. While those mice were useful for studying some aspects of the disease, some researchers became convinced that “we just can't have any mouse globins at all, to make the right model for sickle-cell disease,” says Mohan Narla, a member of the LBNL team.

    For their models, then, both the Townes and Rubin teams decided to eliminate completely the mouse α- and β-globin genes, which make hemoglobin in adult mice, in addition to introducing a complement of human genes including the sickling mutant form of human β-globin. Chris Pászty, then a postdoc with Rubin and Narla, and Tom Ryan, then a postdoc with Townes, both set out to do this, unbeknownst to each other.

    By early 1995, Ryan had produced “knockout” mice that were missing the β-globin genes, while Pászty had succeeded in eliminating the α-globin genes. When the two groups learned of each other's work, Pászty recalls, it seemed “pointless to continue” to make mice the other team had already produced. They shared their knockout mice, and each team bred the two knockouts with each other, then bred the products of those crosses with transgenic mice each team had created that carry both the normal human α-globin and the mutant human β-globin genes.

    To keep the resulting transgenic mice from dying of sickle-cell disease before they were born, the researchers also introduced an extra set of human genes. Like humans, mice have a set of embryonic globin genes that are shut off partway through fetal development, but they lack an equivalent of the fetal hemoglobin that tides humans over until their adult genes switch on after birth. Mice normally just turn on their adult hemoglobin after the embryonic genes shut off. Consequently, both teams worried that if the altered mice switched to making the sickling form of adult human hemoglobin midway through gestation, they wouldn't survive to be born. “We wanted them to switch like humans,” says Townes. So the teams introduced normal human fetal globin genes into their mice as insurance to get the mice through fetal development.

    That move was a bit more successful in Ryan's mice than in Pászty's: Ryan's mice make human fetal hemoglobin until after birth, whereas Pászty's mice turn off the human fetal gene before birth, causing many of them to suffer severe sickling just hours after they are born, and die. But the surviving mice from both labs show a fuller complement of sickle-cell symptoms than earlier mice. They have much more severe anemia and also much more infarction, tissue death resulting from clogged blood vessels. “In Dr. Pászty's model, we are actually seeing infarcts much more frequently than in other models,” says Elizabeth Manci, a pathologist at the University of South Alabama Medical Center in Mobile who has studied human sickle-cell disease and the mouse models for many years.

    Researchers can use the mice to study the changes in the red-cell membranes that make them stick to blood vessel walls, says Bunn, and perhaps devise new strategies for blocking the process. What's more, unlike earlier mouse models that used supersickling β-globin, the new mice have exactly the same mutant β-globin (known as βs) as the vast majority of humans with the disease, notes University of California, San Francisco, biologist Y. W. Kan, a pioneer of research on hemoglobin diseases. That, he says, makes them ideal subjects in the search for drugs that may inhibit the hemoglobin clumping that causes red blood cells to sickle.

    Bunn notes that one of the most promising ways to block sickling is to keep the normal fetal globin genes switched on after birth, to provide a source of normal, antisickling hemoglobin. One drug, hydroxyurea, has already proven effective at keeping the genes active, but researchers hope to find better ones. “The fact that this current mouse model expresses fetal hemoglobin during fetal development, with a switch to adult βs-globin,” Bunn says, “opens up the opportunity to expose these animals to various agents, to ask which ones may [prolong fetal] globin expression.”

    The Berkeley Lab and Alabama teams haven't compared their mice side by side yet, but it is already clear that the disease progresses somewhat differently in each one. That, too, could be an asset; each may have particular strengths and weaknesses for testing therapies. “Having more than one model is a tremendous advantage” for double-checking the effectiveness of any experimental treatment, says Marie Trudel of the Institut de Recherches Cliniques in Montreal, who created some of the earlier mouse models. Perhaps this multiplicity of models will usher in a better array of treatments for sickle-cell patients.


    Y Chromosome Shows That Adam Was an African

    1. Ann Gibbons

    In the beginning, there was mitochondrial Eve—a woman who lived in Africa between 100,000 and 200,000 years ago and was ancestral to all living humans. Geneticists traced her identity by analyzing DNA passed exclusively from mother to daughter in the mitochondria, energy-producing organelles in the cell. To test this view of human origins, scientists have been searching ever since for Eve's genetic consort: “Adam,” the man whose Y chromosome (the male sex chromosome) was passed on to every living man and boy.

    Son of Adam.

    Y chromosome links some Ethiopians to a genetic “Adam.”


    Now, after almost a decade of study, two international teams have found the genetic trail leading to Adam, and it points to the same time and place where mitochondrial Eve lived. Described this month at a symposium on human evolution at Cold Spring Harbor Laboratory in New York, the genetic trail is so clear that it allows researchers to compare the migration patterns of men and women tens of thousands of years ago (see sidebar). It even pinpoints the living men whose Y chromosomes most resemble Adam's: a few Ethiopians, Sudanese, and Khoisan people living in southern Africa, including groups once known as Hottentots and Bushmen.

    The descent of men.

    All men's Y chromosomes trace back to a common African ancestor, whose descendants spread worldwide.


    The findings, by teams based at Stanford University and the University of Arizona, are the latest in a series of genetic studies that point to Africa as the recent birthplace of modern humans—people who then spread around the world and replaced other human populations. “It's very comforting to see the Y is giving us the same picture as mitochondrial Eve,” says molecular anthropologist Mark Stoneking of Pennsylvania State University in University Park, who was part of the team that identified “Eve” as an African. “There's no doubt that there's some clear event of modern humans coming out of Africa.” Now the question is whether the wave of modern humans from out of Africa completely replaced the people on other continents, or whether there was some interbreeding.

    In science, unlike the Old Testament, Eve came before Adam—specifically, in 1987, when Stoneking and other researchers in the lab of the late Allan Wilson at the University of California, Berkeley, announced that they had found our mitochondrial ancestor. They had compared mitochondrial DNA variants found around the world and traced a common ancestor by sorting out the variants on an ancestral phylogenetic tree. But Eve ignited fierce debate, as some scientists challenged the methods and assumptions used to place her in Africa from 100,000 to 200,000 years ago (Science, 14 August 1992, p. 873). Even if the Wilson team was right—and intense debate continues—“the mitochondrial DNA is just one gene lineage,” says University of Michigan, Ann Arbor, paleoanthropologist Milford Wolpoff. Different genes might trace back to different ancestors on a different continent.

    The only way to test the story of mitochondrial Eve was to trace the ancestry of other genetic lineages in the nucleus of the cell to see if separate lines of evidence also led back to African ancestors. And the obvious place to start was the Y chromosome, which is passed from fathers to sons and therefore provides an independent test of the mitochondrial Eve hypothesis. The bulk of the Y chromosome remains unchanged through generations, except for rare mutations hidden in regions that don't code for proteins. By comparing variations at the same site in different individuals' DNA (known as polymorphisms), geneticists can sort out which populations are most closely related and then build a phylogenetic tree that traces the descent of men, just as the mtDNA traces the descent of women. And by using average mutation rates for nuclear DNA, they can estimate how long ago particular mutations appeared, and thus how long ago each limb of the tree branched off.
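    In outline, that molecular-clock dating logic can be sketched in a few lines. This is a minimal illustration, not the teams' actual analysis; the mutation count, number of sites, and rate below are assumed values chosen for the example, not figures reported in the article.

```python
# Illustrative molecular-clock sketch: dating the branch point between
# two Y-chromosome lineages from the mutations that separate them.
# All numbers here are assumptions for illustration only.

def time_to_ancestor(observed_mutations, sites_compared,
                     mutation_rate_per_site_per_year):
    """Estimate years since two lineages shared a common ancestor.

    Mutations accumulate independently along both branches of the tree,
    so the path between two living men spans 2 * t years of evolution.
    """
    diffs_per_site = observed_mutations / sites_compared
    return diffs_per_site / (2 * mutation_rate_per_site_per_year)

# Hypothetical example: 4 differences over 10,000 compared sites,
# with an assumed rate of 1e-9 mutations per site per year.
years = time_to_ancestor(4, 10_000, 1e-9)
print(f"{years:.0f} years")  # 200000 years
```

The factor of 2 is the step most often forgotten: divergence is measured between two tips of the tree, so it reflects time accumulating down both branches from the common ancestor.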

    Sketching the branches of a Y chromosome tree has been painstakingly slow, however, because few polymorphisms in the Y chromosome had been found (Science, 26 May 1995, p. 1183). But new tools to speed the detection of variation in DNA sequences have recently given a boost to research on the Y. One big break has come at Stanford University, where biochemist Peter Oefner and molecular biologist Peter Underhill have developed an automated system for rapidly detecting subtle differences in DNA sequences. Now they have harnessed this system to find genetic markers for evolutionary studies. The method involves mixing and heating amplified copies of Y chromosomes from two men to unzip their double-stranded DNA. Then the single strands from the two Ys are reannealed. If their sequences match, they emit a single peak of ultraviolet light. If they differ by as little as one nucleotide, they emit two or more peaks.
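    The comparison that the melting-and-reannealing assay performs physically amounts, in computational terms, to scanning two aligned sequences for mismatched sites. A toy sketch of that comparison (the sequences here are invented, not real Y-chromosome data):

```python
# Toy sketch of the comparison underlying the Stanford assay: find the
# sites where two aligned sequences differ. A single mismatch is enough
# to register, just as a single-nucleotide difference splits the assay's
# ultraviolet signal into multiple peaks.

def polymorphic_sites(seq_a, seq_b):
    """Return positions (0-based) where two aligned sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

man1 = "ACGTTAGCAA"
man2 = "ACGTCAGCAA"   # one-nucleotide difference at position 4
print(polymorphic_sites(man1, man2))  # [4]
```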

    The Stanford researchers got the system working full-time this year. So far, they have found 93 new polymorphisms in the Y chromosomes of men from around the world. “It's a major breakthrough—unquestionably,” exults geneticist Luca Cavalli-Sforza, head of the lab where Underhill has done this research. And when the team started sorting these polymorphisms into a phylogenetic tree, they found one particularly ancient marker, called M42. In its most ancient form, shared by other primates, this marker is an A, or adenine. Today, that ancestral form is found only in Africa, in just a few of the 900 males they scanned—15% of the Khoisan, and 5% to 10% of the Ethiopians and Sudanese. These men must have inherited this ancient form from a common ancestor. “We think we have tagged Adam,” says Underhill, who reported the work at this week's annual meeting of the American Society of Human Genetics in Baltimore.

    The data further show that sometime in the past 100,000 to 200,000 years, this M42 site underwent a mutation—a change from A to T (thymine) in one of Adam's descendants. While men with the A stayed in Africa, some of the Africans with the T left the continent and spread around the globe. Today, “all men outside of Africa, as well as most African men, carry the T,” says Underhill.

    Meanwhile, another team headed by geneticist Michael Hammer at the University of Arizona, Tucson, surveyed another noncoding region of the Y chromosome in 1544 men worldwide and found the same pattern. The DNA sequences varied among individuals, but Hammer found that the variants cluster into 10 major groups, known as haplotypes, which occur in different frequencies in different populations.

    Haplotype 1A, defined by an A at a particular site, appears to be ancestral because the A is found in chimpanzees, and Hammer's team found that in humans, it occurs only in some Africans. “It's at the highest frequency in the Khoisan,” he says—the same population fingered by Underhill's team. Although the ancient form of 1A persisted in some groups in Africa, it also underwent a change to a G (guanine) 150,000 to 200,000 years ago in one descendant of Adam's. Like the T in Underhill's site, this form was carried out of Africa when men moved away and replaced other males around the globe.

    Both sets of results bolster the so-called Out of Africa model of human origins. “We think that anything existing in Asian males was replaced by this,” says Hammer. Underhill agrees: “I think this speaks persuasively for an Out of Africa origin for modern humans.”

    But Hammer's group, whose findings are in press in the journal Molecular Biology and Evolution, has added a twist to the scenario. These researchers also see evidence that some of Adam's descendants who emigrated to Asia later returned to Africa, with a new mutation on the Y that arose in Asia. In addition, Hammer says that African males could have interbred with Asian females, and traces of those women's genes may still be in our nuclear genome. Indeed, other researchers have already reported signs of an ancient Asian origin for a β-globin gene (Science, 25 April, p. 535).

    So, Hammer favors a model that shows a wave of modern people coming out of Africa, replacing most of the genes of ancient people, but interbreeding enough to add some ancient non-African genes to our genome. Paleoanthropologist Fred Smith of Northern Illinois University in De Kalb, who proposed one such intermediate model, predicts that like so much of human history, the real story “won't be black or white. Genes and fossils are showing that population dynamics are a lot more complex than we thought.”


    The Women's Movement

    1. Ann Gibbons

    Genghis Khan, it appears, was an exception. When it comes to spreading genes around the world, scholars have often focused on the movements of men, sometimes picturing bands of males like Genghis Khan's army sweeping across wide geographic regions and fathering more than their share of children. But with new results from the Y chromosome (see main text), researchers are finding that spreading genes into new terrain may be chiefly women's work.

    The data show that variants in the Y chromosome, which sons inherit from their fathers, have a different geographic distribution from variants in mitochondrial DNA (mtDNA), which is passed from mother to daughter. Particular mtDNA markers are widespread: Women on different continents often carry the same markers, albeit at different frequencies. But most variations in the Y chromosome are restricted to small geographic areas, according to a report at a recent Cold Spring Harbor meeting by Harvard University biology graduate student Mark Seielstad and Stanford University statistician Eric Minch.

    Using Y polymorphisms detected in Stanford geneticist Luca Cavalli-Sforza's lab, they found that only 3% are distributed across continents and that most are restricted to local groups, such as Bantu-speaking males in Africa, who exhibited their own “private” polymorphism. This shows, says Cavalli-Sforza, that “women move more than men.”

    This may seem counterintuitive because studies have shown that in hunter-gatherer societies, men typically travel greater distances in their lifetimes than women do. One explanation is that when it comes time to settle down and have children, men go home to their birthplaces, at least in the 70% of modern human societies that are patrilocal. This means that women move into their husbands' homes and have their children farther from their birthplaces. Thus, over the millennia, women spread their genes farther than men do—eventually across entire continents, says Seielstad.

    This doesn't mean that Genghis Khan and other roving males didn't leave a trail of offspring. “But from a demographic viewpoint, it hasn't been as major a component of gene transmission as have women's movements over their lifetimes,” says Seielstad.


    Versatile Chemical Sensors Take Two Steps Forward

    1. Robert F. Service

    Any scientist who has seen an episode of Star Trek has to be a bit covetous of 23rd century sensor technology. With a flick of a switch on his handheld “tricorder,” Mr. Spock can instantly detect minute concentrations of any compound he's interested in. Sensor technology has a long way to go to reach that final frontier, but researchers are pushing it a bit closer to this science fiction fantasy. Two separate groups report that they have created new chemical detection schemes that promise to be compact, cheap, and versatile.

    Silicon forest.

    Side view of porous silicon, the basis of a new sensor.


    The new systems detect compounds when they bind to recognition molecules embedded in the sensor material, changing the way it interacts with light. One scheme, described on page 840 of this issue of Science, exploits the optical properties of a silicon chip that has a surface etched into a forest of pillars and pits; the other, reported in last week's issue of Nature, relies on an assemblage of tiny plastic spheres trapped in a polymer- and water-based gel. Both, says Thomas Bein, a chemist at Purdue University, are “exciting and interesting advances” over today's sensors, which tend to be costly, cumbersome, and specialized.

    Bein and others note that the new sensors are based on dirt-cheap starting materials. Yet they can detect a wide range of compounds with high sensitivity: the silicon-chip sensor, for example, can detect DNA strands at concentrations of one part in a quadrillion, a sensitivity 100 times better than conventional sensors can achieve. As a result, they could pave the way to low-cost detectors for medical diagnosis, industrial monitoring, and environmental testing. “These new techniques are exactly what's needed to make [such sensors] take off,” says David Grier, a physicist at the University of Chicago.

    To make the silicon sensor, chemists M. Reza Ghadiri, Victor Lin, and Kianoush Motesharei at The Scripps Research Institute in La Jolla, California, along with Michael Sailor and Keiki-Pua Dancil of the University of California, San Diego, start with a wafer similar to those that are turned into computer chips. A chemical etchant chews away some of the silicon, creating the forest of tiny silicon pillars. Next, the researchers use well-established chemistry to bind chemical recognition groups to the sides of these pillars. Then they expose the chip to a solution that may contain the target molecule and shine light on it.

    Light striking the chip bounces off both the top of the pillars and the forest floor, 5 millionths to 10 millionths of a meter below the surface. Because light waves reflected from the forest floor have traveled farther, they emerge out of step with those that bounce off the top of the pillars. As a result, some of these waves cancel each other out, creating an interference pattern. This pattern is the key to the technique's sensitivity.

    On their way to and from the forest floor, the light waves pass through the transparent pillars. How fast they travel depends on chemical interactions along the sides of the pillars. When target molecules (proteins, DNA, or small organic molecules) bind to the recognition molecules on the pillar surfaces, the electronic characteristics of the interface between the silicon and the solution change. That alters the electronic structure of the silicon itself, says Sailor, changing its refractive index, the speed at which light moves through it.

    The index change shifts the relative timing of the two sets of reflected waves, altering the interference pattern produced when they merge. Because a modest change in the electronic structure of the porous silicon can have a large effect on the material's refractive index, the technique winds up having an extremely high sensitivity. “It's a pleasant surprise,” says Philippe Fauchet, a porous silicon expert at the University of Rochester in New York. A charge-coupled device, similar to light detectors in video cameras, records the change in the interference pattern, which is fed into a nearby computer for interpretation.
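    The optics described above reduce to a simple thin-film interference picture. The sketch below is a simplified illustration of that picture, not the Scripps/UCSD group's analysis, and the layer depth and refractive-index values are assumed for the example: light reflected from the forest floor travels an extra optical path of twice the index times the depth, so constructive interference occurs at wavelengths that divide that path a whole number of times, and even a small index change moves every fringe.

```python
# Simplified thin-film interference sketch for the porous-silicon sensor.
# Layer depth and refractive indices below are illustrative assumptions.
import math

def fringe_wavelengths(n, depth_m, lam_min, lam_max):
    """Wavelengths (m) of constructive interference for a layer of
    refractive index n and depth depth_m, within [lam_min, lam_max]."""
    opl = 2 * n * depth_m                 # round-trip optical path length
    m_lo = math.ceil(opl / lam_max)       # lowest interference order in band
    m_hi = math.floor(opl / lam_min)      # highest interference order in band
    return [opl / m for m in range(m_lo, m_hi + 1)]

# Hypothetical numbers: 5-micrometer-deep layer, visible band 400-700 nm.
before = fringe_wavelengths(1.20, 5e-6, 400e-9, 700e-9)
after = fringe_wavelengths(1.21, 5e-6, 400e-9, 700e-9)  # binding raises n
# Every fringe sits at a slightly longer wavelength after binding,
# which is the shift the charge-coupled device records.
```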

    Detecting new compounds is as simple as creating silicon chips with different chemical recognition groups attached, says Ghadiri. Down the road, the researchers expect to create sensor arrays in which each sensor carries a different concentration of recognition groups. These arrays should be able to go beyond detecting the target compound to measuring its concentration.

    The gel-based sensor has similar advantages. Its developers, University of Pittsburgh chemists John Holtz and Sanford Asher, start with tiny polystyrene beads that have acid groups covalently bound to their surface. When the beads are immersed in water, the acid groups shed their positively charged hydrogens, leaving each sphere covered with a multitude of negative charges. Because of the repulsion between like charges, the beads struggle to put as much distance as possible between themselves and their neighbors, in the process forming a periodic array, like atoms in a crystal. Such a colloidal crystal, made with beads of just the right size, diffracts visible light, making it shine with an iridescent glow.

    To transform this unusual crystal into a sensor, Asher and Holtz form a water-saturated polymer gel around the crystalline arrangement of beads. The polymer is festooned with chemical linking groups designed to recognize the target compound. The linker molecule is chosen so that when it binds to the target, the resulting complexes have a net electric charge.

    When a drop of solution containing the target molecule is added, the complexes add to the density of repulsive charges in the material. This causes the gel to suck in water and swell. “The water would like to equilibrate the concentration of the [charges] throughout the solution,” explains Asher. The influx of water pushes apart the polystyrene beads, changing the color of the light diffracted by the crystal.
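    The color change follows from the standard diffraction relation for a periodic array. The sketch below uses first-order Bragg diffraction at normal incidence, a common simplification; the layer spacings and refractive index are assumed values for illustration, not numbers from the Pittsburgh work. Swelling increases the spacing between bead layers, which shifts the diffracted wavelength toward the red.

```python
# Sketch of why swelling changes the gel's color: a colloidal crystal
# diffracts light of wavelength lambda = 2 * n * d at normal incidence
# (first-order Bragg diffraction). Spacing and index are illustrative.

def diffracted_wavelength_nm(spacing_nm, refractive_index=1.33):
    """First-order Bragg wavelength (nm) for layer spacing d (nm)."""
    return 2 * refractive_index * spacing_nm

before = diffracted_wavelength_nm(180)   # ~479 nm: blue-green
swollen = diffracted_wavelength_nm(220)  # ~585 nm: toward yellow
```

Because the wavelength scales linearly with the spacing, even a modest influx of water produces a color shift large enough to read out optically.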

    “It's great stuff,” says Grier. The Pittsburgh team has already shown that its gels can detect lead ions and glucose. And like the silicon sensor, the gel-based device can be tailored to detect a broad array of compounds, just by changing the chemical recognition groups on the polymer. Tricorders may be in for some competition.
