News this Week

Science  21 Aug 1998:
Vol. 281, Issue 5380, p. 1118

    Geneticists Debate Eugenics and China's Infant Health Law

    Dennis Normile

    Beijing—A freewheeling discussion last week at an international genetics meeting here* may have cleared the air on a controversial Chinese law to reduce infant mortality. The 1994 law, although aimed at improving pre- and postnatal health care, provoked a fierce outcry among some Western scientists because it appeared to forbid individuals with “certain genetic diseases” from marrying unless they agreed to be sterilized or take long-term contraceptive measures, and it also seemed to encourage abortions for fetuses with abnormalities. The provisions triggered a boycott of the meeting by the British, Dutch, and Argentine genetics societies, but several researchers from those countries came anyway. They and others came away from the meeting persuaded that the law is not as Draconian as it seemed and that in any case it is not being enforced.

    The focal point of last week's debate was a 2-hour workshop on the science and ethics of eugenics. Officials of the International Genetics Federation (IGF) insisted that Chinese organizers add the session to the group's quadrennial meeting after the new law went into effect in 1995. The informal gathering, moderated by outgoing IGF secretary Anthony Griffiths of the University of British Columbia, drew about 150 scientists, several of whom spoke extemporaneously from microphones scattered around the room. The topic was also explored during an earlier session on ethical issues in genetics research and was an undercurrent in other presentations and in hallway conversation. “I think people are concerned about areas of abuse,” says Jonathan Hodgkin, a geneticist at the Medical Research Council's Laboratory of Molecular Biology in Cambridge, U.K., adding that he was “relieved” to learn more about the law and its lack of adverse impact. Anne McLaren, a geneticist at the Wellcome/CRC Institute in Cambridge, U.K., says she was pleasantly surprised “to find areas of general agreement.”

    “The ethical application of genetic technologies is something the entire international community needs to continue to discuss,” Griffiths says. Toward that end, the congress adopted an eight-point statement at the close of the meeting supporting the use of genetic technologies and counseling to help individuals make informed and free choices and urging scientists to educate physicians, decision-makers, and the public on the topic.

    Chinese scientists at the workshop argued that critics have been misled by a flawed official translation, although most agree the law is rife with ambiguities. For example, a clause requiring consent is missing from the English translation of an article that appears to allow marriage between individuals with serious genetic diseases “only if” the couple agrees to be sterilized. In addition, most Chinese say the wording of the law does not expressly forbid marriage if the couple refuses. In the section on abnormal fetuses, one article reads: “The physician shall … give [the couple] advice on the termination of pregnancy,” while another clause stipulates that the written consent of the woman is required for the termination of any pregnancy. “On paper, it's the same as in London,” says George Fraser, a geneticist at Britain's Oxford Radcliffe Hospital.

    Alec Jeffreys, a geneticist at Leicester University and one of the organizers of the boycott by the British Genetical Society, said from England that he “remains to be convinced” that Chinese couples really have the right to give or withhold consent. “The restriction of one child per family already presents a framework which is not fully free in terms of reproductive rights,” he says. And Chen Zhu, director of the Ministry of Public Health's Laboratory of Human Genome Research in Shanghai, notes that the idea of a law to cover such human activities “is something new for the Chinese people.” But Chen adds that there have been “few or no negative effects” because the law is not being enforced.

    One roadblock to its implementation, says Qiu Renzong, a philosopher at the Chinese Academy of Social Sciences and a member of the ethics committee of the international Human Genome Organization, is the lack of a list of relevant genetic diseases. Until such diseases are defined, he says, local authorities and physicians cannot recommend sterilizations or abortions. Another obstacle, says Mao Xin, a molecular biologist at Britain's Institute of Cancer Research in Surrey, is the small number of facilities for genetic testing and qualified health care personnel. “The wording seems very serious, but there is no real effect from this law,” says Mao, who has criticized the law in letters and articles in several English language medical journals.

    Several Chinese speakers said they hope the law will be modified to clarify ambiguities and strengthen its provisions on patient rights. Qiu says that the authorities have promised to consult geneticists in defining which genetic diseases should be covered by the law. But Yang Huanming, director of the Human Genome Center at the Chinese Academy of Sciences' Institute of Genetics, is dubious. “Scientists have a small voice in forming such regulations,” he says.

    One point of scientific consensus is that such laws won't reduce the prevalence of recessive disease genes in the population. “Population genetics shows [that eugenics] doesn't work,” says Walter Bodmer, a British geneticist and former director-general of the Imperial Cancer Research Fund. Yet, researchers are concerned that the public, including many health care providers, may not understand that point. Mao, who was formerly at the West China University of Medical Sciences in Chengdu, says a 1994 international survey of genetic counselors found that all Chinese respondents thought the purpose of genetic counseling was to reduce the bad genes in the population, while few Western counselors gave that answer. Government officials quoted by the official Xinhua News Agency echoed those sentiments in discussing the law. But others note that the problem of misunderstanding the possibilities and limitations of genetic technologies is not confined to China. “The overriding responsibility we have as geneticists is public education in genetics and its potential benefits,” Bodmer says.
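    Bodmer's point can be made concrete with a short calculation (a hypothetical sketch, not from the article): even under complete selection against affected homozygotes, a recessive allele at frequency q falls only to q/(1 + q) each generation, so rare disease alleles linger for hundreds of generations.

```python
# Illustrative sketch (not from the article): decline of a recessive
# disease allele under the harshest possible "eugenic" selection, in
# which no affected homozygote reproduces. The standard textbook
# result for this case is q' = q / (1 + q) per generation.

def next_freq(q):
    """Allele frequency after one generation of complete selection
    against recessive homozygotes."""
    return q / (1.0 + q)

q = 0.01           # a rare disease allele at 1% frequency
for _ in range(100):  # 100 human generations, roughly 2500 years
    q = next_freq(q)

# Even after 100 generations, the allele frequency has only halved,
# from 0.01 to 0.005.
print(round(q, 4))
```

    The reason is that rare recessive alleles hide, invisible to selection, in unaffected heterozygous carriers.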

    *18th International Congress of Genetics, 10–15 August, Beijing.


    Doubled Genes May Explain Fish Diversity

    Gretchen Vogel

    Mont Rolland, Quebec—Take a dive to visit the rainbow-hued denizens of a coral reef and you will find it easy to accept that the ray-finned fishes—which include everything from goldfish to sea horses to flounder—are the most diverse group of vertebrates. Now, a new study of the genome of the zebrafish may explain how the 25,000 ray-finned species came to evolve such diverse forms.

    In an early ancestor of the zebrafish—a common aquarium-dweller and research model—the entire genome doubled, according to John Postlethwait of the University of Oregon in Eugene, who presented his case at a recent evolution meeting.* He suggests that the ray-finned fish put their extra copies of genes to diverse uses and so evolved a wealth of different body shapes, for example, using an extra fin-bud gene to make the stinging fins of the lionfish's “mane.” The genome duplication also has implications for the zebrafish's role as a model organism, perhaps allowing researchers to spot dual functions of genes that would be hard to discern in species that have only one copy. “It's very exciting,” says geneticist and hematologist Leonard Zon of Children's Hospital in Boston. “It's likely that, because of the duplication, otherwise hidden gene functions will be revealed.”

    Postlethwait's analysis could upset the common explanation for why ray-finned fish seem to have extra copies of certain proteins and genes when compared to mammals. For years, many biologists have assumed that a mammalian ancestor had lost the extra copies.

    Postlethwait and his colleagues focused on the developmental genes called Hox genes, which control some of the earliest patterns in a developing embryo. Most vertebrates, including mammals, have four Hox clusters, suggesting that two genome duplications occurred since these lineages split from the invertebrates, which typically have only one Hox cluster.

    But after sequencing and mapping all the Hox genes they could find in zebrafish, Postlethwait, graduate student Allan Force, and postdoc Angel Amores and their colleagues found that the fish have seven Hox clusters on seven different chromosomes. Two clusters closely resemble the mammalian Hoxa, two resemble Hoxb, and two resemble Hoxc. Both mammals and fish have only a single copy of the Hoxd genes. Although zebrafish have two copies of the Hoxd chromosome, one is missing the Hox gene segment. Because the team found duplicates of all four chromosome regions, they believe the extra genes are not simply due to occasional gene duplications but stem from an event in which the entire genome was duplicated, with some genes then lost.

    That conclusion is strengthened by the team's reanalysis of the published arrangement of Hox genes in the puffer fish, or fugu. Unlike the zebrafish, the fugu has an especially small genome, apparently with only four groups of Hox genes. The first three look very similar to mammalian Hoxa, -b, and -c, and the researchers who mapped the genes originally thought the fourth might be a much-remodeled Hoxd. But the leader of that effort, developmental geneticist Samuel Aparicio of the Wellcome/CRC Institute for Cancer and Developmental Biology in Cambridge, U.K., says Postlethwait's new analysis makes a clear case that this fourth Hox group is really a second copy of Hoxa. The researchers suspect that a fugu ancestor, too, once duplicated its genome, most of which was later lost.

    Since the last common ancestor of fugu and zebrafish lived more than 200 million years ago, Aparicio says, the doubling might have occurred very early in the ray-fin lineage. That fits with having the extra genes power the great fish radiation of about 300 million years ago, he notes.

    But geneticist Chris Amemiya of Boston University School of Medicine says he's waiting for more solid evidence. “They are the most successful group of vertebrates on Earth,” he says of ray-finned fishes, but “I'm not sure it's because of genome duplication. … It's something that really needs to be shown empirically.” And proving it will be tough, Postlethwait admits, requiring evidence that duplicate copies of genes have given rise to specific new traits such as the huge jaws of the anglerfish, or the lean body of the trumpetfish. But ongoing efforts to sequence genes from other fishes, including salmon and swordtail, will also help to test the theory, Postlethwait says. He is also analyzing the Hox genes of more primitive fishes such as sturgeon to better estimate the timing of the duplication.

    As more and more “extra” zebrafish genes were discovered, they were initially seen as a major blow for the fish's status as a model of mammalian development. Researchers induce mutations in the fish with chemicals, then observe the effects on the transparent embryos. But they feared that duplicate genes might mask the effects of mutations, or that if an extra fish gene had evolved a new function, it might not model mammals.

    But Postlethwait and other zebrafish researchers say the doubling may be a benefit. For example, the engrailed-1 gene in mammals is expressed in both the hindbrain and the limb buds. In zebrafish, however, there are two copies of this gene, and each specializes in a different region—one is expressed in the hindbrain and one in the earliest stages of fin development.

    If this division of labor is common, says Zon, zebrafish mutants may reveal functions that would go undetected in mice. For example, if a gene is crucial for early development but also has a later function, knocking it out will kill embryos before the second role is revealed. But knocking out each gene separately in zebrafish would reveal both functions. Far from being a discouragement, Zon says, “I see it as good news all over.”

    *The annual meeting of the Canadian Institute for Advanced Research Program in Evolutionary Biology, 25–29 July.


    A Second Private Genome Project

    Eliot Marshall

    It's hard to imagine competition in the human genetics research business getting much hotter, but this week the temperature rose a notch. Incyte Pharmaceuticals Inc., a genetic data company in Palo Alto, California, announced that it plans to invest $200 million over the next 2 years to sequence the protein-coding regions of the human genome. It will also hunt for simple variations in the genetic code—also known as single nucleotide polymorphisms, or SNPs—and locate them and about 100,000 human genes on a computerized map. These data will be kept in a proprietary trove, available only to those willing to pay Incyte's stiff fees. Researchers expect to use SNPs to trace patterns of inherited vulnerability to disease and develop new drugs targeted for individuals likely to benefit from them.

    As part of this new hunt for SNPs, Incyte said on 17 August that it will acquire a small British company, Hexagen Inc. of Cambridge, U.K., which developed a proprietary method for identifying variant genes in the mouse. Judging by the proposed budget, this project could rival the controversial sequencing and SNP-collection effort announced earlier this year by Perkin-Elmer Corp. of Norwalk, Connecticut, and J. Craig Venter of The Institute for Genomic Research (Science, 15 May, p. 994). And it will produce a SNP collection possibly larger—and much earlier—than a fast-moving public project funded by the U.S. National Human Genome Research Institute (Science, 19 December 1997, p. 2046).

    “Incyte is going to focus on genes and polymorphisms of interest for pharmaceutical development,” says Randall Scott, the company's chief scientific officer. He will head a new division that expects to receive “$20 million to $30 million in cash” from Incyte as start-up money. It will raise the remainder, according to Scott, from the sale of new stock, subscriptions to its database, and partnership deals with drug companies. This is a risky undertaking, since SNPs have not proved their commercial value. Scott says: “There wasn't any clear-cut pharmaceutical interest 2 years ago, but we've seen a dramatic change just over the last 6 months.” Now, he insists, “There's a huge interest.”

    Other genetic researchers were impressed by Incyte's investment but were cautious about the likely payoff. Fred Ledley, CEO of a SNP-based pharmaceutical company called Variagenics of Cambridge, Massachusetts, said “it won't be easy” to find SNPs or make them useful in drug research, but added that “Randy Scott has a record of taking on ambitious projects and succeeding.” Says Eric Lander, director of the MIT Whitehead Center for Genome Research: “More data is good; I'm just sorry it isn't going to be available to the public.”


    DNA Chips Survey an Entire Genome

    Robert F. Service

    Rack up another jackpot for DNA chips. Already these postage stamp-sized arrays of DNA snippets have proven themselves adept at providing snapshots of all the genes expressed in particular tissues. Now they appear poised to reshape the future of gene mapping, the effort of tracking down which genes contribute to different traits. Earlier this year, one U.S. team used DNA chips to map thousands of commonly varying sites in DNA fragments sampled throughout the human genome (Science, 15 May, p. 1077). And now, in work described on page 1194, researchers report for the first time that they have used chips to map an entire genome—that of yeast—in one fell swoop.

    The success, say genetics researchers, could pave the way to a better understanding of conditions, such as virulence in pathogenic microorganisms and susceptibility to heart disease in humans, thought to be caused by the contributions of several genes. “This is a brand-new age. It's really exciting,” says Jasper Rine, a geneticist at the University of California, Berkeley. Still, because current chips can completely survey only relatively small genomes, or sample larger ones, advances are expected to come much faster for microbial genes than for human ones.

    Pinpointing the genes for traits controlled by a single gene is relatively straightforward. Researchers simply compare the DNA of families that share the trait in question to see how often it is inherited together with “markers” at known locations along the chromosomes. But conventional gene-mapping techniques can track only a relatively small number of genetic markers at a time, and they become unwieldy for so-called quantitative traits, which may be caused by the combined workings of several genes scattered throughout the genome.

    To circumvent that problem, geneticists Elizabeth Winzeler, Dan Richards, and Ronald Davis, along with their colleagues at Stanford and Duke universities, turned to the DNA chip technology currently under development at the biotech firm Affymetrix, in Santa Clara, California. Affymetrix researchers had previously designed chips composed of over 150,000 snippets of yeast DNA dotted across a 2-centimeter-square silicon wafer. These snippets correspond to overlapping DNA fragments from a complete yeast genome sequenced in 1996. Researchers at Affymetrix and elsewhere have shown that they can create an instant snapshot of all the proteins being made in given cells by exposing such chips to the cell's RNA molecules—each of which corresponds to an active gene—and seeing where the RNAs bind to the known sequences on the chip (Science, 3 June 1994, p. 1400).

    To map yeast genes, the Stanford, Duke, and Affymetrix researchers took advantage of the same basic chip technology. Their strategy consisted of two parts. First, they created a map of the yeast genome with known genetic landmarks along the way. Next, they showed they could use this map to track the inheritance patterns of markers—and thus of genes—through generations of yeast.

    For the mapping phase, the researchers obtained DNA from two different yeast strains, called S96 and YJM789, and used enzymes to break it into small pieces. After tagging the pieces with a fluorescent compound that allows them to be detected, the team applied the fragments to the chips, one yeast strain at a time. Wherever a fragment's sequence exactly matched a snippet on the chip, the fragment bound there. The result was two chips, each displaying a different pattern of bright spots showing where the DNA fragments being tested corresponded to those on the chip.

    Because the S96 genome is virtually identical to that of the strain used to prepare the chip, almost all of the S96 fragments matched those on the chip. But tests on the second strain—YJM789—revealed over 3000 spots on the array with little fluorescence, denoting mismatches, or differences, with the reference strain. The researchers have not yet determined what most of those differences are. But even without that information, Winzeler says, the spots can still serve as genetic markers.

    To make a map, the researchers had to figure out where each marker belonged on the yeast genome. That was straightforward, because the full sequence of the yeast genome is known along with the sequence of each of the snippets on the chip. By comparing the sequences of the snippets that didn't bind to any of the test yeast fragments with the genome sequence, the researchers could determine where each of their genetic landmarks fell on the genome.

    In the next step in their study, the researchers used this map to locate—simultaneously—four control genes whose positions in the yeast genome were already known, plus a previously unidentified gene involved in yeast's resistance to an antifungal compound known as cycloheximide. Strain YJM789 is susceptible to the drug, while S96 is not. So the researchers mated the organisms and tested their progeny. By looking for genetic markers shared by all the drug-resistant progeny, the researchers were able to pinpoint the location of the drug resistance gene to a 57,000-base pair region on chromosome 15. The researchers also found that the control genes mapped to their expected locations. The team hopes to extend this demonstration to track down the multiple genes that contribute to virulence, says Duke team member John McCusker.
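    The logic of that last step, finding the gene from markers shared by every resistant progeny, can be sketched in a few lines of code (the marker names and inheritance scores below are hypothetical, for illustration only):

```python
# Hypothetical sketch of marker-based mapping: each progeny of the
# S96 x YJM789 cross is scored at every marker by which parent's
# allele it inherited. The resistance gene lies near markers at which
# *all* drug-resistant progeny carry the resistant parent's (S96) allele.

resistant_progeny = [
    {"m1": "S96",    "m2": "S96", "m3": "YJM789"},
    {"m1": "YJM789", "m2": "S96", "m3": "YJM789"},
    {"m1": "S96",    "m2": "S96", "m3": "S96"},
]

def linked_markers(progeny, resistant_parent="S96"):
    """Markers at which every resistant progeny inherited the
    resistant parent's allele -- candidates flanking the gene."""
    markers = progeny[0].keys()
    return [m for m in markers
            if all(p[m] == resistant_parent for p in progeny)]

# Only marker m2 co-segregates perfectly with resistance.
print(linked_markers(resistant_progeny))
```

    With thousands of markers read out at once from a chip, the same comparison narrows the gene's location to the small chromosomal region the linked markers define.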

    Rine says the work is “a terrific demonstration” of how researchers can now analyze entire genomes in a single step—an achievement that should aid efforts to map multigene traits. But Kenneth Weiss, a geneticist at Pennsylvania State University in University Park, cautions that identifying the genes responsible for complex human traits may still be difficult. He notes, for example, that some 50 genes may influence susceptibility to heart disease and that many different combinations of mutations in those genes may all lead to the same outward manifestation: heart trouble. That's likely to frustrate efforts to link the condition to just a few key genes. But at least the chips will allow researchers to try something that was previously impossible.


    How Embryos May Avoid Immune Attack

    Trisha Gura*
    *Trisha Gura is a science writer in Cleveland, Ohio.

    The portrait of an infatuated mother coddling her newborn baby is a fixture in most family photo albums. But before such a touching picture could be taken, the baby had to survive a potentially fatal conflict with its mother: developing as an essentially foreign tissue within the womb without triggering a hostile immune attack. How the fetus does that has long been a mystery, but new results in this issue may provide an answer.

    Previous explanations have included the possibility that the mother somehow suppresses her immune responses to the baby, or that the placenta acts as an anatomical barrier to her immune cells. But on page 1191, Andrew Mellor, David Munn, and their colleagues at the Medical College of Georgia in Augusta report evidence indicating instead that the embryo actively shuts down the mother's natural defenses. In their scenario, embryonic cells in the placenta manufacture an enzyme known as indoleamine 2,3-dioxygenase (IDO), which destroys an amino acid, tryptophan, that the mother's immune sentries, known as T cells, need to do their job. Immunologist Philippa Marrack of the National Jewish Hospital in Denver, Colorado, describes the observations as “very striking, interesting, and provocative.”

    In addition to explaining, as Mellor puts it, “why we, as mammals, survive gestation,” the results—if they hold up—might help women with a history of spontaneous miscarriages. If those miscarriages are due to failure of the fetal cells to produce enough IDO, then drugs might be developed that mimic the enzyme's T cell-dampening effects. Such drugs might even be useful for preventing transplant rejection and treating autoimmune diseases. And conversely, compounds that inhibit IDO might lead to abortifacients that work by boosting the mother's innate rejection response.

    Munn, a pediatric oncologist, did not set out to determine what protects the fetus from immune attack. He was searching for ways in which immune cells known as macrophages might activate the tumor cell-killing potential of T cells. Instead, Munn's team found that in their cell culture system, the macrophages were sedating the T cells, apparently because they were somehow destroying tryptophan. Since T cells, like other human cells, can't make their own tryptophan—it has to be supplied in the diet—they were unable to replicate, as they usually do when activated. To try to figure out whether tryptophan depletion might have a role in living animals, the oncologist teamed up with Mellor, a reproductive immunologist, and together the two groups showed that the amino acid declined in the cultures because the macrophages were making IDO.

    Other researchers had shown in the early 1990s that the enzyme, in addition to being made by macrophages, is also produced in the placenta by fetus-derived cells called syncytiotrophoblasts. That finding suggested another possibility. “We hypothesized,” Mellor recalls, “that IDO prevents the maternal T cell response to [genetically foreign] fetuses, and inhibiting the enzyme would cause the mothers to abort.”

    That is exactly what the researchers have now found. For their experiments, Munn, Mellor, and their colleagues used two groups of pregnant mice. One set had been bred to genetically identical fathers of the same inbred strain while the other was bred to fathers from a genetically different strain. When the researchers implanted time-release capsules containing either the IDO inhibitor 1-methyl-tryptophan or a control substance under the skin of the pregnant animals, they found fetal rejection in only one group: mice that had been given the inhibitor and were carrying genetically foreign fetuses. The embryos developed normally at first, but then inflammatory cells moved in and extensive hemorrhaging occurred around the embryos. “The mother is rejecting the placenta and eventually the embryo chokes off and dies,” Munn says.

    Other experiments pointed to the mothers' T cells as instigators of the attack. The researchers mated mice that were genetically identical, except that the males had been engineered to carry a gene for an immune system protein that induces potent T cell responses. Treating the pregnant females with 1-methyl-tryptophan caused them to reject their fetuses, indicating that that single antigen was all it took to get the response. More direct proof for T cell involvement came when the researchers added T cells that recognize the antigen to mice that can't produce T or B cells and found that this could restore the response to the inhibitor.

    From these results, Munn and Mellor propose that once the embryo implants and begins establishing connections with the mother's blood supply, fetal-derived cells located in the placenta begin making IDO. By destroying tryptophan, the enzyme then suppresses the activity of maternal T cells that would otherwise make their way through the placenta and attack the fetal blood supply.

    Some researchers have reservations about this scenario. As immunologist Joan Hunt of the University of Kansas in Kansas City points out, for example, since the embryo can't make the tryptophan it needs to produce proteins and grow, it's hard to understand how it could survive if the amino acid is destroyed in the placenta.

    Munn and Mellor concede that more work will be required to show that loss of tryptophan, and not some currently unsuspected consequence of IDO action, is behind the embryo's ability to ward off an immune attack. They say they intend to pursue this issue in further mouse studies. And the investigators also want to see if possible defects in IDO production or action in the placenta might be linked to the repeated miscarriages experienced by some women.

    In addition, immunologists will want to explore hints that IDO might have a broader role in immune regulation. The Georgia team has evidence in lab animals that the enzyme also suppresses the activity of T cells that might otherwise attack the body's own tissues. If so, then the researchers may have tapped into a new arena from which to look at the immune system's checks and balances, especially in patients with autoimmune illnesses. “We have come up with a natural immunosuppressive mechanism that is linked to an evolutionarily ancient mechanism: nutrient depletion,” Mellor says. “And placental mammals have adapted it in a dramatic way to protect their fetuses.”


    Neptune's Hasty Moon Poses Celestial Puzzle

    Govert Schilling*
    *Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Ever since Newton, astronomers have been calculating the orbits of planets and moons and getting them exactly right. But last week, a team of observers reported that Galatea, a small satellite of the planet Neptune, is a few minutes ahead of schedule. To explain this puzzling haste, astronomers are blaming everything from the gravitational tug of Neptune's mysterious Adams ring to the pull of other, undiscovered moons to an error in the original orbital predictions.

    A team led by Claude Roddier of the Institute for Astronomy of the University of Hawaii, Honolulu, learned that the 160-kilometer moon was straying from its orbital timetable on 6 July, when they tracked it down with the 3.6-meter Canada-France-Hawaii Telescope on Mauna Kea. The observations—the first in the 9 years since Galatea was discovered by the Voyager 2 spacecraft—showed that Galatea was 5±1 degrees ahead of its predicted position, or 8.6 minutes ahead of schedule. The difference, they said in an 11 August circular of the International Astronomical Union, is “possibly due to [Galatea's] interaction with Neptune's Adams ring.”

    The Adams ring, lying a mere 1000 kilometers outside Galatea's orbit, has a strange, arclike appearance, indicating that its dust particles aren't spread evenly around its full circumference. Galatea's gravity is presumably sweeping the particles into clumps, as Carolyn Porco of the Lunar and Planetary Laboratory of the University of Arizona, Tucson, showed in 1991 (Science, 30 August 1991, p. 995). But for the ring to pull back strongly enough to affect the satellite's orbit, Porco says, it “would have to have substantial mass.” She speculates “that there are bigger bodies within [the arcs], which are the source of the dust that we actually see.”

    Brian Marsden of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, isn't so sure that there's a deviation to explain. For Galatea's orbit to accumulate five degrees of drift in 9 years, its half-day period would have to differ from its predicted value by a mere 0.07 second. “My own inclination is that the prediction is off simply because the observations used for it were only [a limited number of images] from Voyager,” he says. Porco disagrees. “There were lots of observations of Galatea by Voyager,” she says. “I doubt they are in error.”
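    Marsden's figure is easy to check from the numbers in the text (the roughly 10.3-hour orbital period used for Galatea here is an assumed round value; the 5 degrees and 9 years are from the article):

```python
# Back-of-the-envelope check of the orbital-drift numbers.
# Assumption: Galatea's orbital period is taken as ~10.3 hours
# (a rounded figure); 5 degrees of drift over 9 years is from the text.

period_s = 10.3 * 3600            # one Galatea orbit, in seconds
drift_s = (5 / 360) * period_s    # 5 degrees ahead, as a time offset

# Minutes ahead of schedule: roughly the 8.6 minutes reported.
print(round(drift_s / 60, 1))

orbits = 9 * 365.25 * 86400 / period_s   # orbits completed in 9 years

# Period error per orbit needed to accumulate that drift:
# roughly the 0.07 second Marsden cites.
print(round(drift_s / orbits, 2))
```

    Spread over several thousand orbits, even a drift of minutes implies a per-orbit timing error far below what a handful of spacecraft images could pin down, which is the heart of Marsden's objection.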

    If the prediction isn't at fault, Marsden says, the gravitational effects of other satellites, or of Neptune's own oblate shape, could have skewed Galatea's orbit, as could a perturbation from a small unknown satellite in nearly the same orbit as Galatea. “Perhaps,” agrees Porco, “[but] it would have to be small enough to have escaped detection by the Voyager cameras,” which, she says, could spot a 6-kilometer object.

    She notes, however, that there's a problem even if the Adams ring is to blame. An interaction between satellite and ring could speed up Galatea, but only if the objects and particles in the ring are colliding with one another “because otherwise the gravitational interaction is not ‘shared,’ so to speak, among all the bodies in the ring.” But, Porco adds, “if there are colliding particles, then the arcs wouldn't stick around very long. The net result: a faster Galatea leaves us with a big puzzle, and I wonder if [the new observation] will stand the test of time.”


    Institute Copes With Genetic Hot Potato

    Martin Enserink*
    *Martin Enserink is a science writer in Amsterdam.

    A premature warning about the potential dangers of transgenic potatoes sparked a global media frenzy last week and appears to have ended the career of a food safety expert at the Rowett Research Institute in Aberdeen, Scotland. In a press statement, the institute said it regretted “the release of misleading information about issues of such importance.”

    The incident is the latest high-profile setback for agricultural biotechnology, which in Europe is still struggling to gain consumer acceptance (Science, 7 August, p. 768). Indeed, activists have torn up dozens of trial plots in Europe over the last year, and in a June interview with the Daily Telegraph, Prince Charles declared that tinkering with genes for food production “takes mankind into realms that belong to God and God alone.”

    That was the backdrop for the 10 August British TV show “World in Action,” on which Rowett researcher Arpad Pusztai announced findings on rats fed potatoes containing the gene for concanavalin A, or Con A, a compound found in jack beans. Con A belongs to the lectins, a huge family of proteins that occur naturally in plants, many of which are toxic to insects. Biotech companies have spliced lectin genes into various crops in an effort to make them resistant to insect pests. Pusztai warned, however, that rats in his experiments suffered from stunted growth and suppressed immune function. He said more safety research was needed, adding: “If you gave me the choice now, I wouldn't eat it.”

    Even before the show aired, the institute was flooded with calls from journalists who had received a press release touting Pusztai's comments. In most of the ensuing coverage, reporters failed to distinguish between genetic engineering and the specific use of lectins, making it appear that Pusztai warned against eating anything transgenic. The publicity alarmed consumer groups and prompted several members of the British Parliament to call for a moratorium on genetically engineered foods. Biotech companies staged a defense.

    Facing “a megacrisis that we didn't remotely anticipate,” Rowett director Philip James decided to look into the details of Pusztai's experiments himself—only to discover that these were, he says, a “total muddle.” The data presented on the TV show were from a trial in which the rats had been fed nontransgenic potatoes, with Con A added later, instead of transgenic potatoes. “I couldn't believe what I was suddenly being told,” says James. He says Pusztai's team had also carried out some experiments with transgenic potatoes, but these contained GNA—a different lectin found in snowdrops.

    After the discovery, James suspended Pusztai indefinitely. “We immediately sealed the laboratories and took the data, according to the guidelines of the Medical Research Council,” says James. He ordered Rowett senior scientist Andrew Chesson, a member of the European Union work group on transgenic food safety, to analyze the data and report to the British Ministry of Agriculture, Fisheries, and Food and to the European Union. James says Pusztai, 68, will retire; he was unavailable for comment. “He's totally overwhelmed, the poor guy,” says James.

    The incident has left a bitter taste in the mouths of biotech boosters. It “caused a tremendous amount of confusion among consumers, which will take years to undo,” claims Anthony Arke of EuropaBio, a Brussels-based biotech association. Even if the studies show that lectin-containing potatoes are harmful to rats, says Arke, that would be little reason for concern, because detecting hazards early on is exactly what experiments like the ones carried out at Rowett are for. Says Arke: “This only proves that the safety assessment procedures are fine.”


    Report Urges U.S. to Take the Long View

    1. David Malakoff

    A White House advisory panel on information technology is urging President Clinton to turn back the clock and recreate the funding strategies that nurtured the Internet and other developments that now fuel the U.S. economy. The panel's overall message, that the United States needs to do more to retain its lead in the field, is expected to prompt top Administration officials to push for more funding in the upcoming 2000 budget request. But its suggestion that the National Science Foundation (NSF) should play the leading role is likely to be more controversial.

    Last week, the President's Information Technology Advisory Committee (PITAC), a 26-member panel of prominent computer scientists and industry executives, recommended that the government add $1 billion over 5 years to the estimated $1.5 billion it's now spending each year on information technology (IT) research. The new money would go to revitalize basic research on software, hardware, and computer networks. The committee's interim report also called on the government to revive the large, long-term projects that proved so productive in the 1970s and '80s. “The future great ideas that are not going to pan out for 15 years aren't getting enough support now,” says computer scientist Ken Kennedy of Rice University in Houston, Texas, cochair of the panel, the latest of several to call for more federal IT spending (Science, 7 August, p. 762).

    Economists have estimated that one-third of U.S. economic growth since 1992 has come from the blossoming of the Internet and other computer-related businesses. But the basic research that spawned these profitable technologies was conducted decades ago. Reacting to concerns that government isn't doing enough to keep the country on top, President Clinton last June asked his new science adviser, Neal Lane, to prepare an IT funding plan. The PITAC's recommendations, says panel member Larry Smarr, director of the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign, should allow Lane “to hit the ground running” by providing a framework for Lane's report, expected later this year.

    In its report, the PITAC warns against a dangerous trend among federal agencies: the funding of small, short-term projects, such as building deadlier missiles or writing better flood-forecasting software, to the detriment of larger, longer-term basic studies. The panel estimates that the government spends as little as 5 percent of its IT budget on basic studies lasting more than 5 years. To bolster basic research, committee members would like to see a return to grant-making strategies that once allowed funders, such as the Pentagon's Defense Advanced Research Projects Agency (DARPA), to put dozens of researchers on problems for decades at a time. The DARPA strategy, says the report, gave researchers “enough resources and time to concentrate on the problem rather than on their next proposal. … It is this spirit that the Committee would like to see reborn and replicated.”

    In particular, the panel wants to see more research into robust software, faster supercomputers, and “scalable” communications networks able to shoulder the burden of a billion users—a number the Internet is expected to hit by 2005. Private companies, it says, simply aren't able to make the necessary long-term commitments. The committee also wants social scientists to study how the new technologies will shape society.

    Whether NSF, the preeminent supporter of single-investigator studies in the nonbiomedical sciences, is up to orchestrating such a revival of large-scale basic research, however, is an open question. Kennedy and others say that the panel picked NSF to dole out up to half of any new funds and to coordinate the overall effort because it was not feasible to create a new agency and because NSF has a broad perspective. “But committee members have a lot of reservations about whether NSF can fulfill this role,” Kennedy admits. To succeed, the panel says, NSF must elevate the influence of IT researchers within its ranks and add more computer scientists to its policy-setting National Science Board.

    New NSF director Rita Colwell says the agency is ready and willing “to take up the challenge. We are used to looking at the big picture.” Juris Hartmanis, who heads the foundation's $295 million computer sciences directorate, agrees that “adjustments may have to be made, but NSF is already managing large projects.”

    The next step for the committee is a series of meetings with community and federal leaders to flesh out specific funding proposals for a final report to be delivered early next year. While those meetings will come late in the Administration's 2000 budget-making process, Smarr and others hope that they will still influence the president's budget request to Congress next February. “We burned some midnight oil to get [the report] out,” he says. “We wanted the budget-makers to hear what the leaders in IT think needs to happen.”


    London, Cambridge Lead Europe in Output

    1. Daniel Clery

    When it comes to total output of scientific papers, London is Europe's most prolific city. But in terms of number of research papers published per capita, nearby Cambridge can claim bragging rights, according to a new study that attempts to rank the scientific productivity of major cities throughout Europe.

    View this table:

    The findings—to be presented this week at a meeting of the International Geographical Union's Urban Systems Commission in Bucharest, Romania, and submitted to the journal Urban Studies—are likely to be used by local governments seeking to attract high-tech industry. “Such figures could be very helpful for city politicians to see how important their research base is,” says urban geographer Christian Matthiessen of the University of Copenhagen, one of the authors of the study. And they point to continuing disparities between Eastern and Western Europe. “Western investors in Eastern Europe may be wary if knowledge there is low,” says Matthiessen.

    Matthiessen and Annette Schwarz of the Technical Knowledge Center of Denmark carried out the study as part of their ongoing research into competition between urban centers. Their first problem was the lack in Europe of a standard definition of a city. Rather than relying on densities of buildings or people, they developed a “functional” definition based on daily flows of people, goods, information, and money. Cities within 45 minutes of one another were grouped together, and the final delimitation was marked out on topographical maps. The researchers then added in publication data from the Institute of Scientific Information (ISI) in Philadelphia to create a geographic portrait of European science.
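    The 45-minute rule amounts to transitive clustering: if city A lies within 45 minutes of B, and B within 45 minutes of C, all three fall into one functional urban region. A sketch of that grouping using union-find; the city pairs and travel times below are hypothetical illustrations, not the study's actual data:

```python
def group_cities(cities, travel_minutes, threshold=45):
    """Merge cities into one functional urban region whenever any pair is
    within `threshold` minutes of each other (transitive closure via
    union-find)."""
    parent = {c: c for c in cities}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path compression
            c = parent[c]
        return c

    for (a, b), minutes in travel_minutes.items():
        if minutes <= threshold:
            parent[find(a)] = find(b)      # union the two regions

    groups = {}
    for c in cities:
        groups.setdefault(find(c), []).append(c)
    return sorted(sorted(g) for g in groups.values())

# Hypothetical travel times in minutes (not the study's data).
times = {("Amsterdam", "The Hague"): 40, ("The Hague", "Rotterdam"): 20,
         ("Rotterdam", "Utrecht"): 35, ("Amsterdam", "London"): 300}
regions = group_cities(
    ["Amsterdam", "The Hague", "Rotterdam", "Utrecht", "London"], times)
print(regions)  # → [['Amsterdam', 'Rotterdam', 'The Hague', 'Utrecht'], ['London']]
```

Under these made-up times, the four Dutch cities chain together into one region, consistent with the study's Amsterdam-The Hague-Rotterdam-Utrecht conglomerate, while London stands alone.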


    London comes out as the preeminent science center in Europe. Its total output of papers—nearly 65,000 in 1994 to 1996—topped Paris, which ranked second, by some 19,000 papers. It also dominated many individual fields: Of the 162 fields analyzed by ISI, London produced the most papers in more than half. The study identified two other “megacities” of research: Moscow and the Dutch conglomerate of Amsterdam, The Hague, Rotterdam, and Utrecht. A different pecking order emerges, however, if one takes population into account. The small city of Cambridge, dominated by its large and ancient university, heads a per capita ranking by a wide margin, followed distantly by Oxford-Reading and Geneva-Lausanne. The megacities are scattered well down this list, with London ranked 16th and Paris in 22nd place.

    A breakdown by specific field also shows some interesting trends. Three of the four megacities, not surprisingly, rate near the top of the publication league in a wide range of disciplines, while Moscow is strong in physics and the traditional natural sciences but noticeably weak in medicine and modern biology. That regional difference in diversity can be seen in the statistics from other cities. Although high output generally leads to all-around strength —14 of the 17 top-producing centers can be considered genuine all-arounders—all cities with broad strengths are situated in Western Europe. Several Eastern European cities show biases similar to Moscow's toward the natural sciences.

    The study proves the fairly obvious point that “big cities have big universities,” says geographer John Goddard of the University of Newcastle upon Tyne, and reflects the centralized political systems of the United Kingdom and France. Although the results may please the citizens of London or Paris, a more decentralized research landscape, as exists in the United States, might bolster industry in outlying regions. Says Goddard: “A more even distribution might provide bigger benefits to the economy as a whole.”


    The Next Oil Crisis Looms Large—and Perhaps Close

    1. Richard A. Kerr


    Nature took half a billion years to create the world's oil, but observers agree that humankind will consume it all in a 2-century binge of profligate energy use. For now, as we continue to enjoy the geologically brief golden age of oil, the conventional outlook for oil supply is bright: In real dollars, gasoline has never been cheaper at the pump in the United States—and by some estimates there are a hefty trillion barrels of readily extractable oil left in known fields. Thanks to new high-tech tricks for finding and extracting oil, at the moment explorationists are adding to oil reserves far faster than oil is being consumed. So, many who monitor oil resources, especially economists, see production meeting rising demand until about 50 years from now—plenty of time for the development of alternatives.

    Comforting thinking—but wrong, according to an increasingly vociferous contingent, mainly geologists. They predict that the world will begin to run short of oil in perhaps only 10 years, 20 at the outside. These pessimists gained a powerful ally this spring when the Paris-based International Energy Agency (IEA) of the Organization for Economic Cooperation and Development (OECD) reported for the first time that the peak of world oil production is in sight. Even taking into account the best efforts of the explorationists and the discovery of new fields in frontier areas like the Caspian Sea (see sidebar on p. 1130), sometime between 2010 and 2020 the gush of oil from wells around the world will peak at 80 million barrels per day, then begin a steady, inevitable decline, the report says.

    “From then on,” says consulting geologist L. F. Ivanhoe of Novum Corp. in Ojai, California, “there will be less oil available in the next year than there was in the previous year. We're not used to that.” Scarce supply, of course, means a higher price, especially because optimists and pessimists alike agree that the Organization of Petroleum Exporting Countries (OPEC), which triggered the oil crises of 1973 and 1979, will once again dominate the world oil market even before world oil production peaks (see sidebar on p. 1129). At the peak and shortly thereafter, as more expensive fuel sources such as hard-to-extract oil deposits, the tarry sands of Canada, and synfuels from coal are brought on line, prices could soar. “In the 5 to 10 years during the switch, there could be some very considerable price fluctuations,” says an IEA official. “Then we will plateau out at a higher but not enormous price level.” In other words, gas lines like those of the Arab oil embargo 25 years ago could return temporarily, followed by permanently expensive oil.

    The down side of the curve

    The debate over just when the end of cheap oil will arrive pivots on an interplay of geology and technology. There's only so much oil in the ground, geologists and technology-loving economists agree, but how much of it geologists can find and engineers can extract at a reasonable cost is much in contention. Geologists considering the past record of finding and extracting oil see a fixed, roughly predictable amount left to be produced and put the production peak sooner rather than later. Their case for the past being the best predictor of the future depends heavily on their success in predicting the oil production peak of the lower 48 states of the United States, the only major province whose oil production has already peaked.

    For projections of future oil production, many geologists rely on a kind of analysis pioneered by the late geologist M. King Hubbert. In 1956, when he was at Shell Oil Co., he published a paper predicting that, based on the amount and rate of past production, output in the lower 48 states—which was then increasing rapidly from year to year—would peak between 1965 and 1970 and then inexorably decline. “The initial reaction to this conclusion was one of incredulity—‘The man must be crazy!’” Hubbert later recalled. But production peaked right on schedule in 1970 and has declined since.

    Hubbert based his successful prediction on what seemed to him a fundamental law governing the exploitation of a finite resource—that production will rise, peak, and then fall in a bell-shaped curve. He constructed his curve by noting that extraction of oil begins slowly and then accelerates as exploration finds more of the huge fields that are too big to miss and that hold most of the oil. That's the ascending side of his bell-shaped curve.

    After this fast start, production begins to stall. By this point, exploration has turned up most of the easy-to-find huge fields. The smaller fields, although far more numerous, are harder to find, more expensive to drain, and can't match the volume of the big fields. At the same time, the gush of oil from the big fields slows. Oil in a reservoir lies in pores whose surfaces hold onto it like a sponge, so that wells first gush, then slow toward a trickle. The declining rate of oil discoveries and slowing production from big, early finds combine to force overall production to peak—the top of Hubbert's curve—at about the time that half of all the oil that will ever be recovered has been pumped. From then on, production drops as fast as it rose, creating Hubbert's idealized symmetrical bell-shaped curve.
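    Hubbert's construction is commonly modeled as the derivative of a logistic curve, whose peak falls exactly at the half-depleted point. A minimal sketch under hypothetical parameters (the 2-trillion-barrel ultimate recovery and 2010 peak below are illustrative choices, not the article's estimates):

```python
import math

def cumulative(t, q_max, k, t_peak):
    """Logistic cumulative production: total barrels pumped by year t."""
    return q_max / (1.0 + math.exp(-k * (t - t_peak)))

def production(t, q_max, k, t_peak):
    """Annual production, the logistic's derivative: Hubbert's bell curve."""
    e = math.exp(-k * (t - t_peak))
    return q_max * k * e / (1.0 + e) ** 2

# Hypothetical parameters: 2 trillion barrels ultimately recoverable,
# a peak in 2010, and a logistic growth constant of 5% per year.
q_max, k, t_peak = 2.0e12, 0.05, 2010.0

# At the peak year, exactly half the recoverable oil has been pumped ...
assert abs(cumulative(t_peak, q_max, k, t_peak) - q_max / 2) < 1e-6
# ... and the curve is symmetric: 20 years after the peak mirrors 20 before.
before = production(t_peak - 20, q_max, k, t_peak)
after = production(t_peak + 20, q_max, k, t_peak)
assert math.isclose(before, after, rel_tol=1e-9)

# Output at the peak is q_max * k / 4: 25 billion barrels a year here,
# close to the article's 26-billion-barrel annual consumption figure.
print(production(t_peak, q_max, k, t_peak) / 1e9)  # → 25.0
```

The model's key property, visible in the assertions, is the one the geologists lean on: once cumulative production passes the halfway mark, decline is built into the curve.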

    When applied to world oil production, Hubbert's curve traces out a relatively grim future. During the oil crisis of 1979, Hubbert himself made a rough estimate of a turn-of-the-century world peak. At that time, though, geologists were slightly underestimating just how much oil Earth contains, and Hubbert's forecast was too gloomy—but perhaps not by much. In recent years, a half-dozen Hubbert-style estimates have been made, and they all cluster around a world oil production peak near 2010 (see table). Half the world's conventional oil has already been pumped, these geologists say, so the beginning of the end is in sight.

    View this table:

    One of the most pessimistic recent analyses comes from former international oil geologists Colin Campbell and Jean Laherrère, who are associated with Petroconsultants in Geneva; Campbell was an adviser to the IEA on its latest estimate. “Barring a global recession, it seems most likely that world production of conventional [easily extracted] oil will peak during the first decade of the 21st century,” they wrote in the March issue of Scientific American.

    Campbell and Laherrère's early peak prediction is drawn in part from their low estimates of existing reserves. Of the trillion barrels of oil that countries have reported finding but not yet extracted—their proven reserves—Campbell and Laherrère accept only 850 billion barrels. Much of the rest they view as “political reserves”—overly generous estimates made for political reasons. For example, reserves jumped by 50% to 200% overnight in many OPEC countries in the late 1980s—perhaps because OPEC rules allow countries with more declared reserves to pump more oil and so make more money, says Campbell.

    Not all Hubbert-type estimates are quite so pessimistic. “I'm an optimist,” says former oil industry geologist John Edwards of the University of Colorado, Boulder. “I think there's a lot more oil to be found. I used optimistic numbers [near the high end of estimated reserves], but I'm still at 2020” for the world production peak. “Conventional oil is an exhaustible resource. That's just the bottom line.”

    Technology to the rescue?

    But other geologists and many economists put more faith in technology. Oil will eventually run out, these self-described optimists agree, but not so soon. “We're 30, maybe even 40, years before the peak,” says oil geologist William Fisher of the University of Texas, Austin. Fisher has lots of support from the latest international energy outlook prepared by the U.S. Department of Energy's Energy Information Administration (EIA). “We don't see the peak happening until after the limit of our outlook,” in 2020, says the EIA's Linda Doman. “We think technology and developing Middle East production capacity will provide the oil.”

    In the optimists' view, it doesn't matter that there are few if any huge new fields left out there to find. What does matter, they say, is how much more oil the industry can find and extract in and around known fields. Even as the world consumes 26 billion barrels a year, in their opinion reserves are growing rapidly. They argue that much of OPEC's reserve growth is real, and that OPEC and others are boosting reserves not so much through the discovery of new fields as through the growth of existing fields—and technology is the key. Technology might double the yield from an established field, they say. “Technology has managed to offset the increasing cost of finding and retrieving new resources,” says economist Douglas Bohi of Charles River Associates in Washington, D.C. “The prospect is out there for an amazing increase in the [oil] reserve base.”

    Three currently used technologies are helping drive this boost in reserves, Bohi and others say. Aided by supercomputers, explorationists are using the latest three-dimensional seismic surveying to identify likely oil-containing geologic structures, yielding a sharp picture of potential oil reservoirs. A second technology involves first drilling down and then sideways, punching horizontally through a reservoir so as to reduce the number of wells needed, and therefore the expense, by a factor of 10. Finally, technology that allows wells to be operated on the sea floor many hundreds of meters down is opening up new areas in the Gulf of Mexico, off West Africa, and in the North Sea.

    All these new technologies can slow or delay what Hubbert saw as an inexorable production drop in older fields, the optimists say. Indeed, such technological achievements have already helped arrest the decline of U.S. oil production during the past 3 to 4 years, says Edwards.

    But the pessimists are unmoved. “Much of the technology is aimed to increase production rates,” says Campbell. “It doesn't do much for the reserves themselves.” And what new technology does do for reserves, it has been doing since the oil industry began in the 19th century, he says. New technologies for better drilling equipment and seismic probing have been developed continually rather than in a sudden leap and so have been boosting the Hubbert curves all along. The shape of the curve therefore already incorporates steady technology development, he and other pessimists note.

    As a result, they argue that today's technological fixes will make only slight changes to the curve. “All these things the economists talk about are just jiggling in a minor way with the curve,” says Albert Bartlett, a physicist at the University of Colorado, Boulder, who calculates a 2004 world peak. “You can get some bumps on the [U.S.] curve by breaking your back, but the trend is down.” For example, when oil hit $40 a barrel in the early 1980s, the U.S. production curve leveled out in response to a drilling frenzy—but it soon went right back down again. And besides, the pessimists note, when high prices drive increased production, the oil pumped is not cheap oil. Economist Cutler Cleveland of Boston University has found that the price-driven drilling frenzy of the late 1970s and early '80s produced the most expensive oil in the history of the industry. So, such production is a hallmark of the end of the golden age and the beginning of the transition stage of expensive oil.

    The next few years should put each side's theory to the test. If technology can greatly boost reserves, then the U.S. production curve should at least stabilize, while if the pessimists are right, it will soon resume its steep downward slide. Production from the North Sea should tell how middle-aged oil provinces will fare; pessimists expect it will peak in the next few years. But it is the world production curve that will finally reveal whether the world is due for an imminent shortfall or decades more of unbounded oil.

    OPEC's Second Coming

    1. Richard A. Kerr

    Everyone over 30 has a memory of the oil crisis 25 years ago: gasoline prices up 40% in a few months, crude oil prices more than doubled, and Western politicians helpless to do anything about it. Now, as geologists and economists hotly debate when oil will begin to run short (see main text), there's general agreement that despite today's dirt-cheap oil, the oil-consuming nations of the world had best heed the lessons learned a quarter-century ago. The domination of the market by the Organization of Petroleum Exporting Countries (OPEC)—which spawned the '70s crisis—will soon return, probably in the next decade.

    Last time around, of course, all the gloomy predictions of permanently expensive oil came to nothing. “What saved us was that regions like Mexico and the North Sea could increase production,” says economist Robert Kaufmann of Boston University. Next time, however, production from non-OPEC regions will be dropping, not rising, notes Kaufmann. OPEC dominance—and potential chaos in the oil market—is inevitable.

    Dominance of the world oil market has changed hands repeatedly this century. In the 1930s, the state of Texas, in the form of a regulatory body called the Texas Railroad Commission, was able to dampen boom-and-bust cycles in oil prices by turning on production when prices rose and turning off fields when prices fell. The commission managed to keep the price of oil steady and relatively low—to avoid the appearance of price gouging and any retaliation—right through the formation of OPEC in 1960 and its first attempt at dominance in 1967.

    Then, Texas began running short of oil. In the late '60s, the commission opened up field after field, to no avail; none could deliver oil fast enough to maintain Texan dominance. By the time of the Arab-Israeli War of 1973, OPEC was producing more than half of the world's oil, and the pivotal Middle East members of OPEC—Iran, Iraq, Kuwait, Saudi Arabia, and the United Arab Emirates—were producing 36% of it. The Arab members of OPEC cut off oil shipments to countries friendly to Israel and quadrupled the price of Middle East oil. Texas was powerless to increase production as oil jumped from $13 a barrel (1997 dollars) to $33 a barrel.

    At the next crisis in 1979, when the Iranian revolution cut off that country's supplies, oil-consuming nations were still at the mercy of OPEC. Price pressures had lowered demand and the supply from oil provinces discovered in the 1960s such as the North Sea, Mexico, and Alaska was steadily increasing, but OPEC countries still produced 44% of world oil. Prices jumped to $53 a barrel (1997 dollars). Some thought dominance of the market might eventually slip away from OPEC, but as news editor Allen Hammond wrote in Science in the spring of 1974, “the betting here is that high energy prices will endure.” But those dire predictions missed the steadily rising production of non-OPEC oil and high prices' dampening effect on demand. Prices declined steadily until 1986, when they collapsed back to $20 a barrel as OPEC's contribution sank to 32% of world production.

    Soon, however, the non-OPEC world, like Texas, will begin to run short of oil. This spring the Paris-based International Energy Agency projected that production outside Middle East OPEC countries would peak next year. By 2009, the subsequent decline there and a projected production increase in the Middle East would give that region 50% of world production, supported by its control of 64% of the world's oil reserves. That's an even stronger position than the one the Middle East enjoyed in 1973. Even upbeat analyses of the world oil supply, such as that from the Energy Information Agency (EIA) of the U.S. Department of Energy, call for OPEC as a whole to be producing about half the world's oil by 2015; that's the percentage it controlled in 1973.

    What will OPEC do with its dominant role? Kaufmann believes that OPEC too may have learned a lesson—that huge price increases lead to decreased demand and switching to expensive but more reliable supplies, factors that helped lead to the 1986 price collapse. Economic self-interest for OPEC nations would dictate modest price increases in the next 5 to 10 years, says Kaufmann. The EIA agrees, projecting a steady rise of only about 30% in real prices by 2020. But most would agree with the EIA that “one can expect volatile [price] behavior to recur because of unforeseen political and economic circumstances.” In other words, if Middle East supplies are disrupted by war or politics, those gas lines will be back.

    Big Oil Under the Caspian?

    1. Richard A. Kerr

    Geologists have been discovering less and less oil since the 1960s, but hope springs eternal in the oil industry. Optimists now pin their hopes for staving off the impending shortage of oil (see main text) in part on deposits of as-yet-unknown size under the Caspian Sea. “The Caspian Basin is an area of vast resource potential,” pronounces the U.S. Energy Information Administration (EIA). But not everyone thinks this deposit will rescue the world.

    Oil was first found under the Caspian in the last century, but there are still promising unexplored areas. Explorationists have drilled into about 30 billion barrels of oil that are ready to be pumped out, about half the amount found under the North Sea. But estimates of how much will ultimately be discovered rely on guesswork. Seismic images reveal geologic structures big enough to hold lots of oil, but there's no sure way to tell whether the rock in those structures is porous enough to hold much oil, and whether they are sealed well enough to hold in oil.

    Considering these uncertainties, geologist Gregory Ulmishek of the U.S. Geological Survey in Denver guesses that there may be another 50 billion barrels of oil to be found in the Caspian. But the EIA quotes a figure of 186 billion barrels—an amount equal to about a third of the oil known to remain in all of the Middle East. Only new wells in unexplored areas—which may be drilled in the next year or two—will tell who's right.

    In the near term, pumping big volumes of Caspian oil will take cooperation in politics as well as luck in geology. The three countries controlling the potentially big fields—Azerbaijan, Kazakhstan, and Turkmenistan—are all landlocked and will have to pipe oil across neighboring nations, projects that have been delayed by political wrangling in this unstable region.

    And even if the Caspian does prove to rival the largest oil provinces, it won't save the world for long. Many geologists estimate that there are roughly one trillion barrels of easily extractable oil left to be pumped. According to calculations by physicist Albert Bartlett of the University of Colorado, Boulder, every billion barrels discovered beyond that will push back the inevitable shortfall in world oil supplies by 5.5 days. By that reckoning, even the most generous estimates of the Caspian Sea could push the crisis, expected sometime between 10 and 50 years from now, back by 3 years at most.
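    Bartlett's 5.5-day figure can be roughly reconstructed, though his exact method isn't given here. One plausible reading: on a symmetric Hubbert curve the peak arrives when half the ultimately recoverable oil is gone, so each extra billion barrels discovered delays the peak by the time needed to pump an extra half-billion barrels at the peak rate, taken below as the IEA's projected 80 million barrels a day:

```python
# A plausible reconstruction of Bartlett's arithmetic (his exact method is
# not given in the text). On a symmetric Hubbert curve the peak falls when
# half the ultimately recoverable oil has been pumped, so an extra billion
# barrels of discoveries shifts the halfway point by the time needed to
# pump half a billion barrels at the peak rate.
PEAK_RATE = 80e6                    # barrels per day at the projected peak
delay_days = (1e9 / 2) / PEAK_RATE  # delay per billion barrels discovered
print(delay_days)                   # → 6.25, in line with the 5.5-day figure

# At 5.5 days per billion barrels, even the EIA's generous 186-billion-
# barrel Caspian estimate delays the world peak by only about 3 years.
delay_years = 186 * 5.5 / 365.25
print(round(delay_years, 1))        # → 2.8
```

The spirit of the calculation survives any reasonable choice of peak rate: against annual consumption in the tens of billions of barrels, a few hundred billion barrels of new discoveries moves the peak by years, not decades.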


    How the Genome Readies Itself for Evolution

    1. Elizabeth Pennisi


    The renowned author and cancer scientist Lewis Thomas once wrote: “The capacity to blunder slightly is the real marvel of DNA. Without this special attribute, we would still be anaerobic bacteria and there would be no music.” Like many others—Nobel laureate Barbara McClintock was a notable exception—Thomas thought that genetic change, and hence the evolution of new species, results from small, random mutations in individual genes. But a growing wealth of data, much of it presented at a recent meeting,* indicates that mainstream biologists need to consider genomes, and the kinds of evolutionary changes they undergo, in a much different light.

    The work shows that the mutations leading to evolutionary change are neither as small nor as rare as many biologists have long assumed. Sometimes they involve the movements of relatively large pieces of DNA, like transposable elements, the stretches of mobile DNA originally discovered in maize by McClintock. They can even take the form of wholesale shuffling or duplication of the genetic material (see p. 1119). All these changes can affect the expression of genes or free up duplicated genes to evolve new functions.

    What's more, these changes may not be totally random. Researchers have found, for example, that some stretches of DNA are more likely to be duplicated or moved to another place than others, depending on the nature of their sequences. They are also learning that the enzymes that copy and maintain the DNA introduce changes in some parts of the genome and not others, creating hotspots of mutation that increase the efficiency of evolution. As James Shapiro, a bacterial geneticist at the University of Chicago, puts it, “Cells engineer their own genomes.”

    Findings such as these are leading to what Lynn Caporale, a biotechnology consultant based in New York City, describes as a “paradigm shift.” In the past, researchers assumed that genomes evolve to minimize mutation rates and prevent random genetic change. But the new findings are persuading them that the most successful genomes may be those that have evolved to be able to change quickly and substantially if necessary. Or as McClintock said in her 1983 Nobel lecture, the genome is “a highly sensitive organ of the cell, that in times of stress could initiate its own restructuring and renovation.”

    Nature's genetic engineers

    One of the oddest examples of how the genome can restructure itself comes from David Prescott, a molecular geneticist at the University of Colorado, Boulder. For the past 25 years, he has studied the genetic makeup of a group of single-celled organisms called hypotrichous ciliates, whose genomes are truly bizarre.

    In addition to its large working nucleus, called the macronucleus, which contains multiple copies of all the genes, every ciliate has one or more micronuclei, each of which carries two copies of all the genes. The micronuclear genes, which are normally inactive, are split into multiple sections, with lots of interrupting DNA, called internal eliminated sequences, between the coding regions. In three of the 10 genes analyzed thus far, the coding regions are also in the wrong order.

    These micronuclear genes have to undergo a dramatic change during sexual reproduction when the micronuclei from the two partners fuse and give rise to both a new micronucleus and a new macronucleus. As the macronucleus takes shape, not only is the DNA between gene coding regions removed, but the coding regions have to be put into the correct order—“all in a matter of hours,” Prescott notes.

    Although having the gene-coding regions shuffled and split up in the micronucleus may seem a waste of time, Prescott thinks this arrangement helps generate new genes that can help the ciliates adapt to changes in their environment. In examining the DNA sequences of six species, he and his colleagues found that the sizes of the internal eliminated sequences vary from one species to another. Sometimes they shift their positions by a few nucleotides, creating a slightly different coding region. This adds a new dimension to the changes that can occur, as now not just the original coding region but also this slightly altered one becomes available for shuffling and rearrangement into a new gene.

    Most organisms do not go to the extremes that the ciliates do, but they have the potential to perform similar DNA rearrangements, if only on a lesser scale, because they have the necessary enzymes for cutting and rearranging DNA, as well as splicing it back together. Thus, Prescott concludes, the ciliates are “teaching us about what trickery DNA can perform to support evolution.”

    Nonrandom mutations

    The changes in the ciliate genes appear to occur randomly, but researchers studying other species are finding, says Caporale, that the “rate of mutation is not monotonous throughout the whole genome.” One striking example comes from Baldomero Olivera, a molecular biologist at the University of Utah, Salt Lake City, who has been assessing the incredible variation that exists in the toxins of predatory Conus snails.

    With 500 species, these organisms are the most successful marine invertebrates. All use venom-laden harpoons to immobilize worms, fish, and other mollusks for food, and depending on the species, the venom contains 50 to 200 peptides. “Each Conus has its own distinct set” of toxin peptides, says Olivera. The genetic variation responsible for this toxin diversity, which may enable Conus species to identify one another, seems to be concentrated at particular hotspots in the snail DNA.

    When Olivera and his colleagues began looking at the coding sequences for the large precursor proteins that break down to form the toxin peptides, they found that the first coding section, or exon, was almost identical in the pairs of Conus species examined. In contrast, Olivera reported at the meeting, the third exon, which is the one that codes for the peptide that becomes part of the venomous cocktail, “has just gone crazy in terms of changes.” Because normally the functional parts of precursor proteins tend to change less over the course of evolution than the other parts, this finding suggests, he says, that the malleable exon somehow has a built-in tendency to mutate. The snails “are a good example of how expert nature is in using genome flexibility … to generate genetic variation,” says Thomas Kunkel, a biochemist at the National Institute of Environmental Health Sciences in Research Triangle Park, North Carolina.

    Kunkel stresses that there could be several different explanations for such mutational hot spots. For one thing, he notes, the efficacy with which the errors that creep into the genome are repaired can vary greatly. In test tube studies of DNA repair enzymes, Kunkel finds that the efficiency with which mistakes made during DNA replication are repaired can vary from 99% down to 3%, depending on the nature of the sequence that needs repair. That may be because the sequence influences the ease with which the repair enzymes do their job.

    What's more, some sequences are much more prone to error when the DNA is copied than others. In particular, DNA consisting of the same base repeated several times or simple two- and three-base repeats can be quite hard to replicate accurately. The problem is that the newly synthesized strand tends to slip relative to the strand being copied so that it may end up longer or shorter than the original, depending on whether it slips forward or backward.
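The slippage mechanism described above is simple enough to sketch as a toy simulation. The model and its numbers below are illustrative only: the slip probability is an arbitrary value, not a measured rate, and real slippage biology is far messier.

```python
import random

def replicate_repeat(n_repeats, slip_prob, rng=random):
    """Toy model of replication slippage at a simple-sequence repeat.

    With probability slip_prob the nascent strand slips relative to the
    template, gaining or losing one repeat unit; otherwise the repeat
    count is copied faithfully. (slip_prob is illustrative, not measured.)
    """
    if rng.random() < slip_prob:
        return n_repeats + rng.choice([-1, 1])
    return n_repeats

# Follow one lineage of a triplet repeat through many rounds of replication:
random.seed(0)
n = 20
for _ in range(10_000):
    n = max(1, replicate_repeat(n, slip_prob=0.01))
print(f"repeat count after 10,000 replications: {n}")
```

Run repeatedly with different seeds, the repeat count wanders up and down, which is the point: unlike a point mutation, a slippage-prone tract drifts continually, generating the length variation the next section describes.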

    Genomic flexibility

    Such simple repeats are best known for the problems they cause. Researchers have traced the mutations causing Huntington's disease and several other hereditary disorders to large increases in the number of repeating triplets associated with the relevant genes. But the repeats also provide the genome with a means for fine-tuning gene expression, say Edward Trifonov, a biophysicist at the Weizmann Institute of Science in Rehovot, Israel, and David King, an evolutionary biologist at Southern Illinois University in Carbondale, and their colleagues. Because the number of repeats associated with genes can vary randomly among individuals, some people may have an advantage over others under certain environmental conditions if the number of repeats influences gene function. One example King cites is the genes controlling how much fat is produced in a mother's milk. Those genes sometimes contain repeats, which could help set their level of activity without changing the coding regions of the genes themselves, he suggests.

    Repeats may also affect gene activity indirectly. In the fruit fly, for example, researchers have found that certain genes are less likely to be expressed when they are close to a series of repeats than when they are located elsewhere on the chromosome. Also, at least 60 genes encoding transcription factors, which control the expression of other genes, contain adjustable repeats. In 1994, researchers showed that a change in the number of the repeats can affect the rate at which a transcription factor is produced (Science, 11 February 1994, p. 808). This change could in turn affect the activity of other genes controlled by the factor, King points out.

    The repeated sequences known as transposons or transposable elements, which can move from one stretch of DNA to another, provide another source of genomic flexibility. McClintock first proposed the existence of transposons in 1948 based on studies of such things as color variation in corn kernels. She noted, for example, that the colors can change from one generation to the next—far too fast for ordinary evolutionary change. This led her to propose that what she called “controlling elements” could jump about the genome each time it is replicated. If one of those elements landed in a gene, say for a corn pigment, it would interrupt the gene sequence, resulting in loss of function. Then, if it popped back out, the gene's function, and the corn kernel color, would be restored.

    A great deal of work has since confirmed that idea. But transposable elements have long been considered to be rogue DNAs that seemed to land anywhere in the genome, making their effects little different from random gene mutations that are more likely than not to be detrimental to the organism's survival. “Some people see them as parasites,” says Caporale.

    As McClintock realized, however, these movements can not only have profound effects on the expression of genes but also provide grist for the mill of evolution. A recent study of morning glories by Shigeru Iida, a molecular biologist at the National Institute for Basic Biology in Okazaki, Japan, and his colleagues provides an example. They traced the loss of blue or purple pigmentation in the flowers of some mutants to the insertion of transposable elements into genes needed to make the colored pigments.

    As in corn, the transposons don't always stay put. Some of the mutant plants, whose flowers were originally all white, later produced speckled or streaked blossoms, a signal that, in the colored cells, the function of the altered gene had been restored because the transposon had moved. In the wild, the resulting streaked and speckled patterns could play a role in evolution, as one design may be more attractive to pollinators than another, causing plants with those patterns to become predominant, Iida notes.

    Transposons likely played a pivotal role in the evolution of higher vertebrates as well. New data suggest that two enzymes key to generating the immune system's inexhaustible variety of antibodies are relics of an ancient transposon. A great deal of this variability is due to the way antibody genes are assembled: by joining three separate sequences (denoted V, D, and J), each of which comes in many variants, into a single antibody gene. Two reports, one in this week's Nature from David Schatz of Yale University and his colleagues and the other in Cell from Martin Gellert's team at the National Institute of Diabetes and Digestive and Kidney Diseases in Bethesda, Maryland, now show that the enzymes that do this assembly, Rag1 and Rag2, work just like enzymes called transposases that mobilize transposons.
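The combinatorial payoff of this three-part assembly is easy to sketch. The segment counts below are approximate round numbers for the human heavy-chain locus, given purely for illustration; actual counts vary by individual, and junctional imprecision at the joints multiplies the real diversity far beyond this product.

```python
# Approximate functional segment counts at the human antibody heavy-chain
# locus (illustrative round numbers, not exact figures):
n_v, n_d, n_j = 40, 25, 6

# Each antibody gene is assembled from one V, one D, and one J segment:
combinations = n_v * n_d * n_j
print(f"V x D x J combinations: {combinations}")
```

Even before pairing with light chains or any junctional variation, a few dozen segments of each type yield thousands of distinct assembled genes, which is why a transposon-like cut-and-join mechanism was such an evolutionary windfall.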

    The transposases recognize the ends of the transposon and cut it out of a chromosome. That freed-up piece of DNA, with the transposase still attached, can loop around itself and move to a new place in the genome. That's just what Rag1 and -2, which act together in a complex, do, Schatz says: “In a test tube, [Rag1 and -2] are indistinguishable from a transposase.”

    This discovery helps explain why the adaptive immune system seems to have appeared so abruptly during evolution. Unlike most vertebrate features, the immune system has no counterpart in invertebrates, says Ronald Plasterk, a molecular biologist at the Netherlands Cancer Institute in Amsterdam. The sudden introduction of a transposon into the part of the genome containing the remote predecessors of the antibody genes could have set the stage for the many gene duplications that allowed the large antibody gene complex to evolve. “[The immune system] is a wonderful example of how a mobile piece of DNA can have an astounding impact on evolution,” Schatz says.

    Transposon art.

    Mobile genetic elements caused these prized color patterns in morning glories.


    Besides altering genomes by disrupting specific genes, mobile elements may also reshape the whole architecture of the genome, providing yet another kind of genomic flexibility. Iida has found parts of foreign genes buried in the morning glory's transposons, suggesting that transposons may capture genes and move them wholesale to new parts of the genome. If this is true, then they make possible DNA shuffling that can place genes in new regulatory contexts and, possibly, new roles.

    And in May, Evan Eichler of Case Western Reserve University in Cleveland, Ohio, and his colleagues showed how a repeat sequence was likely responsible for the duplication of several genes' worth of DNA in humans and other primates (Science, 12 June, p. 1692). The researchers found copies of one piece of chromosome 16 in several other parts of the human genome, and Eichler suggests that repeated sequences at the ends of the inserted pieces of DNA were signals to enzymes that cut out that DNA as it replicated and allowed it to insert elsewhere. Sequence similarities between transposons and the repeats hint that the repeat sequence might be a remnant of an ancient mobile element. “It's no longer a matter of conjecture that transposable elements contribute to evolution,” concludes Nina Fedoroff, a molecular biologist at Pennsylvania State University in University Park. “It's a fact.”

    These evolutionary contributions may increase when a species faces new selective pressures, says Caporale. As she puts it, “Chance favors the prepared genome.” Researchers first noticed this happening in 1988 when John Cairns, then at Harvard University, showed that mutation rates in the bacterium Escherichia coli increased when the microbes needed to evolve new capabilities in order to survive changes in their environment. At the time, it seemed that only those genes directly involved with the adaptation changed, and this idea of adaptive or directed evolution caused quite a stir.

    But then last year, molecular geneticist Susan Rosenberg at Baylor College of Medicine in Houston and her colleagues showed that mutation rates increase throughout the genome, although only in a subset of the population. Another group also found that more than just the relevant genes changed. The bacteria are “smart” about how they evolve, explains Rosenberg. In response to adverse conditions, they “turn on mechanisms of mutation that are nothing like the usual [mutation] mechanisms,” she adds. They activate different repair enzymes and promote DNA shuffling, for example.

    Not long ago, these ideas would have been considered heretical, but “I see more and more people looking at [evolution] this way,” says Werner Arber, a Nobel laureate and microbiologist at the University of Basel in Switzerland. Whether by radically rearranging themselves, making use of mobile elements to generate variation, or causing certain stretches of DNA to mutate at high rates, genomes are showing that they can help themselves cope with a changing environment. Says Shapiro, “The capability of cells has gone far beyond what we had imagined.”

    • *“Molecular Strategies in Biological Evolution,” 27 to 29 June, organized by the New York Academy of Sciences, in New York City.


    Under Pressure, Deuterium Gets Into Quite a State

    1. David Kestenbaum


    To hear Gilbert Collins describe it, conducting experiments with the world's most powerful laser is a bit like working in a war zone: It demands long nights of attention punctuated by loud explosions. Collins and colleagues place a drop of deuterium into a penny-sized copper container, align an array of equipment around it, then retreat to a remote control room. After a short countdown, the Nova laser fires with a deafening crash. One of its 10 beams, momentarily carrying more power than the output of all the electrical generators in the United States, pulverizes the deuterium with a flash of green light. Having recorded the event, the team cleans up the mess, sticks the resulting bent copper on the shelf for posterity, and sets up for another shot.

    Collins and his colleagues at Lawrence Livermore National Laboratory in California are generating more than a museum of mangled metal, of course. The data from these pioneering experiments (reported on page 1178) show how deuterium and hydrogen behave in the hot, pressurized interiors of giant planets. That information could help resolve a number of mysteries, such as why Saturn appears to be so much younger than the rest of the planets in the solar system, says Gilles Chabrier, a physicist at the Ecole Normale Supérieure in Lyon. The work is “tremendously important for astrophysics,” he says, and may help answer basic questions about condensed matter. “No one knows what happens at these high pressures and temperatures,” he says. Adds Russell Hemley, a physicist at the Carnegie Institution of Washington: It gives a glimpse of “a different domain of matter.”

    Big squeeze.

    A shock from the Nova laser makes a drop of deuterium feel like it is at the core of Saturn.


    Matter is well understood under earthly conditions. Physicists know at what combinations of temperature and pressure water boils or freezes, for instance. And, by compressing materials with diamond anvils or explosive gas guns, they've mapped out the “equations of state” for many materials that describe how they behave under high pressure. But neither anvils nor gas guns are yet capable of simulating the high temperatures and pressures at the core of large planets. And at even higher pressures, truly exotic states of matter are expected to form. At some point, even diamond, the most durable insulator, is predicted to break down and conduct like a metal.

    The Livermore group figured it could recreate some of those conditions with the Nova laser, which was built to compress fuel pellets for fusion research. To watch what happens to deuterium, the Livermore physicists place a sample in a small copper cell outfitted with a plunger at one end and thin beryllium windows on the sides. When the laser hits the plunger, it sends a shock wave through the deuterium. By shining a beam of x-rays through the two windows, the team can track the shock speed and infer the pressure and resulting density of the deuterium over the 5 or 10 billionths of a second before the whole assembly flies apart. And by using a laser to measure the reflectivity of the deuterium through another window, they can tell when it makes the jump to a conducting (and hence reflecting) metallic fluid. “This is a monumental experiment,” says William Nellis, a physicist at Livermore who works with gas guns, “I take my hat off to them.”

    Some of the results are surprising. The group found, for instance, that the deuterium compressed easily—its density increased sixfold at pressures of about half a megabar, instead of the factor-of-four compression predicted by traditional models. That could be a rare bit of good luck for the laser fusion program, which aims to squeeze deuterium and tritium fuel pellets to maximum density to trigger fusion. But it's a puzzle for theorists. “It turns out a lot of their data disagrees with every [theoretical model]” available, says David Ceperley, a theorist at the University of Illinois, Urbana-Champaign.

    Some models predict that at some high temperature and pressure, deuterium should undergo an abrupt phase transition. In this picture, the pressure frees one electron to wander from its atom, which induces others to follow, dissolving the structure. This soup would be hard to compress since the shock would heat it and cause it to expand. But Ceperley suspects that at the temperatures reached in the Livermore tests, the soup may be a little more complicated. In his computer simulations, which laboriously model every proton and electron, Ceperley has found that hydrogen or deuterium atoms can sometimes link up into chains bound by borrowed electrons. If such structures do form, he conjectures, they might absorb some of the energy of the shock without heating the fluid or causing it to expand.

    As a practical matter, says Chabrier, the data give physicists their first real look at what conditions may be like inside giant planets such as Saturn. Equations of state suggest that, given Saturn's relative warmth, it has only existed for about 2 billion years. “Which is crazy,” Chabrier says, since it should have formed with the rest of the solar system, 4.5 billion years ago. If hydrogen, which makes up over 90% of Saturn, goes soft at high pressures or undergoes a phase transition, it could store up energy in the same way that water vapor has more energy than liquid water. This stored energy might keep Saturn relatively warm in its old age.

    The Livermore group also found that deuterium conducts like a metal at lower pressures than many models had predicted. That could help explain the strong magnetic field of planets such as Jupiter whose massive size creates a high-pressure environment. “So far, we have no clear explanation for the magnetic fields in [these] planets,” Chabrier says.

    Several other groups are planning similar high-pressure work with the Phebus laser in France and possibly the Gekko laser in Japan. And deuterium won't be the only element getting the squeeze. Michel Koenig at the Ecole Polytechnique in Palaiseau, France, says he and colleagues hope to work with the Livermore group to laser-shock iron and water, which may form a good part of large planetary cores. The Livermore group has recently put the screws to diamond. Already, they say, they've seen signs that it may turn into a metallic liquid under the strain. “It's the hardest known material,” says Richard Martin, a theorist at the University of Illinois, Urbana-Champaign, who has theorized about how diamond might be transformed by high pressure. “I'd be really interested to see where it breaks down.”
