News this Week

Science  12 Jun 1998:
Vol. 280, Issue 5370, p. 359
  1. PHYSICS

    Weighing In on Neutrino Mass

    1. Dennis Normile

    An underground facility in Japan has recorded the most convincing evidence yet that this ephemeral subatomic particle has mass, which could alter our view of the universe

    TAKAYAMA, JAPAN—Neutrinos are in the headlines—again. Every few years, a group of physicists announces preliminary evidence that, contrary to decades of theoretical prejudice, this wispy particle may have mass. Other physicists react to the news by scrutinizing the data or trying to replicate the experiment. As doubts set in, neutrino mass goes back to being no more than a tantalizing possibility.

    But to many observers, last week's announcement by a Japanese-U.S. collaboration looks like the real thing. Their giant detector beneath a mountain in central Japan, they said, had picked up strong, albeit indirect, evidence of neutrino mass. “I'm absolutely convinced,” says John Bahcall, a neutrino expert at the Institute for Advanced Study in Princeton, New Jersey. Theorist Wick Haxton of the University of Washington, Seattle, calls the results “incredibly impressive” and an example of “perfect physics.”

    Shifty particles. Neutrinos that enter the underground detector from the other side of the world have more time to oscillate to another flavor than those coming straight down. (Image: University of Hawaii)

    Their faith rests largely on the size and sensitivity of the group's $100 million detector, Super-Kamiokande, operated by a collaboration of 120 physicists from 23 institutions headed by the University of Tokyo's Institute for Cosmic Ray Research (ICRR). Its 50,000-ton water tank and 13,000 photomultiplier tubes allow it to collect enough data to provide strong evidence that a certain type of neutrino from the atmosphere is “disappearing” by changing, or oscillating, into another type of neutrino the detector can't see. That can only happen if the particle has at least a trace of mass.

    The actual mass of the neutrino has yet to be determined and is likely to be a minute fraction of the mass of the electron. But there is nothing lightweight about the implications of the findings, presented at a conference last week that brought some 350 neutrino experts from 24 countries to this small town just 50 kilometers down the road from Super-Kamiokande.

    “It is one of the most important discoveries in particle physics of the last few decades,” says Sheldon Glashow, a theorist at Harvard University. It will force a revision in the Standard Model, the established theory of particles and forces that has served as the basis of modern physics, which assumes a massless neutrino. The results also may affect calculations of the total mass of the universe, with implications for understanding its origin and eventual fate.

    The evidence for mass is the latest twist in the strange saga of the neutrino, a particle posited in 1930 by the Austrian physicist Wolfgang Pauli. Created in staggering numbers by the big bang and by the nuclear processes driving the sun and stars, the chargeless neutrinos flow through matter like sunlight through glass. As a result, the detectors do not record neutrinos directly. Rather, they capture charged particles formed in the aftermath of rare neutrino interactions with a nucleus or proton. But even after investigators allow for the inefficiency of their detectors, they often find fewer neutrinos than theorists predict.

    The problem first arose with neutrinos from the sun. Because detectors can see only one or two of the three neutrino types, or flavors (electron, muon, and tau), this solar neutrino deficit led to speculation that neutrinos were changing back and forth, or oscillating, from a flavor the detector could see to one it could not. The laws of quantum mechanics require oscillating particles to have mass. And a neutrino with mass would not conform to the Standard Model.

    The whiff of possible new physics was irresistible. Dozens of experiments examined neutrinos from the sun, from the upper atmosphere, and from nuclear reactors and particle accelerators. Many of these experiments reported evidence for neutrino mass, but most of the claims fared poorly. Some were retracted, others disproven, and still others ignored. One currently controversial claim comes from the LSND experiment at Los Alamos National Laboratory in New Mexico, which reported that neutrinos generated by an accelerator were oscillating on their way to a detector a few meters away (Science, 10 May 1996, p. 812). The LSND results cannot be checked against the Super-Kamiokande results, however, because the two experiments are looking for different types of oscillations. And many other experiments simply yielded data samples too small to rise above statistical uncertainties.

    Super-Kamiokande, which opened in 1996, was designed to overcome these limitations. Sitting in a cavern in a working zinc mine 1000 meters beneath the surface to screen out background radiation, the detector holds nearly 20 times as much highly purified water as its predecessor, built in the early 1980s, and has a daily detection rate some 100 times greater. The charged particles resulting from neutrino interactions in the water generate a flash of light known as Cerenkov radiation, and analyzing the pattern of the light signals can yield the energy and direction of the incoming neutrino and identify it as a muon or electron neutrino.
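
    The geometry behind that reconstruction is simple: a charged particle moving faster than light travels in water emits Cerenkov light in a cone at a fixed angle, so the ring of struck photomultiplier tubes points back along the particle's track. A minimal sketch of the textbook cone-angle relation (the refractive index and speed below are standard values, not Super-Kamiokande specifics):

    ```python
    import math

    n_water = 1.33   # refractive index of water
    beta = 1.0       # particle speed as a fraction of c (ultrarelativistic)

    # Standard Cerenkov relation: cos(theta) = 1 / (n * beta)
    theta = math.degrees(math.acos(1.0 / (n_water * beta)))
    print(f"Cerenkov cone half-angle in water: {theta:.0f} degrees")  # about 42
    ```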

    Although Super-Kamiokande is also gathering data on solar neutrinos, its claim of neutrino mass comes from data on atmospheric neutrinos, which result from cosmic rays bombarding Earth's upper atmosphere. At present, scientists know only to a rough approximation how many cosmic rays hit the atmosphere at any given time. But because the process through which cosmic rays produce the different flavors of neutrinos is well understood, the ratio of muon to electron neutrinos generated in the atmosphere can be predicted with confidence. The ratio detected by Super-Kamiokande and several other experiments, however, differs from these predictions.

    The difference, known as the atmospheric neutrino anomaly, suggests that one or both neutrino flavors are oscillating and thus changing the ratio. But Super-Kamiokande can go a step further by tracking the direction of the incoming neutrinos. In particular, the Super-Kamiokande team compared neutrinos coming down from the sky with those coming upward through Earth. Because the cosmic rays and their resulting neutrinos rain down equally from all directions, the ratio of upward- to downward-going neutrinos should be 1. But if oscillations occur, the neutrinos coming the 13,000 kilometers from Earth's far side have more time to oscillate than the neutrinos traveling only 20 kilometers down from above.
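
    That path-length dependence is what makes the up-down comparison so powerful. In the standard two-flavor picture, the chance that a muon neutrino survives unchanged varies with the distance traveled divided by the neutrino's energy. A rough sketch (the mass-squared difference and mixing values are illustrative stand-ins, not the collaboration's fitted numbers):

    ```python
    import math

    def p_survive(L_km, E_GeV, dm2_eV2=5e-3, sin2_2theta=1.0):
        """Two-flavor muon-neutrino survival probability.

        Standard formula: P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E).
        The parameter defaults are illustrative, not Super-Kamiokande's fit.
        """
        return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    print(p_survive(20, 1.0))     # downward: ~20 km of travel, little change
    print(p_survive(13000, 1.0))  # upward: ~13,000 km through Earth, heavily depleted
    ```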

    For electron neutrinos, Super-Kamiokande caught equal numbers going up and coming down. But for muon neutrinos there was a big difference. In 535 days of operations, Super-Kamiokande counted 256 downward muon neutrinos and just 139 upward ones. The large number of observed neutrinos and the magnitude of the difference reduce the chances that the finding is a statistical fluke, say team members. Taken together, the data indicate that the muon neutrinos are oscillating, perhaps to tau neutrinos, which the detector cannot pick up. “From these data we conclude that we have strong evidence for muon neutrino oscillations,” says Takaaki Kajita, an ICRR physicist who presented the Super-Kamiokande results.
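
    A back-of-envelope check shows why those counts impress statisticians. Treating the up and down counts as a simple binomial split (and ignoring the flux systematics a real analysis must handle), the up-down asymmetry sits several standard deviations from zero:

    ```python
    import math

    up, down = 139, 256                     # muon-neutrino counts quoted above
    n = up + down
    asym = (up - down) / n                  # up-down asymmetry; 0 if no oscillation
    sigma = math.sqrt((1 - asym**2) / n)    # binomial error on the asymmetry
    print(f"A = {asym:.2f} +/- {sigma:.2f} ({abs(asym) / sigma:.1f} sigma from zero)")
    ```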

    Despite the awe that greeted the presentation—Bahcall called it “one of the most thrilling moments of my life”—the results were anything but surprising. “We have been releasing data all along,” says Henry Sobel, a University of California, Irvine, physicist who heads the American side of the collaboration. “And we started seeing [evidence of oscillation] from the first data set.” Late this spring, the group decided that the latest data had put them over the top. “Everyone was convinced we had done everything possible to find artifacts or misunderstandings,” says Yoji Totsuka, ICRR director and head of the collaboration.

    Now neutrino experts will begin pondering new questions. Measuring oscillations yields only the difference between the squares of the masses of the two flavors involved, not their absolute masses. In this case, the result corresponds to a mass difference of about 0.07 electron volt, or about one 10-millionth of the mass of the electron. That figure serves as a lower limit for neutrino mass.
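
    The arithmetic behind the comparison with the electron is straightforward. A quick check (the mass-squared difference below is simply back-calculated from the quoted 0.07 electron volt, assuming the lighter mass is negligible):

    ```python
    import math

    dm2_eV2 = 5e-3             # mass-squared difference implied by 0.07 eV (0.07**2)
    m_nu = math.sqrt(dm2_eV2)  # lower limit on the heavier neutrino mass, in eV
    m_e = 511e3                # electron mass, in eV
    print(f"{m_nu:.2f} eV, roughly 1/{m_e / m_nu:.1e} of the electron mass")
    ```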

    The uncertainty also leaves unresolved one fierce cosmological debate—whether neutrinos make up a significant part of the universe's dark matter, the mass theorists believe must be present for the universe to exist as we know it but which can't be accounted for by the observable stars and planets. John Learned, a Super-Kamiokande collaborator at the University of Hawaii, Manoa, says that the result implies that neutrinos are a significant fraction of the dark matter, but David Caldwell of the University of California, Santa Barbara, says the Super-Kamiokande evidence is irrelevant because that lower limit is “too low to be significant.”

    Theorists have other issues to address. Paul Langacker, a physicist at the University of Pennsylvania, says, “Standard Model theories will have to be extended” to accommodate a neutrino with mass. Others believe that the revisions could be major. Barry Barish of the California Institute of Technology in Pasadena says the massive neutrino “is the first empirical evidence providing a clue for what is beyond the Standard Model.” In particular, a massive neutrino is one of the cornerstones of Grand Unified Theories, which seek to provide a unified explanation for all known particles and forces.

    Experimentalists still have lots to do as well, including verifying the Super-Kamiokande results. Several groups are now readying long-baseline experiments in which a stream of neutrinos generated by an accelerator is aimed at a detector hundreds of kilometers away. Such experiments promise greater detail on oscillations by actually counting the number of neutrinos at the source, instead of relying on theory, as the atmospheric neutrino experiments do. But their baselines and energies may not match the oscillation parameters implied by the Super-Kamiokande data; Harvard's Glashow gives them only a 50-50 chance of confirming the Super-Kamiokande findings.

    More data will certainly be necessary to stitch the results into a consistent picture of neutrino masses. Whatever the outcome, this ephemeral particle seems likely to have a weighty impact on physics.

  2. COMBINATORIAL CHEMISTRY

    The Fast Way to a Better Fuel Cell

    1. Robert F. Service

    Combinatorial chemistry—the shotgun approach to chemical discovery whereby researchers synthesize and test hundreds or thousands of different compounds simultaneously—is already revolutionizing the discovery of new drugs. Researchers are working to apply the strategy to finding hot new materials, such as catalysts, as well. But in many cases, testing hundreds or thousands of new catalysts at once can be a major obstacle. Now a team at Pennsylvania State University, University Park, and the Illinois Institute of Technology (IIT) in Chicago has found a way around this bottleneck, coming up with a method for quickly selecting the better catalysts for everything from fuel cells to batteries.

    The technique, which signals the presence of an effective catalyst with a fluorescent glow, has already yielded a concrete result, as the researchers, led by Penn State chemist Tom Mallouk and IIT chemical engineer Eugene Smotkin, report on page 1735. They used it to discover a new catalyst for converting methanol to electricity in fuel cells—devices that are being hotly pursued by companies around the world as a clean alternative to combustion engines. The catalyst isn't ideal, Mallouk and others note: Among its ingredients are osmium—potentially toxic—and iridium, which is prohibitively expensive, along with platinum and ruthenium. Nevertheless, its discovery shows that in the search for better catalysts, the brute strength of combinatorial chemistry “is definitely worth pursuing,” says Tom Fuller, a methanol fuel-cell expert at International Fuel Cells in South Windsor, Connecticut.

    Current fuel-cell catalysts consist of an equal mix of platinum and ruthenium, which, with the help of a small electrical voltage, breaks down methanol into carbon dioxide, protons, and electrons. The electrons are routed through a wire to power a car or do other work and are eventually channeled to another electrode in the cell, where they meet up with the mobile protons. But these catalysts are inefficient, wasting about 25% of the energy stored in the fuel as heat instead of converting it to electricity. Researchers have searched for years for an improved mixture of metals. Most of these efforts have concentrated on various mixtures of two metals or occasionally three, but few have tried four or more because so many different combinations are possible. “It gets very laborious to try to make and test them all one at a time,” says Fuller.
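
    For reference, the chemistry the catalyst must drive at the cell's fuel electrode is the standard methanol oxidation half-reaction, which accounts for the carbon dioxide, protons, and electrons described above:

    ```latex
    % Anode half-reaction in a direct methanol fuel cell (standard electrochemistry)
    \mathrm{CH_3OH} + \mathrm{H_2O} \;\longrightarrow\; \mathrm{CO_2} + 6\,\mathrm{H^+} + 6\,\mathrm{e^-}
    ```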

    To speed up the search with combinatorial chemistry, Mallouk and his Penn State colleagues took a hacksaw to a commercial inkjet printer and modified the machine to spray droplets of different metal salts instead of ink. They then used a computer to control the spraying of the salts, in the end creating an array of dots, varying both the component salts and their relative concentrations. Treating the dots with a strong reducing agent converted the salts to patches of metal alloy, each of which acted as a catalyst.
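
    The printing step is, in effect, enumerating a composition space. A minimal sketch of the idea, assuming a four-metal system like the one in the paper and a hypothetical 10% concentration step (the actual array layout and increments are not specified here):

    ```python
    from itertools import product

    metals = ["Pt", "Ru", "Os", "Ir"]   # the four metals named in the article
    step = 10                           # percent increment per metal (assumed)

    # Every combination of concentrations, in 'step'% increments, summing to 100%.
    compositions = [dict(zip(metals, c))
                    for c in product(range(0, 101, step), repeat=len(metals))
                    if sum(c) == 100]
    print(len(compositions), "candidate spots, e.g.", compositions[0])
    ```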

    One way to pick out the best catalyst from such an array of candidates is to expose them to their target compound and detect the heat given off during the reaction. But fast-reacting catalysts can produce heat even when they make unwanted byproducts. So the Penn State researchers took a new approach: They simply spiked their array—which sits in an aqueous bath containing methanol—with a compound called nickel PTP, which fluoresces a faint blue in the presence of protons. To activate the catalysts, the researchers applied a small voltage across the array, sat back, and watched as their best catalysts lit up.

    The Penn State team passed along their brightest prospects to their IIT colleagues, who made electrodes from the material and incorporated them into working fuel cells. They found that the best such catalyst works about 40% better than a straight platinum-ruthenium mix under simulated real-world conditions. Mallouk, Fuller, and others caution that this improvement may not be enough to justify using the more costly metals. But the promising results of this first attempt to use combinatorial chemistry suggest that there may be even better catalysts out there just waiting to be discovered.

  3. POLYMER ELECTRONICS

    Transistors and Diodes Link and Light Up

    1. Robert F. Service

    Car bumpers, coffee mugs, computer casings: It seems that just about everything is made out of plastic these days. Researchers have even created plastic electronics, such as polymer-based transistors and glowing diodes. Now, separate teams of researchers in the United States and United Kingdom have managed to integrate these two types of devices for the first time, opening the way to lightweight, flexible displays made out of material not too different from garbage bags. Down the road, such displays could challenge conventional televisions and laptop displays and open up entirely new uses such as large-area illuminated signs that can be rolled up and carted away.

    The new pairing between organic electronics and lights “is a breakthrough that the field of organic displays has been looking for,” says Yang Yang, an organic-display expert at the University of California, Los Angeles. The advance, made possible by new polymers that carry higher currents, allows researchers to use simple, low-cost fabrication methods such as screen printing and inkjet printing to lay down all the different materials needed to create displays, says Yang. “It will be very exciting to see what you can do with all-organic systems,” says James Sturm, an organic-display expert at Princeton University.

    The prospect of all-organic displays has been tantalizing researchers for several years. Each point of light on such a display is a single light-emitting diode (LED) powered by an electric current, which is switched on and off by a transistor. The problem is that although organic LEDs typically need a rushing stream of current to shine, most organic transistors only put out a trickle. As a result, researchers have been forced to stick with conventional silicon-based transistors to drive their organic LEDs and so have lost some of the key advantages of plastics: flexibility, low cost, and low weight.

    New hope for fully organic displays began to shine last year, when researchers at Lucent Technologies' Bell Laboratories in Murray Hill, New Jersey, and at Pennsylvania State University, University Park, came up with a new breed of high-current organic semiconductors. The two teams aiming to make all-organic displays, one at Bell Labs and one at Cambridge University, both settled on one of these new organics, a chainlike polymer known as regioregular poly(hexylthiophene). The polymer consists of a series of linked carbon- and sulfur-based rings, with hydrocarbon chains dangling off each ring. When the polymer is deposited from solution, the chains and rings of different molecules prefer to associate with their own kind, so it assembles into alternating sheets of chains and rings. Current flows along the sheets of rings, and the close proximity of the rings in different molecules allows charges to jump freely from one molecule to the next. “This isn't perfect crystalline order, but it seems to be enough to boost the mobility of the electrical charges,” says Cambridge physicist Henning Sirringhaus.

    On page 1741 of this issue, Sirringhaus and his Cambridge colleagues Nir Tessler and Richard Friend report fashioning this polymer into an organic transistor and using it to drive a conventional polymer-based LED built directly on top. Meanwhile, the Bell team, led by physicists Ananth Dodabalapur and Zhenan Bao, report in an upcoming issue of Applied Physics Letters (APL) that they made a similar transistor from poly(hexylthiophene) but then crafted an organic LED alongside.

    The Bell Labs LED shines brighter, because the team coaxes light from a highly luminescent small organic molecule known as ALQ, says Bao. The downside is that this material must be laid down in a vacuum, a somewhat cumbersome process. Although the Cambridge LED is not as bright, it is made with a polymer-based light emitter, which can be applied from a simple solution—a process that is easier to scale up to coat large areas. Sirringhaus notes as well that he and his colleagues should be able to improve their devices considerably: they have already gotten much better performance out of their transistor by finding ways to encourage the polymer to order.

    Bao says that since she and her group submitted their APL paper, they too have made all-polymer integrated transistors and LEDs. The two groups say they are now pushing ahead with efforts to create full arrays of the devices by screen printing and inkjet printing. If they succeed, plastics will display a whole new image.

  4. GENOMICS RESEARCH

    Sifting Through and Making Sense of Genome Sequences

    1. Elizabeth Pennisi

    COLD SPRING HARBOR, NEW YORK—The genomics community's annual meeting on Genome Mapping, Sequencing, and Biology once focused mostly on developing better mapping and sequencing technologies. But at this year's meeting, held here from 13 to 17 May, participants were also caught up in trying to make sense of the sequence data being produced.

    Tracking Down Invisible Genes

    To function normally, a cell needs more than the genes that code for proteins. Also scattered throughout the genome are genes encoding RNA molecules that play a variety of roles in the cell, including assembling and maintaining the ribosomes, small particlelike structures where proteins are synthesized. These RNA genes don't have the telltale resemblances that mark protein-coding genes, making them hard to spot. New results presented at the genome meeting show that computer algorithms are getting much better at picking these genes out.

    Programs called BLAST or FASTA routinely identify new protein-coding genes by comparing them with known genes, looking for sequence similarities. But even related RNA genes can have very different sequences, because an RNA's function depends on how the molecule bends and twists, not on its exact base sequence. The bending does depend, however, on which of the RNA's bases pair up—and recognizing the pairing pattern was key to the new computer program, devised by Sean Eddy and graduate student Todd Lowe of Washington University School of Medicine in St. Louis. With it, they tracked down almost all of the 40-plus yeast genes for a group of RNAs involved in building the ribosome, the so-called small nucleolar RNAs (snoRNAs).
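
    Eddy and Lowe's published approach uses probabilistic models that score sequence and pairing jointly. As a much simpler illustration of the core idea, that base-pairing patterns can be scored by dynamic programming, the classic Nussinov algorithm below finds the maximum number of complementary pairs an RNA can form. It is a teaching-grade sketch, not the published method:

    ```python
    def max_base_pairs(seq, min_loop=3):
        """Nussinov-style dynamic program: maximize complementary pairs in an RNA."""
        pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
        n = len(seq)
        dp = [[0] * n for _ in range(n)]          # dp[i][j]: best pairing for seq[i..j]
        for span in range(min_loop + 1, n):       # shorter spans cannot pair (hairpin loop)
            for i in range(n - span):
                j = i + span
                best = dp[i][j - 1]               # case 1: base j stays unpaired
                for k in range(i, j - min_loop):  # case 2: base j pairs with some k
                    if (seq[k], seq[j]) in pairs:
                        left = dp[i][k - 1] if k > i else 0
                        best = max(best, left + 1 + dp[k + 1][j - 1])
                dp[i][j] = best
        return dp[0][n - 1]

    print(max_base_pairs("GGGAAAUCCC"))  # a small hairpin: 3 pairs
    ```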

    The basic strategy dates to 1994, when Eddy and, independently, David Haussler of the University of California, Santa Cruz, published mathematical algorithms capable of recognizing RNA pairing patterns. At the time, “that algorithm was a theoretical proof of principle and had no practical problem to be applied to,” Eddy recalls.

    But in 1996, Lowe became intrigued with the snoRNAs because his journal club had discussed several research papers about these then hard-to-identify molecules. For example, one group of snoRNAs in yeast, humans, and other organisms chemically modifies ribosomal RNA by adding methyl groups to particular bases. Researchers don't know what these methyl groups do, but based on the number that get attached, they estimated that about 55 different snoRNAs should be involved in the modifications. At the time, however, they had confirmed the existence of only one.

    Eddy advised Lowe against using the algorithm for finding the rest, thinking that snoRNA researchers already had a head start using other computational and experimental approaches. Indeed, in the 2 years before Lowe undertook the project, other researchers went on to identify 20 apparent snoRNA genes. But he went ahead anyway, using the sequences of those genes to “teach” the computer program what to look for.

    Lowe turned up 41 snoRNA genes, 22 of which were new. By knocking out those genes in yeast and determining which ribosomal RNA bases did not get methylated, Lowe and Eddy linked specific snoRNAs to 51 methylation sites. With all these genes and their methylation sites in hand, snoRNA researchers can now figure out “why all these genes are dedicated to this,” Eddy says.

    Six of the snoRNAs proved to be located in the introns—the non-protein-coding stretches of DNA that are interspersed between the coding regions of many genes and are often regarded as “junk.” The existence of these RNA genes may explain why the intron DNA is often conserved, says Eddy: “Most people don't think about non-protein-coding DNA. But there's other stuff going on in the genome besides the protein [genes].”

    Indeed, other researchers say the work highlights the potential of mathematical approaches for teasing meaning out of these mysterious parts of the genome. It demonstrates “a growing up of computational biology,” says David Lipman, a computational biologist at the National Center for Biotechnology Information in Bethesda, Maryland. “[Eddy] is producing real data. Just think what [the results] will be from mouse and human.”

    More Ways to Score SNPs

    No two human genomes are alike, and in the past few months the quest to catalog slight individual differences in the genetic code has turned into a heated race. Companies and academic labs are looking for single-base changes, also called single-nucleotide polymorphisms or SNPs. By studying these differences, they expect to nail down which genes contribute to the development of diseases such as diabetes or schizophrenia, which involve many genes. With those genes in hand, clinicians can begin to assess an individual's predisposition to those diseases based on his or her SNP repertoire.

    For both these applications, researchers want fast and sensitive methods of screening thousands of DNA samples, and at the meeting several teams reported progress in achieving that goal. For a few years, researchers have been developing DNA chips that they hope will eventually be able to evaluate whole genomes at once. They now say that their current chips can look at on the order of 10,000 genes at once. In addition, some newer methods are emerging.

    One team, for example, is harnessing a standard analytical tool—mass spectrometry—while another has a modern twist on an old technique for detecting DNA variations: using the “melting temperature” of DNA hybrids as an indicator of how closely related the hybridized strands are. “What we're seeing is an explosion of ideas about how to do that [detect SNPs],” says Eric Lander, who directs the genome center at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts. “It's a sign of a pretty rich field.”

    The most high-tech of the new SNP-detection methods comes from GeneTrace Systems Inc., a biotech firm based in Alameda, California. The company is using so-called MALDI-TOF mass spectrometers, which can accurately detect small differences in the masses of relatively large molecules (Science, 27 March, p. 2044). GeneTrace's method exploits this sensitivity by transcribing the DNA sequences to be tested into copies of a fixed length. The copies should contain exactly the same sequence of bases except at the SNP. The single-base difference there gives rise to a mass difference, which the mass spectrometer can detect.
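
    What makes the scheme work is that the four DNA bases differ in mass. A small sketch using the standard average masses of deoxynucleotide residues in a DNA strand shows the resolution the spectrometer must achieve; in the hardest case, an A-to-T change shifts the product's mass by only about 9 daltons:

    ```python
    # Standard average masses (daltons) of deoxynucleotide residues in a DNA chain.
    residue_mass = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}

    diffs = sorted(
        (abs(residue_mass[a] - residue_mass[b]), a, b)
        for a in residue_mass for b in residue_mass if a < b
    )
    for delta, a, b in diffs:
        print(f"{a} <-> {b}: {delta:6.2f} Da")   # smallest: A <-> T, about 9 Da
    ```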

    GeneTrace molecular biologist Yuping Tan reported that the company can analyze 10,000 samples a day, with an error rate of about one in 10,000—a fraction of the rate seen with DNA chips, says GeneTrace's Joe Monforte. Based on the results thus far, mass spectrometry “looks very good,” says Aravinda Chakravarti, a human geneticist at Case Western Reserve University in Cleveland.

    Ultimately, Monforte says, GeneTrace hopes to scale up its method to the point where the company can analyze hundreds of thousands of samples a day, the capacity needed to screen large groups of people to find all the genes involved in complex diseases such as diabetes or heart disease. Mass spectrometers are, however, very expensive, costing roughly $100,000 each. In contrast, the method devised by geneticist Anthony Brookes's team at Uppsala University in Sweden does not require such a major capital investment, although it is slower and limited to hunting thousands of SNPs at a time.

    In the Uppsala team's approach, either a robot or a person places a single strand of the DNA sequence to be tested into a small well and then adds a short, single-stranded piece of DNA whose sequence is complementary to the target sequence and will thus bind—or hybridize—to it. The reaction also contains a dye that fluoresces when the two strands stick together. Then the well is heated, causing the two strands to separate and the fluorescence to disappear. The more perfectly matched the strands are, the more resistant they are to this denaturation. This allows researchers to distinguish which base is at the SNP location by monitoring the temperature at which the fluorescence fades.
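
    In software terms, scoring a DASH well comes down to finding the temperature at which fluorescence drops fastest. A minimal sketch with a simulated melting curve (the curve shape and temperatures are illustrative, not Hybaid's chemistry):

    ```python
    import numpy as np

    temps = np.linspace(40.0, 90.0, 251)       # temperature ramp, degrees C
    tm_true = 68.0                             # melting point of this simulated duplex
    fluor = 1.0 / (1.0 + np.exp((temps - tm_true) / 1.5))  # sigmoidal fluorescence decay

    # The melting temperature is where fluorescence falls fastest: max of -dF/dT.
    tm_est = temps[np.argmax(-np.gradient(fluor, temps))]
    print(f"estimated Tm: {tm_est:.1f} C")     # a mismatched (SNP) duplex melts lower
    ```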

    “It's very simple and very robust,” Brookes says. “Any lab could use it.” He and his colleagues call this approach DASH, for dynamic allele specific hybridization. They are working with the British company Hybaid Ltd. to commercialize the technology.

    “[The] method looks quite promising and is really clever,” says Chakravarti. But he and Lander also realize the race to build the best SNP-scoring technology has only just begun. Says Lander: “It's too early to tell what the best way to do it will be.”

    A Mosaic of Gene Duplications

    As the archive of life, the genome has long been considered a relatively permanent record. But increasingly, molecular biologists are realizing that the genome undergoes constant remodeling as extra copies of genes and surrounding DNA quietly sneak in and out. Two presentations at the genome meeting, one by Evan Eichler of Case Western Reserve University and the other by Julie Korenberg of Cedars-Sinai Hospital in Los Angeles, showed just how common these extra gene copies are.

    Both researchers have found that the genome is larded with extra copies of small chromosomal segments. “We are a mosaic of duplications,” Korenberg concludes. The cause of such duplications is still a mystery, but these extra genes and their surrounding DNA provide fodder for evolutionary change, because natural selection can ultimately put them to new uses. Eichler and Korenberg also suggest that it may be possible to get a handle on difficult problems in evolutionary biology, such as the nature of the primate family tree, by comparing the patterns of gene duplications in different species.

    Eichler first began to appreciate the dynamic nature of DNA about 4 years ago when he and his colleagues tried to label a specific piece of the X chromosome. They were using a fluorescent dye attached to a DNA sequence that they expected would bind specifically to that region. But they found that the label also wound up on a spot on chromosome 16. When they took a closer look at the labeled chromosome 16 DNA, they discovered the explanation: It contained a copy of an X chromosome gene. “The entire gene, including the regulatory and structural sequence [surrounding it], had moved to a new location,” he recalls.

    That wasn't the only patch of chromosome 16 that appeared to be copied from elsewhere in the genome. When Eichler and his colleagues sequenced about 845 kilobases of the DNA, they found 17 segments that seemed to have come from other chromosomes, ranging in size from 5000 to 50,000 bases. Among these were a creatine-transporter gene, the gene that is mutated in a rare human hereditary disease called adrenoleukodystrophy—made famous in the movie Lorenzo's Oil—and parts of the immunoglobulin genes, he reported.

    These aren't one-time duplications, the Case Western group has found. The adrenoleukodystrophy gene, for example, appears in four other places in the human genome as well. “Evolutionary biologists talk a lot about [the role of] gene duplication,” says Howard Jacob, a molecular geneticist at the Medical College of Wisconsin in Milwaukee. “[Eichler] has evidence of [such] duplication.” By comparing 150,000-base chunks of DNA from different parts of the human genome, Korenberg too has observed that such duplications are common.

    Eichler finds that the duplications often turn up near the chromosomes' centromeres, in the so-called pericentromeric regions. To him, the work suggests the importance of these regions as “both graveyards and factories of evolution.”

    Korenberg's survey has examined other parts of the chromosomes as well. She has noticed that the duplicated patches often sit at inversion sites, where a piece of DNA breaks out of the chromosome and then sneaks back in upside down, either at the same site or at a different one; often she finds two or more copies of the duplicated piece there. Once a duplication or inversion has occurred, she suggests, the DNA “is predisposed to further [changes] at those points,” changes that may lead to human disease.

    Whatever causes the duplications, they may enable scientists to chart evolutionary relationships. For example, because many primates are so closely related, molecular evolutionists have been unable to sort out the details of the primate family tree based on single-base differences in their genes. But both Eichler's and Korenberg's work suggest that tracking these much larger DNA changes may help. Eichler and his colleagues found that chimps have two copies of the adrenoleukodystrophy gene, neither of them on the same chromosomes as the human genes, while gorillas have several copies, some of which correspond to the chimp duplications while others resemble the human ones. In accordance with the current leading model, this suggests that a gorillalike ancestor gave rise to both chimps and people, with both descendants losing some of the duplicated sections of DNA as they evolved.

    Similarly, Korenberg has been studying differences between the primates at the DNA inversion sites, looking to see how many copies of duplicated patches exist in different species. Because these duplicated genes are more likely to evolve, she expects to find some of the genes that distinguish chimps from people, say, or gorillas from chimps. Indeed, says evolutionary biologist James Lake of the University of California, Los Angeles, Eichler's and Korenberg's data “may solve this long-standing problem of the relationship of humans, chimps, and gorillas.”

  5. GEOPHYSICS MEETING

    Scientists Ponder Deep Slabs, Small Comets, Hidden Oceans

    1. Richard A. Kerr

    BOSTON—More than 3500 geophysicists gathered here from 26 to 29 May for the annual spring meeting of the American Geophysical Union, a record for this meeting. The claim that tiny comets rain down on Earth got plenty of critical attention, but oceans within satellites of Jupiter and the churning of Earth's rocky mantle drew notice too.

    A Stagnant Deep Mantle?

    Middle ground may be emerging in the deep Earth. One faction of earth scientists, trying to explain the mix of elements in surface rocks, has long insisted that rock from the deep mantle must be sealed off from shallower rock by a barrier 660 kilometers down. Another faction has argued that the viscous rock of the mantle mixes from top to bottom—and last year this group seemed to gain the upper hand: Images derived from seismic waves showed that great slabs of surface rock dive into the deep Earth and stir the whole mantle from top to bottom (Science, 31 January 1997, p. 613).

    Now both factions may have to get used to a partly mixed mantle. By refining the images that last year pointed to whole-mantle mixing, seismologists Rob van der Hilst and Hrafnkell Kárason of the Massachusetts Institute of Technology (MIT) suggest that slabs pass through a distinct layer in the deepest mantle, leaving it largely unmixed. “Whole-mantle [stirring] may be too simplistic,” says mineral physicist Carl Agee of Harvard University. “There may be places that are not very well mixed.”

    Slabs take the plunge. Seismic imaging shows slabs sinking through the midmantle (top, blue) and piled on the mantle floor (bottom), but in between they break up, perhaps as they punch through a layer of isolated rock. (Source: van der Hilst and Kárason/MIT)

    That's certainly what geochemists have long believed. They have compared the amounts of various isotopes of helium, potassium, lead, and argon measured at the surface with what planetary theorists believe the newborn Earth must have contained. To explain the disparity, geochemists inferred that the deep mantle must hold isolated reservoirs of pristine material, which only mix to the surface over billions of years.

    Last year's seismic images seemed to leave little room for such reservoirs, however. Van der Hilst and his colleagues had compiled data on millions of seismic waves that had crisscrossed the mantle. Because the waves' travel time from earthquake to seismograph depends on the rock's temperature—the hotter the rock, the slower the speed—and composition, they could turn the seismic data into images showing the slabs of cooler surface rock that plunge into the mantle at deep-sea trenches. The images showed the slabs going right through the 660-kilometer “barrier.” But the lowermost mantle still looked fuzzy and indistinct. For a clearer view, van der Hilst and Kárason have now added more waves that probe the lowermost mantle, including those that bounce off the underside of the surface, those that skim the top of the molten outer core at the base of the mantle, and those that pass through the outer core.
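
    The principle behind such imaging is that a wave crossing cold, seismically fast slab material arrives measurably early. A toy sketch of the effect (the layer thicknesses, wave speeds, and 1% slab anomaly are rough illustrative values, not the MIT model):

    ```python
    # Travel time of a vertical ray through a layered mantle model; cooler
    # (seismically faster) slab material makes waves arrive measurably earlier.
    layers_km = [660, 1000, 1230]            # layer thicknesses (illustrative)
    v_ambient = [10.0, 11.5, 13.0]           # P-wave speeds, km/s (rough values)
    v_with_slab = [10.0, 11.5 * 1.01, 13.0]  # cold slab: mid-mantle layer ~1% faster

    travel_time = lambda v: sum(d / s for d, s in zip(layers_km, v))
    print(f"early arrival from slab: {travel_time(v_ambient) - travel_time(v_with_slab):.2f} s")
    ```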

    The sharper seismic view confirms that two great slab sheets, one hanging beneath the Americas and the other beneath southern Eurasia, plunge deep into the mantle, reaching depths of at least 1600 kilometers. And as in the earlier images, the bottom 300 kilometers of the mantle seems to hold slab material that has accumulated in broad piles on the mantle floor. In both the old and new images, the slabs seem to vanish between these two layers. Some researchers blamed the disappearance on poor resolution in the earlier images. But the new images suggest that the massive slabs really do break up at depths of about 1800 to 2000 kilometers, says van der Hilst, melting away into smaller scale features, only to reappear near the mantle floor. “Something funny does happen” about 2000 kilometers down, says seismologist Kenneth Creager of the University of Washington, Seattle. “It's suggesting some new phenomenon.”

    To many researchers, a likely possibility is that the lowermost 1000 kilometers of the mantle is the geochemists' long-sought storehouse of ancient, pristine rock. It “seems an obvious place to put geochemical reservoirs,” says geophysicist Bradford Hager of MIT. Under the right conditions, a dense, viscous layer below 2000 kilometers could resist mixing, according to modeling work by Hager and geophysical modeler Louise Kellogg of the University of California, Davis. In their model, slabs could plow through to the bottom of the mantle and plumes of hot rock could rise from near the core toward the surface. Because of the high viscosity of the layer, this traffic wouldn't unduly disturb it—or disrupt the delicate compromise between geophysicists and geochemists.

    “Atmospheric Holes” Rejected

    A year ago at the AGU meeting, space physicist Louis Frank of the University of Iowa in Iowa City started something of a snowball fight. He revived his decade-old theory that small comets—house-sized balls of fluffy ice—strike Earth 25,000 times a day, and he argued that dark spots seen in new satellite images of the upper atmosphere are watery traces of these snowball impacts. Some researchers agreed that the spots, or “atmospheric holes,” might well be real, but almost no one took small comets seriously as the cause. Frank and his critics have been lobbing arguments back and forth all year. Now, after a comet shower of criticism at this year's meeting, the dark spots themselves may be vanishing into oblivion.

    “The small-comet business is a very dead horse,” says planetary scientist Thomas Donahue of the University of Michigan, Ann Arbor, who last year was impressed by the evidence for the spots. Now, he says, “the case for [spots] gets weaker and weaker as time goes on.”

    The images on which Frank based his claims last year (Science, 30 May 1997, p. 1333) came from two cameras aboard the Polar satellite, his own and one operated by space physicist George Parks of the University of Washington, Seattle. Parks has already concluded that the dark spots are simply instrumental noise (Science, 14 November 1997, p. 1217). And at the meeting, two independent teams—one led by Parks and another by space physicist Forrest Mozer of the University of California, Berkeley—presented new analyses that underscore that conclusion.

    If the dark spots are truly blobs of water vapor 1000 kilometers or so above Earth, both groups reasoned, their images as seen by Polar should swell and shrink by a factor of 100 as the satellite swings close to Earth and then away again on its highly elliptical orbit. But if the spots are noise generated within the cameras, their apparent size should not change.
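
    The test is pure geometry: a blob of fixed physical size subtends an angle that shrinks in proportion to its distance from the camera, so a tenfold change in range changes the spot's apparent area a hundredfold. A sketch with hypothetical numbers (the blob size and camera ranges below are illustrative, not Polar's actual orbit values):

    ```python
    import math

    def apparent_diameter_deg(size_km, range_km):
        """Angular diameter of an object of fixed size seen from a given range."""
        return math.degrees(2 * math.atan(size_km / (2 * range_km)))

    blob_km = 100                  # hypothetical water-vapor cloud size
    for r_km in (5_000, 50_000):   # hypothetical near and far camera ranges
        print(f"range {r_km} km: {apparent_diameter_deg(blob_km, r_km):.3f} degrees")
    # Apparent area scales as diameter squared: 10x range change -> 100x area change.
    ```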

    Neither Parks nor Mozer could find any hint that the spots changed size with the satellite's altitude. That finding “is robust and devastating,” says Mozer. “The data are completely consistent with an internal source” within the camera. Indeed, the spots could all be accounted for as noise produced by a camera's image intensifier, which can brighten an image erratically, according to modeling work by Mozer and space physicist James McFadden, also of Berkeley.

    But Frank is holding fast to his ideas. At the crowded session on small comets, he argued that Mozer's and Parks's analyses are flawed. The speed at which the holes cross the camera's field of view would also vary with spacecraft altitude, he said. That would sometimes make the holes hard to detect and so skew analyses like Parks's and Mozer's. Using his method of measuring holes, he showed that they get somewhat larger when viewed from lower altitudes. But Mozer countered that the size change was too small for the spots to be real.

    Undaunted, Frank continued, citing what he called supporting evidence. For example, he said that the spots are most abundant in images taken of the leading side of Earth, where Earth's orbital motion concentrates meteor impacts the way a moving car drives bugs onto the windshield. But others challenged that idea in a heated exchange during the question period. Unlike meteors, small comets are supposedly in orbits similar to Earth's and therefore overtaking Earth from behind. Their impacts should therefore peak on the trailing side, said longtime small-comet critic Alexander Dessler of the University of Arizona, Tucson. Frank countered that Earth would gravitationally focus the comets back to the leading side.

    “I don't agree,” chimed in planetary scientist Alan Harris of the Jet Propulsion Laboratory in Pasadena, California, as he projected a diagram of the orbital situation on the screen.

    “These cartoons are meaningless,” retorted Frank.

    “I have a Ph.D. in orbital mechanics,” Harris snapped back, in a jab at Frank's Ph.D. in the physics of auroras and magnetospheres. “I think I can speak authoritatively.”

    The acrimony suggests that the skeptics have taken over once more. “The ball's back in Lou's court,” says Donahue, who says Frank should detail his analysis in print. Further analysis of Polar data by others is not likely soon, says Donahue: “Most of the community regards it as a waste of time.”

    An Ocean for Old Callisto

    Callisto has been the odd moon out of Jupiter's four large satellites. The other three—Io, Ganymede, and Europa—have revealed clear signs of geologic activity: erupting volcanoes on Io, a magnetic field generated by a churning molten core on Ganymede, and, most exciting of all, a tortured, icy surface and likely subterranean ocean on Europa. In contrast, Callisto looked utterly inert, inactive inside and out for billions of years. But now, it seems, Callisto has a magnetic field—and even an ocean—of its own.

    At the meeting, researchers analyzing data from the Galileo spacecraft reported strong evidence of a magnetic field induced in an ocean beneath Callisto's icy surface by Jupiter's own powerful field. “This is an astonishing result,” says planetary physicist David Stevenson of the California Institute of Technology in Pasadena, because “Callisto looks dead.”

    Finding geophysical stirrings and liquid water beneath an ancient, unchanging surface required a bit of inference. Galileo team members led by Margaret Kivelson of the University of California, Los Angeles, had already suspected that magnetic signatures picked up during passes near both Europa and Callisto (Science, 2 January, p. 30) might have been induced in hidden oceans by Jupiter's massive magnetic field. That tilted field wobbles like a tipsy top as the planet rotates. In a salty ocean—which is a good conductor—the moving field would induce electrical currents, which in turn would create a magnetic field oriented roughly opposite to Jupiter's. Galileo seemed to have found such fields on its first passes by the two moons in 1996 and 1997. That was no great surprise for Europa, whose jumbled, icy surface shows signs of liquid water not far beneath, but the implications for stable Callisto were shocking. Kivelson herself remained cautious about an ocean, as did her colleagues.

    Now those doubts are falling away. “We think a subsurface ocean is likely” on Callisto, Kivelson said. During the latest Galileo flyby of Europa on 29 March, Europa continued to behave as an ocean-bearing moon should, reinforcing the argument for Callisto. And researchers such as space physicist Frances Bagenal of the University of Colorado, Boulder, are now particularly impressed by data collected late last year. When Galileo caught the moon in the opposite hemisphere of Jupiter's magnetic field, Callisto's field had flipped, just as an induced field should. The case for a subsurface ocean is “clear-cut,” says Bagenal.

    Europa's ocean has made it a tantalizing candidate for life, and planetary scientists are now beginning to wonder about the implications of an ocean for Callisto. Presumably, internal heat from radioactive decay is responsible for melting some of Callisto's ice to water, a key ingredient for life. Exactly where the ocean lies remains uncertain. Kivelson is putting it near the surface, far from the moon's inner fires, while Stevenson expects it to be at least 100 kilometers down. Geologists are also wondering how a subterranean ocean might have shaped surface geology during the past 4 billion years. Clearly, Callisto is the odd moon out no longer.
