News this Week

Science  04 Apr 1997:
Vol. 276, Issue 5309, pp. 30
  1. Life on Mars

    Martian ‘Microbes’ Cover Their Tracks

    1. Richard A. Kerr

    Researchers analyzing tiny putative microbes are finding it surprisingly hard to prove or deny past life on the Red Planet, but hope a wide net of studies will yield results soon

    HOUSTON--The topics at the annual Lunar and Planetary Science Conference (LPSC) are usually strictly nonbiological. This year, however, some tiny traces of putative life muscled all the rocky planets, asteroids, and even Galileo's stunning views of Jupiter's icy moons from center stage. From a tangle of battling journal papers, talks, press releases, and hallway scuttlebutt, the planetary scientists who gathered here were trying to make sense of the extraordinary proposition published last August in Science: that a meteorite from Mars contains traces of ancient life on the Red Planet.

    Since then, a dozen or so research groups have searched for clues to temperatures at which the potato-sized meteorite's minerals formed and puzzled over tiny “nanofossils” that might be analogous to terrestrial bacteria. At the LPSC, the conflicting findings dueled to a stalemate. To the authors of the original paper, that was good news. “I've been very pleased that in the 7 months since our paper [was published], we have not had a showstopper,” says geochemist Everett Gibson of NASA's Johnson Space Center in Houston, who with David McKay of JSC and others argued in the Science paper that the best explanation of the evidence from meteorite ALH84001 was that life existed eons ago on the Red Planet. “We feel stronger about those conclusions now,” says Gibson. But by meeting's end, most others were less sanguine, arguing that life on Mars has gained little support from other groups.

    Although some research suggests that the tiny carbonate globules at the heart of the debate formed in an environment conducive to life, fine-scale measurements of isotope ratios have found hints of high temperatures--too high for life--when the microbes were supposedly at work in the rock. And crystals of an iron mineral, magnetite, also seem to require high temperatures--and may even be masquerading as the putative microbial fossils.

    After hearing all this and much more, attendees asked by Timothy Swindle of the University of Arizona to put a number to “the chances that the features identified by McKay et al. are the results of martian life” were barely lukewarm in their support: The median of the 120 written responses was a mere 20%. Yet, few are ready to write off the idea entirely. “Anyone who tells you they know the answer is probably overstating their case,” says isotope geochemist John Valley of the University of Wisconsin, Madison.

    Each of the five lines of evidence in the Science paper has plausible biological and inorganic explanations, and proving one or the other has turned out to be surprisingly difficult. In part, that is because the nanometer-scale mineral grains mixed in with the putative fossils had more complex histories than expected. Add that to the analytical problems of working with very tiny structures--“this sample is pushing the envelope,” as Valley puts it--and in hindsight it seems clear that no one should have expected a decisive verdict so soon. But the LPSC debates do suggest how a new look at the putative nanobacteria might settle the issue--perhaps within the next year.

    Magnetite mysteries

    Because LPSC is a gathering of physical scientists, the iconic images of putative “nanofossils” got little airing, although two wall-size murals did hang in the conference center's lobby. Instead, much of the discussion centered on two lines of evidence that illustrate how hard it is to prove whether the minerals associated with the fossillike features originated in living processes or dead geologic ones.

    One such potentially biogenic mineral was magnetite. McKay and his colleagues had found innumerable grains of this iron oxide, measuring about 50 nanometers across, within the 50-micrometer globules of carbonate called rosettes. Impressed by the grains' resemblance in size, shape, and crystalline regularity to magnetite produced within some terrestrial bacteria, the researchers argued that bacteria also produced the meteorite's magnetite--martian bacteria, that is. But late last year, John Bradley of the Georgia Institute of Technology, Ralph Harvey of Case Western Reserve University in Cleveland, and Harry McSween of the University of Tennessee countered that argument. In Harvey's words, “We did find the kind of magnetite described by the McKay group, but we also found a whole zoo of different [shapes] of magnetite.”

    In their view, this diversity of shapes points to an inorganic formation process that, by chance, also produced grains resembling biogenic magnetite. What's more, the group said, many of these forms contain crystal defects characteristic of a high-energy environment--temperatures greater than 500°C.

    At the meeting, the JSC group and its supporters tackled this argument but couldn't quite make it go away. Paleomagnetician Joseph Kirschvink of the California Institute of Technology noted that biogenic magnetite comes in a wide variety of shapes and has defects too, at least when bacteria alter their environment and so induce magnetite to form outside, rather than inside, the bacterial cell. But at the meeting, Harvey highlighted a particular defect called a “screw dislocation”--an offset of one or more atomic layers formed as the crystal grew--that has never been linked to biogenic magnetite. “The defects we're showing are not minor defects,” he says. “That's going to be a very hard issue for them to overcome.”

    In a potentially even more damning line of attack, Bradley, Harvey, and McSween argued that the JSC group's putative “nanobacteria” are nothing more than grains of magnetite. They have analyzed ALH84001 with transmission electron microscopy for a year and seen nary a nanofossil--but have found many linear “whiskers” of magnetite that strongly resemble the “nanofossils” in size and shape, says Harvey. The JSC group, using scanning electron microscopy (SEM) and a different sample preparation technique, has found no whiskers, admits Kathie L. Thomas-Keprta of Lockheed Martin in Houston--raising the possibility that a whisker seen in SEM looks like a “nanobacterium.”

    And this is one question that may be answered in the coming months: Thomas-Keprta announced that she has developed a technique for plucking a single “nanobacterium” from the rock and slicing it open. Such a look inside one of the structures should settle whether it is merely magnetite or not.

    Turmoil over temperature

    A plethora of papers at the meeting examined another troubling issue: the temperature at which the carbonate rosettes formed. That is relevant because McKay's team suggests that although the rosettes weren't produced by microbial metabolism, the bacteria altered the chemical environment in such a way as to induce carbonate formation. And assuming martian microbes had the same heat tolerance as earthly ones, that process must have happened at temperatures below about 115°C. Cosmochemists thought it would be straightforward to learn the formation temperature, because it affects both isotopic ratios and chemical composition.

    Before the meeting, however, estimates of the formation temperature of ALH84001's carbonate were all over the map. McKay and colleagues cited a low-temperature estimate made in 1994 by one of their co-authors, Christopher Romanek of the Savannah River Ecology Lab in Aiken, South Carolina. He measured the ratio of oxygen-18 to oxygen-16 of the meteorite's total carbonate. At higher temperatures, the carbonate would have picked up less oxygen-18; Romanek concluded from the higher ratios he measured that it formed below 80°C. But in July 1996, just before the Science paper was published, Harvey and McSween studied the abundance of magnesium, iron, and calcium in the carbonates--and concluded that they formed above 650°C.

    Now, new chemical and isotopic data presented at the LPSC and in recent papers challenge the assumption behind both results. Both groups had assumed that fluids carrying dissolved elements from the meteorite's dominant mineral, orthopyroxene, had undergone thorough chemical exchange with the carbonates; in other words, that the carbonate was in chemical and isotopic equilibrium with the rest of the meteorite. New evidence showed that it wasn't--and opened the way to a new set of conflicting temperature estimates.

    On the low-temperature side is isotope geochemist Valley. “We've not found evidence for equilibrium in this sample,” he told the meeting. “There is instead rampant disequilibrium.” By blasting atoms from the carbonate with a 30-micrometer-wide beam of ions and running the ionized atoms through a mass spectrometer, Valley found a range of oxygen-isotope compositions of 11.5 to 20 parts per thousand, as he and his colleagues reported in the 14 March issue of Science (p. 1633). If the carbonates had been in equilibrium with the surrounding rock, they would have had the same isotopic composition wherever they were analyzed. That means that they formed either at low temperatures or too quickly to reach equilibrium. Valley favors low temperatures because the delicate alternation of magnesium-rich and iron-rich carbonates across the rosettes seemed unlikely to have formed rapidly. A companion paper by Kirschvink and others (p. 1629) found that two mineral grains retained distinct magnetic field orientations--a sign that the rock was never strongly reheated--and therefore also suggested fairly low temperatures.
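
    The oxygen-isotope figures here are expressed in standard delta notation: the deviation of a sample's oxygen-18/oxygen-16 ratio from a reference ratio, in parts per thousand (per mil). A minimal sketch of the bookkeeping, using the conventional SMOW reference ratio; the sample value is invented for illustration:

    ```python
    # Delta notation behind the oxygen-isotope figures: the deviation of a
    # sample's 18O/16O ratio from a reference ratio, in parts per thousand.
    SMOW = 0.0020052  # conventional 18O/16O ratio of Standard Mean Ocean Water

    def delta_18O(r_sample, r_reference=SMOW):
        return (r_sample / r_reference - 1.0) * 1000.0

    # An invented sample whose ratio sits 1.5% above the reference:
    print(f"delta-18O = {delta_18O(SMOW * 1.015):+.1f} per mil")  # +15.0
    ```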

    At the meeting, members of a team led by Laurie Leshin of the University of California, Los Angeles, reported that when they analyzed their own sample of ALH84001, they found an even wider range of oxygen-isotope compositions. In addition, isotopic variations correlated with variations in calcium composition, something Valley couldn't see because his rice-grain-size sample didn't contain the full range of carbonate compositions.

    That correlation over a wide range of compositions suggests a very different interpretation to Leshin--a high-temperature origin. “We have a couple of possibilities for environments” under which the carbonates could have formed, she said, and both make life quite unlikely. In one, mineral-laden water flowing through the rock deposited the carbonates, and their composition varied as the temperature swung by up to 250°C. In the other, pockets of trapped fluid--mostly carbon dioxide, an unlikely place for life--deposited carbonate whose composition varied as deposition depleted the fluid.

    This latter scenario dovetails with an earlier talk by Edward Scott, Akira Yamaguchi, and Alexander Krot of the University of Hawaii. Based on inspection of the micrometer-scale distribution and composition of mineral grains in the meteorite, they suggested that the shock of an impact--which all agree ALH84001 felt more than once in its existence--melted small parts of the rock and injected the resulting fluid into impact-induced cracks. Within seconds, the melt would have crystallized into the carbonate seen today without the least bit of life being involved.

    This display of dueling isotopes and temperatures left the LPSC audience torn, and few partisans were persuaded to change sides. Asked what he made of the burgeoning isotopic debate, Harvey replied: “I'm confused, in a very happy way,” given that the more data, the better. To help end the confusion, researchers also plan to analyze isotopes of carbon that might constrain temperatures further. If LPSC attendees learned one thing from the give-and-take of the meeting, it was that it will take a broad web of evidence--chemical, mineralogical, and microscopic--to yield a decisive yea or nay on whether ALH84001 holds signs of past life. NASA is now preparing to divvy up the meteorite among more researchers to open the way to the broad-based attack that might finally yield a verdict.

  2. Mars Images: One Fine Day on Mars

    1. Richard A. Kerr

    D. Crisp et al./JPL

    The Hubble Space Telescope took its sharpest image ever of the planet Mars last month. Astronomer David Crisp of the Jet Propulsion Laboratory in Pasadena, California, and his colleagues acquired the Hubble image on 10 March in part to scout out what the Pathfinder spacecraft will find when it lands on Mars on 4 July. A martian dust storm could play havoc with the Pathfinder lander and the small rover it will place on the surface, both of which depend on full sunlight falling on their solar power panels. As northern hemisphere spring begins on Mars, however, the Hubble's Wide-Field Planetary Camera-2 revealed nary a dust storm in sight. White clouds made of water-ice shroud the giant impact basin Hellas near the bottom of the image as well as several great volcanoes on the right. Meanwhile, NASA officials have given the go-ahead to two robotic missions to the Red Planet in 2001; the robots will do some science but will also scout out conditions, such as radiation levels, of vital interest to any future human visitors.

  3. Paleoanthropology: Tracing the Identity of the First Toolmakers

    1. Ann Gibbons

    At dozens of archaeological sites in Africa, razor-sharp stone flakes and round hammerstones mark the handiwork of anonymous craftspeople who forged tools as early as 2.6 million years ago. But researchers have been hard pressed to determine which of the half-dozen hominid species then roaming Africa made these artifacts. Were the first toolmakers one of our ancestors, or some more distant, dead-end branch of our family tree? Three years ago, State University of New York, Stony Brook, anatomist Randall Susman proposed a rule of thumb to solve this mystery--an easy test based on the width of a bone in the thumb. He argued that this measure identified whose hands could grip objects with the precision needed for toolmaking (Science, 9 September 1994, p. 1570). His influential paper concluded that by 2 million years ago, most hominids could make tools; it also prompted speculation that the oldest such tools were made not by our relatively big-brained ancestors but by their small-brained vegetarian cousins. But new data suggest that Susman's test, like all rules of thumb, may have a wide margin of error that could lead to erroneous conclusions.

    In new work presented this week at the annual meeting of the American Association of Physical Anthropologists in St. Louis, two teams of scientists described a more complex test--developed by watching humans actually make tools--that may upset previous ideas about the identity of the ancient toolmakers. After applying her new test to some ancient hands, Arizona State University physical anthropologist Mary Marzke concludes--in contrast to Susman's work--that even 3.3 million years ago, the famous human ancestor nicknamed “Lucy” was beginning to acquire the needed hand structure for toolmaking. She adds that the acquisition of this skill involved many parts of the hand and not just the thumb, and that “the capability for toolmaking arose gradually.”

    Susman set out to determine which of the known species of australopithecines and early Homo might have made the Oldowan tools, which first appear in Ethiopia about 2.6 million years ago; similar tools are found until about 1 million years ago. He noted that humans, but not most apes, have a bone at the base of the thumb, the metacarpal, that has a broad head in relation to its length. He also found that the human thumb has three muscles not seen in apes, including the flexor pollicis longus (FPL), whose thick tendon is linked with a broad metacarpal head and which is vital for pinching objects between the pad of the thumb and another finger--a grip essential for toolmaking.

    Assuming that the wide metacarpal head indicated toolmaking ability, Susman measured bones from four extinct hominid species. He got a surprise: All the hominid species from about 2 million years ago, including a small-brained vegetarian called Paranthropus robustus, had wide metacarpal heads. But Lucy's narrow bone head failed the test, suggesting that her species, Australopithecus afarensis, was not a toolmaker.

    Susman's simple method appealed to many. Biological anthropologist Leslie Aiello at University College, London, wrote at the time that the method offered “an apparently foolproof way of determining which of our early ancestors would have had hands that functioned in a way similar to our own.” But some were skeptical; at Northwestern University, for example, paleoanthropologists Mark Hamrick and Sandra Inouye showed that the wide metacarpal heads of mountain gorillas also passed Susman's test--although these apes clearly were not toolmakers.

    Hamrick, now at Duke University, and Marzke have both been tackling the problem in a more empirical manner. Marzke got Indiana University archaeologist and tool expert Nicholas Toth and two anthropologists to make stone tools at the Mayo Clinic; she wired their hands with electromyography sensors to measure which muscles were most stressed and videotaped the toolmaking process. She found that a complex group of muscles in the pinkie, thumb, and palm was used in toolmaking--but that the thumb muscle and precision grip highlighted by Susman were not heavily used. Hamrick, whose team at Duke did similar studies on nine people, corroborated that the precision grip wasn't a major factor.

    Working with Mayo Clinic hand surgeon Ronald Linscheid, Marzke also dissected human and ape hands and found that the muscles used in toolmaking leave an elaborate signature on the skeleton. For example, in humans the shape of the thumb joints reflects the fact that the small muscles of the thumb are more efficient. And the bones at the tips of the fingers have spines marking where ligaments hold the fingers steady during firm pinches.

    Using these data, Marzke has developed eight criteria, described in a recent article in the American Journal of Physical Anthropology, for identifying toolmaking capability. Although they incorporate Susman's criteria, including a well-developed FPL muscle, she says “no single skeletal feature can be relied upon as a safe indicator of toolmaking,” because any one feature could have evolved for other behaviors. Thus, she says, Susman's conclusions are suspect. Aiello, among others, applauds these refinements, and now agrees that a strong thumb “is not the only thing that is important.”

    So far, Marzke has analyzed a small collection of fossils and has found that A. afarensis met three of the eight criteria. That means, she says, that this species wasn't actually a toolmaker but that other behaviors, such as throwing stones, were changing its hands in ways that would later be used for toolmaking. Of the 2-million-year-old species studied by Susman, she found only one--H. habilis--that met all her criteria; fossils of the other species were too fragmentary for her to accept or reject them as toolmakers.

    And that's just the problem with her approach, says Susman: It is highly unlikely that researchers will ever find all the bones needed to test a species on all eight criteria. “What's the value of her criteria if they're inapplicable to fossils?” he asks. He also points out that Toth and other human volunteers may not make tools the same way early humans did: “Nick Toth isn't an australopithecine.”

    But Marzke and others already have permission to study the hand bones of P. robustus in South Africa, as well as more recent fossils of Homo species from China. Says Hamrick, who is planning follow-up work on one of Marzke's criteria: “Her work sets a stage where future researchers can look for those traits and test them.” So, the mystery of who made the first stone tools may soon be resolved by a show of hands--from the fossil record.

  4. Particulate Matter: Getting a Handle on Air Pollution's Tiny Killers

    1. Jocelyn Kaiser

    Mention particulate pollution, and most people think of soot from smokestacks and diesel buses. But there is another source of particles in urban air that is just as important: gases and vapors that come from evaporated or partially burned gasoline and solvents in paints, among other sources. In the presence of sunlight, these gases react with nitrogen oxides and other chemicals in the air and condense onto bits of dust to form very fine particles. These create a major public health problem--and a vexing puzzle for scientists, who have been hard pressed to figure out how these particles form and interact.

    Dark matter.

    On a smoggy day in Los Angeles (above), up to 20% of the fine particles in the air come from organic compounds condensing on bits of dust.

    Uniphoto

    Results on page 96 of this issue may dispel some of the uncertainty. A team of engineers and atmospheric chemists describes how they simulated particle-forming processes in the laboratory to devise a mathematical model that predicts how hydrocarbons from gasoline are transformed into particles. The work, by Jay Odum and others in John Seinfeld's lab at the California Institute of Technology in Pasadena, should help regulators get a better handle on how to reduce levels of fine particles, especially in areas where motor vehicles are a big contributor to air pollution. “It's a real move forward,” says Roger Atkinson, an atmospheric chemist at the University of California, Riverside.

    Researchers and regulators began paying close attention to fine particles in urban air some 5 years ago, when studies in cities such as Steubenville, Ohio; Provo, Utah; and Philadelphia showed that hospital visits for asthma and other respiratory illnesses rose and fell with the levels of these particles. On a smoggy day in Los Angeles, up to 20% of the mass of particulate matter of less than 2.5 micrometers in diameter can come from hydrocarbons in evaporated and burned gasoline, among other sources.

    This summer, the U.S. Environmental Protection Agency (EPA) is planning to issue its first-ever limits on the levels of these particles, called PM2.5. But to come up with specific steps to meet those goals--proposed measures range from limiting backyard barbecues to fitting diesel vehicles with new emissions-control equipment--regulators need good models for predicting the contribution of different hydrocarbon compounds in particle formation. That is what the Seinfeld group's work may help them to do.

    In earlier laboratory work with single hydrocarbons such as xylene, the researchers developed the first equation to describe accurately the process by which hydrocarbons oxidize and condense on seed particles: More condense if there is a higher mass of seed particles in the air. In the lab, they also found that different hydrocarbons did not interact much, so the contributions of the various compounds in the test chamber could simply be added up to predict the total level of particles. Finally, they noticed that in a mixture of hydrocarbons containing heavier aromatics and lighter compounds, only the aromatics seemed to attach themselves to soot. Together, these findings made it possible for the researchers to predict the amount of particles that would form from hydrocarbons, at least in the neat world of a test chamber.
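
    The flavor of that equation can be captured in a short sketch. What follows is a generic absorptive, two-product partitioning model of the general kind the text describes--not the group's published parameterization--and the alpha and K coefficients are illustrative placeholders:

    ```python
    # Sketch of an absorptive two-product partitioning model. For each product,
    # alpha is a mass-based yield coefficient and K a partitioning coefficient
    # (m^3/ug); both are illustrative placeholders, not published values.

    def aerosol_yield(m0, products):
        """Fractional aerosol yield at absorbing particle mass m0 (ug/m^3)."""
        return sum(alpha * k * m0 / (1.0 + k * m0) for alpha, k in products)

    aromatic_a = [(0.03, 0.20), (0.17, 0.004)]  # hypothetical xylene-like fuel
    aromatic_b = [(0.07, 0.05), (0.10, 0.002)]  # hypothetical toluene-like fuel

    m0 = 50.0  # particulate mass already in the air, ug/m^3
    # The two structural features the article highlights: yield rises with the
    # particle mass present, and different hydrocarbons' contributions add.
    total = (100.0 * aerosol_yield(m0, aromatic_a)
             + 100.0 * aerosol_yield(m0, aromatic_b))
    print(f"particulate formed from 100 ug/m^3 of each: {total:.1f} ug/m^3")
    ```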

    But that didn't mean their equations would work for the much more complex mixture of hydrocarbons in urban air. For that, the researchers decided to investigate particle formation from hydrocarbons in gasoline--a soup of hundreds of compounds that the researchers say serves as a good proxy for the cocktail of exhaust products, evaporated gasoline, paint solvents, and other humanmade hydrocarbons in city air. “We [hoped that] if we could do two compounds, we would be able to do 200,” Odum says.

    The researchers tested 12 different kinds of gasoline (each with different mixtures of aromatics) in a gigantic smog chamber--a Teflon bag about the size of a two-car garage, which they filled with seed particles (ammonium sulfate); the gasoline of interest; nitrogen oxides; and a chemical called a photoinitiator, which gets particle-forming reactions started. While the bag baked in the sun, they sampled its contents at regular intervals with gas chromatographs and mass spectrometers to track the transformation of hydrocarbons into particles.

    Through several test runs with the different gasolines, the researchers found that they could, indeed, predict from the gasoline's aromatic content the amount of particulate matter that would form. Moreover, just as with the simple mixtures of hydrocarbons tested in the lab, the contributions of the hundreds of hydrocarbons in gas turned out to be additive.

    Other researchers caution that Seinfeld's group still needs to show that its model will work with the whole gamut of particle-forming hydrocarbons in urban air, especially because in many eastern cities, compounds from pine trees and other vegetation can dominate. Still, “this is a clear breakthrough in our understanding,” says Steven Japar, an atmospheric chemist at Ford Motor Co.

    According to John Bachmann of EPA's Office of Air Quality Planning and Standards, the agency expects to use Seinfeld's results to improve its latest air-pollution models and to help figure out, for instance, whether gasoline with a lower aromatics content could help reduce PM2.5 in some areas. “These are the kinds of advances we really need [in order] to do a better job of modeling and specifying control alternatives,” Bachmann says.

    Seinfeld's group also found some unexpected good news. To reduce ground-level ozone, an important ingredient in smog, regulators in recent years have required gasoline manufacturers to remove some of the aromatics from their products. This has had the side benefit of cutting down on particulate matter. Being able to describe the links between gasoline and the formation of ozone and particles will be of great help in developing cost-effective pollution-control strategies, Bachmann says.

    Seinfeld is now weaving this new hydrocarbon model into an overall computer model for urban airsheds--one that includes sulfates and nitrates, which also form particles in the air. He hopes that having cleared up this murky spot in the air pollution picture, the team will be able to bring the whole thing into sharper focus.

  5. Biology: Fractal Geometry Gets the Measure of Life's Scales

    1. Nigel Williams

    Living organisms come in a vast range of sizes--from microbes to whales, they span at least 21 orders of magnitude in mass. Biologists have long been intrigued by this startling array of bodily dimensions, and for more than a century they have been trying to figure out how body size and physiology are related. What they have come up with so far is a big conundrum. Metabolic rate, for example, varies in proportion to the 3/4 power of an organism's mass--the bigger the creature, the slower its metabolism, gram for gram--and similar relationships have been found for variables such as life-span, age at first reproduction, and duration of embryonic development (1/4, 3/4, and -1/4 powers of mass, respectively). The common factor in all these relationships--the 1/4 power, which seems to hold in almost all organisms from microbes to higher plants and animals--has biologists stumped: 1/3 powers would be much more logical if metabolic rate reflected only geometric constraints of body size.
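
    To make the conundrum concrete, here is a tiny numerical illustration of 3/4-power (Kleiber) scaling; the normalization constant and the body masses are round placeholder numbers, not measurements:

    ```python
    # Illustrative 3/4-power (Kleiber) scaling: metabolic rate B = b0 * M**0.75.
    b0 = 3.4  # normalization in watts per kg**0.75 (placeholder value)

    for name, mass_kg in [("mouse", 0.03), ("elephant", 3000.0)]:
        b = b0 * mass_kg ** 0.75
        print(f"{name:>8}: {b:9.1f} W total, {b / mass_kg:8.2f} W per kg")

    # A 100,000-fold jump in mass raises total metabolic rate only about
    # 5,600-fold (1e5 ** 0.75), so the per-gram rate falls roughly 18-fold:
    # gram for gram, the bigger animal burns energy more slowly.
    ```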

    Branching out.

    Computer model of a vertebrate vascular system created using fractal geometry.

    M. Neumann et al./Univ. of Vienna

    Now, on page 122, a team of researchers reports a new mathematical model for living organisms that may finally help solve the puzzle. Their model--a unique combination of the dynamics of energy transport and the mathematics of fractal geometry--is still very general, but it has produced results that conform well with observations of living systems, including the enigmatic 1/4-power scaling. Other biologists are keen to test their approach. “They have really come up with something quite unexpected,” says ecologist William Calder of the University of Arizona.

    Ecologists James Brown and Brian Enquist of the University of New Mexico in Albuquerque had been butting their heads against the 1/4-power law for a long time. “It's an intriguing relationship, and we just couldn't get anywhere until we began to wonder whether the transportation of materials around the body might be a key rate-limiting process,” says Brown. The two quickly found that they needed more expertise in mathematics and physics to explore the complex mechanics of shunting supplies around bodies of different sizes. So, with the help of the Santa Fe Institute, which specializes in interdisciplinary theoretical research, they teamed up with physicist Geoffrey West of the Los Alamos National Laboratory.

    The team analyzed organisms in terms of the geometry and physics of a network of linear tubes required to transport resources and wastes through the body. Such a system, they reasoned, must have three key attributes: The network must reach all parts of a three-dimensional body; a minimum amount of energy should be required to transport the materials in a fluid medium; and the terminal branches of the networks (for example, the capillaries in the circulatory system) should all be the same size, because cells in most species are roughly the same size. A key insight came when the team realized that such a network could best be described using a space-filling, fractallike branching system. Fractal geometry, pioneered by mathematician Benoit Mandelbrot, has been used to model many seemingly complex natural structures, from snowflakes to the branching patterns of streams, by repeatedly applying a relatively simple mathematical formula.

    “Researchers previously have tended to focus on individual parts of transport systems, such as major vessels or capillary beds, and have not focused on the whole network,” says Brown. “By looking at a whole transport system, it is possible to see the fractal branching,” he adds. “Combining energetics with fractal design is a fascinating approach,” says comparative physiologist Ewald Weibel of the University of Berne in Switzerland.

    Using this approach, the team has developed a general model for the design of distribution networks that incorporates both fractal geometry and hydrodynamics. The researchers believe that their model includes the most fundamental features of a real network. They found, for example, that when they initially tried to ignore some features, such as the elastic and pulsatile nature of blood vessels, “the model gave us all the wrong answers,” says Brown. The researchers then added more detail to the model, “to incorporate special features of blood vessels and other features such as the multiple small, parallel vessels of plant vascular systems,” Brown adds. And when this was done, the model predicted values for the scaling of structural and functional variables that were in close agreement with measured values. “It's stronger than any model that has come along before,” says Calder. And to the team's delight, the model predicts 1/4-power scaling. “It's the fractal approach that gives us the 1/4-power scaling,” says Brown.

    The model makes several as yet untested predictions. For example, because the metabolic rate of single-celled organisms, like those of higher animals, also scales as the 3/4 power of mass, the model predicts that the distribution of materials within single cells will show similar geometric and physical properties to those found in multicellular organisms. It also predicts the degree of branching in a circulatory system: A whale is 10^7 times heavier than a mouse, but the new model suggests that a whale needs only 70% more branches in its circulatory system to supply its body. Brown even suggests that a fractal circulatory system, by providing an optimum means of provisioning different body sizes, may be a major factor in the evolution of such a vast array of shapes and sizes in the living world. “If fractal distribution didn't happen, would you find 21 orders of magnitude among organisms?” he asks.
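
    The whale-versus-mouse figure follows from simple logarithmic bookkeeping: if the terminal capillaries are invariant and their number scales as the 3/4 power of mass, then the number of branching levels grows only with the logarithm of mass. A back-of-envelope sketch, assuming a branching ratio of 3 and roughly 16 levels in a mouse (both figures assumed for illustration, not taken from the paper):

    ```python
    import math

    # Assumptions (illustrative, not from the paper): invariant capillaries,
    # a branching ratio of 3 at every level, ~16 branching levels in a mouse.
    branching_ratio = 3
    mouse_levels = 16
    mass_ratio = 1e7  # whale-to-mouse body mass

    # Capillary count scales as mass**(3/4); levels grow with its logarithm.
    extra = 0.75 * math.log(mass_ratio) / math.log(branching_ratio)
    print(f"extra branching levels: {extra:.1f}")                    # ~11
    print(f"relative increase: {100 * extra / mouse_levels:.0f}%")   # ~70%
    ```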

    West, Brown, and Enquist are now planning to test some of the model's predictions; they are also looking at whether the model can help explain other aspects of ecology and evolution, such as why life-span, age at first reproduction, and embryonic development time also fail to scale to the 1/3 power of mass. “Since the rates of resource use, times of life histories, abundance of individuals, and numbers of species all vary predictably with body size, scaling may be one of the most fundamental features of biological diversity,” says Brown.

    Although the model is highly generalized, the team believes it may provide the starting point for other, more detailed, models. Says Weibel, “I'm not sure their model is the final one, but it has good predictive power, and study of the deviations from it is going to be very interesting.”

  6. Muscular Dystrophy: Backup Gene May Help Muscles Help Themselves

    1. Wade Roush

    Researchers are beginning to learn that, just as a single engine can keep a passenger jet aloft if the others fail, mammalian DNA contains redundant genes that might, in a pinch, be able to stand in for their counterparts. Now, scientists are trying to use one of those genes to treat a currently incurable human genetic disease: Duchenne type muscular dystrophy (DMD), which mainly affects boys.

    Protein proxy.

    In transgenic mice, utrophin took over for its cousin, dystrophin (red), stemming muscle breakdown.

    Every year, about 21,000 male babies worldwide are born with DMD, which is caused by a genetic defect that renders them incapable of making a key muscle-strengthening protein called dystrophin. As a result, their muscle cells slowly rupture and die, usually leading to heart or respiratory failure by age 20. But recent work in mice suggests that it may be possible to correct this defect by enticing muscle cells to make more of a very similar protein, called utrophin. These results, says Ron Schenkenberger, director of research for the Tucson, Arizona-based Muscular Dystrophy Association, are “the most exciting development” in recent studies on DMD.

    If researchers can find a way to duplicate the animal results in boys with DMD, it might be possible to prevent muscle-cell loss or even repair damaged muscles. For example, researchers might be able to find drugs that can turn up the activity of an existing utrophin gene--a strategy similar to one that has already proved effective in treating another genetic disease, sickle cell anemia.

    Still, the history of efforts to come up with DMD therapies is tempering optimism. Discovery of the dystrophin gene in 1987 fueled hopes that dystrophin itself could be replenished in the muscles of those with DMD, either by injecting them with myoblasts--young, undifferentiated muscle cells from healthy donors that would then populate the muscles with normal cells--or by infecting diseased muscle cells with genetically engineered viruses carrying the dystrophin gene. But neither approach has yet panned out (Science, 24 July 1992, p. 472).

    Aware of these problems, geneticist Kay Davies of Oxford University in the U.K. decided to see whether a healthy gene could pinch-hit for dystrophin. In a similar maneuver, researchers are already replacing hemoglobin--the protein impaired in sickle cell patients--using urea compounds that activate the gene for fetal hemoglobin, which is normally turned off after birth. Davies and her team had already identified the gene for a protein that might play such a role in DMD. In 1989, they had used dystrophin's own gene sequence as molecular bait to find similar genes--and thereby hooked the utrophin gene. The proteins encoded by the two genes turned out to be extremely similar: 80% of their amino acids are identical, although they have different functions.

    Dystrophin is part of a complex of proteins that lash the fibers of muscle cells' internal skeletons, made of the protein actin, to an external support network, the extracellular matrix. These connections help buttress muscle fibers against the forces of stretching and contraction. In contrast, utrophin is found mainly at the synapses between muscles and their controlling nerves, where its role remains largely unknown.

    Davies had a tool to test her idea that utrophin could perhaps be made to stand in for dystrophin: the mdx mutant mouse, which produces little dystrophin. But first, she had to get the mice to produce utrophin in more of their muscle tissue than is usual. To do this, Davies, geneticist Jonathon Tinsley, and four other Oxford colleagues first genetically engineered a non-mdx strain to carry a condensed version of the utrophin gene attached to a promoter, or on-off switch, that would keep the gene turned on throughout an offspring's skeletal muscles. The mice carrying the utrophin transgene were then bred to mdx animals.

    Before this experiment, other muscular dystrophy researchers doubted that utrophin could fill in for the missing dystrophin. If it could, they argued, muscular dystrophy patients would naturally up-regulate the utrophin gene to produce more of the protein. Because they do not, “most people didn't think the experiment would work,” Davies recalls. But when the Oxford team examined the progeny of the two mouse strains, they found that male pups who had inherited both the dystrophin deficiency and the transgene had utrophin everywhere that dystrophin should be.

    What's more, as the group reported in the 28 November 1996 issue of Nature, the utrophin appeared to decrease sharply the muscle damage in the animals. Dying muscles release an enzyme called creatine kinase into the bloodstream, and mice with the active utrophin gene had only about one-fourth as much of this enzyme in their blood as their nontransgenic littermates. Under the microscope, moreover, muscles such as the diaphragm showed no signs of cell breakdown. The Oxford findings “are certainly promising, certainly encouraging,” says Eric Hoffman, a geneticist at the University of Pittsburgh School of Medicine who, in 1987, was a co-discoverer of the dystrophin gene.

    In two papers in the 24 February issue of the Journal of Cell Biology, the Davies lab and that of neuroscientist Joshua Sanes at Washington University in St. Louis report what may be another sign that the two proteins are interchangeable. They found that mice with their utrophin genes knocked out have only mild motor impairment; this suggests either that utrophin is not essential at neuromuscular junctions, or that dystrophin, or some unknown third member of the dystrophin-utrophin family, can substitute for it.

    Turning this apparent redundancy to the advantage of human DMD patients may take a long time, however. Utrophin will probably be no easier to sneak into muscle cells through gene therapy than dystrophin is, says Davies. Her original idea--boosting endogenous utrophin gene activity with a drug--may be more promising, but it hinges on whether researchers can find a compound that turns up utrophin production without producing intolerable side effects. Some 30 researchers discussed strategies for that search at a meeting in January at Long Island's Cold Spring Harbor Laboratory. And Oncogene Science Inc., of Uniondale, New York, is already screening for such a compound, with help from a $700,000 grant from the Association Française Contre Les Myopathies. “Within 2 years, we will know whether [up-regulating utrophin] will work,” says Davies. “Right now, it's very difficult to tell. … This is unknown territory.”

    For a detailed look at progress in several areas of tissue regeneration, please see the Special Issue beginning on page 59.

  7. Astronomy: Ancient Galaxy Walls Go Up; Will Theories Tumble Down?

    1. Tim Appenzeller

    IRVINE, CALIFORNIA--Few things are stranger than traveling far from home and finding yourself in familiar surroundings. Yet, that's just what astronomers are experiencing as they probe some of the farthest--and, hence, earliest--reaches of the universe. Speaking at a National Academy of Sciences colloquium here 2 weeks ago, Charles Steidel of the California Institute of Technology (Caltech) described new evidence suggesting that giant walls of galaxies, hundreds of millions of light-years long, may have crisscrossed the universe when it was just 15% of its present age. These structures, says Steidel, “are indistinguishable from what you see at any other distance.”

    Shrinking in the violet.

    A distant galaxy, visible through red and green filters, vanishes in the ultraviolet.

    C. Steidel et al.

    So far, Steidel's group has found just one definite wall and two strong candidates. But if the finding is confirmed--and other observers have less direct evidence pointing to similar structures in the early universe--it will push the evidence for giant groupings of galaxies back in time by billions of years. It may even indicate that the universe's largest features were there from the beginning. “It's a fantastic result,” says Neta Bahcall of Princeton University.

    It is also troubling for some theorists. “The more you push this back in time, the harder pressed the theorists become,” says Judith Cohen of Caltech, who has seen similar structures at more modest distances (Science, 18 October 1996, p. 343). If the early universe looks so similar to the modern one on large scales, then gravity hasn't resculpted it over time, which implies that it may have far less mass than theorists prefer. That's in line with evidence of a scarcity of mass in the nearby universe but at odds with one measurement of how fast the cosmic expansion rate is changing, which hints at a much higher density of matter (see next page).

    Steidel and his colleagues--Kurt Adelberger and Melinda Kellogg at Caltech, Mark Dickinson at Johns Hopkins University, Mauro Giavalisco from the Carnegie Observatories, and Max Pettini from the Royal Greenwich Observatory--found these structures by exploiting a shortcut for picking out galaxies at huge distances. They take pictures of the sky through filters of different colors and compare them. Light coming from the most distant galaxies has to traverse so much intergalactic space that the sparse hydrogen there is enough to blot out part of their ultraviolet spectrum. As a result, the galaxies are visible in red and green images but disappear in the ultraviolet. The observers then aim the giant 10-meter Keck telescope in Hawaii at each object that meets this color criterion, to measure how much of its light has been redshifted--displaced toward the long-wavelength end of the spectrum--by the expansion of the universe. The redshift indicates the galaxy's distance and thus its age.
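
    The arithmetic behind this dropout trick is simple: intervening hydrogen absorbs essentially everything shortward of the rest-frame Lyman limit at 912 angstroms, and redshift carries that absorption edge into the near-visible. A quick illustrative calculation:

    ```python
    # Where the Lyman limit (rest wavelength 912 angstroms) lands for a galaxy
    # at redshift z; blueward of this edge, intervening hydrogen blots it out.
    LYMAN_LIMIT = 912.0  # angstroms

    for z in (2.8, 3.1, 3.5):
        print(f"z = {z}: absorption edge at {(1 + z) * LYMAN_LIMIT:.0f} angstroms")

    # Across this range the edge falls near 3500-4100 angstroms, so the galaxy
    # still registers in green and red images but drops out of an ultraviolet
    # image--the color signature the team selects on.
    ```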

    As of last month, the team had cataloged 168 galaxies at redshifts of between 2.8 and 3.5, which translates to perhaps 2 billion years after the big bang--early days in a universe that is now at least 12 billion years old. Earlier efforts had identified just a handful of galaxies and hundreds of strange, brilliant objects called quasars in this early epoch. But now, with a whole population of what Steidel calls “normal galaxies, garden-variety things” in hand, he and his colleagues have a sample big enough to study.

    They quickly noticed that the numbers of these galaxies varied widely in different “fields” scattered around the sky. The reason became clear when the astronomers arranged the 70 galaxies in the densest field by redshift. They found that many of the galaxies were clustered at a redshift of 3.1, creating a density spike and pushing up the overall galaxy numbers in that field. Two other spikes seem to be emerging in the same field, and there are hints of them in other fields as well, says Steidel: “It looks as though they are actually quite common.”

    The spikes, says Margaret Geller of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts, probably mark places where the Keck's line of sight is piercing great sheets of galaxies. Steidel hasn't measured their dimensions. But Geller suspects that they resemble her own Great Wall, a collection of galaxies within a few hundred million light-years of Earth that forms a sheet some 100 megaparsecs long--more than 300 million light-years--which she and her CfA colleague John Huchra mapped during the 1980s. Caltech's Cohen also has a sense of déjà vu, having traced similar structures at redshifts of about 1.0, billions of light-years away. Steidel's finding, she says, “extends in a quantum leap our earlier work.”

    The finding also matches clues from studies of quasars. As their light passes through clouds of gas on its way to Earth, it picks up shadows--dark spectral lines. This spectral bar code indicates that the clouds contain elements forged in stars, implying that galaxies are nearby, and it also reveals the clouds' distances. They, too, seem to fall into large clumps at redshifts as high as 3.0, says Jean Quashnock of the University of Chicago, one of the researchers doing the work.

    All of this evidence for an early large-scale structure could support a scenario that Alex Szalay of Johns Hopkins has been exploring with Richard Bond of the Canadian Institute for Theoretical Astrophysics in Toronto. They argue that this cosmic clumpiness originated a few hundred thousand years after the big bang, long before any galaxies formed and while the universe was still a sea of hot, ionized gas. Theorists believe this hot soup of matter pulsed with oscillations, or “sound waves,” that would have created density peaks. Then, as the universe cooled, “this pattern gets frozen in,” Szalay says.

    The sound waves would have been about the right size to gather primordial matter into clumps measuring 100 megaparsecs or so. And if the soup had the right ingredients, the clumps could have turned into the great walls of galaxies persisting all the way up to the present. “Every survey that is deep enough detects [structure] on 100-megaparsec scales,” says Szalay, who detected traces of this pattern in the nearer universe during the late 1980s. “It starts to make perfect sense.”

    A modern-looking early universe is not what many other theorists expected, however. The matter-rich universe they have pictured should evolve rapidly over time, spurred by gravitational forces, so that its early architecture should look nothing like today's. “Any time the early universe starts looking more like today's, it favors a low [density],” says Princeton's Bahcall.

    One thing is clear: Observers will have to explore still greater distances and earlier times to see whether the eerie familiarity of the early universe really does go back all the way to the beginning.

  8. Astronomy: Supernovae Offer a First Glimpse of the Universe's Fate

    1. Donald Goldsmith

    Of all cosmological questions, perhaps the most resonant is the universe's ultimate fate, billions of years in the future. Is the cosmos destined to expand forever, becoming ever more tenuous? Or will it eventually slow to a halt, or even recollapse into a state of near-infinite density? It all depends on the density of matter in the universe, because gravity is what slows the cosmic expansion, and also on whether space itself has an innate “springiness,” as described by a term called the cosmological constant. But cosmologists don't know the value of either one well enough to predict the fate of the universe.

    Distant beacon.

    A supernova flares in a far-off galaxy.

    Perlmutter et al./Supernova Cosmology Project

    Now, by studying Type Ia supernovae--exploding stars visible at distances so vast that they represent earlier epochs of cosmic history--astronomers are getting a direct look at which way the universe is headed. By comparing the brightness of these beacons--an indicator of their distance--with the rate at which cosmic expansion is carrying them away, two groups of observers are now closing in on the rate at which cosmic expansion has changed over time. Both groups, one led by Saul Perlmutter of Lawrence Berkeley National Laboratory in California and the other by Brian Schmidt at the Australian National University, have found and studied dozens of supernovae far out in the visible universe.

    Hints of a slowdown.

    A handful of distant supernovae (red) suggests that high matter density--an omega close to 1.0--is slowing the cosmic expansion. Magnitudes are linked to distance and redshifts to velocity.

    Source: S. Perlmutter et al.

    So far, only the Perlmutter group has announced results, at a Texas Symposium held in Chicago last December and in a paper soon to be published in The Astrophysical Journal. And these results are based on just seven of the distant supernovae. But they are enough to suggest that the density of matter in the universe may slow the expansion to a halt, although perhaps only after an infinite amount of time has passed. “This is the first [systematic] attempt to use distant supernovae to constrain the deceleration and geometry of the universe,” Schmidt notes.

    Although these first results come with large error bars--Schmidt judges them to be “uncertain, although not necessarily incorrect”--their implications are startling. If they prove to be correct, the cosmos contains hundreds of times more mass than can be seen as stars and galaxies, and several times more than can be traced indirectly. In addition, the tentative results leave little room for a cosmological constant--the hypothetical attractive or repulsive force exerted by empty space. That would please cosmologists, who have never been fond of the constant, but the results may conflict with other clues suggesting a low-density universe (see previous page).

    Now, astronomers must wait to see whether scores of other supernova observations, which both groups are still analyzing, will sharpen or reduce this conflict. “Within 2 to 3 years, thanks to these supernovae, we should know [the universe's mass density] to an accuracy of 20% or better,” says Alex Filippenko of the University of California, Berkeley, a leading supernova expert.

    The key to measuring changes in the expansion rate lies in finding distance indicators bright enough to be visible in the far reaches of the universe--hence, seen at much earlier eras of cosmic expansion--and also uniform enough to serve as “standard candles,” objects whose apparent brightnesses as seen from Earth indicate their distances. Type Ia supernovae--giant thermonuclear explosions triggered when white dwarf stars steal material from companion stars until they exceed a critical mass--have the needed brightness. Flaring to a maximum in 2 or 3 weeks, then fading over the following months, they are the most violent stellar events known. And, in principle, because the critical mass should be the same for each explosion, they should furnish good standard candles.

    In practice, though, the peak brightness of these supernovae varies, threatening their usefulness as standard candles. But during the past 2 years, astronomers have learned how to recognize and compensate for these variations. Mark Phillips, Mario Hamuy, and Nicholas Suntzeff of the Cerro Tololo Inter-American Observatory in Chile have found that the brighter a supernova is at its peak, the more slowly it fades afterward. Hence, the shape of the supernova's “light curve”--its brightness over time--reveals its intrinsic luminosity at maximum light.
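
    Once the light-curve shape pins down a supernova's intrinsic peak luminosity, its observed brightness yields a distance through the standard distance-modulus relation. A minimal sketch with invented magnitudes (real analyses also fold in the dust and redshift corrections discussed here):

    ```python
    # Standard-candle bookkeeping: with the light-curve shape fixing the peak
    # absolute magnitude M, the apparent magnitude m gives the distance via
    # m - M = 5 * log10(d / 10 parsecs). The magnitudes below are invented.
    def distance_parsecs(m_apparent, m_absolute):
        return 10.0 ** ((m_apparent - m_absolute + 5.0) / 5.0)

    d = distance_parsecs(22.5, -19.3)  # a faint, far-off Type Ia near peak
    print(f"~{d / 1e9:.1f} billion parsecs"
          f" (~{d * 3.26 / 1e9:.1f} billion light-years)")
    ```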

    In addition, Adam Riess of the University of California, Berkeley (a member of the group led by Schmidt), has found a way to correct for the varying amounts of dust in the galaxies containing supernovae--variations that can dim their light. Together, these two corrections allow astronomers to turn each supernova into a standard candle. “Type Ia supernovae as distance indicators have made the transition from childhood to their teenage years,” says Schmidt.

    In one demonstration of their value, astronomers have used corrected observations of “nearby” Ia's, up to a billion light-years away, to track how fast the universe is now expanding--the so-called Hubble constant. To measure it, observers need to know the speeds at which objects at different distances are flying away from the Milky Way. The velocity is the easy part, because the motion displaces the light of an object toward the long-wavelength end of the spectrum, creating a “redshift” that is simple to measure. But finding standard candles reliable enough to measure absolute distances has been much more difficult.

    Astronomers have tried using objects ranging from pulsating stars to giant gas clouds as standard candles, and they have derived many different values for the Hubble constant. But when the corrected supernova observations entered the fray (Science, 24 November 1995, p. 1295), they yielded a result that could eventually help end the debate. Late last year, for example, Riess, along with Robert Kirshner and William Press of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, reported a Hubble constant of 64 kilometers per second per megaparsec (1 megaparsec equals 3.26 million light-years), a figure many astronomers find plausible.
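
    The logic of that measurement is compact: a standard candle supplies the distance, the redshift supplies the velocity, and their ratio is the Hubble constant. A toy calculation running the quoted value forward:

    ```python
    # Hubble's law, v = H0 * d, with the Hubble constant quoted in the text.
    H0 = 64.0  # kilometers per second per megaparsec

    for d_mpc in (10, 100, 1000):
        print(f"galaxy at {d_mpc:>5} Mpc recedes at ~{H0 * d_mpc:,.0f} km/s")
    # Run in reverse, this is the measurement itself: distance from a standard
    # candle, velocity from redshift, and the ratio v/d gives H0.
    ```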

    Cosmic slowdown

    Now, by finding, correcting, and analyzing supernovae at much greater distances (as much as 7 billion light-years), the Perlmutter and Schmidt groups are learning how this expansion rate has changed over cosmic history. In the nearby universe, where the expansion hasn't changed much, the corrected brightnesses of the supernovae should fall along a straight line when plotted against redshift. For supernovae in the far-distant, long-vanished universe, however, the line should begin to bend, indicating that the expansion rate was different at earlier times.

    The amount of bending holds clues to the “geometry” of the cosmos--whether we live in an “open” universe, doomed to expand forever, a “closed” universe that will eventually recollapse, or the state just in between, which theoretical cosmologists prefer because it is predicted by a favorite scenario for the early universe called inflation. Two factors can change the expansion rate, thus bending the line: the mass density of the cosmos--expressed as omega, the ratio of the actual density to the density predicted by inflation--and the cosmological constant.
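
    The opposing pulls of the two factors can be written in one line: in a universe containing matter and a cosmological constant, the deceleration parameter is q0 = omega_matter/2 - omega_lambda, so mass brakes the expansion while a positive constant pushes the other way. A numerical sketch (the parameter combinations are purely illustrative):

    ```python
    # Deceleration parameter for a universe with matter plus a cosmological
    # constant: q0 = omega_m / 2 - omega_lambda. Positive q0 means the
    # expansion is slowing; negative q0 means it is speeding up.
    def q0(omega_m, omega_lambda):
        return omega_m / 2.0 - omega_lambda

    print(q0(0.9, 0.1))  #  0.35: matter-dominated, strong deceleration
    print(q0(0.3, 0.0))  #  0.15: low density, mild deceleration
    print(q0(0.3, 0.7))  # -0.55: a large constant would flip the sign
    ```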

    Most theorists would prefer the cosmological constant to be zero, because they have no good way to justify any other value. But there is strong motivation to accept a positive value: This property of empty space could increase omega, reconciling the density predicted by inflation with the apparent lack of mass in the universe. All the visible stars and galaxies provide an omega less than 0.01, and even tallying all the invisible “dark matter” suggested by indirect measurements brings omega up to only 0.2 or 0.3. A cosmological constant could push omega up to the long-sought 1.0.

    Fortunately, mass and a cosmological constant have different effects on the way the expansion rate varies over time. With enough distant supernovae at a range of different redshifts, astronomers should be able to disentangle these effects. The Perlmutter and Schmidt groups have used automated techniques to spot scores of distant supernovae, but analyzing and correcting these observations require months of effort. They also require waiting a year or so to make follow-up observations of the supernova host galaxies, so that the galaxies' own light can be subtracted from the supernova measurements.

    With only seven distant supernovae analyzed so far, covering just a modest range in redshift, the Perlmutter group can only hint at the answer. But their preliminary results suggest that mass accounts for an omega of about 0.9, implying the existence of vastly more dark matter than has been traced so far, and the cosmological constant for an effective omega of only 0.1. The uncertainties are broad, however, leaving open the possibility that matter accounts for an omega as small as 0.3 and that the cosmological constant has a substantial value.

    The Perlmutter group's competitors say even this level of accuracy may be overly optimistic. “The actual [uncertainties] may be larger than those that are quoted,” says Kirshner, who notes that those in the Perlmutter group followed different procedures in observing their distant supernovae from the ones that Hamuy, Phillips, and Suntzeff used when they learned how to correct nearby supernovae to a standard brightness. What's more, says Schmidt, the Perlmutter group observed their first seven supernovae in one color, even though two or more colors give a better indication of reddening and dimming from dust.

    Richard Ellis of Cambridge University in the U.K., a member of Perlmutter's group, responds that “even with the best will in the world, it is not possible to treat low- and high-redshift data identically” because of differences in the intensity and colors of light from near and distant supernovae. Ellis adds that the Perlmutter group has done a better job at finding the farthest of these beacons, having cataloged some that lie twice as far away as their first seven.

    More detailed analysis of these and the dozens of other distant supernovae that the Perlmutter and Schmidt groups have observed will narrow the uncertainties. Both groups have been assigned large amounts of time on the Hubble Space Telescope during its next observing cycle, which begins this July, for follow-up observations of the supernova host galaxies. And by the end of 1998, both should have completed their work and had a chance to compare it. “I won't believe 'the answer' unless we both get the same answer,” says Riess. Perlmutter concurs: “It's great that there are two projects--there are so many things you can do wrong. Of course, we started first, and we would like to have our results out first.”

    Either way, thanks to Type Ia supernovae, cosmologists now stand on the threshold of knowing the shape of the universe--and the shape of things to come.

    Donald Goldsmith's most recent book on astronomy, Worlds Unnumbered: The Search for Extrasolar Planets, has just been published by University Science Books of Sausalito, California.

  9. Molecular Biology: Counterfeit Chromosomes for Humans

    1. Wade Roush

    The name is not very elegant--YACs, standing for yeast artificial chromosomes. But for about a dozen years, these artificial constructs, containing the minimal elements of a functional chromosome, have been the basis of some elegant science. They have both aided understanding of yeast chromosome function and served as vehicles for cloning large genes from any species, using yeast cells' own DNA-replication machinery. Now, however, YACs may have to share the spotlight with HACs: human artificial chromosomes.

    HAC for hire. Humanmade microchromosomes (arrow) could shelter supplementary human genes. (Image: Athersys Inc.)

    In this month's issue of Nature Genetics, researchers at Case Western Reserve University and Athersys Inc., both in Cleveland, report constructing the first wholly synthetic, self-replicating, human “microchromosomes,” one-fifth to one-tenth the size of normal human chromosomes. While the team still hasn't found an efficient way of transplanting microchromosomes' self-assembling components into new cells--a crucial step before researchers can exploit them fully--future refinements could give HACs even broader research applications than YACs.

    “They have created a system where they can now do a great deal to analyze [human chromosome] function,” says David Schlessinger, a genome researcher at Washington University in St. Louis. Custom-made HACs could, for example, help reveal how each gene's chromosomal packaging affects its activity. And there may also be medical payoffs: These chromosomes-in-miniature could be loaded with genes that are missing or impaired in patients with genetic disorders such as muscular dystrophy, then introduced into the patients' cells to compensate for the defect.

    Biologists have wanted to mimic human chromosomes ever since they performed the feat for yeast. To make YACs, researchers combine telomeres, the protective DNA segments on the ends of chromosomes; centromeres, which serve as attachment points for the protein fibers that pull duplicate sister chromosomes (chromatids) apart during cell division; and origins of replication, DNA segments where the double helix can unwind and begin to copy itself. “YACs showed us that this type of thing could be done in yeast, but there was no guarantee that it could be done in human cells, because human chromosomes are much more complex,” says molecular biologist Gil Van Bokkelen, president of Athersys and a co-author of the Nature Genetics paper with Case Western researchers John Harrington, Robert Mays, Karen Gustashaw, and senior author Huntington Willard.

    The members of the Ohio team believe that they succeeded because they decided to add non-protein-coding “satellite DNA,” repeated sequences of five to 171 base pairs found near mammalian centromeres, to their HAC recipe. Some researchers regard the satellite sequences as nonfunctional “junk DNA,” but from earlier studies, Willard's team had concluded that the alpha type of satellite is actually the centromere's main component.

    In the current work, the researchers first devised a way to build long strings of alpha satellite DNA. They then inserted the satellite arrays into cultured human tumor cells, together with DNA fragments from telomeres and plain “genomic” DNA, including origins of replication. Some of the satellite arrays combined with the other DNA fragments, forming microchromosomes 6 million to 10 million base pairs long. These apparently replicated when the tumor cells divided, because 6 months later, the progeny cells still contained HACs.
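
    For a sense of the scales involved, here is a toy sketch (ours, not the published protocol; the monomer below is a random placeholder, not a real alpha satellite consensus sequence) of how many copies of a 171-base-pair repeat a microchromosome-sized array implies:

      # Toy illustration of tandem-repeat scale. The "monomer" is a random
      # placeholder, NOT a real alpha satellite consensus sequence.
      import random

      random.seed(0)
      monomer = "".join(random.choice("ACGT") for _ in range(171))

      def tandem_array(unit: str, copies: int) -> str:
          """Concatenate head-to-tail copies of a repeat unit into one array."""
          return unit * copies

      # A 6-million-base-pair microchromosome built largely of such repeats
      # would carry on the order of 35,000 monomer copies.
      copies = 6_000_000 // len(monomer)
      array = tandem_array(monomer, copies)
      print(f"{copies:,} copies -> {len(array):,} bp ({len(array) / 1e6:.1f} Mb)")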

    To use HACs to uncover more about how real chromosomes work, experimenters will need a more reliable method than the current vehicles--lipid bubbles called “lipofectins”--to get HACs or their ingredients into cells. But with refinements, Willard says, HACs could help settle just what centromeres are made of and a host of other questions in molecular biology and biomedical research.

    In addition, any gene sandwiched between the synthesized satellite arrays and telomeres would, in theory, behave like a gene on a regular chromosome, because it would be accessible to enzymes, transcription factors, and the other machinery of gene expression and replication. Thus, HACs could give biologists a new way to study gene activity in human cells and gene-therapy researchers a new way to transfer needed genes into patients' cells. “YACs really aren't good for that--they are not stable in human cells,” says Louis Kunkel, a leading muscular dystrophy researcher at Children's Hospital in Boston. “This is a neat alternative.”

  10. Physics: New Proof Hides Cosmic Embarrassment

    1. James Glanz

    Stephen Hawking is betting his shirt again. Earlier this year, the Cambridge University astrophysicist conceded one wager about the hypothetical ruptures in the laws of nature called singularities. This time, Hawking has a better chance of winning, according to a new theorem by Princeton University's Demetrios Christodoulou to be published in the Annals of Mathematics.

    Figure 1: Censored. Collapsing shells of field energy (ripples) form a singularity cloaked within a black hole. (©1994 M. Choptuik/UT Austin)

    The original bet, made in 1991 between Hawking and two physicists at the California Institute of Technology--Kip Thorne and John Preskill--concerned whether “naked” singularities could ever form in the universe. Singularities, points of infinite density formed when matter or field energy collapses, are hypothesized to exist within black holes, which “clothe” them, but Preskill and Thorne argued that under just the right circumstances, they might also form on their own. Hawking insisted that they cannot.

    This may sound like a recondite dispute among specialists, but it strikes at the heart of what cosmologists think they know about the fabric of space and time. “I would consider it the most significant question that can be posed entirely within the confines of classical, general relativity,” says Robert Wald, a cosmologist at the University of Chicago. Because Einstein's mathematical description of space-time breaks down at singularities, they would in effect throw the universe into unpredictability if they could be observed and their effects felt. “It's ignorance where ignorance really matters,” says Christodoulou.

    In the 1970s, Oxford University's Roger Penrose had offered some reassurance with his “cosmic censorship” conjecture, which said that singularities could never be directly observed because they would always be shrouded in black holes, from which even light can't escape. Hawking has drawn on the conjecture in some of his best known work, but Preskill says that “it would not be that surprising or terrifying to me if [cosmic censorship] weren't true.” Thorne agreed, leading to the 1991 bet.

    “Unfortunately, I wasn't careful enough about the wording of the bet,” Hawking said during a symposium on black holes in Chicago last December (Science, 24 January, p. 476). The wording didn't exclude naked singularities born in circumstances likely to be extremely rare in nature--for example, conditions precisely poised between black-hole formation and a less drastic collapse. Such naked singularities are allowed theoretically, according to earlier work by Christodoulou and computer calculations by Matthew Choptuik of the Center for Relativity at the University of Texas, Austin.
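
    Choptuik's calculations give those knife-edge conditions a precise form (a result from the numerical-relativity literature, not spelled out in the bet): for a collapsing scalar field, tuning any parameter $p$ of the initial data toward its critical value $p^\ast$ yields black holes of arbitrarily small mass, following the power law

    \[ M_{\mathrm{BH}} \propto (p - p^\ast)^{\gamma}, \qquad \gamma \approx 0.37. \]

    Only exactly at $p = p^\ast$ does the collapse leave a naked singularity--the infinitely fine-tuned case the original wager failed to exclude.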

    “Kip and I started pressing Stephen that, well, he should pay up,” says Preskill. A story in the 12 February New York Times reported that Hawking had finally decided to settle the wager, which required the loser to hand over clothing embroidered “with a suitable concessionary message.” Hawking's chosen message, printed on a T-shirt: “Nature abhors a naked singularity.”

    “We said, 'This is a concession? It sounds like fighting words,'” recalls Preskill. But Christodoulou's new theorem lends support to Hawking's not-so-concessionary posture, by proving mathematically--without the approximations of the earlier computer calculations--that infinitesimal changes to the special collapses that form naked singularities will produce black holes instead. The proof assumes that the matter or energy collapses spherically, so it doesn't rule out the possibility of naked singularities born in more complicated geometries. But for spherical collapses, it shows that Christodoulou and Choptuik's earlier solutions “were very much of the character of a pencil standing on end,” says Wald. “In nature, you're never going to find pencils standing on their ends.” In light of his new theorem, says Christodoulou, “I don't think [Hawking] should have paid up.”

    Now, the original participants have laid a new wager. The bet is the same, except that it is now limited to naked singularities that might develop from “generic”--meaning not unstable or impossibly rare--initial conditions. And this time, says Preskill, the clothing must be embroidered with a “truly” concessionary message. Although Christodoulou's proof says nothing about nonspherical collapses, Hawking says he isn't worried: “The world is safe from naked singularities, at least in classical general relativity.”
