News this Week

Science  10 Oct 1997:
Vol. 278, Issue 5336, pp. 220
  1. ARCHAEOLOGY

    Doubts Over Spectacular Dates

    1. Ann Gibbons

    A DATING TECHNIQUE BASED ON A FAINT GLOW FROM SEDIMENTS LONG HIDDEN FROM LIGHT HAS YIELDED SOME OF THE MOST STARTLING—AND FIERCELY DISPUTED—DATES IN ARCHAEOLOGY

    In the remote Northern Territory of Australia, a huge sandstone boulder marks the spot where, according to aboriginal lore, a spirit named Jinmium turned herself to stone to escape her pursuer. This rock shelter has long been a magical place where ancient people camped, painted ochre figures, and carved holes in the walls. Archaeologists have been eager to know how far back its history extends, and last year, they got an answer that even the leader of the dating team called “pretty outrageous”: between 116,000 and 176,000 years. The date implies, among other things, that humans have been in Australia two to three times longer than previously thought, and it makes Jinmium the oldest known rock art site in the world. Admits Richard Fullagar, the Australian Museum archaeologist who led the team, “We worried about the date.”

    Coy about its age.

    The rock shelter at Jinmium, Australia.

    R. ROBERTS/LA TROBE UNIVERSITY

    But the team published the results anyway, along with a few caveats. Appearing in the journal Antiquity in December 1996, the report drew worldwide attention—and intense scrutiny. “When I first heard about those dates, I didn't believe them,” says Rhys Jones, an archaeologist at Australian National University in Canberra who is a member of the team now redating the site. “I doubt that the date will be confirmed.” Even Fullagar isn't sure of the date he published. He jokes: “I'm sticking to my guns. We're still uncertain.”

    Enlightening technique.

    Luminescence dating relies on natural radiation in minerals to knock electrons into “traps” (1 and 2); in the laboratory, heat or light empties the traps and stimulates a glow that indicates age (3 and 4).

    SOURCE: J. FEATHERS

    If the date falls—and there are early signs that Jinmium's real age may be as little as 10,000 years—it will eliminate a major challenge to the conventional view of Australian prehistory. But it will also deal a blow to the credibility of the experimental dating method used at the site: determining the age of sediments by measuring a tiny luminescence signal that builds up while the rock or sand grains are hidden from sunlight. The method is a potential boon to archaeology, offering a way to put a time scale on sites that can't be dated by any other method. Indeed, over the past decade, luminescence techniques have yielded a series of spectacularly early dates, which have put people in Siberia more than 260,000 years ago, modern humans in South Africa 260,000 years ago and in Australia 60,000 years ago (at sites other than Jinmium), and remarkably sophisticated toolmakers in central Africa 90,000 years ago.

    Quartz timing.

    Grains of the mineral from wasp nests are yielding dates for this Australian rock art panel.

    R. ROBERTS/LA TROBE UNIV.

    But several of these dates are already in question, and the techniques that produced them are still being tested and refined. As a result, many archaeologists and anthropologists are wary of published luminescence dates. “From the perspective of a consumer, like myself, it can be very difficult to know in any given instance whether a date is reliable or not,” complains Stanford University paleoanthropologist Richard Klein. Such doubts have discouraged many archaeologists, especially in the United States, from adopting the techniques.

    While luminescence dating has proved its value in dating pottery and burnt artifacts, dating experts agree that it has sometimes been applied too hastily to ordinary sediments. “There seems to be a great deal of danger of people taking a new technique and applying it without testing it adequately,” says David Huntley, a physicist at Simon Fraser University in Burnaby, Canada, who pioneered sediment-dating techniques. Yet the methods can be powerful, they say, when applied carefully, by researchers who understand the geology of the site and use state-of-the-art techniques in the lab. “I trust the reliability of a sediment date by optical dating more than a radiocarbon date—when it's well applied,” says Nigel Spooner, a physicist at Australian National University.

    Light industry

    The idea of using the light signal emitted by minerals in soil or ancient pottery to date these materials was proposed almost half a century ago. But it wasn't until the 1960s that it was put to work in dating pottery, and the method was first applied to sediments less than 20 years ago. Luminescence dating relies on a clock driven by natural radiation in common minerals like quartz or feldspar. The radiation bumps electrons from their normal positions in the minerals' crystal latticework into traps, or defects, at a rate that is roughly constant over time. Exposure to sunlight or heat from a fire at a site of human occupation empties many of the traps, setting the clock to zero. When the site is abandoned, the clock starts ticking again. As long as the mineral grains remain in the dark, the traps refill with electrons at a regular rate.

    Years later, scientists who have protected specimens from light while collecting them can empty those traps again in the lab. They do so by heating the sample—a technique called thermoluminescence dating (TL)—or tickling it with light from a lamp or laser, in optically stimulated luminescence (OSL). The freed electrons often generate a faint glow. The more intense this luminescence, the more time has elapsed since the clock was reset. If scientists can figure out how fast the clock ticks, for example by measuring natural radiation at the archaeological site, they can calculate when the mineral was last heated or exposed to light.
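    The arithmetic at the heart of the method is simple: the accumulated radiation dose recorded by the trapped electrons, divided by the dose delivered each year by the surroundings, gives the time since the clock was reset. A minimal sketch, with entirely made-up numbers (real labs must measure both quantities with care):

```python
# A minimal sketch of the age calculation behind luminescence dating.
# The dose values below are hypothetical illustrations, not measured data.

def luminescence_age(equivalent_dose_gy, dose_rate_gy_per_kyr):
    """Age in thousands of years: total radiation dose recorded by the
    trapped electrons, divided by the dose rate from the surroundings."""
    return equivalent_dose_gy / dose_rate_gy_per_kyr

# e.g. a sample whose glow corresponds to 300 grays of accumulated dose,
# buried in sediment delivering 3 grays per thousand years:
age = luminescence_age(300.0, 3.0)
print(age)  # 100.0 -> about 100,000 years
```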

    One appeal of the technique is its ability to see at least 100,000 years back in time, and sometimes as much as 800,000 years—much farther than the better known radiocarbon dating method, which cannot date sites older than 40,000 years or so. And unlike radiocarbon dating, which requires organic material such as charcoal, or argon-argon and other dating methods, which require volcanic materials, luminescence dating can be put to work almost anywhere. “Quartz and feldspar are ubiquitous—they're in the sediments at practically every site,” explains James Feathers, an archaeologist at the University of Washington's luminescence dating lab in Seattle. “So you can often do luminescence where you can't do something else.”

    The technique has already established a solid reputation for dating pottery and other artifacts that were fired or burned in antiquity. These artifacts are often so opaque that feldspar, quartz, and other datable minerals in their interior layers have been well-protected from light—even if the relics themselves lay exposed on the ground. The record of believable dates for artifacts ranges from the terra-cotta army figures in Xi'an, China (dated to about 2200 years) to burned tools and flint found with the remains of Neandertals and modern humans in the Levant (dated to about 90,000 years). Says Feathers, “TL dating on pottery and burned artifacts is almost routine.”

    Bleach job

    Luminescence dating is breaking new ground, however, in studies of ordinary sediments from archaeological sites. In place of the heat that sets the clock to zero in pottery or burnt stones, these studies assume that sunlight does the job—and that the sediment last saw daylight just before the art, tools, or other traces of human presence were buried. The burning issue for every scientist using TL or newer OSL methods on sediments is whether the light completely emptied the electron traps—zeroing the clock—when the sediment layer formed.

    While some electrons require only a few minutes of sunlight to be bleached, or freed from their traps (the easy-to-bleach signal), others need hours or even days of ultraviolet light (the hard-to-bleach signal). If soil was blown into the site by the wind, the minerals probably did see enough light to be entirely bleached, says Huntley. But sediment deposited by a river or glacial outflow may not have been thoroughly bleached. As a result, the luminescence age it yields will be misleadingly old.

    This is the chief concern at the site called Katanda in what is now the Democratic Republic of Congo, where George Washington University archaeologist Alison Brooks and her team uncovered a finely crafted barbed bone point and other tools from a cliff bank above the Semliki River in 1988. The textbook view has been that humans capable of making such sophisticated tools first appeared in Europe about 40,000 years ago. But luminescence dating of the tool-bearing sediments, together with another experimental dating method, called electron spin resonance (ESR), suggested that the tools were made at least 75,000 years ago, pushing back the onset of modern behavior (Science, 28 April 1995, pp. 495, 548, 553).

    The ESR date came from hippopotamus teeth in the tool layer, which could have washed in from older deposits, says McMaster University geologist Henry Schwarcz, who dated the teeth. And Seattle's Feathers, who was a postdoc in the lab of retired University of Maryland physicist William Hornyak, where the luminescence dating was done, is equally uncertain about the sediment results. The date rested entirely on TL of the hard-to-bleach signal, which may not have been zeroed completely by sunlight when the soil was deposited at the site.

    “The trouble with the site is the date that was published was based on the assumption that the quartz got fully bleached,” says Feathers, who is working to correct the problems with the OSL dating, which is better than TL at measuring the more reliable, easy-to-bleach signal. Hornyak, however, has said he is “very confident” of the TL dates because repeated tests on the sediments have yielded the same results.

    Rubble trouble

    A problem of a different sort is undermining the TL dates on sediments at the Jinmium rock shelter: pebbles of crumbly sandstone from the boulder wall or bedrock jumbled into the dated sediments. Because the rubble might not have been bleached at the same time as the sediments, it could have thrown off the dates. “Where there is rubble, there may be trouble,” jokes Richard “Bert” Roberts, a geochronologist at La Trobe University in Bundoora, Australia, who has dated many of the earliest sites of human occupation in Australia.

    Fullagar noted in his paper in Antiquity that although some of the layers he dated contained rubble, none was found in the layer with the oldest stone artifacts. Still, says Roberts, undetected grains from the wall of the rock shelter or from the bedrock below the sediment layer could have been mixed with the quartz that was dated. In a sample of 100 grains, he says, it would take just two 250,000-year-old flecks of quartz to give an overall date of 6000 years, even if the rest of the sample was just 1000 years old.
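    Roberts's back-of-the-envelope figure is easy to reproduce if one assumes, as a rough sketch, that a bulk TL measurement simply averages an age-proportional signal over all the grains in the sample:

```python
# Roberts's contamination arithmetic, assuming a bulk TL measurement
# simply averages each grain's age-proportional signal.
grains = [250_000] * 2 + [1_000] * 98   # two ancient flecks among 100 grains
apparent_age = sum(grains) / len(grains)
print(apparent_age)  # 5980.0 -- close to the 6000-year figure Roberts quotes
```

This is why the single-grain OSL approach described below the rubble problem matters: dating grains one at a time exposes outliers that a bulk average silently absorbs.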

    To address that concern, Roberts has now collected fresh samples of sediment from Jinmium to date with the newer luminescence method, OSL. OSL can tease a date from samples as small as a few tens of grains of sediment—and sometimes as small as a single grain—instead of the thousands of grains typically needed for a TL signal, says Roberts. He is now painstakingly dating the sample, grain by grain, to see if old grains are mixed with newer ones, and expects results by the end of the year. Meanwhile, Spooner has analyzed the published TL data and thinks that the site could be as little as 10,000 years old.

    That's the kind of disagreement that has unnerved many archaeologists: “When I see wacky TL dates, I wait and set it aside until things settle down,” says archaeologist David Meltzer of Southern Methodist University in Dallas, who studies early sites in the Americas. “My sense is that you need to prove it over each time you go into a new area and make sure it works there.”

    Dating experts like Huntley agree: “One really has to find sites with well-established ages that no one's going to argue with to test the technique, to see if you can get the right answers. There hasn't been enough of that done,” he says.

    As an example of the right way to apply these techniques, some dating experts point to two rock shelters in Northern Australia where Roberts's team has come up with luminescence dates of 50,000 to 60,000 years, making them the earliest sites of human occupation in Australia outside of Jinmium. The team checked younger luminescence dates against radiocarbon results to show that the two clocks were in sync. They also used both OSL and TL, and analyzed very small specimens. “Those dates look solid,” says geochronologist Ann Wintle of the University of Wales in the United Kingdom, an expert on OSL and TL dating of sediments. “One has to compare with radiocarbon, where possible.”

    But such cross-checking isn't always possible. At one site in Siberia that has come under the luminescence spotlight, Diring Yuriakh, Russian scientists had come up with a TL date of over 1.6 million years for the sediments around a set of stone artifacts. That date was widely viewed as outlandish, and another team headed by Steve Forman, a geologist at the University of Illinois, Chicago, redated the sediments with TL to get a figure of at least 260,000 years (Science, 28 February, pp. 1268 and 1281). Yet the prevailing view is that humans didn't venture into subarctic regions until 30,000 years ago.

    So the team that redated the site has looked for other methods. But Diring Yuriakh lacks the charcoal, volcanic materials, or teeth needed for other techniques, and Forman's attempts to use OSL failed, because the sediments had been buried too long and all the OSL electron traps had been filled long ago. “So you're left with nothing except TL,” says Forman. “Yes, it's experimental. Yes, it's developing, but what else do you do?”

    Buzz word

    Caution is the best advice that Wintle and others can offer to archaeologists eager to use luminescence methods. They advise forming an interdisciplinary team that can scrutinize the geology of the site, such as how the sediments were laid down, particularly around the object being dated, and detect signs of trouble in the lab (such as poor bleaching or a signal that fades because the traps have leaked electrons before the sample was dated). “Above all, the ultimate test of a date is whether it can be reproduced by an independent lab with access to the original site, because reassessment of the geological context is critical,” adds geologist Jack Rink of Canada's McMaster University.

    New luminescence strategies may also help. Roberts and his colleagues, for example, have now dated some of the most spectacular rock art in Australia by extracting single grains of quartz from mud-wasp nests on the rock face (Nature, 12 June). Wasp nests are common at rock art sites in Australia and elsewhere, and the mineral grains they harbor should have been well bleached at the time the nests were built—and sealed off from light since then. And because the nests are often right on top of the pigment, and in some cases have been partially painted over, they can provide minimum and maximum dates for the art. In another advance, Spooner and the University of Wales's Geoff Duller are borrowing a tool from astronomers' telescopes for detecting the luminescence—a charge-coupled device, capable of detecting the dimmest signals.

    In the end, high-profile controversies at sites like Jinmium may speed the transformation of luminescence sediment dating from showy upstart into a reliable standby. “My job is luminescence dating,” says Roberts. “I can't afford to have the field look inept.”

  2. ASTRONOMY

    Probing a Star's Heart of Crystal

    1. Govert Schilling
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Looking into a crystal ball is not what you'd expect an astronomer to do. But Don Winget and his colleagues will give it a try next year. Their hope is not to predict the future, but to unravel the past. And their crystal ball isn't on a magician's table; it is a pulsating white dwarf star at a distance of tens of light-years in the southern constellation Centaurus.

    Winget and his colleagues have identified this star, called BPM 37093, as an ideal laboratory for studying how material inside white dwarfs—ancient stellar embers—crystallizes as they age. Depending on what the researchers find, “the ages of white dwarf stars in the disk of our galaxy might have to be revised upward, from 9 billion to 11 billion years or so,” says Winget, who is at the University of Texas, Austin. That could spell trouble for cosmologists, who believe the universe as a whole is only about 11 billion years old.

    Compact stars with a mass comparable to the sun, but a diameter not much larger than Earth's, white dwarfs are the remains of sunlike stars that shed their outer layers into space at the end of their lives. The cores, which no longer have any fuel left for nuclear fusion, slowly cool and fade. The coolest white dwarfs should thus be relics of the first stars to form in the galaxy.

    Determining a white dwarf's age from its temperature isn't simple, however. In the early 1960s, theorists predicted that the atomic nuclei inside a cooling, compacting white dwarf would arrange themselves in a rigid lattice structure. The formation of this crystal ball, like the freezing of water, would release energy, slowing the cooling rate and making the star look younger than it really is.

    Just how big an effect the crystallization has on the star's cooling rate depends on whether the carbon and oxygen nuclei that make up most of the white dwarf crystallize as an “alloy”—which would limit the effect—or separate into a core of pure oxygen and a carbon-oxygen mantle. “If [this] phase separation takes place,” says white-dwarf specialist Gilles Fontaine of the University of Montreal, “you have additional energy release, and the cooling of the white dwarf is slowed down [further].” The delay would add 2 billion or 3 billion years to the age determined from temperature alone.

    Astronomers have had a technique that could settle the question—but until now, no suitable star. The technique is asteroseismology: observing pulsations at a star's surface to deduce its internal structure, much as geologists study earthquakes to learn about the interior of our planet. Pulsating white dwarfs are scarce, however, and known pulsators are too hot to be crystallized.

    In the 1 October issue of The Astrophysical Journal, Winget and his colleagues Mike Montgomery (also at the University of Texas) and Kepler de Souza Oliveira Filho, Antonio Kanaan, and Odilon Giovannini (at the Universidade Federal do Rio Grande do Sul in Brazil) identify BPM 37093 as an exception. They point out that this particular pulsating white dwarf is massive enough (about twice the mass of an average white dwarf) to have a crystallized interior, despite its relatively high surface temperature of over 11,000 degrees Celsius. Asteroseismology of this star “promises to provide us with the first empirical tests of the theory of crystallization,” say the researchers.

    In March 1998, Winget and his colleagues will begin studying the star's pulsations, observable as tiny brightness variations over periods of about 10 minutes, with the Whole Earth Telescope. WET is a collection of medium-sized telescopes all over the world, working in tandem to ensure that the star is never lost from sight. The observations will continue for nearly 2 weeks, yielding data that Winget hopes will settle the issue of whether phase separation takes place.

    “That would be a fabulous achievement,” says Huib Henrichs, a University of Amsterdam astronomer who uses asteroseismology to study giant stars. But Fontaine argues that poorly known features of the star such as its internal convection and the mass of the external hydrogen layer could make the results hard to interpret. Says Fontaine: “A lot of hard work will be necessary to untangle the various effects” that could mimic the signature of crystallization.

    Winget agrees but says additional, ultraviolet (UV) observations of his crystal ball, to be made next year by the orbiting Hubble Space Telescope, will help sort out these effects. In the UV images, he says, it should be possible to distinguish the confounding effects from glimpses of a real crystal ball.

  3. EVOLUTION

    How the Malaria Parasite Manipulates Its Hosts

    1. Virginia Morell

    Looking at the malaria parasite through an evolutionary biologist's lens could yield better ways to fight the disease. That was the take-home message from two groups of researchers at the recent biennial meeting of the European Society of Evolutionary Biology in Arnhem, the Netherlands (24 to 28 August). These scientists' studies suggest that in its mosquito and mammalian hosts, the parasite, like all living things, behaves according to the rules of evolutionary theory—and its adaptations have consequences for human health.

    Pity the mosquito.

    Malaria makes it bite recklessly.

    R. A. ANDERSON/UNIV. OF AARHUS

    Contradicting the long-standing belief that malaria doesn't harm its mosquito hosts, Jacob Koella from the University of Aarhus in Denmark and his colleagues showed that the parasite has evolved the ability to turn them into the insect equivalents of bloodthirsty Count Draculas. The mosquitoes—à la Bela Lugosi, vanting to suck your blood—aggressively seek more and more encounters, boosting the parasite's chances of being transmitted, but also increasing the insects' chances of dying in action.

    In a mammalian host, too, the parasite is responding to evolutionary pressures, suggests Margaret Mackinnon of the University of Edinburgh in the United Kingdom, leader of the second team. Her findings fit largely untested theoretical models claiming that pathogens face an evolutionary trade-off that affects their virulence. The faster they replicate in the early stages of the disease, the more readily they can be transmitted. Too fast, however, and the host will die first.

    Together, the studies show how evolutionary biology can open a new window on one of the oldest and most common of human diseases. “[Koella and Mackinnon are] taking the point of view of the disease organism—that it's something that evolves and has its own ecology,” says Marlene Zuk, an evolutionary biologist at the University of California, Riverside. “And that's what's usually missing” in other approaches to malaria.

    Until Koella's study, for example, researchers generally studied malaria-bearing mosquitoes only in the lab. There, the female Anopheles mosquito can survive for several months with the Plasmodium parasite living in her salivary glands and being transmitted. That lengthy period led scientists to believe that Plasmodia don't harm their insect hosts.

    But Koella found otherwise. Studying mosquitoes in the wild in Tanzania with collaborators, including Edith Lyimo of the Ifakara Centre for medical research, he discovered that the infected insects have a shorter life-span than their caged counterparts. “Most mosquitoes die when they bite someone and are squashed [in response],” Koella says. It thus makes sense for a mosquito to keep its biting to a minimum.

    It might also seem smart for the malaria parasite to want its insect host to behave with some restraint, as the longer the insect is alive, the more opportunities the parasite would have to be transmitted. But in fact, says Koella, the Plasmodium “wants the mosquito to bite as often as possible, in order [for the parasite] to be transmitted to as many hosts as possible.”

    To test this counterintuitive idea, Koella asked human volunteers (who had been screened for distinct genetic differences) to spend a night in a house in an area where malaria is rampant. In the morning, he collected 173 mosquitoes that had fed on the volunteers. Using the polymerase chain reaction technique, which distinguished the genetic patterns of each volunteer, his team was able to identify which sleepers each mosquito had bitten. Only 18% of the 111 uninfected mosquitoes had feasted on more than one person, compared to 34% of the 62 infected mosquitoes. “That suggests the mosquitoes are moving around much more when they are infected, something that epidemiologists should be taking into account,” notes Andrew Read, an evolutionary biologist at the University of Edinburgh who is also working on malaria.

    Koella speculates that the parasite alters the neurochemistry controlling the mosquito's abdominal stretch receptors, so that it bites insatiably. “If Koella is right,” Read says, “then this is a fantastic demonstration of a parasite manipulating its host.” Adds Kevin Lafferty, a parasitologist at the University of California, Santa Barbara, “He has broken that old paradigm about malaria not causing any pathology to the mosquito. And that's exciting, but it's also shocking to find such a big gap in our knowledge at this stage.”

    Another gap in understanding malaria is that there's no predicting how virulent the disease may be. “That variation has always been a puzzle,” says Read, who works with Mackinnon. “You see kids who have been infected with the disease at the same time and with the same dose, and some are very ill, while others aren't.” Clinicians often attribute such variation to the differences in human immune systems, say Mackinnon and Read. But the variation may also stem from genetic differences within the parasite that cause some strains to replicate at a higher rate and an earlier stage, possibly producing a more virulent—and more transmissible—form of malaria.

    Such genetic differences could mean that natural selection controls the level of virulence, says Mackinnon, making “the parasite sufficiently nasty to its host to reproduce and get transmitted, but not enough to kill its host.” The malaria parasite—like other pathogens—should thus evolve to an intermediate level of virulence. “It's one of the basic tenets of Darwinian medicine,” says Read, “but there is little experimental data to back it up.”

    To see if malaria fits the model, Mackinnon infected lab mice with eight distinct, cloned strains of the parasite. She then checked to see which mice were suffering highly virulent infections by measuring a variety of classic symptoms. The results showed “large and consistent differences” between the strains, says Mackinnon. Those that had high replication rates early in infection were the most virulent, making the mice extremely ill; these strains also had slightly higher transmission rates, suggesting that there is a “genetic relationship” among the three traits.

    The finding suggests that the parasite does indeed face an evolutionary trade-off between being transmitted quickly and killing the host. Thus, the researchers say, it's possible that control measures such as vaccines or mosquito nets could actually lead to the development of more virulent strains of the disease. For instance, explains Mackinnon, a vaccine is likely to cause the parasite to replicate earlier and faster so that it can be transmitted before the host's enhanced immune response wins out. “It's an important issue,” says Zuk. “How will the disease organism respond to different ecological conditions?”

    “That's the big question,” agrees Mackinnon, and it's one she hopes her research will eventually answer. “We know now that the parasite has the capacity to change genetically. Will it become more virulent if we start curbing its transmission?” Evolutionary theory predicts that it could.

  4. MATHEMATICS

    Fractions to Make an Egyptian Scribe Blanch

    1. Dana Mackenzie
    1. Dana Mackenzie is a science and mathematics writer in Santa Cruz, California.

    Over 3500 years ago, the Egyptian scribe Ahmes wrote a scroll of mathematical problems intended to instruct his readers in the “knowledge of all obscure secrets”—including how to multiply and divide fractions. The scroll, called the Rhind Papyrus, is also a primer on the Egyptian way of writing fractions: as sums of reciprocals of whole numbers, using as few numbers as possible, and always avoiding repetition. Thus, Ahmes wrote the number we call “2/17” as 1/12 + 1/51 + 1/68. Bizarre as it looks, this method gave exact results with a surprising economy of notation, and the Rhind Papyrus showed how to use it to solve a number of supposedly practical problems involving the division of loaves and beer.
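    Ahmes's expansion checks out exactly, as a few lines of Python confirm. The greedy method shown afterward is a later shortcut for generating such expansions—usually credited to Fibonacci, not the Egyptians—and it happens to find a different, shorter decomposition of 2/17 than the one in the papyrus:

```python
from fractions import Fraction

# Verify Ahmes's expansion of 2/17 exactly, with no rounding error.
ahmes = Fraction(1, 12) + Fraction(1, 51) + Fraction(1, 68)
print(ahmes == Fraction(2, 17))  # True

# The greedy (Fibonacci-Sylvester) algorithm: repeatedly subtract the
# largest unit fraction that still fits.  Not the scribes' method.
def greedy_egyptian(r):
    denominators = []
    while r > 0:
        n = -(-r.denominator // r.numerator)  # ceiling of 1/r
        denominators.append(n)
        r -= Fraction(1, n)
    return denominators

print(greedy_egyptian(Fraction(2, 17)))  # [9, 153]: 2/17 = 1/9 + 1/153
```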

    Thinly sliced.

    Some of the 366 terms in the modern Egyptian fraction that adds up to 2. Ancient versions of these sums of reciprocals were far more compact.

    SOURCE: G. MARTIN

    Now, two mathematicians have given the ancient method of Egyptian fractions a perverse modern twist. For the sheer delight of it, they have shown how run-of-the-mill numbers can be expressed as fractions so sprawling that they would have given the ancient scribe carpal tunnel syndrome—if he didn't run out of papyrus first.

    Classically, Egyptian fractions were very sparse, involving reciprocals of widely spaced numbers. But Greg Martin, a number theorist at the Institute for Advanced Study in Princeton, New Jersey, went to the opposite extreme by asking: How densely spaced can the terms in an Egyptian fraction get? Or, to put it another way, if we want to divide r kegs of beer among our friends in the Egyptian way, so that they all get different amounts but nobody gets less than a certain amount—1/n kegs—how many friends can we serve?

    According to Martin, we can serve a lot more friends than Ahmes, or even modern mathematicians, would have suspected: Two kegs of beer can be divided among 366 people, with everyone getting at least 1/1000 of a keg and no two people getting the same amount. Reduce the limit to 1/3,000,000 of a keg, and nearly 1 million people could get some minute fraction of the beer.

    A string of fractions that long would ordinarily add up not to an integer like 2, but to a fraction with a gigantic denominator—the least common denominator of all the summands. “We've all seen examples in fifth-grade arithmetic where the denominator gets bigger—and the more fractions we add, the bigger and messier it gets,” Martin says. “To add a million fractions together and get a denominator of 1…is just outside our experience.”

    The key to creating a dense Egyptian fraction that can add up to an integer is choosing the denominators of the reciprocals, or unit fractions, so that you can do a spectacular amount of cancellation. And as Martin showed in a paper presented at a number theory conference this summer at Pennsylvania State University, such cancellation can always be arranged, provided that n, the denominator of the smallest allowable fraction, is large enough, and provided the number of fractions doesn't exceed about 30% of n.

    It's just a matter of smoothness—not of the beer, but of the fractions. “Smooth” numbers, a term coined by Ronald Rivest of the Massachusetts Institute of Technology, are numbers with no large prime divisors. They are an essential ingredient in every state-of-the-art computer factorization algorithm, but for Martin's purposes, the most important fact is that there are lots of them: For any number n, roughly 30% of the numbers between 1 and n have no prime factors greater than the square root of n. These are, with a few strategic deletions, precisely the numbers Martin used as the denominators of his unit fractions. Because they inevitably have many common factors, they open the way to the extensive cancellation needed to get a denominator of 1.
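    The 30% figure can be checked empirically. The brute-force sketch below (function names are mine, not Martin's) counts the integers up to n with no prime factor exceeding the square root of n; as n grows, the fraction approaches 1 − ln 2 ≈ 0.307:

```python
from math import isqrt

def largest_prime_factor(m):
    """Largest prime factor of m >= 2, by trial division."""
    p, largest = 2, 1
    while p * p <= m:
        while m % p == 0:
            largest, m = p, m // p
        p += 1
    return m if m > 1 else largest

def sqrt_smooth_fraction(n):
    """Fraction of the integers 2..n that are sqrt(n)-smooth."""
    limit = isqrt(n)
    count = sum(1 for m in range(2, n + 1)
                if largest_prime_factor(m) <= limit)
    return count / n

print(sqrt_smooth_fraction(10_000))  # close to 1 - ln 2 ~ 0.307
```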

    Other number theorists are persuaded by the finding, and another young mathematician recently proved a remarkably similar result. Ernest Croot III, a graduate student at the University of Georgia in Athens, has reportedly solved a question raised by the great mathematician Paul Erdös, who died last September (Science, 7 February, p. 759). Number theorists have long known that the unit fractions from 1 to 1/n add up to a number roughly equal to the natural logarithm of n plus “Euler's constant,” 0.577 …. For example, 1 + 1/2 +…+ 1/1000 is about 7.5. Thus, 8 is too large to be represented as a sum of unit fractions between 1 and 1/1000, but conceivably some of the numbers from 1 through 7 could be. Erdös asked what percentage of integers less than the logarithm of n could be written as an Egyptian fraction without using any fractions beyond 1/n. The very wording of the question suggests that he doubted they all could.
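The approximation invoked here is easy to check directly. A quick sketch comparing the harmonic sum for n = 1000 against ln n plus Euler's constant:

```python
import math

EULER_GAMMA = 0.5772156649  # Euler's constant

n = 1000
harmonic = sum(1 / k for k in range(1, n + 1))
estimate = math.log(n) + EULER_GAMMA

print(harmonic)  # ≈ 7.485
print(estimate)  # ≈ 7.485
```

The two values agree to about 1/(2n), confirming that sums of unit fractions down to 1/1000 top out just below 7.5, so 8 is indeed out of reach.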

    Croot, however, has shown that—at least for large enough values of n—Erdös was too pessimistic. In fact, every integer at least 0.2 less than the number 1 + 1/2 +…+ 1/n can be represented this way. If n = 1000 is sufficiently large (Croot isn't sure it is), then all the integers from 1 to 7 would be expressible as Egyptian fractions with no denominators greater than 1000. Like Martin's result, Croot's proof implies the existence of extraordinarily dense fractions. To get a sum that's close to the maximum possible (log n plus Euler's constant), you've got to use most of the terms available to you.

    Croot's work is still unpublished, but Andrew Granville of the University of Georgia, a leading number theorist and Croot's thesis adviser, has read the proof and says it is “certainly correct.” Granville adds that it is “sheer coincidence” that Croot and Martin happened to get interested in Egyptian fractions at the same time. “Ernie tends to work on whatever takes his fancy—he'll just pop into my office and show me some ingenious thing I didn't even know he was working on.”

  5. PARASITOLOGY

    Fishing for Answers to Whirling Disease

    1. Carol Potera
    1. Carol Potera is a writer in Great Falls, Montana.

    BOZEMAN, MONTANA—When virologist Karl Johnson retired from a storied career at the Centers for Disease Control and Prevention 10 years ago, he moved here to indulge a lifelong passion. Instead of battling Bolivian hemorrhagic fever, Ebola, and other devastating diseases, Johnson envisioned whiling away much of his time by casting his line into the Madison River, famous for its wild rainbow trout. But Johnson, instead, finds himself battling yet another pathogen—except this one targets not humans, but the trout. “Vandals have broken into the cathedral of fly-fishing,” laments Johnson.

    The “vandals” he's referring to are the protozoan Myxobolus cerebralis, a parasite that infects young fish, devouring their cartilage and leaving them deformed. Inflammation meant to fight the infection puts pressure on nerves and disrupts balance, causing fish to swim in circles, or whirl, making it difficult for them to feed and escape predators. Because of “whirling disease,” the rainbow trout population on the Madison has plummeted from 3500 fish per mile in 1990 to 300 per mile in 1994. Johnson and others responded by establishing the privately funded Whirling Disease Foundation in Bozeman, where he serves as scientific director. With money from the foundation, the federal government, and other sources, researchers are beginning to make dramatic progress against this puzzling disease.

    First identified in Europe in 1893, but not seen in Montana until 3 years ago, whirling disease has been notoriously difficult to diagnose in early stages of infection. But a sensitive test, developed in the past year, has given researchers a new tool to track infections and study the complex life cycle of M. cerebralis, which depends on two hosts and two pathogenic stages. These new techniques should immediately help fisheries managers avoid spreading the parasite through contaminated fish stocks, and they may eventually point to ways to combat the parasite in the wild.

    Although M. cerebralis poses no threat to humans who eat infected fish, whirling disease has jeopardized Montana's income from trout fishing, an industry that brings in $250 million a year. Anglers worldwide seek the thrill of catching Montana's wild rainbows, which leap, cartwheel, and run across the water. “Wild trout are part of our heritage,” says Marshall Bloom, a medical virologist at the National Institutes of Health's Rocky Mountain Laboratories in Hamilton, Montana, and head of the Montana Whirling Disease Task Force.

    When fish pathologist Beth MacConnell of the U.S. Fish and Wildlife Service in Bozeman first diagnosed whirling disease in the Madison River in late 1994, little was known about it. To this day, no one understands why whirling disease zeroes in on rainbows, how to stop it, or how it spread to Montana and 20 other states. But researchers hope that the new diagnostic technique, based on the polymerase chain reaction (PCR), may help clarify these critical questions.

    Currently, whirling disease is diagnosed by a brute-force method: Fish heads are ground up and examined microscopically for spores of M. cerebralis that form as the disease progresses. Not only is this method time-consuming and inaccurate (related fish parasites produce similar-looking spores), it can only detect infections that have been developing for at least 4 months. Because a newly infected fish may look normal, says Ron Hedrick, director of the Fish Health Laboratory at the University of California, Davis, “they may have been stocked from hatcheries and spread the disease.”

    In 1996, Hedrick and his graduate student, Karl Andree—who in part were supported by the Whirling Disease Foundation—supplied the key ingredient needed for a PCR assay that can dodge many of these problems: They cloned and identified ribosomal RNA sequences specific to M. cerebralis. The assay works by finding bits of M. cerebralis DNA in tissue and then multiplying them up for detection. Early laboratory results by Andree confirm that the PCR assay can detect the parasite just 2 hours after infection.

    Microbiologist Bob Ellis of Colorado State University in Fort Collins is now testing the assay in the field by placing young, caged fish in waters containing the parasite. After just 3 weeks of exposure, three of 29 fish tested positive in the tail area. After 2 months, nine of 39 fish tested positive in either their tails or heads. (M. cerebralis penetrates the tail epidermis first, then travels through nerve bundles to the head.) “The goal is to develop an early diagnostic method that doesn't require killing fish,” says Ellis. Once the method is perfected, tail clippings of hatchery fish could be checked before stocking, preventing spread of the parasite.

    Screening hatchery fish will help in states that stock fish, but not in Montana: The state does not release hatchery fish in its streams—its reputation as a haven for fly fishers is built in part on the fact that its big trout are bred in the wild. So experts are looking for other ways to restore self-sustaining populations of wild rainbows in Montana and other states. That leads them to the mud-dwelling worm.

    When fish die from whirling disease, M. cerebralis spores are released from decaying carcasses and survive for up to 30 years in mud. To infect fish, the spores must be eaten by mud-loving Tubifex tubifex worms. By some unknown mechanism, the worm's gut converts the clam-shaped spores into an infectious form, the triactinomyxon (TAM), which looks like a three-armed hook. “Without the worms, it's a dead end. Yet we know practically nothing about the worms,” says Bloom.

    Some fish biologists even doubted that the two morphologically distinct spore forms were related, especially as attempts to reproduce their complex life cycle in the laboratory brought inconsistent results. The PCR test put the controversy to rest, when Andree showed in the May-June Journal of Eukaryotic Microbiology that the spore and TAM stages share 99.8% genetic homology. And, for the first time, researchers can detect the parasite in T. tubifex worms. Before the PCR assay, “we had no diagnostic procedure for that at all,” says Hedrick.

    Of special interest are T. tubifex worms that appear to resist infection by the parasite. At the University of Montana, Missoula, parasitologist Bill Granath finds in preliminary studies that parasite DNA is detected in resistant worms 24 hours after spore infection, but is undetectable 4 days later. “It appears that parasite DNA enters the worm tissue, but doesn't develop,” says Granath. These resistant worms fuel dreams of biological control: If they could displace susceptible worms in the environment, they might provide a natural check on the parasite. Hedrick's laboratory also plans to investigate how resistant worms neutralize spores and whether resistance genes can be passed to susceptible worms.

    Granath's studies point to another potential weak spot in the parasite's life cycle: At 15 degrees Celsius, huge amounts of TAMs are released from the worms into the water, but at 5 degrees, few are released. This laboratory finding parallels field observations that hatchling rainbows, but not stream-sharing brown trout, contract whirling disease. Fish biologist Dick Vincent of Montana Fish, Wildlife, and Parks in Bozeman found that young rainbows emerge in May, when waters are warm and filled with TAMs that bombard them. Young brown trout emerge in March, when waters are colder and contain few TAMs. One implication is that selective pressures may eventually favor rainbow trout that spawn earlier in colder waters.

    All these recent discoveries have resulted from a tiny research investment. In 1996, federal grants to study whirling disease totaled just $360,000, the Whirling Disease Foundation awarded $73,000, and some chapters of Trout Unlimited gave up to $10,000. Colorado State's Ellis says it may be time for anglers to “put more money into research instead of flies.”

  6. ASTROPHYSICS

    Theorists Nix Distant Antimatter Galaxies

    1. Gary Taubes

    For those who yearn for an equal opportunity universe, half matter and half antimatter, the latest findings will be a disappointment: Matter dominates, and there are no antigalaxies, despite the dreams of science fiction fans everywhere. This is the conclusion of a trio of theorists after a lengthy analysis of the physics of matter-antimatter annihilation and the gamma-ray glow that pervades the sky. Their finding, to be reported in the Astrophysical Journal in February, may sound like a victory for conventional wisdom, but it underscores a long-standing mystery: why the big bang displayed such blatant favoritism toward matter.

    Long shot?

    Artist's conception of the antimatter detector to fly on the space shuttle.

    AMS

    The universe that sprang from the big bang should have contained equal parts of matter and antimatter. But cosmologists have long known that our cosmic neighborhood is all matter. Now physicists Andy Cohen of Boston University; Alvaro de Rújula of CERN, the European particle physics lab near Geneva; and Sheldon Glashow of Harvard University have confirmed that matter somehow dominated the rest of the visible universe as well. By rigorously calculating the energy that would have been emitted when matter and antimatter met and annihilated, then comparing the results with actual measurements of the gamma-ray background, they rule out the existence of large domains of antimatter. “It's probably the best job ever done of calculating the annihilation rates and the gamma-ray spectrum,” says the University of Chicago's David Schramm.

    The favored explanation for the absence of antimatter in the nearby universe is that soon after the big bang a slight asymmetry developed between matter and antimatter. The asymmetry enabled a little extra matter to survive when the two annihilated, leaving a universe apparently devoid of antimatter. But that picture rests more on faith than on data: The asymmetries in the relevant parameters of quantum physics—in particular, a phenomenon called CP violation—currently appear to be smaller than necessary.

    The uncertainty opened the way for a second scenario: that the universe started off with equal amounts of matter and antimatter, segregated in nonoverlapping domains. When the newborn universe went through a spurt of exponential growth, called inflation, these domains grew so quickly that they never had time to annihilate completely. If so, the universe today would have huge domains of antimatter, on the scale of galaxy clusters or larger. These antigalaxies would look to us like the ordinary variety, but, says Cohen, there should be telltale signs of matter-antimatter annihilation at the boundaries between domains.

    If the matter and antimatter domains are nearby in time and space, the high-energy gamma-rays from their boundaries would have been seen already, he says. But the signal from larger domains—at least the size of superclusters—could have been missed. Annihilation would have begun at their boundaries early in the history of the universe, says Cohen. “The gamma-rays would be smeared out, redshifted to lower energies by the expansion of the universe. Now, there is a diffuse gamma-ray background, and no one is exactly sure where it comes from. The suggestion that it comes from antimatter is an old one, going back several decades.”

    So he, de Rújula, and Glashow tested that idea by computing the spectrum of diffuse photons from matter-antimatter annihilation in the early universe. The process can be thought of as “the ultimate bomb,” de Rújula says. A gas touching an antigas annihilates in an explosion of light, particles, and antiparticles, which in turn heats both the gas and antigas, causing them to annihilate faster, producing yet more annihilation and more heat and so on.…While calculating the energy from this chain reaction is “pretty difficult,” says de Rújula, “it's just a case of laboriously applying our knowledge to a very complicated thermodynamic calculation.”

    The three physicists conclude that even in the most conservative analysis, matter-antimatter annihilation should produce a signal five times as large as the observable diffuse gamma-ray background. “It's an awfully big effect,” says Glashow.

    Cohen notes that there are still a few loopholes: For instance, if the universe consists of just two domains, one entirely matter and the other entirely antimatter, the analysis wouldn't hold. “If you looked in one direction, you might not see any gamma-rays at all,” he says. “So we don't have anything to say about that case.”

    Schramm says the analysis definitely reinforces the “prior prejudices” of theorists that the antimatter isn't there. But the work wasn't done just for the enlightenment of theorists. In 1995, physicist Sam Ting of CERN and the Massachusetts Institute of Technology began work on a detector to fly on the space station that would search for antimatter cosmic rays, such as nuclei of anticarbon, coming from distant antigalaxies (Science, 12 January 1996, p. 142). Ting's experiment is scheduled to be tested on the space shuttle this May.

    Ting says he promised to buy de Rújula, Glashow, and Cohen dinner if their analysis supported the possibility that his detector will see cosmic antinuclei. Now dinner is off, he says. But Ting, who is famous for knowing when to ignore the predictions of theorists and won a Nobel Prize by doing just that, isn't discouraged. “The most important thing is that no precision experiment has ever been done” to measure the composition of cosmic rays. Adds Glashow, “We're not exactly saying it's impossible for [Ting] to find antimatter. We're saying that from the context of current cosmology it's impossible. So if he finds it, he upsets the whole apple cart.”
