News this Week

Science  02 Jan 1998:
Vol. 279, Issue 5347, pp. 28
  1. Calibrating the Mitochondrial Clock

    1. Ann Gibbons

    Mitochondrial DNA appears to mutate much faster than expected, prompting new DNA forensics procedures and raising troubling questions about the dating of evolutionary events

    In 1991, Russians exhumed a Siberian grave containing nine skeletons thought to be the remains of the last Russian tsar, Nicholas II, and his family and retinue, who were shot by firing squad in 1918. But two bodies were missing, so no one could be absolutely certain of the identity of the remains. And DNA testing done in 1992—expected to settle the issue quickly—instead raised a new mystery.

    Some of the DNA from the tsar's mitochondria—cellular organelles with their own DNA—didn't quite match that of his living relatives. Forensic experts thought that most people carry only one type of mitochondrial DNA (mtDNA), but the tsar had two: The same site sometimes contained a cytosine and sometimes a thymine. His relatives had only thymine, a mismatch that fueled controversy over the authenticity of the skeletons.

    The question of the tsar's bones was finally put to rest after the remains of his brother, the Grand Duke of Russia Georgij Romanov, were exhumed; the results of the DNA analysis were published in Nature Genetics in 1996. Like the tsar, the duke had inherited two different sequences of mtDNA from their mother, a condition known as heteroplasmy. But solving the mystery of the Romanovs' remains raised another puzzle that first troubled forensics experts and is now worrying evolutionists. “How often will this heteroplasmy pop up?” wondered Thomas J. Parsons, a molecular geneticist at the Armed Forces DNA Identification Laboratory in Rockville, Maryland, who helped identify the tsar's bones.

    Several new studies suggest that heteroplasmy may in fact be a frequent event. They have found that it occurs in at least 10% and probably 20% of humans, says molecular biologist Mitchell Holland, director of the Armed Forces lab. And because heteroplasmy is caused by mutations, this unexpectedly high incidence suggests that mtDNA mutates much more often than previously estimated—as much as 20-fold faster, according to two studies that are causing a stir. Other studies have not found such rapid mutation rates, however.

    Resolving the issue is vital. For forensic scientists like Parsons, who use mtDNA to identify soldiers' remains and to convict or exonerate suspects, a high mutation rate might cause them to miss a match in their samples. It could also complicate the lives of evolutionary scientists who use the mtDNA mutation rate as a clock to date such key events as when human ancestors spread around the globe.

    Evolutionists have assumed that the clock is constant, ticking off mutations every 6000 to 12,000 years or so. But if the clock ticks faster or at different rates at different times, some of the spectacular results—such as dating our ancestors' first journeys into Europe at about 40,000 years ago—may be in question. “We've been treating this like a stopwatch, and I'm concerned that it's as precise as a sun dial,” says Neil Howell, a geneticist at the University of Texas Medical Branch in Galveston. “I don't mean to be inflammatory, but I'm concerned that we're pushing this system more than we should.”

    Counting mutations

    The small circles of DNA in mitochondria have been the favored tool for evolutionary and forensic studies since their sequence was unraveled in 1981. Unlike the DNA in the nucleus of the cell, which comes from both egg and sperm, an organism's mtDNA comes only from the mother's egg. Thus mtDNA can be used to trace maternal ancestry without the complicating effects of the mixing of genes from both parents. And every cell in the body has hundreds of energy-producing mitochondria, so it's far easier to retrieve mtDNA than nuclear DNA.

    It seemed like a relatively straightforward genetic system. Researchers could count the differences in the same sequence of mtDNA in different groups of people and, assuming a constant mutation rate, calculate how long ago the populations diverged. But the case of the tsar highlights how little is known about the way mtDNA is inherited. His mother must have carried or acquired a mutation, so there were hundreds of copies of each of two kinds of mtDNA in her egg cells. She then passed some of each kind to her sons. But just how often do such mutations occur?

    The most widely used mutation rate for noncoding human mtDNA relies on estimates of the date when humans and chimpanzees shared a common ancestor, taken to be 5 million years ago. That date is based on counting the mtDNA and protein differences between all the great apes and timing their divergence using dates from fossils of one great ape's ancestor. In humans, this yields a rate of about one mutation every 300 to 600 generations, or one every 6000 to 12,000 years (assuming a generation is 20 years), says molecular anthropologist Mark Stoneking of Pennsylvania State University in University Park. Those estimates are also calibrated with other archaeological dates, but nonetheless yield wide margins of error in published dates. But a few studies have begun to suggest that the actual rates are much faster, prompting researchers to think twice about the mtDNA clock they depend upon.

    For example, after working on the tsar's DNA, Parsons was surprised to find heteroplasmy popping up more frequently than expected in the families of missing soldiers. He and his colleagues in the United States and England began a systematic study of mtDNA from soldiers' families and Amish and British families. Like most such studies, this one compares so-called “noncoding” sequences of the control region of mtDNA, which do not code for gene products and therefore are thought to be free from natural selection.

    The researchers sequenced 610 base pairs of the mtDNA control region in 357 individuals from 134 different families, representing 327 generational events, or times that mothers passed on mtDNA to their offspring. Evolutionary studies led them to expect about one mutation in 600 generations (one every 12,000 years). So they were “stunned” to find 10 base-pair changes, which gave them a rate of one mutation every 40 generations, or one every 800 years. The data were published last year in Nature Genetics, and the rate has held up as the number of families has doubled, Parsons told scientists who gathered at a recent international workshop* on the problem of mtDNA mutation rates.
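
    For readers who want to check the arithmetic, here is a minimal sketch in Python. The 20-year generation time is the article's own assumption; the published figure of one mutation per 40 generations presumably reflects corrections beyond the naive ratio computed here.

```python
# Rough check of the rates quoted above (assumes the 20-year generation time).
GENERATION_YEARS = 20

# Phylogenetic expectation: about one mutation per 600 generations in this segment.
print(600 * GENERATION_YEARS)                          # 12000 years per mutation

# Pedigree observation: 10 substitutions in 327 mother-to-child transmissions.
mutations, transmissions = 10, 327
naive_gens_per_mutation = transmissions / mutations    # ~33 generations
print(naive_gens_per_mutation * GENERATION_YEARS)      # ~650 years per mutation
# The published figure of one per 40 generations (one per 800 years) is of the
# same order; the difference presumably reflects corrections to this naive ratio.
```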

    Howell's team independently arrived at a similar conclusion after looking deep within the pedigree of one Australian family affected with Leber hereditary optic neuropathy, a disease caused by an mtDNA gene mutation. When the researchers analyzed mtDNA from 40 members of this family, they found that one individual carried two mutations in the control region (presumably unrelated to the disease, because it is noncoding mtDNA). That condition is known as triplasmy because, counting the nonmutated sequence, he had three different mtDNA sequences in his cells.

    By tracing the mutations back through the family pedigree, Howell was able to estimate that both mutations probably arose in the same woman who was born in 1861, yielding an overall divergence rate of one mutation every 25 to 40 generations. “Both of our studies came to a remarkably similar conclusion,” says Howell, whose study was published in late 1996 in the American Journal of Human Genetics. Both also warned that phylogenetic studies have “substantially underestimated the rate of mtDNA divergence.”

    Several teams of evolutionists promptly went back to their labs to count mtDNA mutations in families of known pedigree. So far, Stoneking's team has sequenced segments of the control region in closely related families on the Atlantic island of Tristan da Cunha, where pedigrees trace back to five female founders in the early 19th century. But neither that study nor one of 33 Swedish families has found a higher mutation rate. “After we read Howell's study, we looked in vain for mutations in our families,” says geneticist Ulf Gyllensten of Uppsala University in Sweden, whose results are in press in Nature Genetics. More work is under way in Polynesia, Israel, and Europe.

    Troubled by the discrepancy in their results, the scientists have pooled their data with a few other studies showing heteroplasmy, hoping to glean a more accurate estimate of the overall mutation rate. According to papers in press by Parsons and by Stoneking and Gyllensten, the combined mutation rate—one mutation per 1200 years—is still higher than the one mutation per 6000 to 12,000 years estimated by evolutionists, although not as fast as the rate observed by Parsons and Howell. “The fact that we see such relatively large differences among studies indicates that we have some unknown variable which is causing this,” says Gyllensten.

    Because few studies have been done, the discrepancy in rates could simply be a statistical artifact, in which case it should vanish as sample sizes grow larger, notes Eric Shoubridge, a molecular geneticist at the Montreal Neurological Institute. Another possibility is that the rate is higher in some sites of the DNA than others—so-called “hot spots.” Indeed, almost all the mutations detected in Parsons and Howell's studies occur at known hot spots, says University of Munich molecular geneticist Svante Pääbo.

    Also, the time span of observation plays a role. For example, because hot spots mutate so frequently, over tens of thousands of years they can revert back to their original sequences, overwriting previous mutations at that site. As a result, the long-term mutation rate would underestimate how often hot spots mutate—and the average long-term mutation rate for the entire control region would be slower than that from near-term studies of families. “The easiest explanation is that these two rates are caused by hot spots,” says Pääbo.
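
    A toy simulation, not drawn from the article, illustrates Pääbo's point: if a single two-state site flips back and forth often enough, the visible difference saturates while the true number of mutation events keeps growing.

```python
import random

def hot_spot(rate, generations, trials=2000):
    """Toy model: one two-state site flipping at `rate` per generation.
    Returns (mean number of actual mutation events, fraction of lineages
    that still *look* mutated at the end). Reversions hide earlier events."""
    events = visible = 0
    for _ in range(trials):
        state, n = 0, 0
        for _ in range(generations):
            if random.random() < rate:
                state ^= 1        # mutate, or revert to the original base
                n += 1
        events += n
        visible += state
    return events / trials, visible / trials

for gens in (50, 500, 5000):
    actual, seen = hot_spot(0.001, gens)
    print(f"{gens:>5} generations: {actual:.2f} events, {seen:.2f} visible")
# Actual events grow linearly with time, but the visible difference saturates,
# so a rate calibrated over long spans underestimates how often the site mutates.
```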

    If so, these short-term rates need not perturb long-term studies. “It may be that the faster rate works on the short time scale and that you use the phylogenetic rate for long-term events,” says Shoubridge.

    But Parsons doubts that hot spots account for all the mutations he has observed. He says that some of the difference between the long-term and short-term rates could be explained if the noncoding DNA in the control region is not entirely immune to selection pressure. The control region, for example, promotes replication and transcription of mtDNA, so any mutation that interferes with the efficiency of these processes might be deleterious and therefore selected against, reducing the apparent mutation rate.

    Regardless of the cause, evolutionists are most concerned about the effect of a faster mutation rate. For example, researchers have calculated that “mitochondrial Eve”—the woman whose mtDNA was ancestral to that in all living people—lived 100,000 to 200,000 years ago in Africa. Using the new clock, she would be a mere 6000 years old.
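
    The rescaling is simple inverse proportionality: a coalescence date shrinks by the same factor by which the assumed mutation rate speeds up. A back-of-envelope sketch using the rates quoted above; the exact inputs behind the "6000 years" figure are not spelled out in the article.

```python
# Coalescence dates scale inversely with the assumed mutation rate.
slow_years_per_mutation = 12_000   # phylogenetic calibration (upper figure)
fast_years_per_mutation = 800      # pedigree-based rate reported by Parsons

for eve_date in (100_000, 200_000):
    rescaled = eve_date * fast_years_per_mutation / slow_years_per_mutation
    print(f"{eve_date:>7,} yr becomes {rescaled:,.0f} yr")
# 100,000 yr becomes ~6,700 yr and 200,000 yr becomes ~13,300 yr,
# roughly the "mere 6000 years" quoted above.
```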

    No one thinks that's the case, but at what point should models switch from one mtDNA time zone to the other? “I'm worried that people who are looking at very recent events, such as the peopling of Europe, are ignoring this problem,” says Laurent Excoffier, a population geneticist at the University of Geneva. Indeed, the mysterious and sudden expansion of modern humans into Europe and other parts of the globe, which other genetic evidence puts at about 40,000 years ago, may actually have happened 10,000 to 20,000 years ago—around the time of agriculture, says Excoffier. And mtDNA studies now date the peopling of the Americas at 34,000 years ago, even though the oldest noncontroversial archaeological sites are 12,500 years old. Recalibrating the mtDNA clock would narrow the difference (Science, 28 February 1997, p. 1256).

    But not everyone is ready to redate evolutionary history on the basis of a few studies of mutation rates in living people. “This is all a fuss about nothing,” says Oxford University geneticist Martin Richards, who thinks the fast rate reaches back hundreds of years at most.

    That, however, is squarely within the time frame of forensics cases. Heteroplasmy isn't always a complicating factor in such analyses. When it exists in more than one family member, the confidence in the identification gets stronger, as in the case of the tsar. But otherwise, it could let a criminal off the hook if his mtDNA differed by one nucleotide from a crime scene sample. Therefore, Parsons and Holland, in their work identifying 220 soldiers' remains from World War II to the present, now have new guidelines—adopted by the FBI as well—to account for a faster mutation rate. When a missing soldier's or criminal suspect's mtDNA comes up with a single difference from that of a relative or from a crime scene sample, the scientists no longer call it a “mismatch.” Instead the results are considered “inconclusive.” And, for now, so are some of the evolutionary results gained by using the mtDNA clock.

    * First International Workshop on Human Mitochondrial DNA, 25 to 28 October 1997, Washington, D.C.

  2. MEETING BRIEFS

    Geophysicists Ponder Hints Of Otherworldly Water

    1. Richard A. Kerr

    SAN FRANCISCO—More than 7000 geophysicists gathered here from 8 to 12 December for the annual fall meeting of the American Geophysical Union (AGU). All eyes were on a session addressing the claim that tiny comets pummel Earth every minute, but more distant wonders—oceans within Jupiter's moons—were one topic in a little-noticed Friday afternoon session.

    Tidings of a Hidden Ocean?

    Looking at the tortured, icy surface of Jupiter's moon Europa, it's easy to imagine a liquid ocean stirring below it. Indeed, planetary geologists observing the fractures and disruption of the crust—features also seen in the ice pack of our Arctic Ocean—have argued that an ocean, one that could be teeming with life, lies just a few kilometers down (Science, 19 December 1997, p. 2041). But the geological arguments aren't absolutely convincing; they don't rule out the possibility that the moon once had an ocean that has long since frozen solid. Now an unexpected new clue to an ocean on Europa—as liquid and briny as our own—has emerged.

    [Figure] An ice-encrusted ocean? This color-enhanced image of Europa dramatizes the surface disruption, suggesting that an ocean lies below. (NASA/JPL)

    Looping past the satellite, the Galileo spacecraft has picked up what may be traces of a magnetic field induced in a subsurface ocean by Jupiter's own powerful field. The evidence is not yet conclusive, but if further close flybys bear it out, “it would be the strongest evidence so far for an ocean,” says planetary physicist David Stevenson of the California Institute of Technology. And Europa may not be unique. In a surprising twist, the same kinds of clues are also hinting at an ocean in another jovian moon, Callisto.

    Soon after Galileo began inspecting Jupiter's four major satellites in December 1996, it picked up unexpected magnetic signatures near these bodies. Ganymede had an unmistakable magnetic field, generated in the same way as Earth's: by the churning of a liquid metal core. But the fields Galileo detected near Io, Europa, and Callisto were much weaker. Instead of originating within the moons, researchers speculated, these weak magnetic signatures could result from plasmas of charged particles swirling nearby.

    Then two researchers, Stevenson and planetary magnetist Fritz Neubauer of the University of Köln in Germany, independently began thinking of other ways to explain the weak fields Galileo had detected. Both hit on the idea that they might be induced within a satellite by Jupiter's own magnetic field. For example, Stevenson reasoned that if Europa did have an ocean 10 to 100 kilometers beneath its surface ice, the jovian magnetic field would be sweeping through the water constantly because the tilted field wobbles like a tipsy top as the planet rotates. If the ocean were salty enough, and thus a good electrical conductor, the moving field would induce electrical currents—called eddy currents—in the ocean. These currents would in turn create a magnetic field oriented roughly opposite to Jupiter's.
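
    The underlying physics can be sketched with a textbook result: if the ocean's electromagnetic skin depth is small compared with its thickness, the moon responds to the wobbling field almost like a perfectly conducting sphere, whose induced dipole field at the surface is of order half the driving field. A rough numerical sketch, with assumed values (seawater-like conductivity, a 200-nanotesla driving field) that are illustrative rather than from the article:

```python
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, T*m/A

sigma = 2.75                # assumed ocean conductivity, S/m (seawater-like)
period_s = 11.2 * 3600      # Jupiter's rotation period as seen from Europa
omega = 2 * math.pi / period_s

# Skin depth: how far the wobbling jovian field penetrates a conductor.
skin_depth_km = math.sqrt(2 / (MU0 * sigma * omega)) / 1e3
print(f"skin depth ~ {skin_depth_km:.0f} km")     # ~60 km

# An ocean much thicker than the skin depth behaves almost like a perfect
# conductor: eddy currents cancel the driving field inside, and the induced
# dipole field just outside reaches about half the driving amplitude at the
# equator (and the full amplitude at the poles).
B0_nT = 200                 # assumed oscillating field amplitude at Europa, nT
print(f"induced field at the surface ~ {B0_nT / 2:.0f} nT or more")
```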

    At the AGU meeting, Lee Bennett, Margaret Kivelson, and Krishan Khurana of the University of California, Los Angeles (UCLA), reported what Kivelson called a “rather impressive” fit between Stevenson's calculated field and the observations. Galileo's first magnetic field data from near Europa showed that the field at the spacecraft waxed and waned during the flyby much as Stevenson had predicted for an induced field. “I was shocked and pleased,” says Kivelson, who is the principal investigator of the Galileo magnetometer. “It was the first time it began to make sense.” A second Europa pass recorded a magnetic signature that was much more variable, but still roughly consistent with an induced field, says Stevenson.

    The UCLA group then applied Stevenson's model to Callisto. Galileo gravity data had initially seemed to imply that Callisto is nothing but a mix of rock and ice. Neubauer suggested, however, that the magnetic signature Galileo picked up looked much like an induced field. He was right. “The astonishing thing is that the model works even better for Callisto” than for Europa, says Stevenson—suggesting that Callisto has a salty ocean 200 kilometers down. According to geophysicist Gerald Schubert of UCLA and his Galileo colleagues, the gravity data don't rule out such an ocean after all. Their revised interpretation, reported at the meeting, calls for an outer shell of water that could be frozen on the outside but liquid inside.

    More flybys during Galileo's 2-year “extended” mission, which began in December, could firm up the case for one or both of these oceans. Stevenson remains cautious, however. “I still wonder whether there's some way external to the satellite to produce this effect,” he says. “We have a poor enough understanding of the plasmas around these bodies that maybe we're missing something.”

    ‘Atmospheric Holes’ Assailed

    The idea that house-sized comets are pelting the Earth 25,000 times a day appears just as implausible now as it did when space physicist Louis Frank of the University of Iowa in Iowa City first proposed it in 1986. But the dark spots seen in satellite images of the upper atmosphere, which Frank had identified as “atmospheric holes”—the watery traces of his comets—have come back to tease space scientists. Dismissed as instrument noise 10 years ago, the spots reappeared last year in images from a new satellite, Polar, persuading a few researchers that something peculiar—although not a rain of small comets—is going on in the upper atmosphere (Science, 30 May 1997, p. 1333). Frank and his critics are now deadlocked about whether the spots are real, and they battled at a packed session of the meeting.

    Frank's claim had seemed stronger last year because he said that the spots appeared not only in his own camera aboard Polar, but also in the Ultra-Violet Imager (UVI) of one of his colleagues on the Polar team, George Parks of the University of Washington, Seattle. Frank also cited the “smearing” of some spots as evidence that they were real, saying it resulted from the Polar spacecraft's unintended wobble.

    But Parks had his name removed from the paper claiming a two-camera detection, and his own analysis of spots in his UVI images, which he first took public a month and a half ago, suggested that the spots behave just like artifacts somehow produced inside the camera. (See Science, 14 November 1997, p. 1217; the results also appeared in the 15 December issue of Geophysical Research Letters.) He found, for example, that the number of spots of a given size decreased steadily from the smallest to the largest, just like the noise in lab calibration images. And instead of being smeared into two spots along the direction of the Polar spacecraft's wobble, as Frank had claimed, closely spaced spots in UVI images fell in random directions from each other.

    Parks has now repeated the same type of analysis on spots from Frank's visible-light imager and reports finding the same noiselike behavior. The spots are “internal to the camera,” he said at the meeting. “There's no evidence anything is coming from the outside.”

    Frank responded that Parks is analyzing the wrong spots. All but the largest dark spots are instrument noise, Frank said, adding, “There's no reason to include these enormous amounts of noise. You have to do something else to the data.” If the spots are indeed clouds of water from incoming small comets, he reasoned, a cloud's motion across the field of view should add a third spot to the two created by the wobble motion of the spacecraft. Five of the candidate atmospheric holes in Parks's UVI images, said Frank, have three spots in the triangular orientation predicted by Polar's wobble. “I don't see how you can miss that sort of thing,” said Frank.

    Frank also looked at whether the spots shrink and disappear when Polar's eccentric orbit carries it farther above the atmosphere, as they should if they are real. Eighty percent of the large dark spots that Frank thinks are real holes disappeared between altitudes of around 25,000 kilometers and 41,000 kilometers, he said. “There's nothing you can do about this,” said Frank. “It's the ultimate test. The holes are a geophysical phenomenon.”

    Parks, among many others, remained unconvinced. He notes that if the holes are real, then the three spots each one produces should be connected by slightly darkened streaks where the image was smeared across the detectors, but the three spots in the example he saw Frank present actually had slightly brighter spaces between them. And although Parks has not yet checked the altitude dependence of spots, he notes that Bruce Cragin of CES Network Services in Farmers Branch, Texas, and his colleagues did apply the altitude test in 1987 to spots in images from the Dynamics Explorer spacecraft, in which Frank first discovered atmospheric holes. When spacecraft altitude varied by a factor of 3, the abundance as well as the size of spots remained the same.

    “I don't think the San Francisco exercise has changed many minds,” says planetary scientist Thomas Donahue of the University of Michigan, Ann Arbor, who has leaned toward accepting the atmospheric holes as real. But some minds could change after other researchers get their shot at the data. NASA is expected to fund more analyses this year, and Frank plans to release all his Polar data on CD-ROMs this month, so that other space scientists can see—or fail to see—the spots for themselves.

  3. MATHEMATICS

    Sieving Prime Numbers From Thin Ore

    1. Barry Cipra

    Mathematicians have known since Euclid that there is an infinite number of prime numbers. For the last 100 years, they have even had a good way to determine, approximately, how many primes there are up to any given number. But the finer points of how primes are distributed remain, by and large, mysterious. In particular, mathematicians are almost always unable to take an infinite but sparsely distributed set of integers, such as the values of n² + 1, and tell how rich in primes it is.

    [Figure] Prime cut. The sieve captures the primes in the sequence of numbers of the form a² + b⁴, up to 100. (Illustration: L. Carroll)

    That barrier is beginning to yield. In what number theorists are calling a major breakthrough, two mathematicians have developed powerful new techniques for assaying such “thin” subsets of integers for primes. As John Friedlander of the University of Toronto and Henryk Iwaniec of Rutgers University in New Brunswick, New Jersey, report in a paper to appear in the Annals of Mathematics, they have refined a tool known as the asymptotic sieve, developed in the 1970s by Enrico Bombieri of the Institute for Advanced Study in Princeton, New Jersey. Their first conquest: a remarkably thin sequence consisting of numbers of the form a² + b⁴. Friedlander and Iwaniec's new sieve shows that even though most such numbers are composite—products of prime factors—the sequence includes an infinite number of primes.

    “This is totally new,” says Bombieri. The conclusion, he adds, “is what you would expect from heuristic arguments, but to prove things is another matter!” In his opinion, Friedlander and Iwaniec have written “one of the most important papers in analytic number theory of the century.” It “will find a lot of applications” in exploring the distribution of primes.

    Roughly speaking, a mathematical sieve determines the abundance of primes in a long list of numbers by estimating how many numbers remain when multiples of small primes are removed—a procedure that sifts out most composite numbers. For example, of the numbers between 169 and 289 (the squares of the primes 13 and 17), roughly half remain when you delete the even numbers, two-thirds of those remain after the multiples of 3 are removed, etc. Sieving the 120 numbers in the sequence yields an estimate of 120 × (1/2) × (2/3) × (4/5) × (6/7) × (10/11) × (12/13) = 23 primes. That's close to the exact count, 22. Similar sieves can be designed for other number sequences. The real work comes in analyzing the errors in such estimates to get rigorous results.
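
    The estimate is easy to reproduce; a few lines of Python (mirroring the article's numbers, not Friedlander and Iwaniec's machinery) check both the sieve product and the exact count:

```python
def is_prime(n):
    """Trial division; fine at this scale."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

lo, hi = 13**2, 17**2                                # 169 and 289
print(sum(is_prime(n) for n in range(lo + 1, hi)))   # exact count: 22

# Sieve estimate: strike the expected share of multiples of 2, 3, ..., 13.
estimate = 120.0                                     # the article's count of numbers
for p in (2, 3, 5, 7, 11, 13):
    estimate *= (p - 1) / p
print(round(estimate))                               # 23
```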

    The errors are easiest to estimate for “dense” sequences such as 1, 5, 9, 13, etc.—a progression that contains roughly one-fourth of all numbers up to a given size. But Friedlander and Iwaniec's sequence contains a vanishingly small fraction of the integers. That thinness makes it impossible to estimate the errors in the usual way, putting the sequence out of the reach of previous sieves. “Nobody dreamed you could analyze such sequences,” says Bombieri.

    The new techniques rely on special algebraic properties of the formula a² + b⁴. In a number system known as the Gaussian integers, which enlarges the set of ordinary integers by including i, the square root of −1, such numbers can always be factored as (a + b²i)(a − b²i). By putting their numbers into this form, Friedlander and Iwaniec were able to exploit the well-developed theory of algebraic numbers to get a handle on the errors when they applied their sieve.
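
    Both the identity and the figure's tally are easy to verify numerically; a quick sketch (the search bounds are simply what is needed to reach 100):

```python
def is_prime(n):
    return n > 1 and all(n % f for f in range(2, int(n**0.5) + 1))

# The identity a^2 + b^4 = (a + b^2*i)(a - b^2*i), checked with one example.
a, b = 3, 2
z = complex(a, b**2)
assert (z * z.conjugate()).real == a**2 + b**4

# Primes of the form a^2 + b^4 up to 100 (cf. the figure above).
hits = sorted({a*a + b**4
               for a in range(1, 10) for b in range(1, 4)
               if a*a + b**4 <= 100 and is_prime(a*a + b**4)})
print(hits)    # [2, 5, 17, 37, 41, 97]
```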

    The combination of algebraic number theory and sieve techniques is what Peter Sarnak of Princeton University finds most impressive. “These are two different worlds, algebra and the sieve,” he notes. But because the technique relies on properties found in only a small fraction of sequences, many of sieve theory's dearest problems still look hopeless. In particular, numbers of the form n² + 1 (1, 2, 5, 10, 17, 26, etc.) almost certainly include an infinite number of primes; indeed, number theorists have even conjectured an estimate for how many there are up to any given integer in the sequence. But no proof appears to be forthcoming. Number theorists are also convinced that each interval between consecutive squares contains at least one prime, but have no idea how to prove it.

    “The answer to most of these questions is, we don't know,” says Andrew Granville, a number theorist at the University of Georgia in Athens. “It's frightening how pathetic our knowledge is!”

  4. PHARMACOLOGY

    New Nonopioid Painkiller Shows Promise in Animal Tests

    1. Evelyn Strauss
    1. Evelyn Strauss is a free-lance writer in San Francisco.

    Although morphine has reigned for centuries as the king of painkillers, its rule hasn't been totally benign. Worries about its addictive properties and side effects, such as respiratory depression, have caused many doctors and patients to shy away from it. But a lowly frog could end up threatening morphine's reign. By starting with a toxin found in that animal's skin, researchers have produced a potential new painkiller that works by a different mechanism than morphine and may thus lack some of the opioid's drawbacks.

    [Figure] Blocked out. Epibatidine binding (yellow to red) in the spinal cord is reduced when ABT-594 is present (bottom), an indication that both compounds bind the same receptors. (James Pauly)

    On page 77, a research team including neuropharmacologist Stephen Arneric, behavioral pharmacologist Michael Decker, and chemist Mark Holladay of Abbott Laboratories in Abbott Park, Illinois, reports promising results in animal tests with a new drug called ABT-594. The researchers found that the drug, which apparently acts not through opioid receptors but through a receptor for the neurotransmitter acetylcholine, blocks both acute and chronic pain in rats. What's more, the researchers have seen few signs that ABT-594 is addictive or toxic in animals.

    Much more work will be needed to determine whether the drug is safe and effective in humans, but Abbott has already started safety trials in Europe and hopes to conduct trials in this country as well. “If it works in people, it's going to be a completely new kind of pain reliever,” says Howard Fields, professor of physiology and neurology at the University of California, San Francisco.

    Such drugs are urgently needed, especially for chronic pain, says pharmacologist Edgar Iwamoto of the University of Kentucky College of Medicine in Lexington. He notes that “30 [million] to 40 million people in the United States have moderate to severe pain that ibuprofen and aspirin just can't handle.” And not only do many people shy away from opioids, but “they don't work that well for chronic pain,” he says.

    Researchers got their first inkling that they might be able to block pain by targeting the acetylcholine receptor in 1932, when they found that nicotine, which binds to one variant of the receptor, dampens pain. The finding lay dormant for decades, however, largely because nicotine is a weak analgesic and causes serious side effects. The field didn't wake up until the mid-1990s, when a compound originally identified in frog skin by chemist John Daly of the National Institute of Diabetes and Digestive and Kidney Diseases burst into the limelight.

    In 1976, Daly injected mice with an extract from the skin of a frog he had collected in Ecuador and found that the animals' tails stood up and arched over their backs. This intrigued Daly, because he knew that many opioids induce a similar response. “I still remember looking at those mice and getting so excited,” Daly says.

    Even more exciting, the material, which he called epibatidine after the frog, Epipedobates tricolor, was 200 times more potent than morphine at blocking pain in animals. And epibatidine remained effective even when he added chemicals that inhibit opioid action, an indication that it worked through a different set of receptors.

    Daly's efforts to isolate and characterize epibatidine were stymied, however, when lab-grown frogs turned out not to make the compound and he could no longer collect the frog in the wild because it had landed on the threatened species list. Daly stored his irreplaceable sample in the freezer and waited, hoping that technology would eventually become powerful enough to tell him what the mysterious compound looked like.

    That didn't happen until about 10 years later, when Daly lab members Thomas Spande and Martin Garraffo used nuclear magnetic resonance spectroscopy to determine epibatidine's structure. It resembles that of nicotine: Both have a pyridine ring attached to another ring that contains an amine group. Once the structure was known, several research groups jumped into the fray and synthesized epibatidine. Daly's group went on to show that epibatidine activates the nicotinic acetylcholine receptor.

    But while epibatidine is a potent analgesic, it is too toxic for human use. In lab animals, it causes seizures and even death. By then, however, the Abbott team was also interested. They had noticed that epibatidine resembles drugs, also aimed at the nicotinic receptor, that the company was studying in hopes of developing treatments for Alzheimer's disease. So the researchers fiddled with their compounds, trying to create a derivative that exclusively kills pain. Out of some 500 variants they produced and then screened in animals, the company decided to focus on ABT-594 because it seemed to work against different types of pain and produce few side effects.

    As the team now reports, ABT-594 was as effective as morphine in dampening pain from stimuli such as heat or stinging chemicals in rats. The team also mimicked a type of chronic pain in which nerve damage predisposes to pain from stimuli that don't normally hurt. To do this, they surgically compressed a nerve in the spinal cord and then applied different amounts of pressure to the animal's paw. ABT-594 dulls this type of pain as well as morphine does, they found.

    At the same time, ABT-594 appears to spare other functions. By placing electrodes in the spinal cords of rats, the researchers showed that the drug hinders the ability of nerve cells to fire in response to harmful mechanical and thermal stimuli, but it does not affect responses to benign sensations such as touch or mild heat. The company also found, Arneric says, that ABT-594 depresses the respiratory system much less than morphine does and makes animals more alert instead of sedating them.

    ABT-594 may be less toxic than epibatidine because it has more selective effects on neurons. Tests on membranes containing acetylcholine receptors showed that ABT-594 binds several hundred thousand times more tightly to a nicotinic receptor from the central nervous system, where neurons process pain information, than to one that tells muscles to contract. For epibatidine, that ratio was only 57 to 1. “They came upon a compound that gets rid of the toxic effects of epibatidine and still has analgesic capabilities,” says Daly. “I would not have thought it possible.”

    And in at least one test, ABT-594 appeared to be nonaddictive. Rats that were taken off ABT-594 after being treated with a high dose for 10 days did not suffer the withdrawal symptom of appetite suppression seen after treatment with opioids. Other researchers point out, however, that ABT-594's mechanism of action raises the possibility that it will lead to other forms of dependency. “The company would hope that their drug isn't addicting because it doesn't act through the opioid receptor,” says Fields. “But nicotine is addictive, too.”

    The big question now is whether this early promise will be borne out when the compound is tested in humans. An indication ought to come in a few months when the first results from the European safety trials become available. “We're crossing our fingers and anxiously looking forward to the summer,” says Michael Williams, Abbott Labs vice president of neurological and urological drug discovery.

  5. IMAGING

    Putting the Infrared Heat on X-rays

    1. Robert F. Service

    Doctors would love to put an old workhorse—x-rays—out to pasture. Although x-rays are still an indispensable tool for diagnosing everything from broken pinkies to lung tumors, the energetic beams can damage DNA, posing a slight cancer risk. Infrared light, which can pass through tissue harmlessly, could bring a softer touch to medical imaging. But to turn it into a serious rival for x-rays, researchers need a simple way to compose an image from the few infrared photons that pass directly through tissue without getting scattered. In this issue of Science, a team from the University of Arizona in Tucson and the California Institute of Technology (Caltech) in Pasadena reports on a light-sensitive polymer that might do the job.

    The polymer, described on page 54, can change its optical properties in response to a subtle play of light: the interference between the few infrared photons traveling straight through a scattering medium and a separate infrared beam. The result is a pattern called a hologram, from which a three-dimensional (3D) image of the tissue can be reconstructed. Until now, separating the few straight-shooting photons from the scattered stragglers to make an image has required unwieldy gas-, liquid-, or crystal-based detectors, but the new polymers are easy to handle and cheap to process into film. “They could be very useful,” says Robert Alfano, an imaging researcher at the City College of New York. But experts acknowledge that infrared systems are a long way from the clinic, because these systems so far can only image thin tissue slices.

    For years, researchers have been tinkering with polymers that store holograms made by visible light. Storing an image requires relatively simple physics: Two laser beams—one carrying information about the image to be stored and the other a “reference” beam—intersect in the polymer, setting up an interference pattern of bright and dark areas. “Sensitizing” compounds in the polymer's lit regions absorb photons, which excite electrons to a higher energy level. Electrons from a surrounding matrix of charge-conducting molecules rush to fill the gaps in the electron shells, resulting in transient positive charges that ripple through the matrix until they are trapped in the dark regions. The light-dark interference pattern thus is reproduced in the polymer as a pattern of corresponding positive and negative charges.

    These islands of charge attract a third class of compounds in the polymer: dye. The stringy dye molecules themselves are polarized, having opposite charges on either end. The positive end swivels toward a cluster of negative charge, and vice versa. This reorientation alters the polymer's index of refraction, or the speed at which light moves through the film. The 3D pattern of varying refractive index is a hologram. As long as the charges remain fixed in one of these photorefractive polymers, a hologram persists. When lit up by another laser beam, the polymer diffracts the photons in a pattern, reproducing the original image.
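
    A minimal numeric sketch of the recording step, with made-up beam parameters (the wavelength and crossing angle below are illustrative, not the paper's): two equal plane waves produce sinusoidal fringes, and the polymer's refractive index ends up modulated with the same period.

```python
import math

# Two plane waves of the same wavelength crossing at a half-angle theta make
# fringes of spacing  wavelength / (2 sin theta); the trapped-charge pattern,
# and hence the index modulation, inherits this period.
wavelength_nm = 800.0     # assumed near-infrared wavelength
theta_deg = 15.0          # assumed half-angle between the beams
fringe_nm = wavelength_nm / (2 * math.sin(math.radians(theta_deg)))
print(f"fringe spacing ~ {fringe_nm:.0f} nm")

# Intensity across one fringe for equal-strength beams: bright where charges
# are freed, dark where they end up trapped.
for i in range(9):
    x = i / 8
    intensity = 2 * (1 + math.cos(2 * math.pi * x))   # in units of one beam
    print(f"x = {x:.3f} fringe  I = {intensity:.2f}")
```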

    The research team, led by Arizona's Bernard Kippelen and Nassar Peyghambarian, and Caltech's Seth Marder, realized that such holograms might be useful for medical imaging. Most photorefractive polymers, however, are sensitive to visible light, which tissues readily absorb or reflect. To exploit the near-infrared light that is best for probing tissue, Kippelen and his colleagues added a new sensitizer to standard polymers that releases positive and negative charges after absorbing infrared photons.

    This modification alone was not enough to make the polymer useful for medical imaging. The researchers also had to boost the polymers' signal-to-noise ratio to record the few photons that make it through tissue. To do so, they created new dye molecules that are adept at orienting their charged ends in the weaker electric fields created by fewer incoming photons. The souped-up polymer had the same sensitivity as the best visible-light photorefractive polymers while receiving only one-fourth as many photons.

    Next, Kippelen's group set out to reproduce the image of the number “5” in a photorefractive film. First, they generated an infrared laser pulse and split it into two separate beams. One beam passed through a transparent “5” in an otherwise opaque sheet of photographic film. Photons emerging from the “5” passed through polystyrene beads floating in an organic solvent, a scattering medium used to simulate human tissue, before impinging on the polymer. Inside the polymer, the first wave of photons—which had emerged unscattered—crossed paths with others from the second beam that had been routed around the barriers and timed to arrive simultaneously. As photons from the two beams interfered, they reproduced the “5” as a hologram that could be read by another infrared beam. Scattered photons arrived too late to set up an interference pattern with the second beam; thus they were unable to muddy the hologram.
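
    The gating is a matter of picoseconds; a rough sketch with assumed numbers (the cell thickness and scattering detour below are illustrative, not the paper's geometry):

```python
C_MM_PER_PS = 0.3        # speed of light, ~0.3 mm per picosecond

thickness_mm = 10.0      # assumed scattering cell, ~1 cm
detour_mm = 3.0          # assumed extra path a scattered photon wanders

ballistic_ps = thickness_mm / C_MM_PER_PS
scattered_ps = (thickness_mm + detour_mm) / C_MM_PER_PS
print(f"scattered light lags by ~ {scattered_ps - ballistic_ps:.0f} ps")

# A reference pulse only picoseconds long, timed to meet the ballistic
# photons, has already come and gone before most scattered photons arrive,
# so only the unscattered light writes the hologram.
```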

    Similar feats have been accomplished using cesium vapor and other materials as the holographic storage medium, says Irving Bigio, a holographic storage expert at Los Alamos National Laboratory in New Mexico. But the new polymer films, he says, “look far easier to use.” Major obstacles remain, however, before these new photorefractive polymers appear at the doctor's office. The key hurdle, Bigio says, is that no infrared technology designed so far can image tissue thicker than about 1 centimeter. The hunt is now on for new schemes to boost the number of usable photons, or make the most of the ones that get through. Until these efforts pay off, however, x-rays will remain a radiologist's best friend.

  6. CELL BIOLOGY

    Do Fateful Circles of DNA Cause Cells to Grow Old?

    1. Elizabeth Pennisi

    Ordinarily when we think of aging, it's the outward signs that come to mind: wrinkles, graying hair, and withering muscles. But if Leonard Guarente's hunch is right, time may leave a much more telling mark in the nuclei of our cells.

    [Figure] Generation gap. An aged yeast nucleus (blue) has a swollen, fragmented nucleolus (red), unlike the nuclei of young cells (left). (D. Sinclair and L. Guarente/MIT)

    In work described in the 26 December issue of Cell, Guarente and David Sinclair, molecular biologists at the Massachusetts Institute of Technology (MIT) in Cambridge, have linked aging and senescence in yeast to the buildup in the nucleus of circles of DNA that have popped out of the chromosomes and copy themselves each time the yeast cells replicate. The researchers propose that when enough of these circles accumulate, they clog the nucleus and prevent the cell from reading or replicating its genome, causing it to stop dividing and ultimately to die. “It's a fairly precise timing mechanism, and the effect on the cell will be quite gradual, which is how we recognize aging,” Guarente explains.

    No one has yet found these aberrant DNA circles in mammalian cells. But “all cells have the potential to suffer these circle pop-out events,” comments David Shore, a molecular geneticist at the University of Geneva in Switzerland. “There's obviously the tantalizing suggestion that this [mechanism] may be related to senescence in other organisms.” If that proves to be the case, then yeast should provide a useful model not only for learning about aging in humans but also for assessing ways to slow it down.

    Guarente discovered the circle buildup while using the budding yeast, Saccharomyces cerevisiae, to study Werner's syndrome, a hereditary disease in which people age prematurely and often die before reaching age 50. No one knows how defects in the Werner's syndrome gene cause premature aging. But the human gene, discovered over a year and a half ago (Science, 12 April 1996, pp. 193 and 258), has a yeast equivalent, called SGS1, which made it possible for Guarente to address the problem in that much simpler organism.

    Not only does SGS1 look like the human gene, but it also influences the rate of aging, the MIT team found. Cells of budding yeast can reproduce by mating, but more often they bud off daughter cells asexually. Normal cells can repeat the process about 25 times before they gradually enlarge and become unable to mate sexually with other yeast cells, telltale signs of old age. But the researchers found that the SGS1 mutants aged prematurely; they stopped budding and became sterile after only about nine rounds of budding (Science, 29 August 1997, p. 1313).

    At the time, Guarente and his colleagues noted an intriguing change in a small structure within the nucleus, known as the nucleolus. The nucleolus is where the RNAs that help make up the cell's protein factories, the ribosomes, are transcribed from the ribosomal DNA. Normally compact and crescent-shaped, the nucleoli of the mutant cells had become enlarged and fragmented. Now he knows the cause of the structural changes.

    With Sinclair, Guarente found that as mother cells replicate their DNA prior to budding, they also replicate small circles of ribosomal DNA. The researchers suspect that the first circle probably arises by accident because of the highly repetitive nature of ribosomal DNA. Such DNA is more likely to be mishandled and excised by the cell's DNA-processing machinery. “It's an enormous task to prevent a circle from forming,” Guarente suggests. But once formed, circles can replicate with the rest of the yeast cell DNA.

    By adding a marker to DNA that causes cells containing the circles to turn pink, the researchers also showed that the circles almost always stay in the mother cell. As a result, they accumulate over time, eventually reaching the point where this excess DNA equals the total yeast genome. “It changes the morphology of the nucleus,” says David Finkelstein, a yeast biologist at the National Institute on Aging in Bethesda, Maryland. Indeed, so great is the burden that the nucleolus seems to burst.
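
    A toy calculation, using assumed sizes rather than figures from the paper, shows why retained, self-copying circles make such a sharp timer: doubling every division, their DNA content overtakes the genome within about a dozen buddings.

```python
# Toy model of circle accumulation in a yeast mother cell. The sizes are
# assumptions for illustration (~9 kb per ribosomal DNA circle, ~12 Mb
# genome), not figures from the paper.
CIRCLE_KB = 9
GENOME_KB = 12_000

circles = 1                         # one accidental pop-out event
for division in range(1, 26):       # mothers manage about 25 buddings
    circles *= 2                    # circles replicate and stay in the mother
    if circles * CIRCLE_KB >= GENOME_KB:
        print(f"circle DNA rivals the genome after ~{division} more divisions")
        break
```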

    The circles are seen in both normal cells and the SGS1 mutants, but they accumulate much faster in the mutant cells, suggesting that they may be linked to the rate of aging. In addition, when the researchers took a length of DNA that, under specific conditions, releases a DNA circle and added it to another yeast strain, they found that its life-span decreased by 40%. “What [they have] found is that the ribosomal DNA is amplified and keeps building up,” says Finkelstein. “Aesthetically, it's a very pleasing observation.” In a different yeast strain, the researchers retarded the formation of the ribosomal DNA circles and extended the life-span by 25%.

    Guarente suspects that certain proteins normally hold circle formation in check and thereby slow aging. One is the SGS1 protein itself, which is concentrated in the nucleolus. Another is a set of molecules called SIR proteins, which inactivate or “silence” entire regions of genes whenever they bind to a chromosome. In earlier work, Guarente's group had shown that the SIR proteins gradually migrate to the nucleolus as yeast cells age. They also found that in a long-lived SIR mutant, this migration occurred earlier than usual, suggesting that the earlier shift delayed aging-related processes there.

    But Guarente and Sinclair haven't figured out how any of these proteins might control circle formation. And they have yet to prove that the circles are actually causing the cells to age, for example, by disrupting normal replication and transcription. Furthermore, although Guarente thinks human tissues with actively dividing cells could suffer a similar fate, Finkelstein is skeptical. “That he sees this phenomenon in yeast doesn't necessarily mean that's what happening in humans,” he cautions.

    But others are deeply intrigued. “They've really come up with the clearest molecular mechanism for aging that anyone has come up with yet,” says Shore.

    While yeast have very little repetitive DNA outside the ribosomal genes, humans have lots. That DNA might give rise to circles as it replicates, and the yeast work suggests that circles of any kind of DNA that can replicate could have an aging effect. “This could be the tip of the iceberg,” notes Lawrence Loeb, a molecular biologist at the University of Washington, Seattle.

    The next step is to try to find out whether human cells do, in fact, accumulate circles. But that could be quite a challenge, he and Guarente note, because they don't know what kind of cells to look at. Nevertheless, says Shore, the idea “is screaming out to be tested in humans.”
