News this Week

Science  17 Oct 2003:
Vol. 302, Issue 5644, pp. 40

    Researchers Await Government Response to Self-Regulation Plea

    David Malakoff and Martin Enserink

    Will U.S. scientists soon have to ask the government for permission to perform certain potentially dangerous biological experiments? That's the question swirling around lab benches in the wake of a much-anticipated National Academy of Sciences report that calls on scientists to do more to self-regulate research that could aid bioterrorists (Science, 10 October, p. 206). The report makes a strong pitch to keep the government out of the labs. Although some biodefense officials seem ready to give this idea a try, it's not clear whether they will ultimately be satisfied with a voluntary approach.

    “The goal is to create a culture of responsibility among researchers who work in biodefense and biotechnology,” says Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases and the government's point man on biodefense issues. Already, he says, officials are moving to set up a new high-level panel to review potentially worrisome experiments. And he expects the government to issue a detailed response to the academy report within a month.

    Last week's study,* produced by a panel led by Massachusetts Institute of Technology geneticist Gerald Fink, tackled a thorny issue: how to prevent the misuse of powerful gene-engineering technologies without hobbling beneficial studies. The panel's main answer was to reinvigorate and expand an existing self-governance system that evolved out of earlier battles over the risks of recombinant DNA research. Under that scheme, institutional biosafety committees at some 400 U.S. institutions review gene-engineering experiments and can toss controversial cases to a national Recombinant DNA Advisory Committee (RAC) based at the National Institutes of Health (NIH).

    The Fink committee recommends that the government beef up the RAC system to oversee a booming biodefense research enterprise, focusing especially on seven types of “experiments of concern” (see box). It also urges the Department of Health and Human Services (HHS) to appoint science and security experts to a new National Science Advisory Board for Biodefense to provide guidance, and it recommends that journal editors remain the prime gatekeepers for deciding whether to publish problematic findings. The United States also needs to push other nations to adopt stringent oversight, it says.


    The panel did not directly address a pending U.S. proposal under a new bioterror law to require researchers to get prior permission from HHS for certain risky experiments. That plan, laid out last December in draft rules governing the handling of dozens of dangerous “select agents,” drew criticism from many scientists (Science, 21 February, p. 1175). The American Society for Microbiology, for instance, said that cementing the restrictions into regulation would make it difficult to keep the rules up to date with new discoveries and could expose naïve rule breakers to criminal prosecution. (Scientists violating current RAC guidelines typically face, at most, withdrawal of their funding.) Now, the question is whether the government will add what one analyst calls “Fink's seven deadly sins” to its list of regulated experiments. Alternatively, it could scrap the idea entirely in favor of the panel's self-regulatory approach. The answer will come when it issues final rules, possibly as early as next month.

    Fink, for one, says that requiring government approval for such experiments “would be very bad” and “unnecessary.” The RAC experience has shown that researchers are eager to comply with sensible rules, he says, suggesting that they will be especially responsible with national security at stake.

    But at least one researcher advocates a tougher approach. “Any effective solution must involve not only voluntary self-governance but also expansion of regulatory authority,” says microbiologist Richard Ebright, a Howard Hughes Medical Institute investigator at Rutgers University in Piscataway, New Jersey. He predicts that the seven types of experiments will ultimately be regulated: “The report should set the floor for oversight, not the ceiling,” he says.

    On the same page?

    Fauci (bottom) says the government is already adopting some advice offered by a panel led by Fink (top).


    The government will weigh in with its views sometime next month. Early signals suggest that officials could adopt an incremental approach. White House science adviser John Marburger, for instance, says that the Administration “is very supportive of this kind of self-regulation” and is interested in “creating flexible mechanisms that can respond to changing risks.”

    Fauci, meanwhile, notes that even before the Fink committee finished its work, NIH Director Elias Zerhouni had decided to add a new committee to handle biodefense-related issues to the existing RAC. The panel, which could hold its first meeting next month, won't necessarily be limited to reviewing experiments involving recombinant DNA or bioterror agents, he says: “The idea is to look at anything of concern, … but it's still a work in progress.”

    Fink calls that start “terrific” but agrees that many details remain unresolved. One is how the scheme would apply to industry. Another is whether RAC's tradition of transparency will extend to biodefense experiments, given that such public airing could be useful to malefactors.

    Uncertainty also surrounds the scope of the seven categories. This summer, for instance, a research group from Northern Arizona University in Flagstaff published a paper describing a way to make Bacillus anthracis bacteria resistant to ciprofloxacin, the antibiotic of choice for anthrax. Terrorists could read the study as a recipe. But at least one author of the Fink report believes that similar studies probably wouldn't need review under the proposed regime because the researchers didn't engineer the resistance; they just grew bugs on a medium and selected the resistant ones. “That's considered to occur naturally,” says microbiologist Ronald Atlas of the University of Louisville, Kentucky. Fink, however, believes that such studies would definitely qualify for review before they could be carried out.

    Once such differences are worked out, Fink's panel may succeed, at least initially, in shielding scientists from onerous regulation, says Jonathan Tucker, an arms proliferation expert at the Washington, D.C., office of the Monterey Institute of International Studies. “It was like squaring the circle; they had to satisfy both the scientific community and the security community,” he says. If the plan is implemented, he predicts, the government “will take a wait-and-see attitude.”


    Spain Offers Helping Hand to Hospital Researchers

    Xavier Bosch*
    *Xavier Bosch is a science writer based in Barcelona.

    BARCELONA—The Spanish government is giving hospital-based clinical researchers more freedom and cash in an effort to boost their role in the nation's biomedical research effort. Health minister Ana Pastor unveiled a package of measures this week intended to strengthen collaboration among hospitals, universities, and research centers, liberate hospital-based researchers from clinical duties, and even lead to the creation of spinoff companies at hospitals. “The present structure of biomedical research, dividing the basic and clinical arms, is artificial and inappropriate,” Pastor told Science. She said she wants to foster “a culture of innovation in public hospitals.”

    Pastor, a 45-year-old physician who became minister last year, decided that hospital-based research needs help after noting that 47% of Spanish biomedical papers published between 1994 and 2000 had at least one author from a hospital. In Spain, most basic science is done in government research centers and universities. “We need to rethink the organization of clinical research,” she says.

    As a first step, the health ministry will provide funding for public research centers, in collaboration with hospitals and universities, to become “reference centers” for particular areas of medical research. For example, Pastor will soon approve funding for a reference center on environmental health and another on regenerative medicine. This second center will support human stem cell research, maintain a registry of facilities holding frozen embryos, and eventually supply cell lines to authorized researchers. The center will also house a national cell line bank to manage and store all lines created from embryos left over from fertility treatments.

    A tonic for health research.

    Spanish health minister Ana Pastor wants to foster innovation.


    Pastor also wants to give greater freedom to hospital-based researchers. Last July she changed medical training rules to allow young physicians to become full-time researchers after completing their residency. This week Pastor said the ministry will award 5-year “sabbaticals” to senior, talented clinicians to free them from clinical duties and allow them to concentrate on research. “Most physicians at public hospitals are currently charged with full health care duties so that any research activities are practically secondary to clinical work,” says pharmacologist Jordi Camí, director of the Biomedical Research Park of Barcelona, currently under construction.

    With that new freedom, Pastor hopes that hospital-based researchers will launch start-ups based on their discoveries. To prime the pump, the ministry is setting aside $4.7 million next year to fund “orphan clinical trials” aimed at finding new uses for off-patent drugs that are of little interest to drug companies. Pastor also announced that grants will focus on prevalent diseases that have not received as much attention as cancer and cardiovascular diseases, “with the goal of fostering research on brain and mental diseases, respiratory and environmental diseases, as well as international health,” she says.

    The new initiatives are a departure from Spain's recent policy of channeling its efforts into large, focused facilities such as Madrid's national cancer and cardiovascular centers (Science, 15 March 2002, p. 1995). Researchers have generally welcomed the new decentralized approach. “By promoting a horizontal organization through networks that integrate basic and clinical researchers, the time needed for new knowledge to be transferred and actually applied to medical practice will be shortened,” says hepatologist Juan Rodés, director of the August Pi i Sunyer Biomedical Research Institute, a high-profile research institute in Barcelona.


    'Stemness' Genes Still Elusive

    Gretchen Vogel

    Despite the excitement surrounding stem cells' potential to cure disease or unlock the secrets of development, a fundamental question remains: What, exactly, are stem cells?

    In common parlance, they have been defined as cells that can both renew themselves and give rise to more specialized daughter cells. But that is a functional definition, akin to saying that a car is a movable machine on four wheels. Scientists are keen to get under the hood and see which genes drive stem cells' engine of renewal. Although researchers have identified a few genes that seem to play a role, the key molecular switches remain a mystery.

    A year ago, two groups reported what they hoped would be a significant step forward. As they described in papers published back to back in Science (18 October 2002, pp. 597 and 601), groups led by developmental geneticists Douglas Melton of Harvard University and Ihor Lemischka of Princeton University used gene chips to search for a common signal among different kinds of stem cells—a genetic profile that would in essence define the nature of “stemness.”

    Working independently, both groups compared the gene expression of embryonic stem (ES) cells, hematopoietic or blood-forming stem cells, and neural stem cells. Lemischka's group found 283 genes that were overexpressed in all three of their stem cell populations—presumably part of a genetic characterization of stemness. Melton's group found 230 such genes in its populations. But there was a catch. The two sets of genes were almost completely different, sharing only six genes.

    A recent study only adds to the confusion. In a Technical Comment published online by Science (see p. 393), Bing Lim and his colleagues at the Genome Institute of Singapore and the Beth Israel Deaconess Medical Center in Boston describe similar experiments with ES cells, neural stem cells, and retinal stem cells. They found 385 genes that were overexpressed in all three cell types. However, only one of those genes is on both Melton's and Lemischka's lists.
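    The disagreement comes down to set arithmetic: each team published a list of overexpressed genes, and the lists barely intersect. Below is a minimal sketch of that comparison in Python, with made-up gene identifiers standing in for the hundreds of real entries (the comments note the actual published overlap counts):

```python
# Hypothetical stand-ins; the real lists each contain hundreds of genes.
melton = {"geneA", "geneB", "geneC", "geneD"}
lemischka = {"geneC", "geneE", "geneF"}
lim = {"geneC", "geneG", "geneH"}

# Set intersections give the shared "stemness" candidates.
print("Melton and Lemischka share:", melton & lemischka)     # 6 genes in the real data
print("All three studies share:", melton & lemischka & lim)  # just 1 gene in the real data
```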

    Mixed bag.

    Partial differentiation among stem cell populations such as these ES cells confounds gene expression studies.


    So what's the problem? “There are a lot of caveats that need to go into interpreting these experiments,” says developmental geneticist Leonard Zon of Harvard University. “The cell population you start with makes a huge difference in what is found in a microarray,” he says, noting that isolating a pure population of stem cells is notoriously difficult.

    Indeed, using gene arrays on some stem cell populations may be a bit like using a millimeter-scale ruler to measure the length of a football field. “One danger here is that the resolution power of the [gene chip] technology might be on the verge of outstripping the resolution of the biological assays” for isolating stem cells, Lemischka says. Any genes expressed by partially differentiated cells in the analyzed population will cloud the gene array results.

    Other technical problems, in addition to the purity of the analyzed cells, may confound the work, Lim notes. Key genes might vary their expression over time, he says, or perhaps the sought-after stemness genes are absent from the commercially available chips that all three teams used. In any case, he says, scientists have learned at least one thing: “The stemness gene is not a highly expressed one, if it exists.”

    Both Lemischka and Lim say that despite the lack of overlap between the studies, the results are worth following up. Both groups are working on functional studies of some of their candidate stemness genes, disabling them in populations of stem cells and observing how the cells behave. And even if the specific genes don't overlap, Lemischka says, it's possible that all three studies have identified common signaling pathways that are key to a stem cell's identity—an identity that remains frustratingly elusive.


    Outside Agitators Alter Wasp Behavior

    Elizabeth Pennisi

    The wasp world, about 200,000 known species strong, buzzes with strange social interactions. Some of these insects have evolved intensely close, albeit potentially antagonistic, relationships with other organisms. New studies reveal just how some of these partners manipulate wasp behavior for their own benefit.

    Julien Varaldi, an evolutionary biologist at the University of Claude Bernard Lyon I in Villeurbanne, France, has discovered that one partner, a virus, redirects the reproductive strategy of a wasp that parasitizes fruit fly larvae. The work appears online this week in Science. And on page 437, Wittko Francke of the University of Hamburg, Germany, and his colleagues describe the compound that an orchid uses to lure its wasp pollinator.

    These interactions have the makings of an arms race between the orchid and its pollinator and the virus and its host, and thus “really interesting evolutionary features occur,” says Jacques van Alphen, an evolutionary biologist at the University of Leiden, the Netherlands. “The orchid is exploiting [one] wasp and the virus is exploiting another wasp.”

    Parasitic wasps use other insects as incubators. Female Leptopilina boulardi insert their eggs into fruit fly young, typically one egg per host. The newly hatched wasp feasts on the host's tissue, eventually killing it. Female wasps usually steer clear of flies that have already been parasitized, but every so often, entomologists find that wasps engage in “superparasitism,” and a fruit fly can wind up hosting more than a dozen eggs. From the wasp's perspective, it makes little sense to overstuff a host, because only one egg can mature. Only when there are too few fruit flies to go around do females ignore the presence of other eggs. In such environments, wasps that have a genetic tendency to be less picky are better off.

    This explanation for the number of eggs per larva is based on the idea of natural selection: that those organisms best suited to particular conditions, such as plentiful or scarce hosts, survive to pass on their genes. But increasingly biologists are recognizing that natural selection cannot explain all of evolution.

    Varaldi and colleagues, including Michel Bouletreau and Frédéric Fleury, counted the number of eggs per fly laid by L. boulardi collected from Italy, Portugal, and France. When they housed wasps with a limited number of fruit fly larvae, they found that only some strains were quite particular: Wasps from Portugal laid just one egg per fly. Those from Antibes, France, were not discriminating, sometimes squeezing four or more eggs into a host. They determined that females, but not males, pass on superparasitism to their young.

    Wasp control.

    An Australian orchid needs just one chemical, not the usual mix, to deceive wasp pollinators. A virus takes charge of egg laying by this parasitic wasp (bottom).


    Strangely, though, when Portuguese and French females parasitized the same larvae, emerging adult wasps didn't always follow in their parents' footsteps. Almost three-quarters of the Portuguese offspring now engaged in superparasitic behavior. The next generation of Portuguese wasps did the same. “We were extremely surprised,” Varaldi recalls. The team realized that some sort of infectious agent—and not genes—must be responsible for this behavior.

    Evidence pointed to a virus in wasp tissue. The particles appeared in females, but the researchers never found them in males. When the team members injected the virus into a picky female, she became superparasitic.

    Some researchers point out that Varaldi and his colleagues have yet to prove that the virus is to blame for the altered egg-laying behavior. But van Alphen is convinced, and he's thrilled with the finding. “This is the first time such a virus has been discovered,” he says. And he suspects that it will not be the last.

    This superparasitic egg laying apparently benefits the viral parasite, which seems to spread only among wasp eggs sharing a host larva. The researchers speculate that the virus might destroy the female's ability to tell that a potential host has already been parasitized.

    Other organisms benefit by influencing wasp behavior as well. Orchids produce aromas that seduce certain wasp species, tricking them into carrying pollen from one flower to the next. Working with evolutionary ecologist Florian Schiestl, now at the ETH Zürich, Switzerland, Francke and his colleagues isolated one of these aromas. It's the same compound as that produced by the female wasp of a particular species, they report, and it draws males to this flower just as it would draw them to potential mates. Many other orchids might do the same, they add.

    Earlier in their quest to learn more about orchids' relationships with pollinators, the team found that female bee partners of a particular European orchid species emit a mixture of 14 compounds. Thus, “I would have expected a complex chemical story” for other orchids, comments Robert Raguso, a chemical ecologist at the University of South Carolina, Columbia.

    But when the researchers extracted volatile organic chemicals from a Chiloglottis orchid and from its partner, they found that both the female wasps and the orchid depended on just one compound. With the help of colleagues in Australia, Schiestl and Francke confirmed the chemical's effects in field studies. “That they nailed that [the attractant] was a single compound was quite nice,” Raguso notes.

    The finding runs counter to how new orchid-wasp relationships are thought to evolve. For example, European orchids produce a complex mix of chemical attractants, and even a slight alteration in the proportion of attractants sends those orchids down a path to becoming a new species. That speciation trigger is not possible with just one or two chemicals. Thus it's unclear how some 300 species in the Chiloglottis genus and other orchids pollinated by thynnine wasps came to be.

    The new studies underscore the role of outsiders in manipulating behavior and shaping evolution, says John Thompson, an evolutionary biologist at the University of California, Santa Cruz. New examples are being discovered, he adds: Manipulative partnerships are “a theme that we're coming back to time and time again.”


    Is Long Life in the Blood?

    Jennifer Couzin

    Nir Barzilai knew he'd hit on something when he got a call from the company where he'd sent blood samples for testing. “Were these from rugged athletes?” the company scientist wanted to know. “I told him [the subjects] were 100 years old,” recalls Barzilai, who directs the Institute for Aging Research at Albert Einstein College of Medicine in New York City. One sample was from a 103-year-old woman who'd been a heavy smoker since the age of 8. But the blood of nearly all of them—213 Ashkenazi Jews whose average age was 98—brimmed with large lipid particles, something associated with active young adults.

    But how lipid particle size—as opposed to more common measures such as total cholesterol—influences disease isn't clear. As a result, gerontologists and cardiologists cannot agree on the significance of findings published this week in the Journal of the American Medical Association (also discussed on Science's SAGE KE). Barzilai's team, co-led by endocrinologist Alan Shuldiner of the University of Maryland School of Medicine in Baltimore, reported that long-lived people have large lipoproteins in their blood and that many of them carry a gene variant that influences cholesterol balance. How the two findings are related isn't clear. Although aging experts question the import of the new results, some cardiologists say the study suggests that a suspected heart disease risk factor may be more worrisome than believed.

    Studying why centenarians live as long as they do is no simple task. Worms and flies carry single genes that, when mutated, dramatically prolong life, but nothing that remarkable has been found in humans. Furthermore, a valid control group—critical to establishing unique centenarian traits—isn't easy to come by. “The [other] people born then are dead,” says Evan Hadley, associate director of the National Institute on Aging in Bethesda, Maryland.

    Barzilai tried to solve this problem by including the centenarians' children, who, he reasoned, might have inherited some longevity traits. Seventy-five of the children's spouses and 183 other individuals from the Einstein Aging Study, also all Ashkenazi Jews, agreed to serve as controls. Adding some genetic diversity were data from the Framingham Heart Study, a cohort that's been scrutinized for decades.

    Still rocking at 121.

    Frenchwoman Jeanne Calment, who died in 1997 at the age of 122, shows off a CD she released the year before.


    Barzilai and his colleagues tested blood samples for levels of total cholesterol, its “good” high-density lipoprotein (HDL) and “bad” low-density lipoprotein (LDL) components, and triglycerides. The group also tested concentrations of two proteins involved in cholesterol transport, apolipoprotein A and apolipoprotein B. There was little difference between the centenarian-plus-offspring group and the controls, although some centenarian families had high HDL levels.

    As a last resort, Barzilai and colleagues tested the size of HDL and LDL particles, and they hit pay dirt. Eighty percent of centenarians had either large HDL or large LDL particles, or both, compared to 40% and 55% among Ashkenazi controls. Offspring fell in between, and the numbers were lowest in the Framingham cohort. Almost no centenarians had small LDL particles, whereas more than a quarter of controls did.

    “The whole issue of lipoprotein in old people is controversial,” says Giovanna De Benedictis, a geneticist at the University of Calabria in Italy. She argues that previous centenarian studies have found low levels of total cholesterol—this one did not—and says it's not clear whether the large particles are a cause or effect of longevity, a point echoed by other aging experts.

    Barzilai's group also tested for a gene that produces cholesteryl ester transfer protein (CETP). CETP shifts cholesterol molecules from HDL to LDL: Inhibiting CETP helps raise HDL. The protein is “probably the most promising target” for new drugs aiming to control cholesterol and prevent heart disease, says John Kastelein, a cardiologist at Academic Medical Center in Amsterdam. The drug company Pfizer is currently testing a CETP inhibitor in clinical trials. Barzilai's hunch appeared to pay off: 25% of the centenarians had a specific CETP gene variant that blunts CETP activity, compared to 8% of controls.

    Questions persist about CETP's influence on particle size, however, particularly on LDL. But cardiologists are generally intrigued by both the CETP results and the predominance of large lipid particles in the centenarians and their children. “We really need to pay attention to this,” says Jean-Pierre Després, director of research at the Quebec Heart Institute in Quebec City, Canada. It's striking, he adds, that LDL particle size, and not LDL levels—the target of drugs and heart disease prevention efforts—appears key. Six years ago, Després reported that having lots of small LDL particles upped the risk of atherosclerosis.

    Earlier work also supports the importance of CETP in heart disease. Stefan Blankenberg, a cardiologist at Johannes Gutenberg University in Mainz, Germany, found that the same CETP gene variant prominent in the centenarians confers a lower risk of cardiac death in already ill individuals.

    Still, the team's results will likely be debated for some time. Neither particle size nor CETP is “a predictor of whether you're going to become a centenarian,” because most people with the good variant will still die before 100, says James Vaupel, director of the Max Planck Institute for Demographic Research in Rostock, Germany. Finding such a crystal ball, if it exists, is probably years off.


    A Call for Telling Better Time Over the Eons

    Richard A. Kerr

    WASHINGTON, D.C.—Ever since modern geology began to emerge almost 2 centuries ago, scientists have been trying to whittle the expanse of geologic time into small, manageable bits. At a workshop held here early this month at the National Museum of Natural History,* geochronologists declared that they must do better, much better, and called for an unprecedented effort to calibrate the geologic time scale. An order-of-magnitude improvement in ordering and pacing the geologic record could reveal underlying causes of mass extinctions, evolutionary divergences, and geologic catastrophes—central questions in geology, paleontology, and evolutionary biology.

    “We need a major international cooperative network of geochronology centers dedicated to the goal of science-driven, integrative calibration,” said Samuel Bowring of the Massachusetts Institute of Technology, a workshop organizer. Although no specific plan emerged, Bowring notes, participants agreed that “we have to make sure we're all getting the same answer on the same rocks.”

    That doesn't always happen. Bowring himself is embroiled in a debate over the age of the mother of all mass extinctions, the Permian-Triassic, in which 85% of all species living in the sea became extinct. In 1998, he and colleagues reported that the clocklike decay of uranium to lead inside zircons from China pegged the Permian-Triassic at 251.7 ± 0.3 million years (2 sigma). But then Roland Mundil of the Berkeley Geochronology Center in Berkeley, California, and colleagues published uranium-lead data from similar Chinese zircons that supported an age of more than 252.5 million years. That seemingly slight discrepancy poses a serious problem for paleontologists and geologists seeking a cause for the Permian-Triassic extinction. Many suspect the humongous volcanic outpourings that formed the Siberian Traps 251 million years ago, but only a more precise date for the catastrophe can close the case (Science, 6 October 1995, p. 17).
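    The uranium-lead “clock” behind those numbers is a simple exponential-decay relation; the hard part is measuring isotope ratios, and calibrating labs, to a fraction of a percent. Here is a minimal sketch assuming the standard uranium-238 decay constant, with ratios back-calculated purely for illustration (real zircon work combines both uranium decay schemes and propagates many more error terms):

```python
import math

LAMBDA_238 = 1.55125e-10  # decay constant of 238U, per year (Jaffey et al., 1971)

def u_pb_age(pb206_per_u238):
    """Age in years implied by a radiogenic 206Pb/238U atomic ratio."""
    return math.log(1.0 + pb206_per_u238) / LAMBDA_238

# Ratios chosen to reproduce the two published Permian-Triassic ages:
for ratio in (0.03982, 0.03995):
    print(f"206Pb/238U = {ratio:.5f} -> {u_pb_age(ratio) / 1e6:.1f} million years")

# A ~0.3% shift in the measured ratio moves the age by ~0.8 million years,
# which is why interlaboratory agreement at the 0.1% level matters so much.
```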

    How old?

    The fine layers in Italy's Dolomites mark time, but researchers can't agree on how much.


    Other crucial ages are also out of whack. In the Dolomites of northern Italy, geochronologists have measured how long it took to pile 600 meters of microscopic carbonate skeletons on the sea floor about 240 million years ago to form the Latemar limestone (Science, 12 November 1999, p. 1279). Assuming that the distinctive layers of the Latemar matched climate cycles driven by clocklike variations in the shape of Earth's orbit, sedimentologists estimated that it took about 8 million years to form the whole pile. Uranium-lead dating of zircons from volcanic ash beds in the Latemar, however, produced a figure of about 2 million years—too little time to form such deposits, sedimentologists say. Years of work on both ways of dating the Latemar have failed to resolve the conflict.

    Earlier in the geologic record, great ice ages pushed glaciers into the tropics and may have encased the whole globe in ice. But even proponents of such “snowball Earth” scenarios can't agree on whether there were two or three glaciations late in the Precambrian about 600 million years ago. No one site records more than two glaciations, and radiometric dates are too sparse to settle the argument.

    The general sparseness of reliable ages was the primary complaint at the workshop. “We desperately need more dates, and we want them now,” said geologist Bruce Wardlaw of the U.S. Geological Survey in Reston, Virginia, only half-jokingly. How to get them was less clear. Geochronologists tend to favor adding more labs led by individual researchers who can collaborate closely with paleontologists and others on fundamental science problems. On the other hand, some nongeochronologists looking for high-volume dating would like to see centralized national facilities as well.

    In addition to more dating, researchers want better dating. Long-recognized problems with standards, interlab calibration, and sample processing have limited both the precision and the accuracy of uranium-lead and argon-argon radiometric dating. At the moment, these two leading techniques consistently differ on the age of the same sample by 1%. At the workshop, Bowring proposed that by 2015 geochronologists narrow dating precision to a consistent 0.1%—the equivalent of erring by 3 or 4 seconds in an hour. “Open, interlaboratory comparisons haven't been done at the 0.1% level,” says Bowring. “A lot of the new effort would be through shared samples analyzed at multiple labs.”

    Such cooperation was the workshop's watchword. Bowring called for new mechanisms to bring together geoscientists, from geochemists who mark time by swings in Earth chemistry to paleontologists who peg it to the comings and goings of long-dead beasts. Geochronologists themselves, everyone agreed, should work together to hammer out generally accepted “best practices” to help harmonize the discrepant ages of the Permian-Triassic extinction. Geochronologists could even help fieldworkers recognize the ash layers all-important to radiometric dating. How much this new spirit of cooperation would cost did not come up.

    *“Calibration of Geologic Time Scale” Workshop, 3–4 October, Washington, D.C.; sponsored by the National Science Foundation.


    Sipping From a Poisoned Chalice

    Jocelyn Kaiser

    People have believed since antiquity that tiny doses of toxicants can be healthful. Now hormesis, a concept once discredited in scientific circles, is making a surprising comeback.

    Dioxin and its chemical cousins are among the most deadly compounds on Earth. Spike a rat's water with 10 parts per billion—the equivalent of 7 teaspoons of dioxin dissolved in an Olympic-sized swimming pool—and there's a 50/50 chance that the rat will die of liver cancer. Yet even tinier concentrations of dioxins fed to rats inhibit tumors. The seemingly paradoxical findings have some scientists suggesting what would have been unthinkable not long ago: testing modified dioxins as anticancer agents in humans.
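    The swimming-pool comparison is straightforward mass arithmetic. Here is a rough check assuming a 2,500,000-liter Olympic pool, a 5-milliliter teaspoon, and a dioxin density of roughly 1 gram per milliliter; all three are round-number assumptions of mine, so the teaspoon count is order-of-magnitude only:

```python
pool_mass_g = 2_500_000 * 1_000   # 2.5 million liters of water, in grams
dioxin_g = 10e-9 * pool_mass_g    # 10 parts per billion by mass
teaspoons = dioxin_g / 5.0        # assuming ~1 g/mL and a 5-mL teaspoon
print(f"{dioxin_g:.0f} g of dioxin, roughly {teaspoons:.0f} teaspoons")
# -> 25 g, roughly 5 teaspoons: the same few-teaspoons-per-pool scale
# as the article's figure of 7.
```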

    Dioxin is a poster chemical of a bold campaign: to rehabilitate the old saw that poisons or radioactivity at low doses are good for you. The concept, known as hormesis, has been kicking around for decades but until recently had been considered a marginal effect tainted by an unfortunate association with homeopathy. The improbable return of hormesis from the scientific wilderness, however, has riven the toxicology community.

    A flurry of new findings and a reexamination of old ones have thrust hormesis into the limelight. Many drugs, vitamins, and essential minerals exhibit hormesis, as does alcohol: Moderate drinking lowers risk of heart disease, whereas higher levels are associated with higher risks of heart and liver disease. Calorie restriction, the sole indisputable means of extending an animal's life span, may also be a form of hormesis, proponents say: The lack of calories stresses an organism, firing up responses such as DNA repair enzymes and apoptosis, or programmed cell death, which protect the body from environmental insults. And low doses of many chemical toxins, from cadmium to pesticides to dioxin, appear to have paradoxical and possibly beneficial effects on organisms. The heightened scientific scrutiny has generated juicy headlines: “Whatever doesn't kill you might make you stronger,” begins an article in the September issue of Scientific American. “A little poison can be good for you,” declares a recent issue of Fortune.

    Hormesis is alluring because it challenges the conventional wisdom that toxicants and radiation punish the body at even the smallest of doses. If hormesis is as pervasive as its backers suggest, it could mean that regulations for many chemicals, from arsenic in drinking water to polychlorinated biphenyls at Superfund sites, are too stringent. “It would fundamentally change the whole risk-assessment paradigm,” says Edward Calabrese, a toxicologist at the University of Massachusetts, Amherst, who over the past decade has doggedly compiled thousands of studies indicating that infinitesimal amounts of chemicals can help microbes, plants, and animals grow faster and live longer and healthier. “Hormesis is on the verge of being a milestone in the evolution of risk assessment,” adds John Doull, professor emeritus at the University of Kansas Medical Center in Kansas City and the co-editor of the premier toxicology textbook.


    Edward Calabrese has spent 13 years urging toxicologists to recognize that chemicals can have opposite effects at high and low doses.


    But others contend that such conclusions reach far beyond the science. Although paradoxical dose responses are “real,” the concept of hormesis “has been taken over by rhetoric,” says William Farland, risk assessment chief at the U.S. Environmental Protection Agency (EPA). It's too soon, he says, to conclude that the benefits of low-level exposures outweigh the risks. Moreover, a recent wave of studies has found that some hormonelike toxicants known as endocrine disrupters may be more harmful at small doses than they are at larger ones. The declaration that low-dose effects are often healthy “is where Ed [Calabrese] falls off the edge of the earth,” charges Frederick vom Saal, a reproductive biologist at the University of Missouri, Columbia.

    One challenge is to pin down the mechanisms governing low-dose effects. Industry may well see it in its interest to pony up significant funds for such research. But what it will take to get regulators to buy into the concept is another question, says Joseph Rodricks, a risk assessment expert at Environ Corp. in Arlington, Virginia. Hormesis, he says, “is going to be a hard sell.”

    The dose makes the poison?

    Hormesis was first described in 1888 by a German pharmacologist, Hugo Schulz, who observed that small doses of poisons seemed to stimulate the growth of yeast. Schulz also drew on animal studies of drugs at low doses carried out by Rudolph Arndt, a German physician. What became known as the Arndt-Schulz law lost credibility in the 1920s and '30s, however, because Arndt was an adherent of homeopathy, the notion that extremely dilute solutions, often containing a few or no molecules of an active substance, are therapeutic. Hormesis, coined in 1943, involves concentrations at least 10,000 times higher. They “are a direct continuation of the traditional dose response,” says Calabrese.

    Calabrese, a lanky, soft-spoken man with thick glasses and floppy gray hair, says hormesis first caught his attention in 1985, when he received a flyer for a meeting probing the question of whether low-dose radiation is beneficial (see sidebar, p. 378). It rang a bell: As a graduate student, Calabrese had noted that peppermint plants dosed with tiny amounts of phosfon, a herbicide used to stunt growth, grew faster than control plants. Plotting growth on the y-axis against dose on the x-axis, his data formed an inverted U-shaped curve instead of the usual S-shaped or linear plot for a dose-related effect (see diagram).
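    That inverted U is the signature of hormesis: a stimulatory effect at low doses riding on top of a toxicity that takes over at high doses. The toy model below (an illustration of the shape, not a fit to Calabrese's phosfon data) multiplies a saturating low-dose boost by a standard sigmoidal inhibition term to produce the curve:

```python
def hormetic_response(dose, boost=0.5, k_boost=1.0, k_tox=50.0, hill=2.0):
    """Percent-of-control response for a toy biphasic dose-response curve."""
    stimulation = 1 + boost * dose / (dose + k_boost)  # saturating low-dose boost
    inhibition = 1 / (1 + (dose / k_tox) ** hill)      # sigmoidal high-dose toxicity
    return 100 * stimulation * inhibition

for dose in (0, 0.5, 2, 10, 50, 200):
    print(f"dose {dose:>5}: {hormetic_response(dose):6.1f}% of control")
# The response climbs to about 140% of control at modest doses, then collapses
# at high doses: an inverted U in the 30%-to-60% range Calabrese reports.
```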

    Calabrese began collecting examples of similar dose responses. He also won funding from various federal agencies, including EPA, for a program of analyses and workshops called Biological Effects of Low Level Exposure (BELLE) and launched a thrice-a-year newsletter with commentaries from invited experts of all stripes on low-dose toxicology. Farland, a member of BELLE's board, says that EPA supports the program because agency scientists too have noticed that many chemicals exhibit U-shaped dose-response curves and want to understand the phenomenon better.

    Calabrese suspected that hormesis is commonplace. To find out, he and Amherst colleague Linda Baldwin got about $110,000 from an industry group to comb the literature for studies demonstrating hormesis. They uncovered thousands: plants dosed with herbicides or metals growing lusher; bacteria flourishing in the presence of tiny amounts of antibiotics; immune cells treated with arsenic proliferating faster; insects doused with pesticides or alcohol living longer and producing more eggs; rats fed a little saccharin developing fewer tumors. “We see it across the whole plant and animal kingdom” and at “essentially every endpoint,” says Calabrese. The effects, he says, are modest but consistent: typically a 30% to 60% greater response than in controls.

    In his latest analysis in the February 2003 issue of Toxicological Sciences, Calabrese looked at how frequently hormesis occurs, including all the dose-response curves he could find that featured at least two doses below the established no-effects level and a control. From 195 papers that met this criterion, he reported that hormetic dose-response curves outnumbered curves showing no effect at the lowest doses by 2.5 to 1.

    Healthy provocations

    The likely explanation for hormesis, Calabrese and others say, is that small doses of most harmful substances stimulate a beneficial response that enhances normal function and girds an organism against subsequent stresses. Potential mechanisms are manifold: enzymes that repair damaged DNA, stimulated immune responses, apoptosis that eliminates damaged cells that would otherwise become cancerous. The universal factor, according to hormesis enthusiasts, is that minute doses prod such responses into a modest overreaction. For example, heavy metals such as mercury spur synthesis of proteins called metallothioneins that remove toxic metals from circulation and likely also protect cells against potentially DNA-damaging free radicals produced through normal metabolism.

    The puzzle of hormesis.

    Low doses of phosfon, a herbicide, caused plants to grow better (top); small amounts of dioxin, a carcinogen, reduced tumors in rats (middle); and a little cadmium, a toxic metal, caused water fleas to produce more young (bottom). The effects were reversed at higher doses.


    To date, however, molecular studies on hormesis-like, biphasic dose responses have largely been carried out only on drugs, Calabrese says. Detailed studies have focused on a few dozen drugs known to act on receptors for neurotransmitters or other cell messengers. The receptor for a specific compound tends to come in two flavors: stimulatory or inhibitory. When the concentration of the drug is low, the stimulatory type of receptor is more likely to be activated; at higher levels, inhibition takes over. Opiates work this way, for example. “It probably occurs more than we're willing to admit,” says Richard Bond, a pharmacologist at the University of Houston, Texas. Such effects are often thought to be spurious or uninformative, Bond says: “We draw a line to make it go away.”

    Although many scientists applaud Calabrese's tenacity for bringing hormesis into the scientific mainstream, they point out that not all hormetic effects are beneficial. For example, vom Saal stunned his colleagues with a 1997 report linking extremely low levels of the plastics ingredient bisphenol-A, fed to pregnant mice, to enlarged prostate glands in their male offspring—the reverse of what is observed at higher doses. The study, which industry quickly attacked, led to a U.S. National Toxicology Program review that found that although vom Saal's result had not been reproduced, other experiments support the idea that hormonelike environmental pollutants can trigger effects at far below the levels usually tested (Science, 27 October 2000, p. 695).

    More such studies have come along since, including two high-profile papers last year from University of California, Berkeley, toxicologist Tyrone Hayes linking tiny concentrations of atrazine with reproductive deformities in frogs (Science, 1 November 2002, p. 938). Echoing Calabrese, vom Saal says that “if there are exceptions to linearity, you have to revise the system.”

    A regulatory revolution?

    Scientists are deeply divided, however, over just how the risk-assessment paradigm ought to be revised. The Texas Institute for Advancement of Chemical Technology Inc., which initially sponsored Calabrese's database, put out a flyer in 1998 citing examples of hormesis such as dioxin, mercury, and the pesticide lindane; the brochure declared sunnily that hormesis could allow “society to enjoy the benefits of many chemicals that have been banned.” Calabrese says he doesn't think it's that black and white. “There will be circumstances where the response appears to be beneficial, and cases where any change [in a standard] might not be advisable,” he says. Nevertheless, Calabrese argues that chemical carcinogens are being overregulated: Hormesis “emphasizes that there are thresholds for carcinogens,” and “the economic implications … are substantial,” he wrote in a commentary in Nature earlier this year.

    But vom Saal says that hormesis suggests exactly the opposite. He says that regulators are missing a whole suite of harmful effects of chemicals that haven't been adequately tested at low levels. Even if the effect appears beneficial—faster growth, larger offspring—that's not necessarily a good thing, he points out. Obesity, for example, is associated with other diseases later in life. “Anything but what would normally be there shouldn't be happening,” argues vom Saal.

    Hormesis proponents also err by focusing on single endpoints—such as cancer—while ignoring other endpoints, Rodricks and other skeptics argue. Christopher Portier of the National Toxicology Program cites an example from a study his group published in 1993 on cyclophosphamide, a cancer drug that stops cell division. At low doses, the drug seemed to protect rats from flu virus; all survived, unlike controls. But when injected with tumor cells, these animals were more likely to develop cancer. The apparent reason, Portier says, is that the drug skewed the animals' immune cell population, revving up T helper cells, which fight viruses, but reducing natural killer cells, which guard against foreign cancer cells. The end result was both beneficial and harmful. “What would you do with that finding if it were an environmental compound?” Portier asks. The case for the dioxins is murky as well. Tiny amounts of these chemicals suppress breast tumors in animals but can promote liver tumors. Only when all tumors are combined do the dioxins exhibit a U-shaped curve.

    Another cautionary tale is cadmium. Animal studies suggest that low doses of this element could help prevent some cancers, Calabrese notes. But in the August issue of Nature Medicine, researchers reported that at these low doses—even below those recommended as safe in the diet—cadmium acts as an endocrine disrupter in female rats, causing growth in uterine and breast tissues that could lead to cancer.

    To take a possibly beneficial effect into account in risk assessment, an agency would have to know “how all the pieces fit together,” including mechanisms, says Farland. EPA's latest cancer risk assessment guidelines encourage researchers to use that kind of data; the agency is also making an effort to integrate cancer and noncancer endpoints. “We are certainly interested in complex dose response function,” but “we're really trying to get at the biology that underlies the phenomenon,” Farland says.

    That won't come cheap, however. Because spontaneous cancers are rare in rodents, a statistically robust study showing that a toxicant cuts cancer risk would require lots of animals. “I'd have to have pretty convincing evidence before killing 5000 animals to prove the existence of a suppression effect,” Portier says. Toxicologist David Eaton of the University of Washington, Seattle, agrees. For carcinogens, he says, “I don't think the idea of hormesis is going to greatly influence the way bioassays are done. It's just too expensive … you'll never be able to characterize [a hormetic effect] to the point where people think it's real.”

    But the spotlight on hormesis is unlikely to fade anytime soon. The U.S. National Academy of Sciences is mulling whether to sponsor a study of “the science of hormesis,” says staffer James Reisa. Calabrese will lead a roundtable on hormesis at the Society of Toxicology's annual meeting in Baltimore next March, and he has been organizing international conferences on hormesis thanks to a hefty grant from the U.S. Air Force, which is interested in the phenomenon because of issues such as cleaning up jet fuel spills and safety in space flight. And a journal that debuted this year, Nonlinearity in Biology, Toxicology, and Medicine, brings together on its editorial board scientists on both sides of the hormesis debate.

    Calabrese and likeminded scientists are bullish on the prospect of their colleagues coming around to the importance of hormesis, which they are convinced will transform medicine, toxicology, and pharmacology. Many skeptics, however, are neither fomenting such a revolution nor rooting for it to begin.


    A Healthful Dab of Radiation?

    Jocelyn Kaiser

    The notion that certain toxic chemicals can be healthful in small doses is stirring new controversy (see main text), but a similar debate about low-dose ionizing radiation has been raging for decades. Now, research that could shed light on possible “radiation hormesis,” much of it funded by the U.S. Department of Energy (DOE), is well under way. Although these studies may not soon alter regulators' assumption that any dose of radiation is harmful, the findings about low-dose effects may be provocative.

    Radiation risks are now calculated based mainly on cancers among 86,600 survivors of the two atomic bombs dropped on Japan. These human data “are the gold standard,” notes carcinogenesis expert Julian Preston of the U.S. Environmental Protection Agency (EPA). The incidence of solid cancers in the survivors rises in a straight line with dose. This suggests that any increase in dose delivers an increase in risk, with no safe level of radiation. But at the lowest doses, there are too few cancers to calculate the actual risks. “The numbers are just not there,” says radiobiologist Eric Hall of Columbia University in New York City. To be cautious, public health agencies extrapolate risk in a straight line from higher to lower doses. That leaves open the possibility that something unexpected is going on below the threshold of measured effects.

    In this zone, there are hints that a little radiation could even be beneficial. The Japanese bomb survivors who received the lowest doses are living longer than controls, for example. Some studies have found a slightly lower incidence of cancer in people living in places such as western China and Colorado, where natural background radiation levels are three to four times higher than the global average of 2.4 millisieverts per year. And studies dating back to the 1950s report that rodents live about 10% to 20% longer if exposed to small amounts of radiation, notes cancer researcher Arthur Upton of the University of Medicine and Dentistry of New Jersey.

    In the mid-1980s, cytogeneticist Sheldon Wolff of the University of California, San Francisco, offered one explanation: When his team “tickled” cells with a low dose of radiation, waited a few hours, then applied a high dose, the cells showed fewer DNA strand breaks than did cells hit only with the high dose. Wolff described it as an “adaptive response,” suggesting that the low-dose radiation had stimulated the cells' DNA repair enzymes. Wolff, however, felt it was too soon to conclude that radiation hormesis was real, arguing in a 1989 debate in Science with Leonard Sagan of the Electric Power Research Institute in Palo Alto, California, that other damage caused by low-dose radiation might overwhelm the beneficial effects.

    After a funding slump, research on this topic picked up again a few years ago as DOE faced skyrocketing costs to clean up its radioactive waste sites. In 1997 Senator Pete Domenici (R-NM) persuaded Congress to create a new DOE program to study low-dose radiation. Approved through 2007, it has spent nearly $100 million so far, mostly on cellular studies.

    Bystander effect.

    Only pink cells were hit by alpha particles, but both an irradiated cell and an unirradiated cell (blue) have fragments that indicate broken chromosomes.


    Ironically, some new findings have heightened concern. For example, Hall and other Columbia researchers using a new technique that can hit a dish of cells with a single alpha particle reported a bizarre result in 1999: Even cells not directly hit sustain damage. Other labs have found that supernatant from such an experiment can also cause this so-called bystander effect, suggesting that radiation creates a harmful molecule that seeps from irradiated cells into neighboring cells. Adaptive responses only partially repair the damage, the Columbia team has found. The implications, Hall says, are that “low-dose risk may be being underestimated.”

    Others don't dispute this result but note that alpha particles make up only a portion of the low-level radiation that people are exposed to, and they are particularly damaging—“like a baseball bat through a cell,” says radiation oncologist William Morgan of the University of Maryland, Baltimore. Adaptive responses may offset harmful bystander effects in cells dosed with gamma rays and x-rays, Morgan suggests.

    Whether any of the changes seen in cell studies actually lead to cancer is unknown. Genomically unstable cells created by bystander effects might be more likely to die through apoptosis, or programmed cell death, for example. The net result could be that low-dose radiation helps remove potentially cancerous tissue, says molecular biologist William Bonner of the National Cancer Institute in Bethesda, Maryland. “What you really want to know is what's happening in an animal.” The DOE program aims to learn more by funding carcinogenesis modeling studies and single-particle experiments on three-dimensional tissues, says program director Noelle Metting.

    Other researchers are revisiting past animal studies that showed beneficial effects. Radiobiologist Ron Mitchel of the company Atomic Energy of Canada Limited is seeing evidence for protection against cancer in transgenic mice that lack one copy of the p53 tumor suppressor gene and are highly prone to developing tumors. Mitchel's group reported in the March issue of Radiation Research that a tickle dose of gamma rays significantly delays the development of spontaneous lymphomas and bone tumors in these mice. Low-level radiation isn't always bad, Mitchel says: “There's obviously a threshold for harm.”

    Hoping to resolve conflicts in earlier mammal data on low-dose radiation and cancer, a team at the University of Ottawa is scrutinizing details such as tissue and radiation type in 750 data sets. The team sees protective effects, but only for some strains and species. That suggests variability in humans: “A little radiation may be good for some people but bad for others,” says lead investigator Philippe Duport.

    Some scientists, including members of the Health Physics Society, already believe that there's enough evidence to assume that radiation is harmless below a certain level. But the National Council on Radiation Protection and Measurements in its latest report in 2001 said that the linear-no-threshold model should be retained for now. A National Academy of Sciences panel known as BEIR-VII is examining the latest data and could issue its verdict as soon as next year, says academy staffer Evan Douple. But even if animal data and new mechanistic studies give support to the hormesis theory, nobody thinks BEIR-VII will abandon the current linear model of risk just yet. That would be a “complete shift” for public health, says Preston, adding: “If you've got human data, you use it.”


    Proton Guns Set Their Sights on Taming Radioactive Wastes

    Dennis Normile

    Once mooted as energy sources, nuclear reactors that substitute particle accelerators for chain reactions are taking long-range aim at a new mission.

    KUMATORI, JAPAN—On the grounds of Kyoto University's Research Reactor Institute, workers have dug into a hillside to give a 30-year-old experimental nuclear reactor an unusual companion: a proton synchrotron. When it starts up in fall 2005, the synchrotron will fire protons into the heart of the reactor, straight down the axis of a cylinder of heavy metal wrapped in a core of nuclear fuel. Neutrons dislodged from the target will hurtle into the fuel, shattering atoms as they go.

    It may seem a roundabout way to generate a nuclear reaction, and it is. But this type of accelerator-driven system (ADS), as it's called, isn't primarily designed to generate power. Instead, its aim is to transform some of the nastier ingredients of spent reactor fuel into less troublesome elements. The technology “has a unique role to play in treating nuclear wastes,” says Stefano Monti, a nuclear physicist at the Italian National Agency for New Technologies, Energy, and the Environment (ENEA) in Rome.

    Kyoto University, with its $10 million Kumatori Accelerator-driven Reactor Test Facility (KART), is not alone. By the end of this year, the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, expects to start building a $1.75 million experiment chamber for nuclear reactions at an existing proton accelerator. And ENEA, the French Atomic Energy Commission (CEA), and Germany's Forschungszentrum Karlsruhe are joining forces for the $22 million TRIGA Accelerator-Driven Experiment (TRADE), which will add a proton accelerator to an experimental reactor at ENEA's Casaccia Research Center in Rome. The three partners expect to commit funding for the project this year and hope to win additional funding from the European Union next year. That would allow construction to start in 2005. Researchers from Los Alamos National Laboratory in New Mexico have been participating in design work and will likely join the project formally next year as well.

    Getting real.

    In Kyoto, Kaichiro Mishima and colleagues are building the first complete accelerator-driven system.


    More projects are on the horizon. Scientists in Japan are lobbying for a reactor chamber to be added later to the Japan Proton Accelerator Research Complex project, now under construction in Tokai, northeast of Tokyo. And in Europe, scientists are already starting to talk up an Experimental Accelerator-Driven System to follow on from TRADE.

    The projects are cheap in an age when big physics facilities run to hundreds of millions of dollars. Support for ADS research has been small but consistent, Monti says. “At least in Europe, nuclear waste is the main issue for public acceptance of nuclear power, and that justifies the effort to develop this technology,” he says, although he cautions that it will take “a mosaic of technologies” to solve the problem.

    The basic processes at work in an ADS—splitting atoms to change one element into another—have been understood for almost a century. Similar schemes were briefly studied in the 1950s to turn thorium into uranium-233 to fuel nuclear reactors. The idea was revived in the 1980s when scientists started wrestling with the problem of waste from nuclear power plants. The most troublesome components of nuclear waste are long-lived fission products and actinides—elements such as americium and curium—which have half-lives of thousands of years. Researchers working separately at Brookhaven National Laboratory in Upton, New York, and at Los Alamos started looking at using subcritical, or non-self-sustaining, nuclear reactions to burn up these wastes. They envisioned using an accelerator to fire a beam of protons at a target surrounded by spent nuclear fuel. In what is called a spallation reaction, the protons break apart target nuclei, producing neutrons that trigger reactions in the surrounding material (see figure). Some radioactive elements are rendered nonradioactive. Others absorb a neutron, become unstable, and then either fission or decay. Actinides, for example, are transmuted into uranium, which decays into shorter-lived radionuclides that can be disposed of as low-level nuclear waste. Because the reaction is subcritical, if the stream of protons is shut off, the reaction stops.
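
    The safety argument rests on simple arithmetic. Here is a minimal sketch in Python, assuming an illustrative multiplication factor that is not taken from any of the projects described here: in a subcritical core, each source neutron starts a finite chain whose total yield depends on how far below criticality the core sits.

```python
# Minimal sketch of source-driven multiplication in a subcritical core.
# The multiplication factor k_eff below is an illustrative round number,
# not a parameter of KART, SAD, or TRADE.

def chain_yield(source_neutrons: float, k_eff: float) -> float:
    """Each neutron spawns k_eff neutrons in the next generation, so one
    source neutron starts the chain 1 + k + k^2 + ..., which sums to
    1/(1 - k_eff) for k_eff < 1."""
    if not 0.0 <= k_eff < 1.0:
        raise ValueError("subcritical operation requires 0 <= k_eff < 1")
    return source_neutrons / (1.0 - k_eff)

# A core held at k_eff = 0.95 turns every spallation neutron into 20:
print(chain_yield(1.0, 0.95))  # 20.0
# Switch off the proton beam and the source term vanishes, and with it
# the reaction -- the safety case for an ADS:
print(chain_yield(0.0, 0.95))  # 0.0
```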

    Chain of fuels.

    In an ADS, protons slamming into a heavy-metal cylinder knock out neutrons that alter radioactive material.


    Later, researchers concluded that it would be more practical to reprocess spent fuel, extract long-lived fission products, and recycle and burn them in commercial nuclear power plants. But reprocessing raises nuclear proliferation concerns because it separates out plutonium, which can be fashioned into a fission bomb. And in any case, actinides cannot be recycled because peculiarities in the way they fission make it hard to control reactions based on actinide-rich fuel.

    Labs in the United States, Europe, and Japan studied ADS on paper, but the technology never generated widespread interest until Carlo Rubbia took it under his wing. In 1993, Rubbia, a 1984 Nobel laureate in physics who was then director-general of CERN, the European particle physics laboratory, championed ADSs as a way to generate energy. Relying on a subcritical reaction, Rubbia believed, would eliminate any chance of a Chornobyl-like accident. Moreover, ADS power plants would generate much less high-level waste than conventional nuclear power plants and no material that could be processed into nuclear weapons. They would also be able to “burn” thorium, an element three times as abundant as uranium and much easier to process into nuclear fuel.

    A preliminary study suggested that the system would produce about 30 times more energy than it consumed. But Rubbia never showed that his “energy amplifier” would be economically competitive, says Massimo Salvatores, director of research at CEA's Cadarache Center, and he eventually shelved the idea. Rubbia, who is now high commissioner of ENEA, has since switched focus: When he proposed the TRADE experiment in 2000, he talked exclusively about treating nuclear waste.

    Salvatores credits Rubbia with building the respect for ADSs that led to funding for experiments on the separate components, a first step toward a full system. The most notable of these is the 8-year-old MUltiplication of an External Source (MUSE) experiment at the Cadarache Center, which studies the nuclear reactions triggered in a subcritical core by neutrons from a deuterium-tritium reaction. MUSE's neutrons, however, achieve energies of only 14 million electron volts—less than a tenth the energy of neutrons from spallation sources. That shortfall limits the sorts of research MUSE can pursue. Other experiments have studied spallation sources, but none has brought an accelerator and a reactor together. “To confirm the feasibility of an ADS, you need to use both spallation neutrons and a subcritical [reactor] system,” says Kaichiro Mishima, a nuclear engineer at Kyoto University.

    That's where Kyoto's KART comes in. It will come on line in fall 2005, and JINR's Subcritical Assembly in Dubna (SAD) will join it a year later. Both will concentrate on the basic physics of ADS. TRADE, the most comprehensive of the new experiments, will go further. Whereas KART and SAD will be run at extremely low power, TRADE will generate several hundred kilowatts, allowing researchers to study how increasing temperatures in the core affect the reaction. It will also tackle practical issues such as cooling the target and monitoring and controlling the reaction through start-up, shutdown, and steady-state operation. “We want to validate that we can control the system under these various conditions and that we can react if there is any problem,” Salvatores says. Such steps will be necessary to address safety and licensing issues for a large-scale demonstration waste-transmutation project planned for around 2015 and likely to cost several hundred million dollars.

    Even if ADSs sail through the experiments with flying colors, the technique could still face an uncertain future. “An ADS is one of a number of different options” for handling high-level waste, says Mike Cappiello, a nuclear physicist at Los Alamos who is national director for transmutation engineering for the U.S. Department of Energy. For example, although the United States is interested in joining the TRADE project, the country could decide to rely solely on the geological repository for high-level waste it is developing at Yucca Mountain, Nevada. One factor that might give ADS an edge over other options, Cappiello notes, is economics. A full-scale ADS system would produce a significant amount of thermal power. “And there is every incentive to recover that power to either produce electricity or make hydrogen,” he says. Ironically, ADSs might end up doing double duty as energy amplifiers after all.

  10. 2003 NOBEL PRIZES

    Physicists Honored for Their Medical Insights

    1. Gretchen Vogel

    Discoveries that give doctors and researchers unprecedented views of the body's internal workings have earned this year's Nobel Prize in physiology or medicine for Paul Lauterbur of the University of Illinois, Urbana-Champaign, and Peter Mansfield of the University of Nottingham, U.K. The two physicists were recognized for their contributions to the development of magnetic resonance imaging (MRI). The award also touched off an unusual public protest from an MRI researcher who was not cited.

    MRI pioneers met with some skepticism 3 decades ago when they first suggested that the technique, which had previously been applied only to chemicals in solution, could produce useful images of something as complex as the human body. But MRI is now a standard tool for diagnosing everything from damaged joint cartilage to cancerous tumors and brain damage following a stroke. In recent years, a refinement—functional MRI (fMRI)—has enabled researchers to image blood flow, allowing insights into the workings of a living brain that were unthinkable when the prizewinners made their discoveries.

    MRI takes advantage of the fact that the nuclei of some atoms spin in predictable ways when they are placed in a strong magnetic field. Exposing the spinning nuclei to radio waves of a certain resonant frequency increases their energy. When the radio waves are turned off, the nuclei will themselves emit radio waves as they return to their original energy level. An MRI machine detects these signals and converts them into an image of internal organs.
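
    The resonance condition is a one-line formula. As a back-of-the-envelope sketch, assuming only the textbook gyromagnetic ratio of hydrogen (a standard reference value, not a detail of the prizewinning work), the resonant frequency scales linearly with field strength:

```python
# Back-of-the-envelope Larmor frequency for hydrogen nuclei.
# GAMMA_BAR is the textbook gyromagnetic ratio of 1H, ~42.58 MHz/T;
# the field strengths below are typical scanner values, not specifics
# from Lauterbur's or Mansfield's experiments.
GAMMA_BAR_MHZ_PER_TESLA = 42.58

def larmor_frequency_mhz(field_tesla: float) -> float:
    """Resonant frequency f = gamma_bar * B at which protons in a
    field of B tesla absorb and re-emit radio waves."""
    return GAMMA_BAR_MHZ_PER_TESLA * field_tesla

print(larmor_frequency_mhz(1.5))  # ~63.9 MHz
print(larmor_frequency_mhz(3.0))  # ~127.7 MHz
```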

    Image conscious.

    Paul Lauterbur (bottom) and Peter Mansfield.


    In the 1940s and '50s, chemists used what was then called nuclear magnetic resonance (NMR) to study the structure and composition of molecules. Lauterbur's breakthrough came in the early 1970s. While working at the State University of New York, Stony Brook, he showed that imposing a weaker, carefully calibrated magnetic field gradient on top of the strong uniform magnetic field could provide information about the position of atoms with a given spin. Because the resonance frequency depends on the strength of the local magnetic field, the gradient ties each signal to a location; combining readings taken with gradients in several directions produces a two-dimensional image of a structure.
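
    A numerical sketch makes the trick concrete. The one-dimensional toy example below is entirely invented for illustration (the gradient strength and the two “vial” positions are arbitrary): under a gradient, each packet of spins emits at a frequency proportional to its position, so a Fourier transform of the summed signal reads the positions back out.

```python
# Toy 1-D version of gradient encoding: frequency maps to position.
# All values here (gradient strength, positions) are invented for
# illustration; a real scanner also demodulates away the uniform B0.
import numpy as np

gamma = 2 * np.pi * 42.58e6        # rad/s/T, gyromagnetic ratio of 1H
G = 10e-3                          # T/m, hypothetical field gradient
positions = [-0.05, 0.03]          # two "vials" of water, in meters

t = np.arange(0, 1e-3, 1e-6)       # 1 ms of signal sampled at 1 MHz
# Each vial contributes a tone at angular frequency gamma * G * x:
signal = sum(np.exp(1j * gamma * G * x * t) for x in positions)

spectrum = np.abs(np.fft.fftshift(np.fft.fft(signal)))
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, d=1e-6))  # Hz
x_axis = 2 * np.pi * freqs / (gamma * G)                 # back to meters

# The two strongest peaks land near the two vial positions:
print(sorted(x_axis[np.argsort(spectrum)[-2:]]))  # ~[-0.05, 0.03]
```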

    Hydrogen atoms give strong NMR signals, and one of Lauterbur's first demonstrations of his technique, described in Nature in 1973, showed an image of a vial of ordinary water inside a vial of heavy water, in which the hydrogen atoms are replaced by deuterium. It was the first time anyone had been able to visually distinguish the two types of water. (Water is the most abundant component of living tissue, and current MRI methods also depend on hydrogen atom signals to distinguish between tissues with different water contents.)

    Lauterbur says he was inspired to work on the possibility of noninvasive imaging after watching other researchers dissect cancerous tissue from rats. “The idea of cutting people into pieces to diagnose their problems did not sit well with me,” he says. The first living creature that he imaged with the technique, he says, was a clam his daughter had found on the beach. “Because of the time it took to do the experiment, it had to be an animal that would lie very still,” he explains. He and his colleagues also took advantage of their previous heavy-water result to show that the animal was still alive: They placed the clam in heavy water, and as long as the water in the clam's tissues remained ordinary water they knew that the creature was still alive.

    Mansfield, who dropped out of school at age 15 to become a printer and completed his secondary education part-time before attending university, developed a new way to obtain resonance signals by selectively exciting the atoms in a precise region. He also developed mathematical methods to more quickly and accurately transform the emitted radio signals into useful images. Both advances were key steps toward obtaining images of moving objects, such as a beating heart or blood flowing through a brain working to recognize an unfamiliar word.

    Inside view.

    MRI scans reveal fine details of human brain structures.


    The early days of the field were marked by strong rivalries. The University of Nottingham was home to three groups working on competing approaches. (Two other prominent Nottingham researchers, Raymond Andrew and William Moore, died several years ago.) Another researcher, Raymond Damadian, then at the Downstate Medical Center of the State University of New York, has also claimed credit for inventing MRI. Indeed, Damadian published the first paper that used MRI to distinguish between healthy and cancerous tissue. But that technique involved focusing on specific regions in the body and obtaining signals from precise points rather than creating an image of a larger area.

    Damadian filed a patent on his technique and founded a company, Fonar, which makes MRI machines today. The machines use a version of Lauterbur's gradient methods, however. Damadian believes that he should have been included in the Nobel award; last week his company took out full-page ads in The Washington Post and The New York Times objecting to the “shameful wrong that must be righted” and urging readers to support his cause. “Damadian published some early papers outlining the concept,” says George Radda, an MRI expert at the University of Oxford and former chief of the U.K.'s Medical Research Council. But, he adds, Damadian's idea for detecting specific signals from cancerous tissue “did not lead to today's MRI.” Damadian did not respond to Science's requests for comment.

    The bitterness of the early rivalries may have delayed the awarding of the prize, says Radda. Except for the controversy, “it could have been given 10 years ago,” he says. The contributions of Lauterbur and Mansfield rise to the top: “There is no question that these are the right two people” to receive the prize, he says.

    Lauterbur “deserves a lot of credit, not only for the invention of the idea, but also for proselytizing,” says Waldo Hinshaw, who was a postdoctoral fellow in the lab of Raymond Andrew in the early 1970s and helped develop some of the imaging techniques used in today's MRI machines. Early on, Hinshaw notes, people doubted that magnets big enough and powerful enough to produce highly detailed images of the human body could be built. But, he adds, Lauterbur “went around to everyone he thought might be interested and said, ‘Take it seriously.’ He even showed up at my apartment [in Nottingham] one evening to convince me that it was worthwhile.”

    Even before the prize, Lauterbur says, the work has paid off in a deeply personal way. “The most satisfying thing personally is when a physician looks at an MRI and says, ‘No problem there!’ which I have experienced and relatives of mine have experienced,” he says. And that can be almost as thrilling as winning a Nobel Prize.

  11. 2003 NOBEL PRIZES

    Gateways Into Cells Usher in Nobels

    1. Greg Miller

    The 2003 Nobel Prize in chemistry honors Peter Agre and Roderick MacKinnon for their pioneering work on proteins that control which molecules pass into and out of cells. These gatekeepers are the basis of many vital functions, including the generation of nerve impulses and the ability to regulate the concentration of urine.

    Agre, 54, of Johns Hopkins School of Medicine in Baltimore, Maryland, claims half of the prize for his discovery of water channels. These protein pores shuttle water into and out of cells much faster than it could diffuse through their fatty outer membranes. Speed is particularly critical in the kidney, which reclaims water from the urine to prevent dehydration. “You'd pee out 50 gallons of water a day if these channels didn't filter the water back into the body,” says Robert Stroud, a biophysicist at the University of California, San Francisco.

    Agre's bright future in chemistry wasn't always apparent. He got a D in the subject in high school despite having both a father who was a chemistry professor and, he readily admits, a perfectly good chemistry teacher. “I was kind of a negligent high school student, more interested in making mischief,” he says.

    Agre's big breakthrough had an element of serendipity. A rheumatologist by training, he was interested in identifying Rh antigens: surface proteins on red blood cells that give blood types their “positive” or “negative” designation. But his screens repeatedly netted a 28-kilodalton protein that seemed to have nothing to do with Rh. It was prevalent not just in red blood cells but also in the tubules of the kidney.

    Agre credits his former mentor at the University of North Carolina, Chapel Hill, John Parker, for suggesting that the protein could be a water channel. As early as the mid-1800s, scientists had proposed that cells might need such channels to maintain osmotic balance, but they'd never been found, and some biophysicists argued that diffusion alone could do the trick. The matter was settled when Agre's team put the protein, subsequently termed aquaporin, into frog eggs and put the eggs in a watery solution (Science, 17 April 1992, p. 385). The cells ballooned up and exploded before their eyes as water gushed in.

    Researchers have since identified 11 human aquaporins—some of which play a role in diseases—and many more in bacteria and plants. “I think [Agre's discovery] is really one of the big breakthroughs in physiology,” says Robert Schrier, a nephrologist at the University of Colorado Health Sciences Center in Denver. It was followed by a “second big peak” in the mountain chain of work that led to Agre's Nobel, says Mark Knepper of the National Heart, Lung, and Blood Institute in Bethesda, Maryland: structural studies that revealed how the channel works. “This is probably the reason it's the chemistry prize,” says Knepper, rather than a physiology Nobel.

    Crystallized honors.

    Peter Agre (top) and Roderick MacKinnon exposed discriminating channels that allow molecules to pass in and out of cells.


    Each aquaporin channel can pass about a billion water molecules per second. Yet the channels exclude other molecules—most notably protons, in the form of H3O+ (hydronium) ions. In recent years, Agre's group has revealed the secret to this remarkable selectivity. Atomic-resolution images of an aquaporin showed that each channel accommodates about 10 water molecules at a time, lined up single file. The protein's electric field forces the positively charged hydrogens on each water molecule to point away from the center of the channel, so that hydrogens on half of the waters point toward the outside of the cell, whereas those on the other half point into the cell. This orientation both repels protons from the ends of the channel and prevents them from crossing through by hopping from one water molecule to the next.
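
    That throughput sounds enormous per channel but is minute in absolute terms; a quick unit conversion, using only the rate quoted above and Avogadro's number, puts it in perspective:

```python
# Converting the quoted per-channel rate (~1e9 water molecules per
# second) into mass flow; only standard constants are used.
AVOGADRO = 6.022e23       # molecules per mole
MOLAR_MASS_WATER = 18.0   # grams per mole
rate = 1e9                # molecules per second per channel

grams_per_second = rate * MOLAR_MASS_WATER / AVOGADRO
print(grams_per_second)   # ~3e-14 g/s per channel
# A single channel moves next to nothing; the kidney reclaims its
# tens of gallons a day by deploying vast numbers of channels.
```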

    Choosy channels are also the focus of MacKinnon's work. In the early 1990s, while at Harvard Medical School, MacKinnon decided that to really understand the channels he was studying, he needed to see them. That meant learning to do x-ray crystallography—a monumental undertaking tantamount to changing careers. “A lot of us questioned it,” says a postdoc in MacKinnon's lab during that era, Kenton Swartz, now at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland. Getting membrane proteins to crystallize is notoriously difficult, and ion channels are even more unwieldy than most. “It seemed like a pie-in-the-sky idea even to me,” MacKinnon concedes. But it paid off.

    In 1998 MacKinnon sent a jolt through the field with the first high-resolution picture of an ion channel derived from x-ray crystallography (Science, 3 April 1998, pp. 69 and 106). Based on the crystal structure, his team later presented an elegant model of how ions—in this case potassium ions—pass through the core of the channel and explained the channel's ability to let potassium ions through while excluding smaller sodium ions.

    Just as a rock star moves through a crowd with a ring of bodyguards clinging to his person, a sodium or potassium ion moves through a solution with an entourage of water molecules. Passing through the potassium channel's “selectivity filter,” however, requires leaving the escorts behind. The filter makes this easy for potassium ions by providing four conveniently located carbonyl groups. Potassium forms bonds with these just as easily as it does with water, and so it slips through the filter, leaving its waters behind. Sodium, however, is smaller. As a result, it can bind only two of the carbonyl groups at a time. This doesn't provide the energetic incentive needed to lure sodium ions away from their waters; thus the ions retain their escorts and stay outside the filter.

    Prizewinning proteins.

    This year's chemistry Nobel recognizes work on potassium channels (top) and aquaporins.


    Although many in the field had predicted MacKinnon would one day take home a Nobel for this work, most envisioned him getting a slice of the physiology prize. “I think the choice of chemistry is actually very clever,” says Gary Yellen, a biophysicist at Harvard Medical School in Boston. Although the question of how ion channels achieve their selectivity is critically important for biology, Yellen says, the answer was ultimately a matter of chemistry.

    More recently, MacKinnon, now at Rockefeller University in New York City, stirred up the field with the first portrait of a voltage-gated ion channel. These channels reload neurons after they've fired an impulse. Based on the channel's structure, MacKinnon's team presented a model of its mechanism that flew in the face of the view widely held by researchers in the field, many of whom refused to accept it (Science, 27 June, p. 2020). But even the critics acknowledge the research as a tremendous accomplishment and say it has energized the field. “He certainly deserves this,” says Clay Armstrong of the University of Pennsylvania in Philadelphia. “He's packed two or three careers into 10 years.”

  12. 2003 NOBEL PRIZES

    Cool Theories Garner Super Kudos

    1. Charles Seife

    Three theorists have gotten a warm reception for their work on the very cold. Vitaly Ginzburg, Alexei Abrikosov, and Anthony Leggett have been awarded this year's Nobel Prize in physics and will split the 10 million kronor ($1.3 million) award.

    Ginzburg, of the P. N. Lebedev Physical Institute in Moscow, and Abrikosov, currently at Argonne National Laboratory in Argonne, Illinois, were honored for their work on superconductors, materials that lose all electrical resistance at very low temperatures. In 1950, Ginzburg and a colleague, Lev Landau, formulated a theory that describes how superconductors behave in a magnetic field. The Ginzburg-Landau theory implied that superconductors can respond in two different ways when exposed to ever-stronger magnetic fields.

    Type I superconductors are completely impermeable to magnetism; the “field lines” can't pass through the superconducting material at all. If the magnetic field gets too strong for the material to resist, the superconductivity disappears. Type II superconductors, which include all of the famous high-temperature ones, allow field lines to penetrate under some conditions. Abrikosov built upon the Ginzburg-Landau theory to characterize the behavior of type II superconductors; he predicted, for example, that penetrating field lines would create a regular lattice pattern in the superconductor, a phenomenon observed directly in 1967. He also described how ever-increasing field strengths can finally overwhelm even type II materials and rob them of their superconductivity.
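
    The dividing line between the two behaviors falls out of the Ginzburg-Landau equations as a single dimensionless number. In the standard textbook statement of the criterion (a summary, not a derivation from the laureates' papers), the parameter compares the depth to which a magnetic field penetrates the material with the coherence length of the superconducting electrons:

```latex
% Ginzburg-Landau parameter: penetration depth \lambda over coherence length \xi.
\kappa = \frac{\lambda}{\xi}, \qquad
\begin{cases}
\kappa < 1/\sqrt{2} & \text{type I: field expelled until superconductivity fails} \\
\kappa > 1/\sqrt{2} & \text{type II: field threads through as Abrikosov's vortex lattice}
\end{cases}
```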


    Laureate Anthony Leggett (top) plunged into liquid helium; Vitaly Ginzburg (center) and Alexei Abrikosov braved the resistance-free currents of type II superconductors.


    Although a fuller description of superconductivity would have to await BCS theory, which three physicists (John Bardeen, Leon Cooper, and J. Robert Schrieffer) formulated in the late 1950s, “Ginzburg and Abrikosov did extremely important phenomenological work before BCS theory,” says Leggett, who hails from the University of Illinois, Urbana-Champaign. Schrieffer, currently at Florida State University, Tallahassee, says that the Russians' equations “correctly predict where you get a high [magnetic] field coexisting with superconductivity.”

    Leggett's own contribution, however, has to do not with superconductivity, but with a related phenomenon, superfluidity. In superfluidity, a substance such as very cold liquid helium acquires outlandish properties, such as flowing without friction, for reasons similar to those that cause superconductivity. Existing theory explained helium-4's superfluidity nicely, but it didn't seem to work for helium-3, whose superfluid phase was discovered in 1972.

    The breakdown occurred because the atoms in helium-3 pair up, making it a more complex beast than the solitary atoms in superfluid helium-4. Electrons in superconductors form similar pairs, which BCS theory describes handily. But helium-3 pairs have orbital angular momentum and spin that make them more difficult to understand, says Douglas Osheroff, a physicist at Stanford University.

    Shortly after Osheroff and his colleagues made superfluid helium-3 for the first time, Leggett forged a theoretical framework that took the mathematical complexity of the helium-3 pairs into account to explain the puzzling and unexpected behavior of the new substance. “He was as close as you can come to an oracle when it came to helium-3,” Osheroff says. In the process, Leggett brought helium-3 superfluidity under the umbrella of existing theory. “It actually fits beautifully into the BCS pattern, but it has a much richer structure,” says Leggett.

    “I was absolutely floored,” says Schrieffer of his reaction to Leggett's analysis. “We thought that the reach of the theory would be for ordinary metal superconductors” but not for helium-3. Schrieffer says he repeatedly nominated this year's winners for the prize.

    In making the award, the Nobel committee displays a pattern of its own: Four of the last eight physics Nobels have honored work in the physics of low temperatures. Clearly, cold research means anything but a cold shoulder from the Royal Swedish Academy of Sciences.

  13. 2003 NOBEL PRIZES

    Clearer Forecasts for the Dismal Science

    1. Charles Seife

    Time is not always on your side, at least in the volatile world of finance. The complicated mathematics of analyzing how indicators such as stock indexes fluctuate over time stumps economists, but Robert Engle and Clive Granger have helped clear up the confusion. The two have won this year's Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel for giving analysts tools to understand time series—data sets that show how variables change as time flows.

    Economists Engle, of New York University, and Granger, of the University of California, San Diego, revolutionized the way the financial community dissects these unpredictable series. “It's fair to say that their work has had a profound impact on time-series analysis,” says Princeton University economist Yacine Aït-Sahalia.

    Granger showed that making simplistic assumptions about the time-dependent behavior of economic indicators can lead to ridiculous conclusions. In the 1970s, he proved that the techniques then in use could make it look as if random processes are linked even when they're independent. He also came up with ways of measuring whether two indicators that meander in seemingly different ways have hidden relationships. In the short term, for example, a population's wealth tends to fluctuate much more than the amount of goods and services the population consumes, even though consumption and wealth are interlinked. Also, Granger devised means of determining when one economic effect causes another—crucial inferences for economists to make. “It's easy to fall into the trap of mistaking correlation for causality,” says Aït-Sahalia. “This has serious implications for economic policy; in understanding the Federal Reserve you want to know, for example, do changes in interest rates cause a change in employment?”
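
    Granger's causality test is now a standard library routine. The sketch below is a minimal illustration rather than a reproduction of his work: it simulates one series that leads another by a step, then checks the relationship with the grangercausalitytests function from the Python statsmodels package (the simulated series and lag choice are arbitrary).

```python
# Minimal Granger-causality check on simulated data. The
# data-generating process is invented: x leads y by one step,
# so x should be found to "Granger-cause" y.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.standard_normal()

# statsmodels tests whether the SECOND column helps predict the first,
# so stacking [y, x] asks: does x Granger-cause y?
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
# Tiny p-values on the reported F-tests reject the null hypothesis
# that x adds no predictive power for y.
```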

    Time pundits.

    Robert Engle (top) demystified time-series data with changing volatility; Clive Granger untangled economic causes and effects.


    Engle is best known for developing a mathematical technique, known as ARCH (autoregressive conditional heteroskedasticity), for analyzing time series in which the variance, or volatility, of an indicator can change over time. “Before [Engle], people didn't pay attention to variances and took the variances to be constant. It was kind of a leap, conceptually,” says Tim Bollerslev, an economist at Duke University in Durham, North Carolina. Taking that leap is vital to financiers, says Princeton's Christopher Sims. “In finance, you want to predict how volatility changes because you can make money off of that,” he says. “Variance is absolutely central to evaluating risk.” And the central principle of maintaining an investment portfolio, he says, is making the appropriate tradeoff between its risk and its returns.
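
    The heart of ARCH is a one-line recursion: today's variance is a function of yesterday's squared shock. A minimal simulation, with arbitrary illustrative parameters rather than anything estimated from market data, shows the volatility clustering the model was built to capture:

```python
# Simulating an ARCH(1) process: each period's variance depends on
# the previous shock, so calm and turbulent stretches cluster.
# omega and alpha are arbitrary illustrative values (alpha < 1 keeps
# the process stationary).
import numpy as np

rng = np.random.default_rng(42)
omega, alpha = 0.2, 0.5
n = 5000

eps = np.zeros(n)           # simulated "returns"
var = omega / (1 - alpha)   # start at the unconditional variance
for t in range(1, n):
    var = omega + alpha * eps[t - 1] ** 2   # Engle's ARCH(1) recursion
    eps[t] = np.sqrt(var) * rng.standard_normal()

# Volatility clustering shows up as positive autocorrelation in the
# squared returns, even though the returns themselves look random:
sq = eps ** 2
print(np.corrcoef(sq[:-1], sq[1:])[0, 1])   # noticeably positive (~0.5)
```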

    Engle and Granger will split the $1.3 million prize and will attend the ceremony in Stockholm in December. In the meantime, financiers will try to use the pair's methods to make money on their own, and economists will continue to use their analyses and methods to make crucial forecasts. “I think the beauty of it is that the stuff just works,” says Aït-Sahalia.
