News this Week

Science  11 Apr 1997:
Vol. 276, Issue 5310, pp. 196
  1. Neuroscience: What Makes Brain Neurons Run?

    1. Marcia Barinaga

    Neuroscientists disagree over the way brain metabolism creates the signals used to produce high-tech images of activated brain areas. At the debate's center: How much oxygen do active neurons need?

    Open virtually any modern neuroscience text, and you are likely to find striking color images of the working human brain. Recent imaging methods that highlight areas of brain activity in vivid hues have revolutionized the field, helping researchers map the brain regions involved in functions ranging from sensation and movement to language and memory. Given the methods' widespread use, you might expect that neuroscientists know exactly how neural activity produces these images. Surprisingly, however, that's far from the case. Instead, that topic is at the heart of a decade-old debate still raging at scientific meetings and in the published literature.

    Bombshell.

    These PET images from the mid-1980s showed that as blood flow (top) increases with brain activation, oxygen extraction from the blood (bottom) drops, indicating little or no increase in oxygen use.

    Peter Fox and Marcus Raichle/Proceedings of the National Academy of Sciences 83, 1140 (1986)

    The problem is that the most commonly used brain-imaging methods—positron emission tomography (PET) and functional magnetic resonance imaging (fMRI)—don't record the activity of brain neurons directly. Instead, they measure surrogates for neural activity: blood flow in the case of PET, blood oxygenation in fMRI. Prominent neuroscientists disagree over how those indicators actually relate to brain activity, and the field as a whole is searching for experiments that will settle the debate. “It is very important to understand what we are looking at [with] these nice color-coded pictures,” says neuroscientist Per Roland, who does brain imaging at Sweden's Karolinska Institute.

    The implications of this conflict aren't merely academic. Without a full grasp of how the PET and fMRI signals relate to neural activity, neurobiologists worry that they may misinterpret brain images, perhaps missing active brain areas or assigning activity to a larger or different area than is actually activated by a particular stimulus or mental task. Moreover, a better understanding of the brain metabolism that underlies the images we see would help neurologists treat patients with strokes or other damage that affects brain blood flow.

    The dip.

    Grinvald and Malonek see an initial rise in deoxyhemoglobin (top graph and red areas in left panel) in activated ocular dominance columns in the cat's visual system, before a less spatially precise surge in oxygenated blood causes oxyhemoglobin (top graph and red areas in right panel) to rise.

    MALONEK AND GRINVALD

    Some recovered stroke patients, for example, never regain the ability to increase blood flow to active brain areas, and researchers don't know exactly what that means for the health of those areas. “You wonder about circumstances in which there is impairment in vasculature,” says PET imaging pioneer Marcus Raichle of Washington University in St. Louis. “What is the brain being deprived of?” An answer to that question might help design drugs to improve brain function in stroke patients.

    The main issue in this debate concerns whether activated brain neurons need more oxygen than those at rest. Until 12 years ago, this issue was not controversial: Neuroscientists knew that brain tissue is absolutely dependent on oxygen for survival; they also assumed that active neurons use more energy than those at rest and so must consume more oxygen. But their assumption remained untested because oxygen consumption can't be measured directly in living brains, and the indirect methods are arduous.

    In 1985, Raichle and his then-postdoc Peter Fox took on that arduous task and shocked the neuroscience world with their answer: By their calculations, active brain areas don't consume significantly more oxygen than areas at rest. Based on that finding, Fox and Raichle proposed that neural activity produces only a modest boost in a brain area's energy demand, and that demand is met by anaerobic glycolysis—the metabolism of glucose to lactate, which doesn't use oxygen. While their idea at first met with disbelief, it gained wide acceptance over the next decade as evidence grew to support it.

    Now, however, many researchers believe the evidence is shifting back the other way, thanks to more sensitive techniques that show an initial oxygen uptake as brain areas become active. But Raichle and Fox are not backing off, and—given all the pieces still missing from the picture of brain metabolism—the controversy seems far from over.

    Metabolic origins

    The origins of PET imaging date back to work begun in the 1950s by Louis Sokoloff and Walter Freygang, working with Seymour Kety at the National Institute of Mental Health. By then, thanks to earlier work by Kety and his colleagues, researchers knew that the brain has few energy stores and gets most of its energy by oxidizing glucose to carbon dioxide and water. But Kety's earlier work said nothing about what was going on specifically in activated brain areas. To begin addressing this issue, Sokoloff and Freygang used radioactive inert gases to show that blood flow increases to activated areas of cats' brains. Then, in the 1970s, Sokoloff and his colleagues, using an artificial form of glucose that is taken up by cells but only partly metabolized, showed that glucose use surges in activated areas as well.

    Those findings became the basis of human-brain imaging with PET, developed by a number of groups in the 1970s. PET subjects are injected with chemicals, such as water or glucose, labeled with isotopes that emit positrons, which annihilate with electrons to produce gamma rays. A PET scanner detects the gamma rays throughout the brain, and a computer then uses this information to produce images that show changes in blood flow, glucose use, or other functions in specific parts of the brain. PET provided no direct way, however, to measure one key factor in metabolism: oxygen consumption. Based on Kety and Sokoloff's work, researchers simply assumed that oxygen use would mirror the increase in blood flow.

    Fox and Raichle were uncomfortable with that assumption. So, they used a method developed by Mark Mintun, in Raichle's lab, to calculate oxygen consumption from three separate measurements made with PET: blood flow, oxygen concentration in the blood, and the fraction of that oxygen that is extracted by the tissue. At the June 1985 meeting of the International Society for Cerebral Blood Flow and Metabolism in Ronneby, Sweden, Raichle and Fox dropped their bombshell. They had compared blood flow to oxygen consumption in the brains of subjects whose hands were stimulated with a vibrator to activate sensory brain areas. As expected, PET images showed blood flow in the areas jumping by roughly half. But the researchers found only a small, statistically insignificant rise in oxygen use (about 5%) in those areas. That report “stopped the proceedings,” Fox recalls, and the debate that ensued “was an utter madhouse.”
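
    The arithmetic behind that claim is worth spelling out. By the Fick principle, oxygen consumption is the product of blood flow, arterial oxygen content, and the fraction of that oxygen extracted by the tissue, which is how the three PET measurements fit together. The sketch below, using illustrative resting values rather than the study's actual numbers, shows why a roughly 50% jump in flow alongside only a roughly 5% rise in oxygen use forces the extraction fraction down, the drop visible in the bottom panel of the PET figure.

```python
# Illustrative back-of-the-envelope check (not the PET analysis itself):
# oxygen consumption (CMRO2) = blood flow (CBF) x arterial O2 content (CaO2)
# x oxygen extraction fraction (OEF). The resting values below are typical
# textbook figures, assumed for illustration, not Fox and Raichle's data.

def cmro2(cbf, cao2, oef):
    """Cerebral O2 consumption, ml O2 per 100 g tissue per minute."""
    return cbf * cao2 * oef

cao2 = 0.20                         # ml O2 per ml blood, assumed unchanged
cbf_rest, oef_rest = 50.0, 0.40     # assumed resting flow and extraction
cbf_active = cbf_rest * 1.5         # blood flow jumps by roughly half
cmro2_active = cmro2(cbf_rest, cao2, oef_rest) * 1.05   # O2 use up only ~5%

# Extraction fraction implied during activation:
oef_active = cmro2_active / (cbf_active * cao2)
print(f"Resting OEF:   {oef_rest:.2f}")
print(f"Activated OEF: {oef_active:.2f}")  # ~0.28: extraction falls ~30%
```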

    Raichle and Fox's result—combined with their subsequent finding that while oxygen consumption shows little increase, glucose use rises as much as blood flow does—triggered an uproar because it suggested that working brain areas use glucose anaerobically, as muscles do when they run out of oxygen. That idea was “anathema” to brain physiologists, says brain-imaging researcher Richard Frackowiak of University College, London, because it flew in the face of the well-accepted view that brain metabolism is totally dependent on oxygen supply.
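
    A quick stoichiometric check, standard biochemistry rather than anything from the study itself, shows why that mismatch pointed so strongly toward anaerobic metabolism:

```latex
% Complete oxidation of glucose consumes six molecules of O2 per molecule
% of glucose, so fully oxidative metabolism ties oxygen use to glucose use
% in a roughly 6:1 ratio.
\[
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \;\longrightarrow\; 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \qquad\Longrightarrow\qquad
  \frac{\Delta(\text{oxygen use})}{\Delta(\text{glucose use})} \approx 6
  \ \ \text{if fully oxidative.}
\]
% A large rise in glucose use with almost no rise in oxygen use drives that
% ratio for the extra glucose far below 6, leaving glycolysis to lactate
% (about 2 ATP per glucose, versus roughly 30 oxidatively) as the natural
% explanation.
```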

    Fox's explanation for this surprising result was that even resting brain neurons need so much energy to maintain the electrical potential of their copious and leaky membranes that their oxidative enzymes are already working at full throttle—an idea, he notes, that is supported by calculations made in the 1980s by Dutch biochemist Cees Van den Berg of the University of Groningen. If so, then any extra energy needed when the brain area becomes active would have to come from anaerobic glycolysis of glucose, Fox argues.

    That explanation spurred several labs to search activated brain areas for increases in lactate, the end product of the anaerobic glycolysis pathway. James Prichard, Robert Shulman, and their colleagues at Yale University were the first to find it, reporting their results in 1991. “In a sense, that was partial support for the nonoxidative glycolysis hypothesis,” says Shulman. He notes, however, that the lactate peak they saw seemed too small and transient to account for the large increase in anaerobic glucose breakdown predicted by Fox and Raichle's measurements of glucose use. Nevertheless, other labs also found lactate in activated brain areas, and many researchers took that as support for Fox and Raichle's view.

    Still, not everyone's experiments pointed in that direction. Several other groups also had been doing the technically difficult measurements required to compute oxygen consumption. In 1987, Roland's team at the Karolinska Institute, using a variation on the technique employed by Fox and Raichle, measured an 18% increase in oxygen consumption in activated brain areas.

    Roland says Fox and Raichle's results may have been thrown off because they used water labeled with oxygen-15 to measure blood flow. Roland, who uses butanol labeled with oxygen-15, says that water can skew the calculations, diminishing the final value for oxygen consumption. “He is absolutely correct technically,” says Raichle, “but I think it is not a big enough effect to make the difference” between his and Roland's data.

    Albert Gjedde, of the Aarhus University Hospital in Denmark, and his team repeated the Fox and Raichle experiment with a slightly different method for calculating oxygen use and found—as Fox and Raichle had—an insignificant increase in such use. But when they employed a more complex visual image as the stimulus, he says, “we found a very substantial increase in oxygen consumption.” Based on that result, Gjedde proposed that the ability of brain neurons to increase their oxygen use may vary, depending on the type of task a neuron performs.

    On top of that collection of results came, in the spring of 1992, what most people saw as the most persuasive support for Raichle and Fox's hypothesis: the emergence of a new brain imaging technique, fMRI. Its very existence depended on the imbalance between blood flow and oxygen demand that Fox and Raichle had reported.

    Researchers had used MRI for years to map brain structures, based on the different magnetic properties of the organ's tissues. But in 1990, biophysicist Seiji Ogawa at AT&T Bell Laboratories in Murray Hill, New Jersey, proposed that MRI also could be used to follow blood-oxygenation changes in living brains. He based his proposal on a discovery made decades earlier by chemist Linus Pauling: that when the blood pigment hemoglobin loses its oxygen, it becomes paramagnetic, which means it should interfere with the magnetic field during MRI, creating a dip in the magnetic resonance signals emitted by water protons. Ogawa manipulated the blood oxygenation of rats and detected a change in the MRI signal, which he dubbed the blood oxygen level-dependent (BOLD) effect.

    If Raichle and Fox were right in their claim, Ogawa figured that the surge of oxygenated arterial blood to activated brain areas—without a matching rise in oxygen use—would cause a positive BOLD effect. He teamed up with MRI specialist Kamil Ugurbil and his colleagues at the University of Minnesota, Minneapolis, to test this idea in human subjects looking at visual images. The researchers found the predicted surge in MRI signal in the activated brain areas. A team led by Tom Brady and Bruce Rosen at Massachusetts General Hospital in Boston was working on a similar track; when the groups published their findings in 1992, fMRI was born.

    Many neuroscientists took the positive sign of the BOLD signal in fMRI as support for the idea that activated brain areas had very little, if any, demand for extra oxygen to meet their energy needs. “Initially, people didn't quite know what to do with [the anaerobic hypothesis],” says neuroscientist Costantino Iadecola, of the University of Minnesota. “But functional MRI really supported the validity of Fox and Raichle's observation.”

    There was another possibility, however: The blood flow to the active brain areas might have increased so much that it produced an excess of oxygen, even in the face of stepped-up oxygen use by neurons. “If Raichle and Fox are correct, there should be a BOLD effect, but there could be a BOLD effect even if they are wrong,” says Ugurbil.

    New insights

    Despite this uncertainty, the weight of the fMRI findings kept the balance of opinion on Raichle and Fox's side—until recently. Now, different sorts of evidence have begun to shift that balance toward the aerobic point of view once again. The first nudge came in 1995. Shulman and his colleagues observed an increase in the BOLD signal in the sensory area of rats' brains when they stimulated the animals' forepaws with an electric shock. But when the researchers calculated oxygen use (employing nuclear magnetic resonance spectra to follow the metabolism of carbon-13-labeled glucose in active brain areas), they found evidence of “a very large increase in oxygen consumption,” says Shulman.

    Researchers in the field say that finding alone didn't turn the tide of opinion because, like Raichle and Fox's, it was indirect and depended on calculations and assumptions that could be challenged. Then, last April, Amiram Grinvald and graduate student Dov Malonek at the Weizmann Institute of Science in Rehovot, Israel, published more evidence.

    Grinvald and Malonek used a method developed over the past decade by Grinvald and his colleagues at IBM, Rockefeller University in New York City, and later at the Weizmann Institute. The two researchers shone light through holes in the skulls of anesthetized cats and analyzed the spectrum of light reflected off the cats' brains. Because the oxygenated and deoxygenated forms of hemoglobin are different colors, the spectra of light reflected from the two forms are different. This allowed Grinvald and Malonek to measure changes in blood oxygenation as the brain area was activated.

    They saw a rise in deoxyhemoglobin in the cats' visual cortex within 200 to 400 milliseconds after an image was presented to the animals' retinas. But, within 3 seconds, that effect was swamped by a rise in oxyhemoglobin, caused by a wave of extra blood flow. This surge, they observed, kicks in 2 seconds after stimulation and occurs over a larger area than the region where they saw the increase in deoxyhemoglobin. Yet their glimpse of an early dip in oxygenation before the surge in blood flow provides direct evidence, says Grinvald, of “delivery to the tissue of oxygen that had been bound to hemoglobin.” One reason the dip had not been observed in fMRI is that most fMRI images are collected over several seconds and thus would miss such a brief effect.

    Ugurbil, with then-postdoc Ravi Menon, learned of the dip in the early 1990s from work by Grinvald's team and by Juergen Hennig at Freiburg University in Germany. He began tooling up to use fMRI for a look at that early deoxyhemoglobin increase. After team member Xiaoping Hu made technical refinements to improve the sensitivity of the group's fMRI techniques, the Minnesota team tested human subjects with visual stimuli and saw the oxygenation dip in the subjects' visual cortex—with the same timing Grinvald and Malonek had seen. Their findings are in press at the journal Magnetic Resonance in Medicine. A group at the University of Alabama, led by Don Twieg, also has seen the effect. Those findings, along with Grinvald's work, are “consistent with an early deoxyhemoglobin increase,” says Ugurbil, “and this suggests a relatively significant increase in oxygen consumption,” which is masked when the extra blood rushes in.

    These recent results have persuaded many observers that brain activity is indeed aerobic, says Frackowiak, of London's University College: “There still is a controversy over the amount that oxygen utilization goes up, but the thinking of most of the community has shifted” toward an aerobic model of brain activity.

    The mystery of the extra glucose

    Despite what many interpret as the growing evidence that active brain areas do increase their oxygen use, Raichle and Fox are standing firm. Before the issue is resolved, they say, a nagging question must be answered: What becomes of all the extra glucose consumed when a brain area is activated? As Raichle puts it, nobody doubts that glucose consumption rises as much as blood flow in activated brain areas, and that oxygen use—by anyone's measure—lags behind. And that, he says, argues strongly that there must be anaerobic metabolism of glucose in active brain areas.

    Neuroscientist Pierre Magistretti, of the University of Lausanne in Switzerland, has proposed an explanation for the excess glucose consumption. He suggests that non-neuronal brain cells called astroglia metabolize the excess glucose to lactate, which they then pass on to neurons as an energy source. That would explain the transient appearance of lactate in activated brain areas, says Magistretti, whose team has shown that both glia and neurons have the right enzymes and lactate transporters to form the basis for such a lactate shunt. But Fox counters that if such an explanation were true, oxygen use would eventually have to rise to balance glucose consumption, because the neurons would require oxygen to process the lactate. And that, he says, is not what most researchers observe.

    As the controversy boils on, researchers who use imaging techniques puzzle over what the various findings may mean to the interpretation of their images. If Aarhus's Gjedde is right, for example, and some brain areas increase their oxygen use more than others do upon activation, neurons with high oxygen consumption might produce less of a BOLD signal, and fMRI imagers could miss their activity. And Grinvald and Malonek's finding, that blood flow increases over a broader area than does electrical activity, could render the BOLD signal spatially inaccurate at high resolution. The fMRI technique hasn't yet reached a level of resolution where that is an issue, but Ugurbil says it could become one as his group and others push to higher resolution.

    Meanwhile, researchers on both sides of the debate are searching for the definitive experiment to resolve the issue. Some would like to see further exploration of Magistretti's model, while Fox says he is designing better experiments in which to look for the prolonged lactate production he believes to exist in active brain areas. Ugurbil notes that researchers at various imaging centers are gearing up to explore the early oxygen dip that Grinvald's group has found, as well as to search for new ways of using fMRI to measure oxygen consumption. And Raichle wonders what need the excess blood flow is serving, if it's not answering a call for oxygen.

    “What this imaging business has done is to bring to light information about the [brain] and its relationship to its blood vessels and metabolism that we clearly don't understand,” says Raichle. Karolinska's Roland cautions those caught up in the debate not to decide what the answers should be before all the results are in. “We are here to find out what is actually happening,” he says. “Then, we can philosophize about whether it makes sense or not.”


  2. Physics: A Fine-Grained Look at Forces in Sand

    1. James Glanz

    KANSAS CITY, MISSOURI—Next time you stand on a sandy beach gazing out over the ocean, you may be intrigued to know that some surprising physics is taking place right under your feet. At an American Physical Society meeting here last month, Susan Coppersmith, of the University of Chicago, described a set of experiments that have begun to piece together just how weight gets divvied up and passed downward in granular materials such as sand and beads. It turns out that the forces are not distributed evenly among the grains, as you might expect. Instead, like a human pyramid in which some of the tumblers are doing more than their fair share of the work, the weight gets transmitted downward from grain to grain in jagged “force chains.”

    Coppersmith's group—using devices ranging from high-tech optics to low-tech carbon paper—has provided the first three-dimensional views of these chains. And colleagues such as James Kakalios, a physicist at the University of Minnesota, agree that the experiments are “rather clever.” Understanding such measurements, he says, “is very interesting and important, because forces in granular materials do not get transmitted in the same way they would in a fluid.” This penchant for pushing in directions other than straight down, says Kakalios, is important in everything from grain silos, the sides of which may suddenly rupture, to the steady, unfluidlike flow of sand in an hourglass.

    Coppersmith's group includes researchers at the University of Chicago; the Xerox Corp.; the University of California campuses at San Diego and Santa Cruz; and the Tata Institute of Fundamental Research in India. The team used a simple image to convey the topic under study: “a foot on the beach in Aruba.” Coppersmith described two sets of experiments to examine the way grains under the hypothetical foot might be affected. In the high-tech version, researchers used clamps to apply pressure to a 10-centimeter-tall container filled with half-millimeter beads made of a special material whose light-transmitting properties change under pressure, an effect known as “stress-induced birefringence.” As stress increases, this material rotates the polarization of light. Coppersmith's group shone a light through a polarizing filter, then through the beads, and finally through a second filter at right angles to the first. Only light rotated by beads under stress could get through, producing an illuminated map of stress lines.

    To learn how the weight is distributed quantitatively, the group went low-tech. They discovered that when the bottom of the stack of beads rested on a piece of carbon paper placed facedown on a clean sheet of paper, the sizes of the resulting smudges were proportional to the weight on a given bead. “The idea that you can get real, quantitative data from carbon paper blows my mind,” says Sidney Nagel, a Chicago team member.

    The carbon paper not only produced useful results, but those results agreed reasonably well with a quantitative model the group had developed. This model, based on observations of the force chains and computer simulations, assumes that a given bead divides the weight randomly among the three beads on which it rests. This approach leads to sharper spatial concentrations of weight than would be predicted by standard “bell curve” statistics. But the approach keeps the weight more evenly distributed than does “fractal statistics,” used to describe some gases and liquids. “It's quite amazing that you can do so well with such a simple model,” says Coppersmith. She now aims to apply concepts gleaned from this work to phenomena such as shear motion—which occurs as sand or mud begins to avalanche.
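
    A minimal simulation conveys the flavor of that random-splitting rule. The sketch below is an illustrative toy under assumed parameters (lattice size, periodic edges), not the group's actual model or code: a uniform load is passed down a stack of beads, with each bead handing its load to three neighbors beneath it in random proportions. A handful of beads at the bottom end up carrying several times the average load, much as the carbon-paper smudges showed, and the spread of loads is sharper than a bell curve would allow.

```python
# Toy version of the random weight-splitting picture described above:
# each bead divides its load among the three beads it rests on, in random
# fractions. Width, depth, and boundary conditions are assumptions chosen
# for illustration only.
import random

def propagate(width=60, depth=60, seed=0):
    """Pass a uniform top load down through `depth` layers of beads."""
    rng = random.Random(seed)
    layer = [1.0] * width                      # unit load on each top bead
    for _ in range(depth):
        below = [0.0] * width
        for i, load in enumerate(layer):
            # split this bead's load into three random fractions summing to 1
            a, b = sorted((rng.random(), rng.random()))
            for offset, frac in zip((-1, 0, 1), (a, b - a, 1.0 - b)):
                below[(i + offset) % width] += load * frac   # periodic edges
        layer = below
    return layer                               # loads reaching the bottom

bottom = propagate()
mean = sum(bottom) / len(bottom)
print(f"mean load {mean:.2f}, max load {max(bottom):.2f}")
# The largest bottom loads are several times the mean: a few "force chains"
# carry far more than their share of the weight.
```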

    In order to tackle more realistic problems involving piles of grain, says Pierre-Gilles de Gennes of the École Supérieure de Physique et de Chimie Industrielles in Paris, researchers will need to refine the principles they have extracted from this weight-distribution research. For example, he points out, the Chicago model ignores the role of arches of grains that can spontaneously form and direct weight toward the edges of the container. Other groups, notably those of Jean-Philippe Bouchaud at the French Atomic Energy Commission's Centre d'Études de Saclay and Michael Cates at the University of Edinburgh in the U.K., are looking into that issue. The curious physics of sandpiles should keep these researchers occupied for years to come.

  3. Science Publishing: Fermilab Group Tries Plain English

    1. James Glanz

    Leonardo da Vinci composed his notebooks in mirror-image script, in part to make them incomprehensible to prying competitors. A casual reader of today's physics journals, packed with jargon and unwieldy diction, might conclude that modern savants have discovered how to achieve the same goal in standard, left-to-right prose. Now, however, a major research group in one of the field's most intimidating enclaves—particle physics—has decided it cares enough about its readers to post, on the World Wide Web, a “plain English” version of every technical paper it publishes. The seemingly revolutionary experiment aims to bring particle physics to a wider audience and ease the job of scientists trying to stay abreast of specialties outside their own.

    The philosophy is simple, says John Womersley, a physicist in a collaboration known as D0, at the Fermi National Accelerator Laboratory (Fermilab), just west of Chicago. “We really feel that if we can't explain what we're doing, then we shouldn't be doing it,” says Womersley, who wrote the first of three D0 group reports now available on Fermilab's Web site.*

    Early reviews of the Web products' accessibility have been decidedly mixed, but at least one other group has plans to begin a similar program, and some observers predict that more will be forced to follow suit. “D0's ‘plain English’ is a first, very important step” toward better public communication, says Petra Folkerts, a press and information officer at the Deutsches Elektronen-Synchrotron in Hamburg, Germany.

    Members of the D0 group, the name of which is derived from the huge D0 particle detector hunched over Fermilab's Tevatron accelerator, concede that generating publicity was at least part of the motivation for starting the project. “We know there is going to be a lot of pressure on science in the future to justify its existence,” says Harry Weerts, a D0 spokesperson at Michigan State University. “We shouldn't forget where the funding is coming from, right?” Taxpayers, who foot the bills, are on D0's readership wish list. But the physicists say they don't want to create just another type of press release; instead, they hope to educate the public about the way scientific research is usually done—not in big jumps, but in many small steps. That way, says Womersley, “we don't have to worry so much about writing a punchy story that will get us above Michael Jackson's baby” in the daily newspapers. The reports, he says, will be written by the principal author of the technical report itself and reviewed mainly within the collaboration. Then, each will be put on the Web when the technical report gets submitted to a journal.

    Like any literary debut, the group's first efforts have been deconstructed by the critics. “I certainly welcome” their intentions, says Michael Riordan, a particle physicist at the Stanford Linear Accelerator Center and the author of several books, including The Hunting of the Quark. “But I caution that [translating physics to English is] not so easy to do.” The writers' style in the first reports, he says, “assumed the reader already knew what they were talking about.”

    Getting the message beyond the scientific community to the wider public could be difficult. “The main issue,” says Teri Ann Doerksen, who uses science articles as an aid for teaching composition at Hartwick College in Oneonta, New York, “is one I see in composition papers all the time”—expecting too much of the readers. “If you can't gauge your audience, you have no way to figure out what level of detail you need to go into when you define your terms,” says Doerksen. Weerts admits that the expositions are “not totally plain English yet,” but predicts that as D0 blazes the trail, other groups will follow.

    Within Fermilab, at least, the idea already seems to be winning new adherents. Members of a competitor group, called CDF, have announced that they will write lay versions of not only technical publications but major conference presentations as well. “I wouldn't be surprised if other groups do this also,” says Alfred Goshaw, a CDF spokesperson-elect at Duke University. “It's such an obviously good idea.”

  4. Physics: In Search of the Cleansing Axion

    1. Andrew Watson
    1. Andrew Watson is a science writer in Norwich, U.K.

    Few particles offer so much as the axion: Its proponents claim it will iron out a wrinkle in the Standard Model—physicists' description of nature's fundamental particles and forces—while solving the mystery of the universe's missing mass. First, however, this elusive beast must be discovered. Physicists are keenly awaiting the results of two experiments—one just analyzing its first data and the other just starting up—which may finally answer the question of whether these ghostly particles really exist. “If the axion is discovered, it would be an extraordinary triumph for theoretical physics,” says Frank Wilczek of Princeton's Institute for Advanced Study.

    The axion was proposed nearly 20 years ago to overcome a problem in quantum chromodynamics (QCD), a central plank of the Standard Model that describes how quarks—the building blocks of protons and neutrons—are held together by gluons. QCD requires that the interactions between quarks and gluons be symmetrical under time reversal—they must work identically if time runs backward—and that they have handedness symmetry, too: Left- and right-handed quarks and gluons must be treated equally. In 1978, Wilczek and, independently, Steven Weinberg of the University of Texas, Austin, realized that a new particle would be required to guarantee the symmetry of quark-gluon forces. Wilczek had seen a detergent called Axion and thought it sounded like a good name for a particle. “When it appeared that one needed a particle to clean up a problem with the axial baryon number current of QCD, it was impossible to resist,” he says.

    Since then, numerous attempts to detect axions have all ended in failure. Wilczek says part of the problem is that physicists searching for axions in particle collisions or nuclear transitions were looking in the wrong place. Now, they are looking to cosmology and astrophysics: Current theories of the formation of the universe predict that the primordial fireball spawned axions in huge numbers, and we should, theorists argue, be awash in a soup of axions each weighing just a few microelectron volts. There could be as many as a million million axions per cubic centimeter around us, so although they are very light, they could make up a large proportion of the invisible 90% of the mass of the universe.

    Axion proponents are pinning their hopes on two experiments to detect this axion soup. Both are based on an idea first proposed in 1983 by Pierre Sikivie of the University of Florida. “The principle is that an axion may convert to a photon in a large, externally applied magnetic field,” says Sikivie. “The conversion probability is very small but gets enhanced if the conversion occurs inside a cavity tuned to resonate at the [photon] frequency set by the axion mass.”

    One of the experiments involves one Russian and six U.S. institutions and is based at Lawrence Livermore National Laboratory (LLNL) in California. “Basically, the experiment is a radio receiver,” says physicist Karl van Bibber, co-leader of the LLNL-based effort. The receiver is a copper cavity, cooled to 1.3 kelvins and bathed in the strong magnetic field from a 6-ton superconducting magnet. The researchers tune the cavity so that its resonant frequency matches the frequency of the photon an axion should transform into, and if one does, it would be detected electronically.
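
    The “radio receiver” description follows directly from the physics: a photon produced by an axion of rest energy E must show up at frequency f = E/h. The short conversion below assumes only the few-microelectron-volt masses mentioned above and shows why the search lands in the microwave band and why the cavity must be tunable.

```python
# Convert an assumed axion mass (micro-eV) to the photon frequency a tuned
# cavity would need to pick up, using f = E / h. The masses listed are
# illustrative values spanning the few-micro-eV range cited in the article.
PLANCK_EV_S = 4.135667e-15          # Planck's constant, eV * s

def axion_photon_frequency_ghz(mass_microev):
    """Photon frequency (GHz) corresponding to an axion mass in micro-eV."""
    energy_ev = mass_microev * 1e-6
    return energy_ev / PLANCK_EV_S / 1e9

for m in (2, 5, 10, 20):
    print(f"{m:>3} micro-eV  ->  {axion_photon_frequency_ghz(m):5.2f} GHz")
# Roughly 0.5-5 GHz: microwave frequencies, hence the tunable copper cavity.
```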

    The experiment has now been running for a year, and the team expects to finish analyzing the first data this month, says van Bibber. They have some candidate axion signals, but if these do not pan out, the researchers hope to at least set some limits on the number of axions around us, or on the chance that they convert into photons. Van Bibber says they plan to increase the experiment's sensitivity 10-fold next year, putting all favored axion models within reach.

    Meanwhile, across the Pacific, Seishi Matsuki and his team at the Institute for Chemical Research at Kyoto University in Japan are starting up a rival axion experiment that is even more technologically challenging. “We visited his lab in early December … and it is really a fantastic project,” says van Bibber.

    Matsuki's experiment also relies on converting axions to photons in a resonant cavity, but he plans to detect the photons with Rydberg atoms, in which electrons are pushed to the outermost orbitals, almost out of reach of the nucleus's pull. If any axions convert into photons, these are absorbed by Rydberg atoms in a beam passing through the cavity. The beam then passes through an electric field that will ionize only those atoms that have absorbed a photon. “Thus, ions are detected as a signal of axions,” says Matsuki. First, the group is searching for axions with masses of about 10 microelectron volts. “It will take about 3 months to complete the measurements with the present experimental sensitivity,” says Matsuki.

    Van Bibber says even agnostics who are not “sold on axions until they are found” are eagerly awaiting the results of the two experiments: “It is encouraging to us that even those people are strong supporters of our experiment.”

  5. Meeting Briefs: Toxin's M.O. Identified

    1. Jocelyn Kaiser

    CINCINNATI—Record floods on the Ohio River didn't keep more than 4000 scientists from gathering here last month for the annual meeting of the Society of Toxicology. Topics ranged from the possible synergistic effects of endocrine disrupters (Science, 28 March, p. 1879) to toxins produced by marine microorganisms and how pesticides may affect the development of the eye.

    Many years ago, the movie Jaws scared a lot of people right out of the water. Nowadays, the latest findings from marine science labs are apt to play a similar role. In 1993, a marine microbe called Pfiesteria piscicida, which secretes potent toxins, gained notoriety by apparently poisoning two scientists who were studying it. And at the meeting in Cincinnati, other researchers reported that Pfiesteria, one of a broad class of microorganisms that lurk in red tides and other algal blooms, can damage cultured neural and gut cells and cause learning problems in rats.

    Pfiesteria, which has been found from the Delaware Bay to the Gulf of Mexico, is a shifty character that can assume at least two dozen forms. When threatened by a protozoan, for instance, it slips into a predator-engulfing, amoebalike guise, and when fish appear, it takes on a deadly dinoflagellate form. In 1988, botanists JoAnn Burkholder and Howard Glasgow at North Carolina State University in Raleigh identified this dinoflagellate as a major cause of fish kills in coastal waters.

    Researchers became acutely aware of the organism's ability to attack the nervous system in 1993 after Burkholder and Glasgow were poisoned when they inhaled Pfiesteria toxins wafting out of laboratory fish tanks. In addition to experiencing nausea and a burning sensation in the eyes, they suffered from loss of short-term memory, difficulty reading, and disorientation.

    To get a better understanding of Pfiesteria's possible effects on mental processes, a team led by environmental toxicologist Edward Levin of Duke University Medical Center in Durham, North Carolina, gave rats a learning task. The researchers trained rats to walk down eight planks radiating from a platform like spokes on a wheel to reach a reward—a piece of Froot Loop cereal. The rats soon learned that once they had fetched their reward, there was no point in venturing down a particular plank again, and they remembered this lesson even after being injected with Pfiesteria cells. But when rats were injected before training, they learned the lesson much more slowly. “We've nailed it down primarily to learning,” not memory, says Levin, adding that this may explain why the poisoned researchers had trouble picking up new information such as telephone numbers.

    But while scientists may have homed in on the organism's modus operandi, they have not yet identified its weapon. “We just need to find the active ingredient”—the toxin secreted by Pfiesteria—and probe its mechanism, says U.S. Environmental Protection Agency neurotoxicologist Hugh Tilson. As a first step, a team led by North Carolina State toxicologist Patricia McClellan-Green is studying a partially purified extract that contains the poisons. As they reported at the meeting, this extract proved “extremely toxic” when applied to cultured human gut cells and mouse neural cells, causing the cells to lyse at concentrations of mere femtograms per milliliter. Fortunately for fish eaters, says McClellan-Green, the toxin's potency appeared to fade within a few hours. If true, that could mean only the freshest fish would pose a threat to human health.

  6. Meeting Briefs: A Pesticide-Myopia Link?

    1. Jocelyn Kaiser


    Add one more possible health effect to the long list of hazards associated with overexposure to pesticides: eye and vision problems. In Cincinnati, researchers described data suggesting that one widely used insecticide, chlorpyrifos, can alter eye development in young chickens.

    The impetus for the work, led by William Boyes at the U.S. Environmental Protection Agency (EPA) in Research Triangle Park, North Carolina, was an intriguing entry in the annals of medicine: reports of an epidemic of nearsightedness among Japanese schoolchildren in the late 1950s through the early 1970s. The rise in reported myopia coincided with widespread use of certain insecticides, called organophosphates (OPs), that kill insects by inhibiting the enzyme acetylcholinesterase, which also is found in humans. Although no definitive experiments were done, some Japanese researchers claimed the pesticides caused the myopia.

    Boyes's group thought the hypothesis merited a closer look. So, together with collaborators at Duke University and the University of North Carolina, Chapel Hill, the researchers studied pesticide effects in chickens. These animals provide a useful model for investigating the impact of neurotoxins on vision because their nervous system is highly vulnerable to damage from toxicants, and they can develop vision problems like those of humans. For example, researchers can induce myopia in a chick's eye by covering it with a translucent plastic goggle. The developing eye adapts to exposure to blurred images by growing longer, resulting in extreme myopia, says Andrew Geller, the postdoc at EPA who performed the experiments. The researchers hypothesized that if pesticides, in fact, interfered with eye development, a goggled chick dosed with the chemicals would have trouble adjusting to the goggle.

    When the researchers fed the chicks moderate doses of the OP insecticide chlorpyrifos for 1 week, they found that the chicks' goggled eyes did not adapt as much as is typical: The eyes were significantly less elongated. Further, when the team examined the unblocked eyes of chicks dosed for 2 or 3 weeks, they found that the pesticide caused the eyes to grow slightly longer, suggesting that, in this case, the pesticide was making the eye myopic. “This is one of the first visual-testing models to show an OP might affect visual regulation,” says toxicologist Carey Pope of Northeast Louisiana University in Monroe.

    Geller says that more work needs to be done, including studies with mammals and with different doses: “We can't say what we're finding accounts for the Japanese data, but it may be we're starting down the road to understanding what happened.”

  7. Neuroscience: Key Protein Found for Brain's Dopamine-Producing Neurons

    1. Elizabeth Pennisi

    Just as a delicious entree can be marred by too much or too little seasoning, the brain needs exactly the right amounts of all its chemical signaling molecules. Take the neurotransmitter dopamine. People who have too little because they have lost the brain neurons that make this chemical develop the debilitating symptoms of Parkinson's disease. Conversely, too much may contribute to the mental disorder of schizophrenia. Now, researchers have identified a molecular chef that may play a key role in controlling how much dopamine the brain makes.

    On page 248, a team led by Thomas Perlmann of the Ludwig Institute for Cancer Research and Lars Olson of the Karolinska Institute, both in Stockholm, reports that a molecule called Nurr1 plays a critical role during embryonic development in the formation of the group of dopamine-producing brain cells that are lost in Parkinson's disease. Nurr1 also appears to help keep those cells active throughout life.

    Neuroscientists are intrigued by the discovery because it may help explain why that particular set of neurons degenerates in Parkinson's patients. The problem might, for example, result from a defect in Nurr1 activity. “It's a very good candidate to play a role in the pathology of Parkinson's disease,” comments molecular biologist Orla Conneely of Baylor College of Medicine in Houston, whose team originally discovered Nurr1 and now has unpublished findings supporting the current work. The finding also raises the tantalizing possibility that boosting or restoring Nurr1 activity in failing nerve cells may delay or prevent the onset of Parkinsonian symptoms. “Finding a [protein] that affects such a specific [section] of the brain is very exciting,” comments neurobiologist Ron McKay of the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland. “The potential for pharmacology is very interesting.”

    Conneely and her colleagues discovered Nurr1 in 1992 while they were screening mouse tissues for nuclear receptors, molecules in the nucleus that bind to hormones or hormonelike substances and then regulate gene expression. Among their finds was a gene they called Nurr1 because its protein resembles a previously discovered nuclear receptor called Nur77. Both bind DNA, and because their DNA-binding domains are almost identical, it is likely that they control some of the same genes, Conneely says. But unlike Nur77, which is present in tissues throughout the body, Nurr1 was found by Conneely primarily in the brain.

    That location caught molecular biologist Perlmann's eye, particularly when he realized that the gene encoding Nurr1 is most active in the dopamine-producing cells. Working with Rolf Zetterström in Olson's lab, he went on to create knockout mice lacking either one or both copies of the Nurr1 gene.

    As they now report, the mice with no copies of the gene failed to suckle and died a day or so after birth. The major physical difference the Swedish group could detect between these mice and normal animals of the same age was in the midbrain region, which contains the neurons that degenerate in Parkinson's. The cells there were poorly organized, suggesting that they had never specialized into dopamine-producing neurons.

    The team confirmed this suspicion by testing for the presence of proteins known to be produced by these particular neurons. Nurr1, tyrosine hydroxylase (an enzyme critical for dopamine production), and the other proteins they screened for were all absent. With no Nurr1, Perlmann says, “that cell type is clearly missing.” Conneely, whose group has seen the same changes in the Nurr1 knockout mice they made, agrees: “Clearly this [result] suggests that Nurr1 has a selective, essential role in neurodevelopment,” she says.

    In addition, although mice lacking one copy of the gene develop normally, as adults they don't make as much dopamine as do mice with two intact copies of the Nurr1 gene, Perlmann notes. The Swedish team's preliminary experiments suggest that this is because the animals' dopamine-producing cells make less of the neurotransmitter, not because they have fewer of these cells. Thus, the group thinks that Nurr1 plays two roles: Not only does it cause dopamine cells to form in the first place, but it also helps them produce the right amounts of dopamine.

    That idea still needs to be confirmed, however. Indeed, neuroscientist Clifford Saper, of the Beth Israel Deaconess Medical Center in Boston, cautions that “a number of different pieces of the puzzle need to go into place” before researchers will understand just what Nurr1 does. He notes, for example, that it is unlikely that Nurr1 affects only the midbrain dopamine-producing neurons. McKay and Conneely agree. They note that Nurr1 and related nuclear receptors are unusual because their genes are turned on very early when cells receive signals to grow. They then tend to stay on, although their activities may be modified by their natural ligands, in turn affecting which genes the receptors activate. They are “part of a very generalized signaling mechanism that cells use,” says McKay.

    One critical missing piece is the identity of the natural molecule that binds to Nurr1. It will be the target of an intensive search, because it is needed to fully understand Nurr1 function, and it might provide clues to therapies for dopamine-related disorders. If Nurr1 is indeed essential for dopamine production in adults, then it may be possible to treat those disorders by finding drugs that either increase or decrease the amount of Nurr1 activity, suggests Olson.

    Alternatively, researchers may be able to transfer active Nurr1 genes into undifferentiated nerve cells growing in culture, converting them into dopamine-producing cells that could be used to replace those damaged in Parkinson's, Perlmann adds. The use of cultured nerve cells for this purpose is currently being studied (Science, 4 April, p. 66).

    Even better, though, would be a way to prevent Parkinson's disease before it develops, and Conneely thinks the Nurr1 discovery could help here as well. Because this disease does not usually seem to run in families, experts have long sought an external cause, such as an environmental toxicant. It may now be possible to narrow the search by looking at how potential toxicants affect Nurr1. And that could lead to a treatment, she adds: “If we find [toxicants] that inhibit the activity of Nurr1, we may then be able to identify drugs that can counteract that.”

    Perlmann cautions that any applications are mere speculation now, but he hopes that eventually clinicians can learn to keep the brain seasoned with just the right amount of at least one important neurotransmitter.
