News this Week

Science  20 Jun 2003:
Vol. 300, Issue 5627, pp. 1856

    New Institute Aims to Put the Genome in the Doctor's Bag

    1. Andrew Lawler

    Backed by a billionaire entrepreneur and two prestigious universities, a team of scientists is preparing to take a big leap beyond the human genome. This week, the Massachusetts Institute of Technology (MIT) and Harvard University will unveil plans for a new institute designed to transform genetic research into clinical medicine.

    Eric Lander, head of the Whitehead Institute-MIT genome center, will lead the ambitious effort, which will be based in Cambridge, Massachusetts, and will be fueled by an initial pledge of $100 million from Eli Broad, a Los Angeles businessman. MIT and Harvard are committed to raising up to $200 million more in the next decade. The Broad Institute will aim to pull together specialists in more than a half-dozen disciplines to design genetic tool kits for use in the fight against cancer, diabetes, and inflammatory and infectious diseases. “This is biology for the next generation—and we could serve as a Pied Piper,” says Lander.

    Other companies, universities, and institutes are using genomics to develop diagnostics and drug targets, but Lander and his collaborators say that combining MIT's genomics and bioengineering expertise with Broad's money and Harvard's chemistry and practical medical knowledge will give them an edge on this research frontier. The initial focus will be “to understand all components of the cell,” including its variations and circuitry, says Lander. “And not just information—but how to manipulate it as well.” Knowing the molecular basis of disease then sets the stage for the second goal—preventing and treating illness.

    Gene bank.

    L.A. businessman Eli Broad (top) is backing MIT's Eric Lander (bottom) and Harvard's Stuart Schreiber.


    The starting budget of $30 million a year will support about a dozen faculty and 30 associate faculty members, ultimately including biologists, medical doctors, chemists, computer scientists, mathematicians, and engineers. “We each already have an efficient and well-funded infrastructure,” says Stuart Schreiber, a Harvard chemist who will be part of the founding faculty. The core group also includes Todd Golub, a pediatric oncologist at Boston's Dana-Farber Cancer Institute, and Whitehead geneticist David Altshuler.

    The unusual partnership emerged from 2 years of tough negotiations over funding, location, and organization (Science, 21 December 2001, p. 2451). Broad was key in getting the two notoriously competitive universities to join forces. A self-described “venture philanthropist” who built his fortune in housing and real estate, Broad may be known best in Los Angeles as an art connoisseur. But he also has plowed millions of dollars into a biological sciences laboratory at Pasadena's California Institute of Technology.

    A few years ago, Caltech President David Baltimore introduced Lander to Broad, who gave Lander's center a grant to study the genetic basis of inflammatory bowel disease, which the billionaire's son suffers from. During a trip to Boston in October 2001, Broad and his wife toured the Whitehead center, and the following year they invited Lander to dinner in Los Angeles. Lander says that the two men discussed “all sorts of possibilities” on how to organize the new institute and where to locate it. Other sources familiar with the talks say Broad tried to entice Lander to Los Angeles. But outside Cambridge, Lander says, “there are relatively few places in the world which have the critical mass to take this on.”

    Unlike the Whitehead, the Broad Institute will not be an independent entity; MIT will administer it on behalf of the partners. “This is not a freestanding thing,” says MIT President Chuck Vest. “It is an undertaking of both universities.” Adds Harvard President Larry Summers: “This is really a joint venture, and an enormously exciting one.” Vest and Summers say they will go all out to raise the additional $200 million for the effort. The combination of Broad's gift and university fundraising would provide about $30 million a year, says Vest. “I'm confident we'll be able to mobilize the resources,” according to Summers.

    The institute will start without an endowment, though Lander says one may be established later. “Broad wants rapid turnaround,” says one source familiar with the talks. “And he wants to see the fruits of his largesse in this life.”

    The institute will combine MIT's genome center with Schreiber's Harvard Initiative for Chemical Genomics; institute faculty will have joint appointments to MIT and Harvard, a factor that required “a little bit of fancy work” by university administrators, says Lander. University lawyers discovered “it violated no laws of physics for the different institutions to work together,” he explains.

    But Broad Institute leaders have not yet succeeded in bringing private industry into the picture. Novartis, the Basel-based pharmaceutical company, moved its R&D center to Cambridge last year and expressed interest in joining this partnership. Negotiators failed to reach an agreement before the memorandum of understanding was signed last week, however. “I think Novartis's plate has been so full, it was a timing issue,” says Schreiber, “and maybe they were a bit overoptimistic.” Mark Fishman, Novartis's biomedical research chief, says that “we want to be part of such things, and this is just one of the possible ways.”

    Lander and Schreiber say the new institute will emphasize the need to make data freely available, and they predict that companies will understand the need to collaborate openly. Institute staff members, many of whom worked on the genome project, “are all inclined to put things in the public domain,” says Lander, adding that the universities would share patents.

    Although the institute won't exist as a legal entity until November, a couple dozen researchers from Schreiber's lab already have moved into space adjacent to Lander's team. Eventually they all will move into a new building nearby. In the meantime, Lander has a chance to savor a new research agenda. “I'm thrilled” by the challenge, he says, noting that this one is much broader than the goal-driven human genome project. Schreiber likewise is happy that the long period of uncertainty is over. “But I woke up this morning with a feeling of anxiety,” he adds. “I thought, be careful what you wish for.”


    NASA Panel to Ponder Hubble's Demise

    1. Andrew Lawler

    Pulling the plug on one of the world's most popular and productive scientific instruments is an unpleasant—but inevitable—task. So NASA has asked a high-powered group of astronomers and astrophysicists to share the burden of determining how and when to shut down the Hubble Space Telescope.

    A critical issue facing the panel is how often to use the shuttle to replace instruments and equipment on the space telescope, which requires human assistance to remain in orbit and generate data. NASA officials have been planning to service the 13-year-old orbiting facility one last time in late 2004, which could extend its life to the close of the decade, but many researchers are pressing for an additional servicing mission that would keep Hubble operating well beyond 2010. Although Hubble's successor, the James Webb Space Telescope, is slated for a 2011 launch, scientists worry about the possibility of a data gap. “This is a very hot potato issue,” says Princeton University's John Bahcall, who has agreed to chair the NASA panel. “This is not a job to take on to win friends.”

    Hubble's fate is complicated by the Columbia disaster, which puts even the next servicing mission into question. “The equation has changed,” says David Black, visiting scientist at Houston's Lunar and Planetary Institute, who recently chaired a NASA panel that failed to win unanimity on the need for, and content of, a second servicing mission. He fears that any Hubble mission may be seen as too risky, because the shuttle astronauts would be unable to rendezvous with the space station in an emergency.

    Last clasp?

    Safety concerns could disrupt NASA's plans to schedule another shuttle-servicing mission to Hubble.


    NASA science managers want to present a united front this fall, after the Columbia investigation panel completes its report. And Bahcall was a savvy choice to win consensus. He chaired a much-praised 1991 National Academy of Sciences panel that set a long-term vision for astronomy. He will be joined by five other distinguished colleagues, including Nobelist Charles Townes of the University of California, Berkeley, and Martin Rees of the University of Cambridge, U.K. The panel will hold a public meeting in Washington, D.C., on 31 July, and it is also soliciting opinions from the community. “We don't have decision-making powers,” adds Bahcall. “But we can [come to a] conclusion.”

    NASA space science chief Ed Weiler—who was once chief scientist for Hubble—strongly supports ending Hubble's life by 2010 to provide money for other efforts. But others note that Congress might be willing to pony up money for an additional servicing mission to keep Hubble active. To date, Weiler has chosen not to ask lawmakers for new resources. “He has a certain amount of political capital to expend on new missions—but he hasn't wanted to spend it on Hubble,” says one astronomer.

    Some of the strongest advocates for continued Hubble operations are at the Space Telescope Science Institute in Baltimore, Maryland. “Hubble is right now the highest impact scientific mission at NASA,” says Steven Beckwith, the institute director. “It's a living mission, and we have not tapped its full potential yet.” But both sides agree that the new panel, officially the Hubble Space Telescope-James Webb Space Telescope Transition Plan Review Panel, will be highly influential. “They have both the wisdom and the freedom to ask good questions,” adds Beckwith.


    Crib Death Exoneration Could Usher In New Gene Tests

    1. Quinn Eastman*
    *Quinn Eastman has just completed an internship in the Cambridge, U.K., office of Science.

    CAMBRIDGE, U.K.—For years, prosecutors in the United Kingdom have applied an unwritten three-strikes-and-you're-out rule to mothers whose babies die in infancy: One unexplained death is tragic but innocent, two is suspicious, and three is murder. This credo, tested in many a court case, led the U.K.'s Crown Prosecution Service to try a pharmacist named Trupti Patel for murder. Over a 4-year span, Patel and her husband, Jay, lost three babies before the age of 3 months. An open-and-shut case? Far from it. Recent genetic studies that challenge the three-strikes rule were a decisive factor in Patel's stunning acquittal last week in Reading Crown Court.

    The ruling could have profound implications for criminal justice. Well-publicized trials in which multiple cases of sudden infant death syndrome (SIDS) led to murder convictions have tended to discredit the idea that SIDS could run in families. In the wake of the Patel ruling, many lawyers and child protection advocates have criticized the eagerness to prosecute cases of multiple unexplained infant deaths. The outcome could lead to more extensive screening of babies for inherited disorders, as well as to genetic testing of mothers accused of killing their babies.


    Jay and Trupti Patel were all smiles outside the courthouse last week after a jury cleared Trupti of wrongdoing. Recent genetic findings appear to have played a decisive role in the verdict.


    SIDS, sometimes called crib or cot death, is a “diagnosis of exclusion,” notes the American Academy of Pediatrics. Doctors assign a death to SIDS only after an autopsy and examination of the baby's environment and medical history reveal no other possibilities. Although the cause or causes of SIDS are unclear, breathing difficulties appear to play a central role. In the last decade, a “Back to Sleep” campaign urging parents not to put babies to sleep on their stomachs appears to have had major benefits: Since 1991, the number of SIDS cases has fallen by 50% in the United States, although it is still the third leading cause of U.S. infant mortality.

    Pathologists have testified that the odds of two or more siblings dying of SIDS are vanishingly small: When factors such as parental smoking or low birth weight, which increase the risk of SIDS, are excluded, coincidence cannot provide a plausible explanation for multiple SIDS deaths. Negligence or child abuse is a far more likely cause, prosecutors argue.

    The underpinnings of the three-strikes rule rest largely on a 1977 study by retired U.K. pediatrician and child abuse expert Roy Meadow, who described a disorder called “Munchausen syndrome by proxy,” in which caregivers inflict suffering on children in their care to gain attention or sympathy; the syndrome has been invoked to explain multiple SIDS deaths in a single family. Meadow has served as an expert witness in several successful prosecutions of multiple unexplained infant deaths, and he testified for the prosecution in the Patel case; he declined to comment for this article.

    In the Patel case, the defense challenged the basis of the three-strikes rule, arguing that genetics, not coincidence, lies behind the tragic deaths. Suggestions of a genetic link came from Patel's grandmother, who told the jury that five of her own children died in the 1940s in Gujarat, India, including three of unexplained causes before the age of 6 weeks. The prosecution did not offer evidence to the contrary.

    Telltale electrocardiogram.

    A sometimes fatal heart arrhythmia called long QT syndrome may underlie some SIDS cases.

    But it was the scientific testimony that provided the real fireworks. Michael Patton, a clinical geneticist at St. George's Hospital in London, testified that an autosomal dominant inheritance pattern with “variable penetrance” could explain the Patel family's history of infant deaths. He suggested two candidates: mitochondrial respiratory chain disorders—a set of conditions in which mutations in nuclear and mitochondrial DNA affect cellular metabolism—and long QT syndrome, a heart arrhythmia known for striking down young athletes and linked to mutations in four ion channel genes. Patton estimates that doctors miss about 30% of long QT cases, which are identifiable by electrocardiogram.

    Doctors had tested Patel's third child, Mia, for problems with heart rhythms or breathing 10 days before her death and found no abnormalities. However, biochemical tests in two of the infants provided some support for the notion of a mitochondrial disorder. The science-based defense was robust enough to create reasonable doubt in the minds of the jury, which cleared Patel after deliberating for barely 90 minutes.

    Other findings not aired during the trial could explain some SIDS cases. For example, an inability to metabolize fats properly may masquerade as SIDS. And SIDS has been linked to the IL-10 gene: Babies with a particular allele have an exaggerated immune response to common infections, says microbiologist David Drucker of the University of Manchester, U.K. Inherited factors could combine synergistically, he says, with factors such as parental smoking or sleeping position to trigger harmful fluid buildup in the lungs.

    The bottom line, says Peter Fleming, a pediatrician at the University of Bristol who testified in the Patel case, is that pathologists should routinely take and retain tissue and DNA samples and screen for metabolic disorders and heart arrhythmias. Carl Hunt, director of the National Center on Sleep Disorders Research in Bethesda, Maryland, and an expert on SIDS diagnosis, agrees: “There are multiple genetic risk factors for SIDS, just as for any other condition.”

    The Patel case has given momentum to a proposal last month from the U.K.'s Independent Review of Coroner Services that panels conduct inquiries into new cases as they arise. In the United States, the view of genetic explanations for multiple SIDS cases is “just beginning to change and is not widely accepted,” Hunt says. “But based on the explosion of genetic information, we need to take a fresh look at SIDS.”


    Making Clouds Darker Sharpens Cloudy Climate Models

    1. Richard A. Kerr

    Clouds are the great uncertainty of climate prediction. If researchers could get them more or less right in their models, they could be more definite about how warm it could get this century. Plenty remains wrong with model clouds, but researchers are now fixing a crucial shortcoming.

    Measurements published in 1995 cast a shadow over climate models: They indicated that clouds absorb 40% more incoming solar energy, and let less pass through to warm the surface, than the models predicted (Science, 27 January 1995, p. 454). The discrepancy was bad news because where the models take in energy—up in the clouds or down at the surface—makes a difference in how they run, affecting aspects such as the speed of the hydrological cycle and possibly the pattern of climate change. Researchers pointed the finger of blame every which way: the models, the measurements, and even something weird and mysterious about cloud behavior. Now, new observations indicate that the models were clearly at fault, with flawed observations possibly contributing as well. But the good news is that more sophisticated calculations have largely closed the absorption gap between real and model clouds, just in time for the next international assessment of climate change.

    Analysis of a major field experiment conducted in north-central Oklahoma and published 9 May in the Journal of Geophysical Research (JGR) shows a near match within the uncertainties between newly measured cloud absorption and the latest calculations; the clouds really do seem to be darker than once presumed. “The field has benefited from this controversy,” says modeler William Collins of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. “We have much better climate models as a result.” How much better will be clearer once they've finished running the new models.

    Heads up.

    Upward-looking radiometers helped show that clouds are darker than once presumed.


    The dark-cloud controversy was brought to a boil by three papers published in Science in 1995. Using three different approaches involving satellite, aircraft, or ocean data, three groups found that clouds were absorbing roughly 40% more sunlight than model calculations suggested they should. The calculations—made in so-called radiative transfer models—account for how gases (mainly water vapor) and cloud droplets absorb, scatter, and reradiate solar radiation. All climate models contain such codes. If their cloud absorptions were really off by a whopping 40%, the models were portraying climate incorrectly—possibly one reason the models do a less-than-perfect job simulating even current climate.

    To find out, the Department of Energy mounted the Atmospheric Radiation Measurement (ARM) Enhanced Shortwave Experiment (ARESE) during more than a month in late 1995 at the ARM program's Oklahoma site. But despite the instrument-laden aircraft, satellites, and ground stations involved, ARESE was plagued by good weather—so few clouds that only 1 day had enough overcast to yield good data. That 1 day's measurements suggested a huge cloud absorption, too huge, it turned out. Something—engine oil or ice, perhaps—had darkened the radiation inlet of an airborne instrument, experimenters eventually concluded.

    And so was born ARESE II at the same site in 2000. Researchers flew three different sets of upward- and downward-viewing radiometers on a plane above and below layered clouds while instruments on the ground looked up, much as in ARESE I. The bottom line: These clouds, at least, absorbed more than calculated in the 1995 Science papers, and almost as much as two state-of-the-art radiative transfer models say they should, according to an analysis of ARESE II data published in JGR by atmospheric scientist Thomas Ackerman and colleagues at Pacific Northwest National Laboratory in Richland, Washington. A second analysis of ARESE II data by atmospheric scientist Francisco Valero of the Scripps Institution of Oceanography in La Jolla, California, and colleagues also finds a rough match, as long as the latest, most sophisticated radiative transfer models are used.

    All concerned agree that the new and improved radiative transfer models are making a big difference by more faithfully representing true atmospheric absorption. Absorption by water vapor, for example, occurs primarily in innumerable narrow bands of the spectrum separated by intervals of little or no absorption. Early-generation radiative transfer models had a fuzzy picture of water's forest of absorption peaks, which inevitably left out a fraction of water's true absorption. State-of-the-art models correct this problem.

    Another improvement involves absorption of radiation by particles. Researchers have come to appreciate that they must also account for how radiation is absorbed by microscopic aerosol particles such as dust and pollutant soot. When Ackerman and Valero add reasonable aerosol absorptions into their cloud calculations, any remaining gap disappears.

    Climate modelers are taking the new findings to heart. “The past few months there has been a strong emphasis to improve” the radiative transfer calculations in climate models, says climate modeler Jeffrey Kiehl of NCAR. Modelers are also adding absorbing aerosols. Everyone is looking to finalize the models before the start of the next international assessment by the Intergovernmental Panel on Climate Change. “You do see improvement” in the models' match to the amount of radiative energy observed reaching the surface, says Kiehl. In one way, at least, the climate machine in researchers' computers will be a bit more realistic.


    Astronomers Nail Down Origin of Gamma Ray Bursts

    1. Govert Schilling

    It's party time for gamma ray burst researchers. For decades, no one had a clue about the origin of these brief, intense flashes of high-energy radiation. Now, astronomers are confident that titanic stellar explosions known as hypernovae (superenergetic supernovae) are the culprit. Two newly published papers on the best-studied burst so far clinch the case. “This is a 99%-level proof,” says theorist Peter Mészáros of Pennsylvania State University, University Park. “That's as good as things get in astrophysics.” Meanwhile, other papers detail the discovery and evolution of the afterglow the burst left in its wake.

    The revelry began on 29 March, when NASA's High Energy Transient Explorer 2 (HETE-2) satellite detected an extremely bright gamma ray burst in the constellation Leo, lasting some 25 seconds. Within hours, European astronomers studying the burst's afterglow pinned down its distance at a mere 2 billion light-years—much closer than usual. That meant its light should be relatively bright and revealing, says Peter Garnavich of the University of Notre Dame in Indiana. “As soon as the distance was determined,” Garnavich says, “I think everyone had the same idea: This will be a good gamma ray burst to test if they come from supernovae.”

    In the 6 June online edition of Astrophysical Journal Letters, Garnavich and colleagues describe how the “monster burst,” as HETE-2 scientists called it, passed the test. Working with Krzysztof Stanek and Thomas Matheson of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, Garnavich studied the slowly fading afterglow using 6.5-meter telescopes in Arizona and Chile. By 6 April they had recognized the characteristic spectroscopic “fingerprint” of an underlying supernova explosion. Apparently the gamma ray burst had been a massive star's death cry.

    Meanwhile, a large European collaboration led by Jens Hjorth of the University of Copenhagen, Denmark, independently detected the supernova (now known as 2003dh) in spectra taken with the European Southern Observatory's 8.2-meter Very Large Telescope in Chile. In this week's Nature, the team calculates that stellar material is streaking away from the site of the blast at 12% of the speed of light. That's much faster than in an average supernova, evidence that the explosion reached hypernova proportions.

    Now you see it.

    Clues hidden in the afterglow of a nearby gamma ray burst (bottom) tied the cosmic blasts to hyperenergetic exploding stars.


    Also in Nature, Paul Price of the Australian National University in Canberra and colleagues suggest that the 29 March burst may be the “missing link” between normal gamma ray bursts at very large distances and a peculiar burst that flared up in 1998. That burst apparently coincided with a supernova known as 1998bw. But “no afterglow was ever observed, so the direct connection could not be made,” Matheson says, and the burst was so extraordinarily weak that some astronomers dismissed it as an oddity.

    Hypernova 2003dh, however, looks uncannily similar to the enigmatic supernova of April 1998. Garnavich says the resemblance “gives more weight to the idea that [the 1998 gamma ray burst] was a classical burst seen at an angle.” Astronomers think gamma ray bursts spew cones of matter and radiation into space in opposite directions, like pairs of shotgun blasts (see figure). In 1998, astronomers may have seen these jets from the side.

    But the March burst offers astronomers more reason to celebrate. Radio observations beautifully confirm the popular relativistic fireball model for gamma ray burst afterglows. According to this model, the afterglow is produced by shock waves in the jets of tenuous gas that shoot from the blast at almost the speed of light. Using a network of radio telescopes across the United States, a team led by Dale Frail of the National Radio Astronomy Observatory found that 3 weeks after the burst the source of the afterglow was a few hundred billion kilometers across—“what you expect from the relativistic fireball model,” Frail says.

    None of this means that astronomers are about to close the book on gamma ray bursts. “There's still a lot of guesswork. We don't understand supernovae very well,” Mészáros says. For instance, no one knows for sure whether gamma ray bursts always produce black holes, or whether they may leave ultradense neutron stars behind.

    Stanek hopes that the next nearby gamma ray burst will help clear up such mysteries. He may need to be patient, though; according to Frail, nearby bursts are expected to happen only once per decade.


    Has RHIC Set Quarks Free at Last? Physicists Don't Quite Say So

    1. Charles Seife

    Call it the result that dare not speak its name. At a colloquium this week, scientists working on the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in Upton, New York, are expected to present the most compelling evidence yet that they have melted gold nuclei into their component quarks. Although nobody is yet claiming a firm discovery of such a quark-gluon plasma, the new results make it harder and harder to avoid the conclusion that RHIC has created a form of matter last seen a few microseconds after the big bang.

    “It's really pretty,” says Tom Kirk, associate laboratory director for High-Energy and Nuclear Physics at Brookhaven. “It's a very important test of the quark-gluon plasma.”

    Ordinarily, quarks are gregarious creatures that are never found alone. In an atomic nucleus, for instance, they come in packets of three, which make up protons and neutrons. But theorists have long thought that if you pour enough energy into a nucleus, the quark bundles will “melt” and let the quarks roam free, just as water molecules in ice slip their rigid shackles as the crystal melts.

    Over the past few years, evidence has mounted that RHIC, which slams together gold nuclei at more than 99% of the speed of light, has been creating quark-gluon plasmas in the superenergetic collisions. For example, scientists there have seen a phenomenon known as “jet quenching” (Science, 26 January 2001, p. 573). When a collision knocks two quarks together, they fly away in opposite directions, creating twin cone-shaped sprays, or jets, of other particles as their energy turns into mass. But in high-energy gold-gold collisions, about 80% of the expected number of particle jets disappeared, or were “quenched.” This could be a sign that a sticky quark-gluon plasma was trapping the jets as they tried to flee the nucleus.


    Gold-deuterium collisions at Brookhaven, here seen end-on, smack of quark-gluon plasmas.


    But there were alternative explanations. Maybe an atomic nucleus was naturally stickier than expected, or maybe something more complicated was going on. To rule out such scenarios, scientists at RHIC performed another test. Over the past few months, instead of smashing gold atoms with gold atoms, physicists slammed them into much lighter deuterium atoms. Gold-deuterium collisions look like gold-gold smashups—quarks scatter in the center of a nucleus, creating jets that spray away—but too little energy winds up in the nucleus to create a quark-gluon plasma. Thus, the researchers reasoned, the jet quenching should disappear if it's due to a quark-gluon plasma, whereas it should remain the same if it's caused by some unknown quirk of ordinary nuclear matter.

    The jet quenching disappeared. “It's like the difference between −1 and 1,” says Columbia University's William Zajc, spokesperson for PHENIX, one of the four experiments at RHIC. Although John Harris, a Yale University physicist with the STAR experiment at RHIC, won't claim that there's conclusive evidence of a quark-gluon plasma, he says that current theory can't easily explain the effect without it. “An alternative scenario is not one that's been formulated yet,” he says.

    “It looks convincing,” says Karel Safarik, a physicist at CERN, Europe's particle physics laboratory near Geneva. “Now maybe theorists will find an alternate explanation, but as far as I know, there's no known alternative.”

    The next run at RHIC begins late this fall and may well put the matter to rest. Zajc says the teams will be looking for subtle signatures of the creation of a quark-gluon plasma, such as the destruction of particles known as J/Ψs. Until then, RHIC physicists will remain tantalizingly close to being able to claim that they have induced the too, too solid nucleus to melt.


    Diabetes' Brave New World

    1. Jennifer Couzin

    Three major prevention trials have failed to stop a brutal autoimmune disorder in young people; now researchers are testing some riskier strategies

    Like many clinicians in his field, Kevan Herold vividly recalls the first time he tested an aggressive new approach to preventing type I diabetes. In May 1999, he spent 2 weeks running between floors at a New York City hospital, drawing blood from a teenage research subject and ferrying it to the lab. The boy, diagnosed 6 weeks earlier with the disease, had signed up to receive an experimental intravenous drug to suppress his immune system. Herold, a clinician at Columbia University, hoped that the treatment would protect the patient's scant remaining insulin-producing cells from destruction. But the drug was new, and Herold was concerned about the possibility of unexpected side effects. To Herold's relief, the boy did fine, and initial results looked hopeful.

    Four years later, this drama is beginning to play out on a much grander scale, as research teams worldwide prepare similar risky trials. But as clinicians experiment with new approaches like Herold's immunosuppressant therapy, they are sailing into uncharted waters. Supported by tens of millions of dollars from the National Institutes of Health (NIH), the Juvenile Diabetes Research Foundation, and a number of European countries, scientists are trying to prevent type I diabetes before it can be diagnosed or stall its progression early on. Armed with blood tests that can identify those at high risk for the disease, they are testing everything from bottle-feeding special milk to babies to administering drugs used by transplant patients.

    Bitter disappointments, however, are forcing clinicians to abandon less hazardous routes: One by one, the strategies considered the safest are failing. Last weekend, researchers reported at the annual American Diabetes Association meeting in New Orleans, Louisiana, that a planned 5-year trial of oral insulin failed to prevent type I diabetes in nearly 200 people at risk of the disease. The announcement came as a brutal blow, the third in 3 years. Two years ago, a sister trial of injected insulin was found not to prevent type I diabetes. Last year, a dispirited group of Europeans pulled the plug on their vast prevention experiment with nicotinamide, a form of vitamin B. These setbacks have prompted researchers to shift to therapies that modulate the immune system. Unlike the hormone insulin, which has been studied for 81 years, many of these therapies are novel, their science poorly understood.

    The risks are worth taking, Herold and others say, if they can halt type I diabetes and the devastation that accompanies it. An autoimmune disorder that normally surfaces in childhood, diabetes occurs when the body attacks cells in the pancreas that churn out insulin. Complications—which usually don't appear until adulthood—include blindness, kidney failure, heart disease, painful nerve damage, and circulatory problems so severe they necessitate amputation.

    Small subject.

    Four-year-old Becca, a participant in the oral insulin prevention study, is readied for an intravenous glucose infusion to test how well she clears glucose from her blood. Doctors estimate her risk of diabetes to fall between 26% and 50%. Now 6, Becca remains healthy, although the trial failed.


    Proposals to test preventive therapies are generating furious debate and even acrimony among immunologists, endocrinologists, and pediatricians. How much risk, they ask, should we tolerate in trying to prevent or stall a disease that may not threaten life for many decades? How well must we understand autoimmunity, or a drug's potential hazards, before treating an 8-year-old?

    “We can't be cowboys on this,” warns Carla Greenbaum, who directs diabetes clinical research at the Benaroya Research Institute in Seattle. At the same time, the balance is a fine one. “We have to be aggressive,” says Greenbaum, “if we're going to make a difference.”

    Predictable disease?

    At least 13,000 infants, children, and young adults are diagnosed with type I diabetes each year in the United States, making it the second most common chronic disease in young people, after asthma. (The more common form of diabetes, type II, is not an autoimmune disorder; it arises when the body no longer responds properly to its own insulin.) The health of diabetes patients depends on a dizzying succession of finger pricks and insulin shots, a constant struggle to avoid dangerous highs and lows in blood sugar. At the Joslin Diabetes Center in Boston, where several hundred newly diagnosed patients come each year, head of pediatrics Lori Laffel tours the expansive waiting areas, filled with brightly colored artwork and stacks of board games. “We'd love to put ourselves out of business,” she says.

    Joslin began an effort to do just that in the 1980s, when researcher George Eisenbarth dug into a treasure trove in Joslin's freezers. Eisenbarth, now executive director of the Barbara Davis Center for Childhood Diabetes at the University of Colorado Health Sciences Center in Denver, and his colleagues learned that for decades, researchers had been storing blood samples from sets of identical twins, only one of whom had diabetes at the outset. Researchers had long known that diabetes runs in families; here, potentially, were biochemical snapshots of incipient disease.

    Analyzing samples from the healthy twins, Eisenbarth spotted something intriguing: Over time, some began displaying antibodies to islet cells in the pancreas. A specific type of islet cell called the beta cell secretes insulin, which in turn propels glucose into cells throughout the body. The antibodies that Eisenbarth saw indicated that, although the unaffected twins appeared healthy, their immune systems had launched a full-scale assault on the pancreas. “One could see years ahead this loss of ability to secrete insulin,” says Eisenbarth. And indeed, almost all of these children eventually developed full-blown diabetes.

    Work by Eisenbarth and others supports the current view that individuals with at least one of three specific antibodies—called GAD (for glutamic acid decarboxylase), IA-2, and insulin—have a roughly 25% or greater chance of developing diabetes in the next 5 years. Those who have multiple antibodies, harbor certain gene variants, or cannot rapidly clear glucose from their blood may have a risk topping 90%.

    These advances have made diabetes the only autoimmune disease that can be so reliably anticipated. They have also exposed a window of opportunity for preventive therapy. “I call it a state of armed neutrality,” says Edwin Gale of the University of Bristol, U.K. “The immune system has spotted the beta cells as a target, but it hasn't wiped them out. It could well be that there's a kind of bush war going on … islet by islet.” In one of Gale's patients, this “bush war” lasted 18 years before diabetes won out.

    Still, it's difficult to trace the progress of this unseen battle. One reason is that a key indicator of immune activity—the subset of T cells carrying out the attack—is difficult to isolate and quantify. The lack of a simple measure of success or failure makes prevention trials time-consuming and costly to conduct. Today, the only valid way to assess a therapy is by waiting and seeing whether subjects develop diabetes. And that takes years.

    Furthermore, finding the telltale antibodies doesn't guarantee that diabetes will strike. “Some of these kids [who test positive] may become 80 years old and never develop the disease,” says Olli Simell of the University of Turku in Finland. “This has to be kept in mind as well.”

    Stages of destruction.

    Doctors can detect early signs of type I diabetes, an autoimmune disorder that prompts T cells (blue) to attack and destroy healthy beta cells (purple) in the pancreas, wiping out the body's source of insulin.


    Rewriting destiny

    The capacity to forecast diabetes with even 25% confidence put researchers in an uneasy position. “It's an uncomfortable time when we can predict a disease well that's relatively common and don't have a preventive therapy,” says Eisenbarth. Seeking to intervene, hundreds of sites across the United States and Canada joined forces in the early 1990s to launch the first major diabetes prevention effort. It was called Diabetes Prevention Trial-Type 1, or DPT-1, and was led by pediatrician Jay Skyler of the University of Miami in Florida. Researchers had to screen 104,000 relatives of diabetics to track down and enroll just 700 trial subjects, each with at least a 26% risk of developing type I diabetes within 5 years. The youngest were 3 years old. High-risk individuals were randomly assigned to receive either twice-daily injections of insulin or no treatment; half those with a moderate risk of 26% to 50% took the hormone orally and half got a placebo.

    Insulin was favored for several reasons. Earlier rodent and preliminary human data had suggested that it could prevent diabetes. Theories explaining this varied. Some scientists reasoned that insulin could boost the number of normal T cells. Others argued that reducing glucose levels by supplying insulin lessened stress on the beta cells, making it easier for them to fend off immune attacks. Besides, insulin had been injected by diabetes patients for decades and appeared safe.

    “We were naïve in thinking that we knew what we were doing,” says Greenbaum, who expected the trial to succeed. Indeed, some physicians with high-risk children were so convinced of insulin's effectiveness that they refused to let their children be randomized, opting instead to quietly give them the hormone.

    Both the oral and injection arms of DPT-1 have now failed utterly to prevent diabetes—a major letdown. An equally large trial of high-dose nicotinamide in 18 countries reached the same disappointing conclusion, despite patchy earlier evidence from animal and human studies that loading up on the vitamin could boost beta cell defenses. “We gambled a continent each on nicotinamide and insulin and got nowhere,” says a dejected Gale. In both cases, to the relief of researchers, no safety concerns surfaced.

    New strategies

    Many scientists agree that, with DPT-1 ending last weekend, the stakes for diabetes prevention have ratcheted up. While some still aspire to make insulin work, many are pinning their hopes on other experimental therapies—particularly on drugs that suppress or alter immune activity. Although these drugs would inhibit the immune system when given, the hope is that, once withdrawn, they would have lasting, modulatory effects—and that treatments would need to be administered only once, or a few times over many years.

    Two groups are investigating a modified version of a powerful immunosuppressant used by organ transplant recipients. The original drug, which is on the market although rarely prescribed, causes perilous side effects. Jeffrey Bluestone of the University of California, San Francisco, and scientists at Johnson & Johnson found that altering its chemistry made it more tolerable. Their version of the drug is a monoclonal antibody called hOKT3γ1 (Ala-Ala), or simply anti-CD3, and evidence from animal studies suggests that it inactivates the CD3-positive T cells believed to be the primary attackers of the pancreas.

    Bluestone and Johnson & Johnson have been granted a patent for the drug. In 1999, Herold began testing anti-CD3 in a clinical trial of patients as young as 7.5 years old who had just been diagnosed with diabetes.

    “New-onsets” are a favorite for early tests of preventive therapies. They're easier to identify than prediabetic subjects, and it's considered more acceptable to expose them to greater experimental risk. Finally, many still have a handful of beta cells. Researchers hope that they can halt beta cell destruction and preserve some natural insulin production. But they also quietly express a practical concern: New-onset patients have already suffered pancreatic injury and may have a lower chance of responding to therapy than prediabetic children. Negative results from new-onsets might give the misleading impression that a drug is useless, when it may work well in a high-risk child whose pancreas is still largely intact.

    The initial results on anti-CD3, however, have been encouraging. Last year, Herold, Bluestone, and their colleagues reported in The New England Journal of Medicine that 6 months after receiving a 2-week course of anti-CD3, the 12 volunteers had, on average, responded dramatically, requiring nearly 40% less insulin to sustain healthy blood sugar levels. After a year, the therapy's effects were wearing off, although individuals who'd received the drug were still doing better than controls.

    Seeking a balance.

    Diabetes researcher Carla Greenbaum advises clinicians and volunteers on the risks and benefits of prevention trials.


    Another trial using a monoclonal antibody designed by a group in the United Kingdom but similar to Bluestone's version is now under way in Belgium and Germany. Lucienne Chatenoud of the Necker Hospital for Sick Children in Paris and Bart Keymeulen of Vrije Universiteit Brussel in Belgium are coordinating the trial and have enrolled 80 new-onset subjects. Unlike their American counterparts, however, they're including only older adolescents and adults, reflecting a more conservative European attitude toward diabetes prevention. They plan to complete their trial next year and are optimistic about the future of anti-CD3. “To be frank, I think it is the best hope today,” says Necker's Jean-François Bach, who has studied the drug extensively.

    Diabetes experts agree that the drug appears enormously promising. But they also wonder if its impact on a swath of T cells, not just those targeting the beta cells, will have long-term side effects. Some worry that it could reactivate dangerous viruses harbored in the body by dampening disease-fighting capabilities.

    Acute side effects of anti-CD3 include fever, anemia, rash, and joint pain, as well as shifts in the ratios of different types of T cells, Herold's group has reported. The latter persisted for at least 3 months.

    No one denies that anti-CD3 is a potent drug. Indeed, scientists and clinicians are divided over how far to push the envelope, especially given that many eventual recipients of these therapies will be young children. Aldo Rossini, director of the diabetes division at the University of Massachusetts Medical School in Worcester, endorses prevention trials while advocating extreme prudence. “You have to balance the Rossinis of the world, who are ultraconservative and say, ‘I treat the patient as if it's my own kid,’” he notes, “versus the person who says, ‘You have to stick your neck out a little bit.’”

    Walking a tightrope

    The U.S. outfit overseeing this balancing act is TrialNet, an ambitious 9-month-old creation of NIH. Chaired by Skyler, TrialNet is an effort to preserve the complex infrastructure formed for the DPT-1 studies. With $100 million in NIH backing, it's charged with overseeing and assessing large-scale trials of therapies to prevent and stall progression of type I diabetes. TrialNet's leaders, seeking consensus from its dozens of members, are proceeding slowly and with extraordinary care.

    The pace suits some, but not all. “TrialNet would be better off not going for the perfect study,” says Greenbaum. At meetings, she adds, “the tensions are evident” between immunologists, who tend to be more aggressive and impatient in pressing for clinical trials, and wary pediatricians. Bluestone, an immunologist whose father has had diabetes for 30 years and has suffered major complications, argues that “we do [risky trials] in cancer because we see this immediate, terrible outcome.” The same, he says, should be true for diabetes, which can ravage lives 20 years after diagnosis.

    Last year, TrialNet tentatively approved a request from Herold and Bluestone to test anti-CD3 in healthy patients at high risk of diabetes, contingent on additional safety data. TrialNet must now draw up a formal protocol. It also recently gave a nod to a regimen crafted by endocrinologist Peter Gottlieb of Colorado's Barbara Davis Center. Gottlieb plans to recruit 12- to 35-year-olds for a trial that will combine two drugs used by organ transplant patients. Originally, he considered including only volunteers 18 and older, but, he says, “then you're really missing part of the age group you really want to treat.”

    Independent of TrialNet, a few companies are testing another strategy that involves giving subjects peptides, bits of proteins, to ameliorate the autoimmune damage. TrialNet is also weighing some peptide studies. In animals, researchers have found that certain peptides can slow or stop T cell attacks on the pancreas. This occurs, Greenbaum and others speculate, because the peptide activates a favorable T cell response that overwhelms the destructive one. But these peptides are also risky, Bluestone says: Subtly varying how and when the drug is given could trigger an immune attack, causing or accelerating diabetes. It may be possible to prevent this by altering the peptides or administering them along with immunosuppressant drugs, he adds.


    Proceeding cautiously, an Israeli company, Peptor, has partnered with the French drug firm Aventis to run one of the first U.S. peptide trials. They're enrolling 100 adults with a slow-progressing form of type I diabetes. Additional peptide clinical trials are being sponsored or considered by the Immune Tolerance Network, an organization headed by Bluestone and established by NIH 3 years ago to oversee studies in autoimmune diseases, transplantation, allergy, and asthma. There is no answer yet for the big question facing all of these therapies, says Rossini: “Is the drug better or worse than 20 or 30 years of diabetes?”

    Clashing on risk

    Some European scientists look askance at how U.S. diabetes trials are being designed. “Americans are always, dare I say it, a bit more trigger-happy, more willing to go for a new intervention,” says Gale. Finland's Simell agrees: Although immunomodulators may be “the way to go,” he says, he's not interested in testing them on children yet.

    Simell, a pediatrician, is overseeing Finland's Diabetes Prediction and Prevention Study (DIPP). Launched in 1994, DIPP screens every infant born in three Finnish cities; those with the highest risk genotypes are followed until they turn 15. Nearly 80,000 babies have been screened so far, and 10,000 have joined the DIPP cohort. Of those, 550 have developed antibodies. These children can participate in an ongoing prevention trial that gives insulin by nasal spray. Although oral and injected insulin have failed, Simell hopes for better results because his therapy is introduced earlier—as soon as antibodies surface.

    The Finns have also inspired a massive $25 million dietary experiment spanning Europe, Australia, the United States, and Canada, called the Trial to Reduce Insulin-dependent diabetes in the Genetically at Risk (TRIGR). Currently, TRIGR is recruiting thousands of pregnant women who are type I diabetics or have a diabetic husband or child. Babies' cord blood is screened at birth for higher-risk genotypes. Qualifying infants receive either regular formula or Nutramigen, a formula for babies with allergies in which the large proteins of cow's milk have been broken down. Pilot studies found that babies drinking Nutramigen had fewer diabetes-related antibodies, but the reason for this is unclear.

    The cautious tactics of DIPP and TRIGR highlight divergent views on risk in the community. “How aggressive are you willing to be?” asks Dorothy Becker, a pediatric endocrinologist at Children's Hospital of Pittsburgh who's overseeing the North American arm of TRIGR. Her center declined to participate in DPT-1, she says, because the hospital worried about the hazards of insulin and wasn't confident that existing tests were good enough to pinpoint exactly who would develop diabetes, and when. Becker says that methods have greatly improved since then, and it's now clear that insulin is safe, but she still questions the ethics of risky prevention strategies.

    Asked what side effects might prompt him to disqualify a preventive therapy, Herold pauses to consider. The “appearance of malignancies” is certainly one, he says, as is profound, long-term suppression of the immune system.

    In the end, it will be diabetes-prone children and their families who wrestle with these uncertainties most directly. Each will have to consider whether the risks are worth taking, a question no one will be able to answer for them.


    Can Northern Snow Foretell Next Winter's Weather?

    1. Richard A. Kerr

    Snow cover in the Northern Hemisphere last summer might have predicted this past winter's abnormal cold on the U.S. East Coast

    When there's no strong El Niño or La Niña in the tropical Pacific to influence weather at higher latitudes, forecasters trying to predict next winter's weather have next to nothing to go on. But some meteorologists think they may have found a helpmate for El Niño: the snows lingering in high northern latitudes in summer.

    Researchers have found an apparent connection between the extent of northern snow cover in summer and the state of the vortex of winds swirling around the pole the next winter. Because that northern vortex sets the tone for winter weather at mid-latitudes, the finding holds the promise of better winter forecasts a season ahead, says dynamical climatologist Roxana Bojariu of the National Institute of Meteorology in Bucharest, Romania. The catch is that “at this stage, we don't have a clear mechanism, a theoretical framework,” to explain the connection, she cautions. But some meteorologists are using the link anyway to make winter forecasts, and last winter they even did better than the official U.S. forecast.

    The winter forecaster's bugaboo is the Arctic Oscillation (AO). It's a wobbly atmospheric seesaw that alternately pumps up and slows down the vortex of westerly winds circling the pole, while shifting it latitudinally (Science, 19 October 2001, p. 494). In its strong phase, the AO brings warmth and added storminess to mid-latitudes. Weakened, it retreats and allows frigid arctic air to penetrate farther south than normal.

    Blame last summer?

    Late-lingering snow at high latitudes may influence the next winter's weather.


    But the AO is itself hard to predict, marching to its own erratic drummer. Only the El Niño-La Niña temperature swings of the tropical Pacific seemed to be able to steady the AO in its strong or weak phase, and then only in wintertime over North America. And even El Niño doesn't always get a good grip on the AO. Last winter, with a moderate El Niño under way, forecasters at the National Weather Service's Climate Prediction Center (CPC) in Camp Springs, Maryland, predicted an unusually warm winter for the northern half of the conterminous United States, based on El Niño's expected influence. Instead, the winter turned out abnormally cold in the East and warm in the West.

    Was something besides El Niño tweaking the AO last winter, or was that just another random wobble? Meteorologist Judah Cohen of Atmospheric and Environmental Research Inc. in Lexington, Massachusetts, thinks there was nothing random about it. He and colleagues have published reports in Geophysical Research Letters (GRL) and elsewhere that when snow covers an unusually large part of Siberia, as it did last October, the wintertime AO tends to be weaker and the Eastern United States tends to be colder. That's the way Cohen went with his private forecast for last winter, beating out CPC's El Niño-driven forecast.

    It now appears that Cohen wasn't just lucky. On 5 April, meteorologists Mark Saunders, Budong Qian, and Benjamin Lloyd-Hughes of University College London (UCL) in Dorking, U.K., reported in GRL that the summertime extent of snow around the whole Northern Hemisphere is a good predictor of the North Atlantic Oscillation (NAO), the AO's strongest regional expression—even better than Cohen's fall Siberian snow cover.

    Snow can make you blue.

    When more snow covers the Northern Hemisphere in summer, the eastern United States and northern Europe tend to be colder (blue) the following winter.


    In hindsight, using the extent of Northern Hemisphere summer snow to predict the strength of the NAO between 1972 and 2001 would have been 20% to 35% more successful than guessing that it would be normal, the UCL group found. And 75% of the time, snow cover would have correctly predicted the sign of the NAO: strong or weak. A few months earlier, Bojariu and Luis Gimeno of the University of Vigo in Ourense, Spain, had found a similar correlation between summer snow cover over all of Eurasia and the winter NAO. The connection “doesn't work every single year,” says Saunders, but “I think it is quite compelling; it merits serious attention.”
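
    For readers curious how such a hindcast skill figure is arrived at, the sketch below shows one common recipe: a leave-one-out regression of the winter NAO index on the preceding summer's snow-cover anomaly, scored both by how often it calls the sign of the NAO correctly and by how much it beats a forecast of “normal.” The data here are invented placeholders, not the UCL or Bojariu results; the published studies use observed snow-cover and NAO records and more careful cross-validation, so the numbers below are purely illustrative.

        # Minimal sketch (hypothetical data): scoring a snow-cover-based NAO hindcast.
        # 'snow' is a summer snow-cover anomaly, 'nao' the following winter's NAO index.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 30                                            # 1972-2001, as in the UCL study period
        snow = rng.standard_normal(n)                     # placeholder snow-cover anomalies
        nao = -0.6 * snow + 0.8 * rng.standard_normal(n)  # toy relation: NAO weakens after snowy summers

        # Leave-one-out hindcast: fit NAO against snow cover using all other years,
        # then predict the withheld winter.
        pred = np.empty(n)
        for i in range(n):
            keep = np.arange(n) != i
            slope, intercept = np.polyfit(snow[keep], nao[keep], 1)
            pred[i] = slope * snow[i] + intercept

        hit_rate = np.mean(np.sign(pred) == np.sign(nao))   # winters with the NAO sign called correctly
        mse_forecast = np.mean((pred - nao) ** 2)
        mse_climatology = np.mean(nao ** 2)                 # "guess normal": always predict an average NAO
        skill = 1.0 - mse_forecast / mse_climatology        # fractional improvement over that guess

        print(f"NAO sign predicted correctly in {hit_rate:.0%} of winters")
        print(f"Skill relative to a 'normal winter' guess: {skill:.0%}")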

    The idea has met with a chilly reception, however—which is not unusual for any proposed link over thousands of kilometers of atmosphere and months of weather. “I'm a little cautious about this hypothesis,” says meteorologist Rowan Sutton of the University of Reading, U.K. “You've got to be really careful making correlations on short time series.”

    Everyone would be more comfortable with the technique if there were a good explanation for why it works as well as it appears to. Both Cohen and Saunders offer mechanisms that begin with snow cover chilling the overlying air, which goes on to alter atmospheric circulation and the AO-NAO. But “all these scenarios involve quite a few feedbacks that aren't well understood,” says meteorologist James Hurrell of the National Center for Atmospheric Research in Boulder, Colorado. “The more feedbacks involved, the more difficult it is to prove what you're looking at is real.”

    It may well be that something else—varying tropical sea surface temperature is Hurrell's favorite—is driving variations in both snow cover and the AO-NAO. If so, identifying the true driver should improve forecasting. Computer modeling, in which the various possibilities can be evaluated separately, will no doubt be required.


    Excited by Glutamate

    1. Constance Holden

    Fine-tuning drugs to act on particular glutamate receptors could lead to treatments for anxiety, depression, addiction, and even schizophrenia

    NEW HAVEN, CONNECTICUT—In 1988, says Darryle Schoepp, director of neuroscience discovery research at Eli Lilly in Indianapolis, “someone told me, ‘Don't work in glutamate—you'll never develop a safe drug.’” But times have changed for glutamate, the brain's most pervasive neurotransmitter. An April meeting* here highlighted several potential new drugs based on recent discoveries about glutamate and its surprisingly varied family of receptors.

    Glutamate research has been overshadowed in the past by work on other molecules that carry signals between brain neurons. “We're emerging from a field that has been hypnotized by dopamine for many, many years,” says psychiatrist Eric Rubin of Columbia University in New York City. In the areas this meeting focused on—disorders of cognition and motivation—dopamine stands out as the main subject of scientific scrutiny over the past few decades. It plays a central role in addiction and is targeted by all major drugs for schizophrenia. More recently the spotlight has shifted to serotonin, the basis of a family of antidepressants that includes Prozac.

    But drugs that modulate glutamate activity hold promise for treating an exceptionally wide range of disorders, including schizophrenia, depression, anxiety, addiction, pain, epilepsy, and neurodegenerative diseases such as Parkinson's and Alzheimer's. At the meeting, researchers described what is expected to be the first approved drug designed specifically to target this system—an antianxiety drug that Lilly hopes will hit the market within 3 years. Numerous other glutamate modulators are also in the clinical pipeline.

    Glutamate is involved in so many conditions because it is the brain's main excitatory neurotransmitter, sending “go” signals to neurons that are attuned to it. It's “the workhorse of the brain,” says Robert Malenka of Stanford University. It strengthens synaptic connections and consolidates new pathways throughout the brain.

    But glutamate's pervasiveness makes it a vastly more complex challenge for drug designers than dopamine. Virtually all brain cells have receptors that allow them to respond to glutamate. And well over half of the brain's 100 billion neurons generate the neurotransmitter. In contrast, the brain has only about 10,000 dopamine-generating neurons.

    Neurobiologists once thought that developing safe glutamate-based drugs was impossible because anything targeting this neurotransmitter was likely to have effects—and side effects—all over the brain. Glutamate-stimulating drugs can induce seizures, for instance. Indeed, early clinical interest focused on curbing glutamate's bad qualities. It's toxic in excess, overexciting neurons to the point of killing them. Glutamate causes much of the damage that occurs after a stroke, and it's also probably the chief neuron-killing villain in amyotrophic lateral sclerosis and some other neurodegenerative diseases.

    Even as recently as the late 1990s, says Schoepp, “the only promising thing” on the glutamate drug horizon was a class of compounds called NMDA antagonists, which block glutamate's action at the so-called NMDA receptor. But although NMDA antagonists can produce a variety of desirable effects, from blocking drug craving to mitigating the results of a stroke, they can't be used clinically because they make people psychotic. The hallucinogenic drug phencyclidine (PCP), for example, is an NMDA antagonist.

    Before the 1990s, all glutamate actions were thought to be mediated by ion channel receptors. When glutamate binds to these receptors, they open a channel for positively charged ions, thus stimulating the neurons that carry the receptors. The ion channels are classified into subtypes according to which glutamate-mimicking molecule activates them: NMDA, AMPA, or kainate. The ion channel receptors generally proved to be poor drug targets, however, because their effects are too nonselective, leading to dangerous side effects. But in recent years, the discovery of a variety of new types of glutamate receptors has opened up a host of potential specialized targets in different parts of the brain. One of the most noteworthy advances has been the discovery of a family of modulatory receptors called metabotropic glutamate (mGlu) receptors, of which eight have been discovered so far.

    The mGlu receptors operate very differently from ion channel receptors, says Jeffrey Conn, formerly of Merck and now at Vanderbilt University in Nashville. Because the receptors activate biochemical processes within cells rather than opening the gates to ions, they are capable of “subtle modulatory actions.” And, unlike the ion channel receptors, they are located on both sides of a synapse—that is, they operate in both the sending and the receiving neurons.

    “Discovery of the mGlu receptors dramatically changed our view of glutamate signaling,” says Conn. It revealed that glutamate doesn't simply excite neurons but, through mGlu receptors, can fine-tune neuronal signaling, toning down or beefing up transmission in specific brain circuits. And because the various mGlu receptors are distributed in different regions of the brain, targeting one subtype should have little effect on other glutamate-based communication.

    First payoff

    The first drug payoff from research on mGlu receptors is likely to be an antianxiety drug based on a compound called LY354740 produced by Lilly. This compound activates a particular mGlu receptor that helps maintain normal glutamate concentrations in the brain. It dampens glutamate output and excitability when the levels get too high but doesn't interfere with normal glutamate excitation.


    A candidate antianxiety drug acts on glutamate receptors in a rat's forebrain (yellow and orange), but it doesn't tweak glutamate-based communication in other parts of the brain (blue).


    Schoepp and his colleague Gary Tollefson reported at a meeting of the American Psychiatric Association in San Francisco in May that Lilly now has promising results from a trial involving about 600 people with generalized anxiety disorder. He says LY354740's efficacy matches that of benzodiazepines, which include Valium. The advantage is that it does so without the side effects or withdrawal associated with benzodiazepines, which operate on receptors for the neurotransmitter GABA. Schoepp says that the company will seek approval from the Food and Drug Administration for LY354740 in the next couple of years. “This compound is really the beginning,” he predicts.

    Schizophrenia is another disorder that may one day be treated by glutamate-targeting drugs. That fits well with the growing consensus that cognitive deficits—and not psychosis—are at the heart of schizophrenia (Science, 17 January, p. 333). Existing antipsychotic drugs curb symptoms such as delusions and hallucinations by reducing dopamine transmission. But “there's no real evidence for pathology in the dopamine system itself,” explains Anthony Grace of the University of Pittsburgh. He and others suspect that the disease is driven by disruptions in the glutamate system.

    The glutamate theory of schizophrenia features a central role for the NMDA receptor not only in psychosis but in a range of other cognitive and emotional symptoms. The case is based in large part on the fact that, when given to healthy people, NMDA antagonists such as PCP and ketamine generate symptoms, including disturbed cognition and emotional withdrawal, that bear an uncanny resemblance to those of the disease. And “if you give ketamine to a schizophrenic subject, they can't tell the difference from a relapse,” says Grace.

    By blocking NMDA receptors, these drugs block the capacity of neural projections called dendrites—which are already thin and scraggly in schizophrenia—to conduct and coordinate signals. At the same time, PCP and ketamine appear to abnormally enhance glutamate neurotransmission at non-NMDA receptors. The bottom line is what conference co-organizer Bita Moghaddam of Yale University calls “a state of glutamatergic chaos” that results in the kind of disorganized cortical activity found in schizophrenia.

    Scientists are therefore looking at compounds that either enhance NMDA receptor function or reduce the excess release of glutamate as potential treatments for schizophrenia. Promising targets include regulatory sites on the NMDA receptor. One example, says John Krystal of Yale, is a site where the amino acid glycine binds; its stimulation facilitates NMDA receptor function and reduces the effects of ketamine.

    As for decreasing glutamate release, some researchers hope that Lilly's new antianxiety compound will also fight schizophrenia. Moghaddam and her colleagues showed that pretreating rats with LY354740 blocks symptoms such as cognitive impairments and psychotic-like behaviors that would otherwise be caused by PCP (Science, 28 August 1998, p. 1349). The compound also reduces the excess glutamate release that PCP elicits.

    Although LY354740 has not yet been tested on humans with schizophrenia, Krystal and colleagues have taken the first step: They have found in normal human subjects that both LY354740 and the anticonvulsant lamotrigine reduce some of the cognitive effects of ketamine. Some preliminary studies also suggest that lamotrigine may enhance the efficacy of the antipsychotic drug clozapine in schizophrenic patients.

    Kicking the dopamine habit

    Glutamate is also moving in on dopamine in the field of addiction research. According to conference co-organizer Marina Wolf of the Chicago Medical School, the current view of addiction focuses on the importance of learning, and glutamate is the chemical that engraves new learning in the brain. So although dopamine may fuel the high people feel from taking an addictive drug, it looks increasingly as though it's glutamate that gets people hooked.

    Researchers are now looking for drugs that act on glutamate receptors to block certain stages of addiction, such as the sudden, intense craving that can hit cocaine addicts in particular even after years of abstinence. Encouraging evidence has come from animal research. For example, the cough medicine dextromethorphan, a weak NMDA antagonist, has been shown to prevent tolerance to opioids in rats. And Wolf and others have shown that administration of an NMDA antagonist called MK-801 prevents rats from getting sensitized to cocaine (Science, 26 June 1998, p. 2045). Unfortunately, says Stanford's Malenka, there's no drug in the works yet for human addicts: “MK-801 makes people psychotic.”

    Glutamate-based drugs might also straighten out cognitive distortions that encourage addicts' powerful denial mechanisms and wacky risk assessments, says Charles Dackis of the University of Pennsylvania (Penn) in Philadelphia. Glutamate is crucial for higher thought processes in the prefrontal cortex, and repeated cocaine use alters glutamate transmission there, says Wolf.

    Targets galore.

    Glutamate sends its messages via many different channels. Researchers are aiming at particular types of glutamate receptors in hopes of creating highly refined therapeutics.


    Penn researchers are now studying a drug that they think may help cocaine addicts stay clean. It's modafinil, a wake-promoting agent used to treat narcolepsy, a disorder in which patients fall asleep suddenly during the day. The drug, which activates glutamatergic circuits and inhibits GABA, appears to reduce craving in addicts. A double-blind study is currently being conducted at Penn. “We're very excited about this drug,” says Dackis. “It's one of the few drugs that's glutamate enhancing” without inducing seizures.

    Depression, the most common mental illness, may also yield to glutamate-based drugs. Indeed, says Ian Paul of the University of Mississippi Medical Center in Jackson, “NMDA antagonists have been demonstrating potential as antidepressants since 1959.” This feature was discovered when tuberculosis patients unaccountably cheered up after treatment with an antibiotic, cycloserine, which is also an NMDA antagonist. Paul says other NMDA blockers, including amantadine, used to treat Parkinson's disease, have also shown antidepressant effects.

    “For the longest time folks arguing for a role for glutamate in affective disorder have kind of been voices crying in the wilderness,” says Paul. But a lot of people took a new look in 2001, when Robert Berman of Yale published a study showing that when depressed patients were given an intravenous infusion of ketamine, the result was an antidepressant response that persisted long after psychotic symptoms disappeared. Animal studies show the same thing. The key to getting positive effects is to partially block NMDA receptors, says Paul. That's where drugs that stimulate the subtle action of mGlu receptors are needed: “You can block [mGlu receptors] and people don't become psychotic,” he says.

    What drugs are next in line to come on the market? “One big advance that's coming,” in Conn's opinion, is compounds that hamper specific subunits of NMDA receptors. Many NMDA receptors are made up of several subunits, the exact combination of which may vary depending on the type of neuron on which the receptors appear. By targeting just one subunit, researchers can localize drug actions just to areas of the brain where NMDA receptors carry that subunit, Conn explains.

    A subunit of particular interest is called NR2B. Ken Koblan of Merck in West Point, Pennsylvania, says that based on the areas where it is concentrated, this receptor population looks like a promising target for both Parkinson's disease and pain control. Parkinson's might be slowed through the inhibition of excessive glutamate activity in brain areas where the dopamine neurons are dying. And NR2B subunits are also found in the spinal cord, where blocking glutamate can block transmission of pain impulses. NR2B antagonists have been found safe in animal tests; currently investigators are conducting toxicity studies in macaques. The goal, says Koblan, is to start clinical trials in a year or so.

    Down the road, there's the AMPA receptor, one that has scarcely begun to be explored, says Paul. He says there's some evidence that stimulating AMPA receptors increases levels of brain-derived neurotrophic factor, a protein that keeps brain cells fit. That would make it of interest for depression as well as for any condition involving deteriorating cognition, including Alzheimer's. AMPA receptors also may be key players in the craving that drives addiction.

    Thanks to the broadening of the horizons for glutamate, “I believe there is a renaissance in terms of molecular neuroscience targets within the pharmaceutical industry,” says Koblan. The potential payoffs for glutamate-based drugs are dazzling. But so are the perils, as Koblan points out: “The path [to successful therapeutics] is littered with many people's dead compounds.”

    • *“Glutamate and Disorders of Cognition and Motivation,” 13 to 15 April, sponsored by the New York Academy of Sciences.


    NSF Hopes Congress Will See the Light on NEON

    1. Jeffrey Mervis,
    2. Jocelyn Kaiser

    U.S. ecologists want to enter the era of big science with an ambitious and costly network of high-tech observatories

    In 1997, National Science Foundation (NSF) officials had a dream: to build a cross-country network of high-tech field stations for ecologists. The network would let researchers carry out all manner of cutting-edge science while monitoring the health of the local ecosystem. The data would be accessible to anyone, in real time, and would serve the needs of policy-makers as well as scientists.

    By 1999, the project had acquired a name and a snappy acronym, the National Ecological Observatory Network (NEON). The use of the word “observatory” was no accident: As ecology's first “big science” project, NEON is the biological equivalent of a ground-based telescope, a major facility that researchers can use for a range of studies. And it has a price tag to match—$391 million.

    But despite support throughout the foundation, NEON has yet to see first light. Although the project has been part of NSF's budget request in three of the past four fiscal years, including the 2004 plan now pending before Congress, skeptical legislators have yet to award it a dime. And although backers have worked to sharpen the concept—including a March report explaining its scientific rationale and its importance to researchers—the effort hasn't yet registered with many scientists. “There's still a lot of rumor and confusion and mythology,” says Scott Collins, a former NSF program manager. He guesses “that about 20% are pro-NEON, 20% are anti, and 60% don't know what it is.”

    Last week a National Academies panel began a study, at NSF's request, of the pressing continental-scale issues facing ecology and whether NEON could help address them (Science, 6 June, p. 1487). Expecting a positive answer, the foundation has asked for a preliminary report by August, a superfast track that will enable the findings to be part of NSF's 2005 budget submission this fall to the White House. Others, however, worry privately that, like day-old sushi, NEON may be rapidly approaching its expiration date.

    A bright idea

    NEON began as a gleam in the eye of ecologist Bruce Hayden, who in August 1997 took leave from the University of Virginia, Charlottesville, to lead NSF's environmental biology division. Mary Clutter, head of NSF's biology directorate, told him to think about “observatories,” he says, large shared facilities analogous to telescopes. The observatory idea extended the concept of NSF's Long Term Ecological Research stations, two dozen individual sites where scientists study how ecosystems change across time and space.

    A year later NSF began a series of workshops that helped broaden what was an “extremely narrow and unworkable” concept, says Collins. One key change was switching from the idea of single field sites to the current “network of networks,” in which core sites are linked with other regional field stations and campus labs. Rita Colwell, a microbiologist who became NSF director in August 1998, enthusiastically embraced the idea, and within a year NEON had sailed through the National Science Board, NSF's oversight body, and won a place in the foundation's 2001 budget request.

    In silico ecology.

    NEON would outfit field stations with high-tech instruments like this buoy at Wisconsin's Trout Lake, which carries solar panels, weather instruments, and other wireless sensors.


    Many environmental scientists believe NEON could transform their field. “I see it having the same effect in the environmental sciences as GenBank [the gene database] did for molecular biologists,” says Kent Holsinger, chair of an American Institute of Biological Sciences (AIBS) panel that in March issued a report on the “rationale, blueprint, and expectations” for NEON. And it's easy to see why: NSF envisions 16 observatories spread across the country (plus one in Antarctica) at locations not yet identified that would represent principal ecological regions. Each would be equipped with high-tech instrumentation—from towers measuring CO2 wafting in and out of ecosystems, to wireless sensors, to DNA sequencers. Each observatory would cost $20 million to build and $3 million a year to maintain and would run for 30 years. NSF's current proposal asks for $18 million to set up and operate the first two prototype sites.

    But NEON remains a tough sell. Some congressional aides complain that NSF has repeatedly changed its sales pitch. In addition, Colwell's strong advocacy of the project left the impression that NEON has little grassroots support. “One of their big mistakes,” says a lobbyist familiar with NSF's major facilities account, “is that the ecologists felt that they didn't have to do much because Rita would make it happen.”

    NEON has also been weighted down by an ongoing battle between NSF and Congress over how the foundation selects and manages large new research facilities (Science, 30 May, p. 1352). In particular, some congressional staff say that they don't understand why NEON jumped ahead of other projects approved by the science board for the foundation's Major Research Equipment and Facilities Construction account.

    Another problem stems from NSF's deliberate decision to leave open the scientific questions that NEON will answer. At workshops and discussions over the years, NSF officials have talked about NEON's application to a number of pressing problems, from providing a better handle on invasive species to measuring the impact of global warming. But they emphasize that it will be the community, not the agency, that will shape NEON through its proposals to use the observatories. “It's an infrastructure project,” insists Clutter. “The science will be left up to the merit-review system.”

    That argument has faced tough sledding on Capitol Hill, however. NSF's reliance on proposals from scientists might work well for individual research projects, notes one staffer, but not “for something that is a piece of equipment or a facility.” The view is shared by a group of hydrologists who hope NSF will eventually fund a network of field observatories similar to NEON. “We did it the other way around,” says Richard Hooper, executive director of the Consortium of Universities for the Advancement of Hydrologic Science, which this spring received a $6 million grant from NSF to design an observatory on the Neuse River in North Carolina. “We can say, ‘It looks like this.’ We have a prototype,” says Hooper.

    It hasn't helped that NSF's explanation for NEON has fluctuated over the years. Testifying before Congress in May 2000, Colwell described it as a research tool, a “pole-to-pole network with a state-of-the-art infrastructure of platforms to enable … ecological and biocomplexity research.” After the 11 September terrorist attacks, however, NEON acquired an important new mission. As Colwell told a December 2001 meeting at the National Academy of Sciences, NEON could also serve as “a biological early warning system [that] could be used to monitor various locations for disruptions by bioterrorism.”

    Aides say legislators felt that giving NEON such a role—spelled out in NSF's 2003 budget request—crossed the line from science into politics, providing them with another reason to say no. So NSF officials excised bioterrorism from the agency's 2004 budget request unveiled in February. The latest iteration defines NEON as “a continental-scale research instrument … to obtain a predictive understanding of the nation's environments.”

    Field of dreams.

    Simulation shows one of NEON's 17 regional sites.


    Staying ahead

    Proponents have lots of ideas about how NEON could strengthen environmental research. For example, only four stations in the United States now measure solar radiation, a key variable for modeling how much carbon ecosystems store through photosynthesis, notes ecologist John Aber of the University of New Hampshire, Durham. A series of recent high-level reports has called for better U.S. environmental monitoring to track changes such as invasive species and the spread of wildfires. The museum community is also excited about NEON, Scott Miller of the Smithsonian's National Museum of Natural History told the National Academies panel, anticipating that it would stimulate new research and training in systematics and enhance ongoing projects.

    The AIBS panel was equally enthusiastic. “[NEON's] impact will spread to many other fields, and it will also help the U.S. remain a global leader in the environmental sciences,” says panel chair Holsinger, an evolutionary biologist at the University of Connecticut, Storrs. The AIBS panel, whose activities are funded by a $1.3 million NSF grant, held a series of town meetings to lay the scientific groundwork for NEON. Clutter hopes the academy panel's report will further solidify the case for NEON, which she expects to have a major impact on the field. “Ecologists have never before had access to this sort of state-of-the-art technology, which will change biology in ways that we can't imagine.”

    However, even NEON's biggest supporters acknowledge that some colleagues have been reluctant to embrace the project. Part of the reason may be that “collaborative science is not for everyone,” says Collins, now at the University of New Mexico in Albuquerque. There's also a running debate over whether NEON can serve both as a series of monitoring stations and as sites for investigator-driven experimental science. Clutter says “absolutely,” but others aren't as confident. Jerry Elwood, head of the climate change division within the Department of Energy's Office of Science, hopes that NEON won't “try to be everything for everybody. … If you want it to be a real network of networks, then you need to do some serious strategic planning on the important issues to address.”

    NEON's proponents are counting on Congress to look past these unresolved questions and get the project moving. “The Hill reaction has been much more positive this year” than last year, says AIBS Executive Officer Richard O'Grady, whose organization has teamed up with the Ecological Society of America to lead the lobbying effort. With Congress poised to take a first crack at NSF's 2004 budget in the next few weeks, it may soon be clear if lawmakers are finally willing to turn an ecological dream into a reality.


    Scoping Out the Political Process

    1. Jeffrey Mervis

    Ecology isn't the only discipline trying to tap a National Science Foundation (NSF) infrastructure account traditionally dominated by physicists and astronomers. In addition to asking for a network of ecological observatories called NEON, NSF's 2001 proposed budget contained a $17 million down payment for EarthScope—a $218 million effort to probe the structure and dynamics of the North American continent. Both projects promised to weave together a distributed network of highly instrumented research sites. But whereas NEON remains grounded after congressional rejections in 2000 and 2002 (see main text), EarthScope received $30 million last year for its three major elements, with future annual installments of $40 million. EarthScope's success, say science policy mavens, is the result of exciting scientific opportunities—and a sophisticated political campaign.

    The research community was “so naïve the first time around,” confesses Greg van der Vink of the Washington, D.C.-based Incorporated Research Institutions for Seismology. So when the project reappeared in NSF's 2003 budget request, van der Vink helped to assemble a SWAT team of scientists who could come to Washington “at the drop of a hat” to discuss the project with legislators, White House officials, and other policy-makers. “We were ready to answer the so-what questions,” he says: “What is EarthScope, why is it important, and how will it be managed?”

    That effort was enough to erase “the stench of dead” that van der Vink says surrounded EarthScope after its initial defeat in 2000. And NEON's supporters say that they have gone to school on what EarthScope has accomplished. “This time we're ready,” says Richard O'Grady, executive officer of the American Institute of Biological Sciences, who recently formed a similar group of scientists and policy analysts to work the political ropes. “Most ecologists tell me to come back when we've gotten the money,” says O'Grady. “They don't understand how much work it takes to get to that point.”


    Solid Hints of a Strange State

    1. Robert F. Service

    Physicists close in on what they hope is the end of a decades-long quest: the first Bose-Einstein condensate in a solid

    BALTIMORE, MARYLAND—They don't have definitive proof yet, but researchers from California's Lawrence Berkeley National Laboratory (LBNL) reported at a meeting* here on 5 June that they may have created the oddball state of matter called a Bose-Einstein condensate (BEC) in a solid. If verified, the achievement will end a decades-long race and will mark the first BEC created in a system other than a collection of gaseous atoms cooled to ultralow temperature.

    “They have something very exciting,” says Gang Chen, a physicist at Lucent Technologies' Bell Laboratories in Murray Hill, New Jersey. If the work pans out, Chen says, it could make BEC research accessible to far more researchers, because the solid state systems don't require the complex laser-cooling equipment needed to create BECs from atoms.

    Bose-Einstein condensates were first created in 1995 when researchers used lasers to cool rubidium atoms in a gas to just billionths of a degree above absolute zero and trap them in a tight space (Science, 14 July 1995, p. 152). The frigid conditions caused the atoms to “condense” and move in quantum mechanical lockstep. This success sparked a flurry of international research, and BECs are now being investigated for use in everything from precision measurement to nanotechnology.

    Researchers had long thought that it would be easier to create a BEC in a solid, because in theory they could do it at much higher temperatures. That's because instead of making BECs out of relatively bulky atoms, they could use waiflike excitons, pairs of negatively charged electrons and positively charged electron “holes” in the material. The less massive the particle, the less it needs to be cooled down to condense into a BEC. Unfortunately, excitons normally survive only a few nanoseconds, because oppositely charged electrons and holes attract one another: When they collide, they annihilate each other and give off a photon of light. The upshot is that exciton BECs are likely to be very short lived and hard to spot.
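    The mass dependence the LBNL team is exploiting follows from the textbook expression for the condensation temperature of an ideal Bose gas (a standard result, not given in the article): at a fixed particle density n, the critical temperature scales inversely with the particle mass m, which is why featherweight excitons could in principle condense at far higher temperatures than whole atoms.

    % Ideal Bose gas condensation temperature (standard textbook relation, not from the article)
    k_B T_c \approx 3.31\,\frac{\hbar^{2} n^{2/3}}{m}
    \qquad\Longrightarrow\qquad
    T_c \propto \frac{1}{m} \ \text{at fixed density } n .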

    Last year, a team led by physicist Daniel Chemla at LBNL took a key step forward by growing a semiconductor crystal designed to help excitons live longer and thus give researchers a better chance of spotting a BEC. The crystal was composed of alternating layers of two semiconductor materials: two layers of gallium arsenide (GaAs), separated from each other and sandwiched on the outside by three layers of aluminum gallium arsenide (AlGaAs). In semiconductors, electrons are forced into discrete energy levels called bands. GaAs allows electrons to reside at a lower energy level than AlGaAs does. Because the AlGaAs layers couldn't accommodate the low-energy electrons, the sandwich structure forced electrical charges to flow into the two well-like GaAs layers and stay trapped there.

    BEC in a semiconductor?

    Laser produces pairs of electrons (-) and holes (+) (left). Trapped in low-energy layers (right), charges form excitons that may condense into a BEC.

    The researchers then applied a voltage to their semiconductor sandwich and shined a laser on the surface. Where the light energy struck the semiconductor, it created pairs of electrons and holes. Rather than immediately attracting and annihilating one another, the charged particles were attracted to oppositely charged electrodes. But the AlGaAs barrier layers prevented them from getting all the way there. As a result, the electrons wound up confined to one GaAs layer and the holes to the other (see figure). The separate layers remained close enough for the electrons and holes to attract one another—and thus to behave as single exciton particles—but the barrier in between kept them from collapsing together, allowing each exciton to survive for around 100 nanoseconds.

    When the researchers looked for the result, they noticed that about 100 micrometers from the laser spot—a huge distance on a quantum scale—photons were streaming from a small spot on their crystal. The group concluded that excitons generated by the laser light were somehow being trapped at the distant site in the material and then recombining to give off photons—a tantalizing sign that excitons were condensing, a necessary step to creating a BEC.

    At the meeting, Chemla's Ph.D. student Chih-Wei Lai offered stronger evidence. Systematically varying the temperature and electric field applied to the semiconductor, Lai showed that at a particular combination of temperature and laser intensity the excitons condensed to form a cloud more than 100 times denser than the surrounding material. This condensation, Chemla says, is what researchers would expect to see in a BEC but is still not yet definitive proof. To clinch the case, Lai says, the team needs to understand exactly what traps the excitons. One possibility is that “dopant” atoms added to fine-tune the electronic properties of the alloys collect in one spot and create a small region of charge that attracts the excitons. If that's what is going on, the researchers could create a pair of artificial traps side by side and then show that light emitted by two such traps interferes in the manner expected if the excitons were indeed BECs.

    But even if the scheme works, it won't explain all the mysteries seen in such layered semiconductors. Last year Chemla's group as well as another led by David Snoke at the University of Pittsburgh reported similar crystal-zapping experiments and found that in some cases light was emitted in a ring around the laser spot. Chen, whose group is also chasing the dream of a solid state BEC and has seen similar rings, says it's possible that excitons form a BEC in the center of such rings and emit photons when they collapse into one another at the edges. For now, Chen says all the groups are being careful not to claim that they've made a BEC in a solid. But if they can muster a bit more evidence, BEC researchers could find themselves with a whole new field to play in.

    • *23rd Annual Conference on Lasers and Electro-Optics and 11th Annual Quantum Electronics and Laser Science Conference, 1 to 6 June.

    The Warped Side of Dark Matter

    1. Robert Irion

    Weak gravitational lensing, a subtle distortion of all distant galaxies, promises the most direct way of mapping the universe we can't see

    Imagine flying over a mountain range on a moonless night. You know that peaks loom below, but you can't see them. Suddenly, specks of light pop into view: isolated country homes, dotting the hilly slopes. The lights outline part of the massive edifice, but your mind grasps that the darkness hides something far larger.

    Astronomers face a similar situation. In recent years, their research has confirmed that the luminous universe—our sun, our galaxy, and everything that shines—makes up but a wee bit of all there is. Instead, the strange new recipe calls for more than one-quarter “dark matter” and two-thirds “dark energy.” This is the universe your teacher never told you about: matter of a completely unknown nature and energy that hastens the expansion of the cosmos toward future oblivion.

    To divine the properties of dark matter, astronomers first must find out where it is. And to learn how dark energy controls the fate and shape of the universe—including how matter is distributed—they must trace how the dark matter clumped together over time. But they can't see it; all they have are some bright dots in a vast, mountainous wilderness.

    That's about to change. Researchers are refining an exciting new technique that relies on the warping of space itself to reveal dark matter. Called weak gravitational lensing, the method exposes dark matter by tracing the subtle distortions it imparts to the shapes and alignments of millions of distant galaxies. The effect isn't obvious to the eye, yet it alters the appearance of every remote galaxy. Although widespread detection of this “cosmic shear” first hit journals just 3 years ago, several teams worldwide have embarked on major new surveys in a race to exploit its potential. Indeed, astronomers now feel that weak lensing will become a cornerstone of modern cosmology, along with studies of the cosmic microwave background radiation and distant explosions of supernovas.

    “I no longer regard galaxies as tracers of the cosmos,” says astronomer Richard Ellis of the California Institute of Technology (Caltech) in Pasadena. “We now have the confidence to go after the real physics. Let's image the dark matter directly; we have the tools to do it. Weak lensing is one of the cleanest cosmic probes of all.”

    Brought to light.

    Weak gravitational lensing exposed these patches of dark matter, otherwise hidden from telescopes.


    Line up and stretch

    Weak lensing is akin to the far more spectacular process called strong gravitational lensing. In the latter, the intense gravity of galaxies or clusters of galaxies bends and magnifies light from more distant objects as the light travels toward Earth. Strong lensing can split a single quasar into four images or distort remote clusters into dizzying swirls of eerie arcs. These funhouse mirrors in space, captured exquisitely by the Hubble Space Telescope, are vivid displays of the pervasive light-bending effects in Albert Einstein's general theory of relativity.

    Relativity also causes weak lensing, but without such drama. “Strong lensing is like pornography: You know it when you see it,” says astronomer R. Michael Jarvis of the University of Pennsylvania in Philadelphia. “Weak lensing is like art.” And like art critics, astronomers have honed their perception to see weak lensing where others see a featureless array of galaxies.

    The array is a background of millions of faint blue galaxies, first recognized in the late 1980s. This “giant tapestry,” in the words of astronomer Ludovic Van Waerbeke of the Institute of Astrophysics in Paris (IAP), freckles any exposure of the heavens by research telescopes with mirrors larger than 2 meters across. The galaxies date to a time when the universe was less than half its current age, and they are everywhere astronomers look.

    Although each galaxy looks like a disk or an elongated blob, the mathematical average of a large number of them is a round shape. In a similar way, the galaxies should not line up in a special direction; on average, their orientations should be random. Weak lensing, induced by the tugging of dark matter between us and the faint galaxies, leaves patterns in those shapes and alignments at a tiny level of distortion: about 1%. Finding the patterns thus becomes a statistical game. “Each galaxy is like a little stick on the sky, and we want to measure its elongation and orientation,” Van Waerbeke says. To see that signal reliably, astronomers must take steady photos of the galactic tapestry. Useful images typically capture at least 20,000 galaxies in a patch of sky the size of the full moon—one-fifth of a square degree.
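    To see why the statistics demand tens of thousands of galaxies per patch, here is a minimal Python sketch of the averaging step (an illustration with synthetic galaxy shapes only; real pipelines must also correct for atmospheric blurring and detector distortions, which this ignores). Each galaxy's shape is encoded as a complex ellipticity; the intrinsic shapes average toward zero, so the roughly 1% coherent shear emerges only once many galaxies are combined.

    # Illustration only: recovering a ~1% coherent shear from noisy galaxy ellipticities.
    import numpy as np

    rng = np.random.default_rng(1)
    n_gal = 20_000                    # roughly one full-moon-sized patch of faint background galaxies
    true_shear = 0.01                 # the ~1% distortion that weak lensing imprints

    # Intrinsic shapes: each galaxy is a randomly oriented "little stick" with ~0.3 ellipticity scatter.
    intrinsic = rng.normal(scale=0.3, size=n_gal) + 1j * rng.normal(scale=0.3, size=n_gal)

    observed = intrinsic + true_shear     # weak-lensing limit: the shear adds to each ellipticity
    estimate = observed.mean().real       # intrinsic shapes average away; the shear survives
    noise = 0.3 / np.sqrt(n_gal)          # expected per-component error on the mean (~0.002)

    print(f"recovered shear: {estimate:.4f}   (statistical noise ~ {noise:.4f})")

    With 20,000 galaxies the noise on the mean is a few parts in a thousand, so a 1% signal stands out at several standard deviations; with only a few hundred galaxies it would be invisible.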

    Then, using the physics of relativity, the researchers convert the slight distortions into a plot of all of the mass—both luminous and dark—along the path between Earth and the distant galaxies. This plot (see figure at left) is a two-dimensional projection; it doesn't reveal the distance to each blob. Even so, it exposes unseen mountains of mass whose gravity changes the appearance of everything on their far sides. “To see this, we don't have to make assumptions about what the dark matter is,” says astronomer Jason Rhodes of Caltech. “It's the most direct way to simply measure everything that's there.”

    Of course, there are complications. The atmosphere blurs galaxies, telescopes jitter, and electronic detectors have flaws. Statistics quickly degrade unless images are rock solid over a wide patch of sky. But the promise of weak lensing was so potent in the late 1990s that a spirited race pushed astronomers to tackle these technology issues. When success came, it came with a flash: four nearly simultaneous papers in March 2000 from groups in Canada, Europe, and the United States on the first detections of cosmic shear over large areas.

    Since then, teams have extended their efforts in two ways. Some look at broader sweeps of the sky with modest telescopes, such as the 3.6-meter Canada-France-Hawaii Telescope (CFHT) on Mauna Kea, Hawaii, and the 4.2-meter William Herschel Telescope on La Palma, Canary Islands. Those projects aim to examine as many dark-matter patches as possible in a sort of population survey, improving the overall statistics of their distribution through the universe. Others use big telescopes, including one of the European Southern Observatory's four 8.2-meter Very Large Telescopes on Cerro Paranal, Chile, and one of the twin 10-meter Keck Telescopes on Mauna Kea, to zero in on a few distant regions with greater depth.

    Most of the invisible mass found by weak lensing is mingled with ordinary galaxies visible in either optical light or x-rays. However, some teams claim to have spotted concentrations of matter with no associated galaxies at all. These truly dark clusters, if they are real, would betray the universe's dirty secret: Big piles of mass don't necessarily come with lights attached.

    Most agree that shaky statistics make those claims vague for now, but the fundamental lesson is valid. “The ratio between emitted light and underlying mass changes quite considerably” from cluster to cluster, says theorist Matthias Bartelmann of the Max Planck Institute for Astrophysics in Garching, Germany. “This is something unexpected.”

    The implication is profound. Astronomers cannot rely on large-scale surveys of galaxies alone to trace the history of how matter has assembled in the universe. But that history is critical to unraveling the riddle of dark energy. As Bartelmann notes, dark energy apparently has exerted its greatest influence during the past several billion years. As the expansion of space carried matter farther apart, gravity became less effective at slowing the expansion. Meanwhile, dark energy—manifested as a self-repulsion within the fabric of space itself—grew dominant (see p. 1896).

    Theorists are eager for an atlas of how dark matter clumped together to help them see what makes dark energy tick. “We have no other way to calibrate how structures formed in an unbiased way in the last one-third of cosmic evolution,” when dark energy's sway took hold, Bartelmann says. “Weak lensing is without competition in that field.”

    Teams already are taking a first crack at measuring the clumpiness of dark matter. In essence, a smooth spread of dark matter between us and a distant galaxy has a minor lensing effect, whereas blobs of the stuff enhance the weak-lensing signal—just as marbled glass on a thick shower door distorts light more than plate glass does. Even with current statistics, results from weak-lensing surveys help pin down numbers for the mass content and expansion rate of the universe, according to a paper in press at Physical Review Letters by astrophysicist Carlo Contaldi of the Canadian Institute for Theoretical Astrophysics in Toronto and colleagues. “The combination of [cosmic microwave background radiation] and weak-lensing data provides some of the most powerful constraints available in cosmology today,” the team writes.

    Another promising way to chart dark matter's behavior is “3D mass tomography,” named by a pioneer of weak lensing, astrophysicist J. Anthony Tyson of Lucent Technologies' Bell Laboratories in Murray Hill, New Jersey, and his colleague David Wittman. Researchers can gauge the distances to blobs of dark matter by crudely estimating the distance to each distorted galaxy in the background tapestry. Light from the most distant galaxies crosses the greatest chasm of space and gets lensed most severely, whereas relatively nearby galaxies aren't affected as much.

    By correlating the distortions of galaxies with their rough distances, Tyson's team can convert the 2D projections of total mass into 3D volumes. That reveals where the dark-matter mountains are in space with 10% to 20% accuracy. Using data from the 4-meter National Optical Astronomy Observatory telescopes at Kitt Peak, Arizona, and Cerro Tololo, Chile, the group has derived locations for about two dozen dark clusters. When the astronomers complete their survey of 28 square degrees of the sky in 2004, they expect to identify 200 clusters out to a distance of about 7 billion light-years, says Wittman.
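    The geometry behind this distance sorting can be written compactly in standard weak-lensing notation (textbook relations, not spelled out in the article): the shear a mass concentration imposes scales with its projected surface density Σ divided by a critical value set by the angular-diameter distances to the lens (D_l), to the source (D_s), and from lens to source (D_ls),

    % Standard weak-lensing geometry (not from the article)
    \gamma \propto \frac{\Sigma}{\Sigma_{\rm cr}},
    \qquad
    \Sigma_{\rm cr} = \frac{c^{2}}{4\pi G}\,\frac{D_{\rm s}}{D_{\rm l}\,D_{\rm ls}} .

    Because the critical density drops as the source recedes farther behind the lens, background galaxies at greater distances are sheared more strongly, which is the handle that lets the 2D mass map be unfolded into distance slices.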

    Shear science.

    Distant galaxies show random shapes and orientations (left) unless intervening dark matter shears those patterns in a subtle but detectable way (right).


    Take a wider view

    Still, Tyson's program and all other efforts face similar problems: Images aren't sharp enough, deep enough, or wide enough. “The facilities we have worldwide don't yet have the light grasp and field of view required to get the scientific promise out of weak lensing,” Tyson says.

    Astronomers are launching a second generation of cosmic-shear surveys that should achieve some of that promise. Foremost is the CFHT Legacy Survey, powered by the biggest astronomical camera ever built: MegaPrime, which can take sharp images of a full square degree of sky (five full moons). The 170-square-degree survey, set to begin within weeks, will consume 100 nights per year for 5 years on the CFHT. Goals include searching for supernovas and nearby transient objects, such as hazardous asteroids. However, the weak-lensing part of the survey—led by IAP astronomer Yannick Mellier—has the community abuzz. “MegaPrime is a magnificent instrument, and this survey will be a landmark in the field,” says Caltech's Ellis.

    A hot competitor is one of CFHT's neighbors under the crisp Mauna Kea skies: Japan's 8.2-meter Subaru Telescope and its new Suprime-Cam. Although its field of view is just one-fourth that of MegaPrime, Suprime-Cam has won equal raves for its image quality. Moreover, Subaru's mirror has more than four times as much light-collecting power as does CFHT. That will let the Japanese team examine lenses in far greater detail. The astronomers plan to use 3D tomography to pinpoint the masses, distances, and rough shapes of hundreds of dark entities. “We would like to publish the first mass-selected object catalog [of dark-matter lenses] in a timely manner,” says team leader Satoshi Miyazaki of the National Astronomical Observatory of Japan in Hilo, Hawaii.

    These and other planned surveys will set the stage for weak lensing's coup de grâce next decade. Tyson leads a large U.S. team that is working on the Large Synoptic Survey Telescope (LSST), a project that has won top billing for ground-based astronomy from national review panels. A radical optical design of one 8.4-meter mirror and two other mirrors larger than 4 meters will open up a giant swath of sky—at least 7 square degrees—for LSST to see at once. Among many projects, LSST will discover 300,000 mass clusters and tighten the errors on cosmic parameters—such as the dark energy “equation of state,” a measure of its physical cause—to about 2%, Tyson predicts. He hopes observations will begin by 2011.

    Wide eye.

    The Large Synoptic Survey Telescope will look for dark-matter warping.


    At about the same time, supernova researchers led by astrophysicist Saul Perlmutter of Lawrence Berkeley National Laboratory in California hope to launch the SuperNova Acceleration Probe (SNAP). The satellite, an ambitious proposal to study dark energy by tracing the expansion history of the universe more than 10 billion years into the past, will carry a wide-field 2-meter telescope ideal for measuring weak lensing as well. Current plans call for SNAP to devote 32 months to supernova searches and 5 months to a weak-lensing survey spanning at least 300 square degrees, Perlmutter says.

    Lensing aficionados hope to avoid a battle for funding between the two expensive approaches. Research on the cosmic microwave background radiation showed that cleverly designed telescopes on the ground and on balloons could answer key questions. Then, the Wilkinson Microwave Anisotropy Probe satellite nailed the answers beyond doubt from the quiet of space. In a similar vein, outside observers think that both future lensing projects should proceed. Still, some believe that SNAP may yield the most stunning results. “We need to measure the shapes of galaxies as accurately as possible, and we have problems [doing that] from the ground,” says Van Waerbeke of IAP. “But from space, it's just perfect.”

    That debate may sharpen as weak lensing becomes more widely known, but so will the basic shift in how we study the cosmos. “The universe is not those pinpoints of light we can see in the night,” Tyson says. “It is in fact this dark side. In some sense, we are using what most people thought was the universe, namely radiation and light, as a tool to measure the real universe for the first time.” As that door opens, we will grow accustomed to a warped universe where no shining object is quite as it appears.

    Dark Energy Tiptoes Toward the Spotlight

    1. Charles Seife

    Discovered less than a decade ago, a mysterious antigravity force suffuses the universe. Physicists are now trying to figure out the properties of this “dark energy”—the blackest mystery in the shadiest realms of cosmology

    It's the biggest question in physics: What is the invisible stuff blowing the universe apart? A decade ago, the idea of “dark energy” was a historical footnote, something Einstein concocted to balance his equations and later regretted. Now, thanks to observations of distant supernovae and the faint afterglow of the big bang, dark energy is weighing ever more heavily upon the minds of cosmologists. They now know that this mysterious “antigravity” force exists, yet nobody has a good explanation for what it might be or how it works.

    That vexing state of affairs may be starting to change. Scientists are finally beginning to get the first tentative measurements of the properties of this ineffable force. It's a crucial endeavor, because the nature of dark energy holds the secret to the fate of the universe and might even cause its violent and sudden demise.

    “We're off to a very good start,” says Adam Riess, an astronomer at the University of California (UC), Berkeley, who hints that within the next few months, supernova observations will finally help scientists begin to shine light on dark energy.

    The modern story of dark energy began in 1997 when supernova hunters such as Riess and Saul Perlmutter of Lawrence Berkeley National Laboratory in California shocked the scientific community by showing that the universe is expanding ever faster rather than slowing down as physicists expected. They based that conclusion on observations of large numbers of supernovae known as type Ia. Because every type Ia explodes in roughly the same way with roughly the same brightness, the astronomers could use characteristics of their light to determine how far away the supernovae are (which is equivalent to determining how old they are) and how fast they're moving. When they calculated how fast the universe had been expanding at various times in the past, the results were “a big surprise,” says Perlmutter: The universe has been expanding faster and faster rather than slowing down.
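    The step from “roughly the same brightness” to a distance rests on the standard distance-modulus relation (a textbook formula, not quoted in the article): comparing a supernova's observed apparent magnitude m with the absolute magnitude M that all corrected type Ia's share (roughly −19.3) yields its luminosity distance d_L,

    % Distance modulus (standard relation, not from the article)
    m - M = 5 \log_{10}\!\left(\frac{d_L}{10\ \mathrm{pc}}\right).

    Pairing those distances with the redshifts of the supernovae's host galaxies is what traces out the expansion history.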

    On the face of it, this was an absurd conclusion. As far as most physicists were concerned, only two big forces had shaped the universe. First, the energy of the big bang caused the early universe to expand very rapidly; then as the energy and matter in the universe condensed into particles, stars, and galaxies, the mutual gravitation of the mass started putting on the brakes.

    The supernova data showed that something else has been going on. It is as if some mysterious antigravity force is making the fabric of the universe inflate faster than gravity can make it collapse (Science, 30 January 1998, p. 651). Observations of the cosmic microwave background radiation bolstered the case. By looking at the patchiness in the microwave radiation from the early universe, cosmologists could see that the universe as a whole is “flat”: The fabric of spacetime has no curvature (Science, 28 April 2000, p. 595). Yet there is far too little matter in the universe to pull it into such a shape. There has to be an unknown energy—dark energy—suffusing the universe. “The fact that these two teams came up with essentially the same result is why they are taken so seriously,” says Alexei Filippenko of UC Berkeley. “With each year, it's taken more and more seriously.”

    So, what is dark energy? Some theorists think it might be the energy latent in the vacuum itself. According to the rules of quantum mechanics, even empty space is seething with particles—particles that can exert pressure (Science, 10 January 1997, p. 158). It may be that the vacuum energy somehow is causing the fabric of spacetime to expand ever faster. Other physicists suspect that the foot on the cosmic accelerator might be a weaker form of the physics behind inflation, a period of superrapid expansion shortly after the big bang. To figure out what is going on, physicists need more information about the specific properties of dark energy.

    Luckily, cosmologists and astronomers are finally beginning to get data that allow them to delve into those properties. One of the key targets is “w”: the so-called equation of state of dark energy. “w is a parameter which will characterize the nature of dark energy,” says Riess. “It tells you how squishy it is”—more precisely, how dark energy behaves under different pressures and densities. Physicists have long invoked similar parameters to describe the behavior of gases. But whereas a gas, when allowed to expand into a larger volume, exerts less pressure on the walls of its container, dark energy exerts more pressure as it expands. This counterintuitive property makes the value of w a negative number rather than a positive number.

    In cosmological models, the “container” is the universe itself. At any given moment, its volume determines the pressure that drives the universe to expand. In theory, the pressure could have been affected by the volume in any of infinitely many ways, each writing the history of the universe in a slightly different manner. To find out which scenario we live in, physicists need to nail down how forcefully the dark energy is bearing down on the universe and whether the push has varied over time.

    The key to that determination is w. If dark energy's pressure has been constant throughout the history of the universe, w is −1. If the properties of dark energy have been changing over time, as various “quintessence” theories suggest, w lies between 0 and −1 and might even change as time passes. According to Riess, unpublished supernova measurements by the Hubble Space Telescope and other sources indicate that w is about −1. “We should get the first very crude estimates of whether w is changing later this year,” he adds.
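    The reason w pins down the history so tightly is the standard relation (implicit in the discussion above, but worth writing out) between w and how the dark-energy density responds as the universe expands by a scale factor a:

    % Equation of state and the resulting dilution law (standard FRW result, not from the article)
    p = w\,\rho c^{2},
    \qquad
    \rho_{\rm DE} \propto a^{-3(1+w)} .

    For w = −1 the density stays constant as space expands (Einstein's cosmological constant); for w between 0 and −1 it thins out, though more slowly than ordinary matter; and for w below −1 it actually grows with time, the runaway behavior behind the “big rip” scenario discussed below.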

    However, the supernova results leave open a bizarre possibility. Earlier this year, physicist Robert Caldwell of Dartmouth College in Hanover, New Hampshire, and his colleagues investigated what happens if w is less than −1, for example, if it's −1.1 or −1.2 or −2. Physicists had shied away from such values, because they make theoretical equations start spewing out ugly infinities and other logical inconsistencies. But Caldwell's group didn't flinch. “Interesting things happen as dark energy becomes more and more repulsive,” says Caldwell.

    “Interesting” is putting it mildly: The universe dies a horrible death. The ever-strengthening dark energy makes the fabric of the universe expand ever faster and things fall apart. In a few billion years, galaxy clusters disintegrate. The galaxies' mutual pull is overwhelmed by the dark energy, and they spin away from each other in ever-widening gyres. Several hundred million years later, galaxies themselves, including our own Milky Way, fling themselves to pieces. Solar systems and planets spin into fragments. Even atoms lose control of their electrons, and then atomic nuclei get torn apart and protons and neutrons shatter under the enormous expanding pressure. “Space becomes unstable,” says Riess. The universe ends in a “big rip,” a cataclysm where all matter gets shredded by the ever-stretching fabric of spacetime.

    Although few physicists favor the big-rip scenario, nobody can rule it out a priori. In fact, some big-rip values of w could explain the supernova data pretty well. “Apart from distaste, there's no other reason and no observations pushing you to a w greater than −1,” says Caldwell. Riess agrees: “Some of the values look like a good fit, −1.1 or −1.2.” Unfortunately, although supernova data are rapidly narrowing down the possible values of w greater than −1, they don't shed nearly as much light on the regime below w of −1. It will be a while before physicists can figure out whether the big rip awaits us.

    In the meantime, other scientists are using distant supernovae to figure out another aspect of dark energy's history. Because dark energy gets relatively stronger as it expands and the force of gravity gets relatively weaker as matter gets more diffuse, they reason, there must have been a time when dark energy's expansionist push was weaker than the contracting force of gravity. Cosmologists think the tipping point occurred when the universe was less than about 4 billion years old. Before then, the expansion of the universe must have been slowing—just as physicists used to assume it was doing today.

    Shaping our end.

    The properties of dark energy have determined the universe's history so far and may dictate an alarming denouement.


    By pinpointing when the era of slowing gave way to the era of speeding up, Riess says, supernova hunters can test whether dark energy really behaves as theorists assume it does—or whether it defies all expectations. “And what's exciting is that we have data in the can now” that might pinpoint that time, says Riess. According to Caldwell, figuring out when the deceleration switched to acceleration might yield even more information about the nature of dark energy than w can: It will be a relatively sensitive probe to the strength of the energy. And although Riess and Perlmutter haven't released their full data sets yet, Filippenko says that there is a “hint” in the data of this ancient deceleration before the acceleration.

    These are baby steps into a new realm of physics that was entirely obscure until a few years ago—and scientists are just beginning to figure out its properties. “I'd love to be able to take a lump of dark energy and see what happens when you knock it about, squish it, drop it on the floor,” says Caldwell. But short of that, observations of supernovae and eventually the evolution of distant galaxy clusters and galaxies will begin to pull back the veil over dark energy. Until then, dark energy will likely be the darkest mystery in a very dark universe.
