News this Week

Science  11 Jul 2003:
Vol. 301, Issue 5630, pp. 148



    Management Directives From 'Downtown' Again Roil NIH

    Eliot Marshall*
    With reporting by Jocelyn Kaiser.

    For almost a year, waves of angst have been washing through the world's largest biomedical research agency, the National Institutes of Health (NIH) in Bethesda, Maryland. The source is a series of directives from its parent, the Department of Health and Human Services (HHS). The department wants NIH to surrender personnel decisions to HHS, to speak publicly in “one voice” with HHS, to cut back on staff travel, and to pare its payroll by having contractors take over some functions. The unrest marks the reemergence of a Washington perennial—a debate over the independence of the $27 billion agency.

    In private, NIH officials say that although they're in favor of efficient management, the proposed changes are an annoyance that could indirectly hurt research programs. A massive analysis of jobs to determine which could be contracted out is taking time away from science and depressing morale, they complain. They also worry that the uncertainty will make it more difficult to fill three vacant directorships.*

    NIH Director Elias Zerhouni met with HHS Secretary Tommy Thompson on 27 June to discuss some of these concerns, and many at NIH hoped he would emerge with a compromise plan that would spell relief. But that didn't happen. Although Zerhouni has said little about the matter publicly, he told a director's advisory council meeting on 30 June that he had “asked the secretary to allow NIH to determine its own” methods for complying with HHS management directives. The aim, Zerhouni said, would be to create “a more unified department,” but also to preserve the “independence of managing science.”

    The Administration's plans to streamline NIH focus on two broad initiatives. The first is HHS's “restructuring” plan, which according to a 2003 budget document would “reduce duplication by consolidating administrative functions.” Among other things, it proposes giving HHS control of NIH press offices, congressional relations, and human resources. Last year HHS ordered NIH to include the HHS logo on all official documents—a requirement that led to recruitment advertisements being yanked from several publications for modification (Science, 10 January, p. 187). And earlier this year, NIH staffers were asked to sign a pledge that they would support the “One HHS” agenda (Science, 20 June, p. 1859).

    Putting its stamp on management.

    HHS wants NIH to be an integral part of its team.


    In response to media coverage, Congress last year inserted language in the HHS appropriation bill to forbid the proposed shift in press and congressional functions. Now Congress seems ready to intervene again: Last month, the Senate subcommittee that approves HHS's budget told HHS not to take over human resources. NIH Deputy Director Raynard Kington—who has been leading a committee of NIH executives and eight working groups seeking a new management scheme that would be acceptable to HHS—says that whatever the outcome, “we will do what the law tells us.”

    The second big push—to get NIH and other federal agencies to hire private contractors to do work that is not “inherently governmental”—comes from the White House (Science, 21 March, p. 1823). Kington points out that the mandate to use contractors when possible, a presidential directive called A-76, is 4 decades old; it's only now being applied rigorously to NIH. For nearly a year, NIH has been examining all staff jobs and sorting them into three grades: commercial, governmental, or still to be determined. It also has agreed to review two groups of employees in 2003 for a possible “competition” that would determine whether NIH or a private contractor could do the work more cheaply. Up for grabs are roughly 1448 jobs in facilities management and support staff for grant review—but not scientific positions in grant review. Indeed, no scientific positions are being put up for competition.

    Simply considering the use of contractors, however, has raised anxiety, says one institute director. And a center director notes that every recent applicant has asked whether a scientific job will last more than a couple of years. Others wonder if they will be allowed to hire their own support team. Kington says no decision has yet been made on which job categories will come under A-76 review in 2004. But he confirms that the program that supports NIH postdocs is among those on the table.

    Zerhouni addressed the discontent over A-76 during a staff town meeting on 18 June. A union representative rose to upbraid him for what he claimed was an “unfair” and hurried analysis of the possibilities for contracting out maintenance jobs. Zerhouni urged staffers not to be defensive, to avoid “whining,” and to use the challenge as an opportunity to prove how capable NIH employees are. “My intention is to win” the competitions and keep the jobs in NIH, he told the town meeting.

    There is irony in this exercise, notes Tony Mazzaschi of the Association of American Medical Colleges: The streamlining analysis and the competition could cost millions of dollars. The agency must spend between $2000 and $5000 to review one position, NIH officials confirm. In the next few years, 9000 positions could be reviewed.
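    Mazzaschi's "millions of dollars" point follows directly from the figures quoted above; a rough back-of-the-envelope sketch (not an official NIH estimate):

```python
# Rough total cost of A-76 job reviews, using the figures quoted above:
# $2000-$5000 per position reviewed, up to 9000 positions in coming years.
cost_low, cost_high = 2000, 5000   # dollars per position
positions = 9000

low_total = cost_low * positions    # $18 million
high_total = cost_high * positions  # $45 million
print(f"${low_total:,} to ${high_total:,}")  # $18,000,000 to $45,000,000
```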

    The tab rises if a job actually is contracted out, because HHS Secretary Thompson has promised that the A-76 process will not be used to force anyone out of NIH. So NIH could end up paying the salaries of both a contract worker and the person being replaced. NIH has estimated it will need an extra $250 million if its own staffers do not win competitions for the first two categories of workers on the A-76 agenda.

    NIH is grappling with these directives just as it is emerging from a period of budget doubling. Partly as a result of that largesse, the scientific opportunities have never been more exciting. But NIH is discovering that a $27 billion budget also brings increased attention from the higher-ups.

    • * The National Institute of General Medical Sciences, the National Institute of Neurological Disorders and Stroke, and the National Heart, Lung, and Blood Institute.

    Lenfant to Retire From Heart Institute

    Eliot Marshall

    An enduring figure in U.S. biomedical research, Claude Lenfant, director of the National Heart, Lung, and Blood Institute (NHLBI), announced last week that he plans to retire this summer. Lenfant will step down on 30 August after 21 years as NHLBI director, its longest-serving chief. He previously served as director of the Fogarty International Center (1981–1982) and director of the NHLBI lung division (1970–1981). In all, he has worked with seven directors of the National Institutes of Health (NIH).


    In announcing Lenfant's decision last week, NIH Director Elias Zerhouni issued a statement praising the former heart surgeon as a “talented and capable administrator” and “first-class scientist” whose departure will leave “a significant loss.” Lenfant, 74, was on leave and could not be reached for comment.

    Among NIH senior staff, Lenfant is known as an experienced manager who takes pride in keeping his $2.8 billion agency on an even keel. “Claude knows his institute inside and out; he has been an excellent spokesperson for his research community,” says Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases.

    In an earlier interview, Lenfant boasted that NHLBI was the first institute to back gene therapy. But he said he favors low-tech solutions, too, including “one of my best investments”—$15 million spent on training ambulance crews to treat heart attack patients more rapidly.


    Ventilator Study Gets Second Wind

    David Malakoff

    A controversial study involving patients with serious respiratory problems has won a qualified endorsement from a government oversight body. The Office for Human Research Protections (OHRP) last week concluded that the trial, designed to test ventilator treatments, has not needlessly endangered subjects. And it said researchers may be able to restart a suspended portion of the trial if they improve patient consent practices and provide more information to review boards.

    “This trial has now been subjected to repeated, intense scrutiny and been found safe. … We're very happy,” says pulmonologist Roy Brower of Johns Hopkins University in Baltimore, Maryland, one leader of the three-part, $37 million study. Still, the verdict isn't likely to end the debate over optimal ventilator settings—or how to design ethical trials.

    The 3 July OHRP report caps a year-old controversy surrounding the study, which involves 19 institutions and is funded by the National Heart, Lung, and Blood Institute (Science, 23 May, p. 1225). Researchers associated with the Acute Respiratory Distress Syndrome Network (ARDSNet) have already completed the first two parts, which tested how patients responded to different mixes of air volume, oxygen, and pressure. A third leg, testing fluid management strategies, was blocked by OHRP last year after several researchers raised questions about the overall study's design. In particular, anesthesiologist Charles Natanson and pulmonologist Peter Eichacker of the National Institutes of Health argued that the trial was flawed—and dangerous to some patients—because researchers didn't offer subjects the currently accepted best treatments.

    OHRP officials, however, wrote ARDSNet leaders that “almost all” of the eight external consultants who were asked to review the studies “opined that risks to subjects … were minimized and reasonable” relative to potential benefits. But OHRP officials suggested that the community look into better ways to design such trials and chided the institutional review boards that approved the trials for not obtaining enough information about the study and for approving informed consent forms that lacked a full description of the research's purpose, alternatives, and potential risks.

    ARDSNet researchers have until 29 August to tell OHRP how they plan to fix those problems. But “our expectation is that we will resume the trial” within a few months, says Gordon Bernard, a pulmonologist at Vanderbilt University in Nashville, Tennessee, who chairs ARDSNet's steering committee.

    Meanwhile, both he and critics agree that physicians should heed the trial's main finding to date, namely, that lower volume settings are better for patients. But more study will be needed, they say, to determine optimal settings.


    A Sigh of Relief for Painkillers

    Jennifer Couzin

    Opiates are powerful painkillers, but they come with some baggage: a troubling tendency to depress breathing. By giving an experimental drug along with a narcotic, a team of researchers eliminated the opiate's potentially lethal side effect while preserving its ability to blunt pain. The result could have far-reaching clinical implications for anesthesia and the treatment of acute and chronic pain.

    Like morphine and other narcotics, a painkiller called fentanyl disrupts nerve cells deep in the brain that register pain as well as another subset that governs breathing rhythm. Well-controlled doses of the drug can work wonders, but overexposure can be disastrous: In October 2002, 129 people died in a Moscow theater when authorities subdued hostage-takers there by pumping what many believe was fentanyl into the building.

    Even before the hostage crisis, physiologist Diethelm Richter and his colleagues at the University of Göttingen, Germany, were puzzling over a question others had been unable to answer: Could fentanyl's effects on breathing and pain be separated? Their conclusion, reported on page 226, is a resounding “yes.” “It was idealism, if you like, wondering whether or not these processes could be untangled,” says Julian Paton, a physiologist at the University of Bristol, U.K., who was not involved in the research. The positive finding, he says, is “spectacular.”

    Richter's group began disentangling the two effects by examining a small chunk of rodent brainstem called the pre-Bötzinger complex (PBC). The bundle of neurons (named after a wine consumed over dinner by one of the scientists who discovered it) regulates breathing. Cells in the PBC host a half-dozen different receptors that respond to the neurotransmitter serotonin. The scientists pinpointed one, called 5-HT4(a), that was abundant in the PBC but less common elsewhere in the brain.

    Collateral damage.

    Overdoses of fentanyl, thought to be the knockout agent pumped into a Russian theater full of hostages, can kill.


    Then the team looked for overlap between PBC cells governing respiration and those controlling pain. They examined whether PBC cells expressing 5-HT4(a)—which, like other PBC cells, are critical to respiratory control—also sported μ-opioid receptors, which react to drugs such as fentanyl. All the 5-HT4(a) cells also expressed the opioid receptor. The researchers theorized that they could tinker with 5-HT4(a) receptors to counteract fentanyl's effects on the opioid receptors. And because many cells in other brain regions have μ-opioid receptors, Richter and his colleagues believed that interfering with fentanyl in the PBC wouldn't affect pain control.

    To test their hunch, the researchers turned to surreal systems in which mouse brainstems were hooked up to heart vessels and spinal cord tissue. Immersing the neurons in both fentanyl and an experimental drug that activates 5-HT4(a), they saw that the drug appeared to counter fentanyl.

    Next came the big question: Did the 5-HT4(a) activator inhibit fentanyl's painkilling effects? Richter and his colleagues dosed rats with so much fentanyl that they needed to be put on ventilators. Then the researchers added the 5-HT4(a) drug. The animals' breathing returned to roughly 80% of normal. And fentanyl remained so effective that the rats didn't flick their tails away from a hot lamp.

    “If it turns out that they can use this [clinically], it's a phenomenal breakthrough,” says Jack Feldman, a neurobiologist at the University of California, Los Angeles, who helped identify the PBC. He points out that it's not clear exactly how the 5-HT4(a) activator serves as an antidote to fentanyl, although he says the authors' theory that the two drugs function through converging pathways that enable one to cancel out the other is plausible.

    Another concern is that 5-HT4(a) receptors are poorly understood: “We don't know what else they do besides stimulating respiration,” says Patrice Guyenet, a neuropharmacologist at the University of Virginia in Charlottesville. Of particular concern is the drug's impact on heart rate and blood pressure, because both are intertwined with respiration. But stimulating 5-HT4(a) receptors might have an additional benefit, Paton points out, if they inhibit the low blood pressure caused by fentanyl.


    Fraud Investigations Come Up Dry

    Gretchen Vogel

    BERLIN—Investigating committees have failed to find evidence of scientific misconduct in two high-profile cases in Germany. On 3 July, the DFG, Germany's science funding agency, announced that its scientific misconduct commission had uncovered only technical flaws in a paper by neuroscientist Heinz Breer of the University of Hohenheim, near Stuttgart. A day later, a committee at the University of Konstanz examining the work of disgraced physicist Jan Hendrik Schön found inconsistencies in several papers Schön published during his studies there, but no proof that he had deliberately manipulated data.

    Charges against Breer, a well-known neuroscientist who studies how nerve cells detect scents, first came to light in April when the Sueddeutsche Zeitung newspaper outlined allegations by a former lab member that figures and data in two publications had been manipulated. Breer acknowledged at the time that something was “not quite right” with the papers, but he denied any wrongdoing.

    The committee concluded that one of the papers, published in the Journal of Biological Chemistry in 2000, “contains technical flaws,” but not enough to invalidate the study's conclusions or to justify a charge of scientific misconduct. The DFG chided Breer, however, noting “the necessity of making young scientists familiar with the rules of good scientific practice.”

    An investigation into the second paper, published in the Journal of Neurochemistry in 1998, will begin when the committee meets again this fall, says DFG spokesperson Eva-Maria Streier. She says the committee will decide then whether a wider investigation is warranted.

    The commission will take up the Schön case at the same time, Streier says. A panel formed by Lucent Technologies' Bell Labs in New Jersey found in September 2002 that Schön had fabricated data in 16 of 24 papers it investigated (Science, 4 October 2002, p. 30). The DFG, which partially funded Schön's work at Bell Labs, had been waiting for the results of the Konstanz investigation before proceeding with its own inquiry. The agency will decide whether Schön must pay back the funding he received or face other penalties.


    Ancient Planet Turns Back the Clock

    Robert Irion

    The discovery of a giant planet amid a cluster of primitive stars is challenging one of astronomers' pet notions. The planet, which orbits a tight pair of stars—including a rapidly spinning pulsar—suggests that some planetary systems were born billions of years before most astrophysicists thought the universe had spawned the raw materials needed to make them. “It's a big shock,” says astrophysicist Steinn Sigurdsson of Pennsylvania State University, University Park, lead author of a report that appears on p. 193.

    The formation of rocky planets is supposed to require a healthy dollop of “metals”—elements heavier than hydrogen and helium—swirling in the gas and dust around a baby star. Even giant planets like Jupiter assemble a rocky core of silicon, iron, and other such elements before they gather gas, according to the most popular model of planetary formation. Metals arise in the nuclear furnaces of stars, whose death throes, notably supernova explosions, then spew them into space. New stars incorporate this debris, and over several generations enough metals build up to form the rocky grains from which planets arise.

    By that logic, globular clusters, swarms of metal-poor stars as old as our galaxy, are the last place you'd expect to find planets. Indeed, a recent search of more than 34,000 stars in the globular cluster 47 Tucanae exposed no giant planets (Science, 23 June 2000, p. 2121).

    Unlikely home.

    Globular cluster M4 hosts a pulsar circled by a white dwarf (arrow, bottom) and a Jupiter-sized planet orbiting both.


    But Sigurdsson and colleagues think they have clinched the case for a planet in M4, a 12.7-billion-year-old globular cluster with just 1/30th the metal content of our sun. The team reports observations of the stars near a famous pulsar in M4, called PSR B1620-26. The pulsar itself is invisible to optical telescopes, but radio telescopes see it as a compact neutron star—the stellar corpse left by a long-ago supernova—that whirls nearly 100 times each second.

    By timing the pulses from PSR B1620-26 with exquisite precision, astronomers realized about a decade ago that two companions tug the pulsar to and fro. One is a white dwarf in a tight 191-day orbit. Its more distant partner, according to images from the Hubble Space Telescope newly analyzed by Sigurdsson's team, is an unseen planet with about 2.5 times the mass of Jupiter.

    The white dwarf's color and brightness suggest that it's a star that ran out of fuel and lost its outer atmosphere only about 500 million years ago. The dwarf's youth and tight orbit, says Sigurdsson, argue that it probably started out as an independent star with a primordial gas-giant planet orbiting it. Then, the star strayed too close to the old neutron star, which orbited with its own binary companion somewhere in the cluster's crowded core. Computer models suggest that the interloper cast out the neutron star's original companion and settled into a tight orbit. The planet hung on in a large, century-long orbit around the new binary while the recoil from the interaction hurled the system into M4's more sparsely populated outskirts. When the planet's parent star ran out of fuel, it expanded and shed gas onto the old neutron star—spinning it up into the whirling dervish seen today.

    Theorists find this scenario plausible, and they are delighted with the inferred ancient planet. “If you find one, there must be large numbers of them,” says astrophysicist Frederic Rasio of Northwestern University in Evanston, Illinois. “Clearly this would suggest that planet formation does not require high-metal environments.” One controversial theory posits that giant planets might not need rocky cores if they form directly from unstable whorls of gas in the nebula around a young star (Science, 6 June, p. 1498). M4 is so metal-poor that theorists may have to swallow hard and take that model seriously, Sigurdsson notes. What's more, he adds, ancient planets would mean that life has had 5 billion or 6 billion years longer to appear than astronomers expected.


    Championing a 17th Century Underdog

    Richard Stone

    LONDON—A quick quiz: Through meticulous observations with a 20-meter-long telescope that vibrated in the slightest breeze, this 17th century scientist was the first to describe the shadow that Saturn's ring cast on the planet and to make detailed maps of the moon's craters. He was an accomplished surveyor and architect who helped rebuild London after the Great Fire of 1666 and an avid inventor whose creations include the balance spring watch and the compound microscope. This “Leonardo da Vinci of England” was also a maverick thinker who was one of the first to articulate the concept of extinction and who suggested evolution 2 centuries before Charles Darwin. Who was this polymath?

    If you guessed Robert Hooke (1635–1703), you know your history better than many of your peers. Scholars have long argued that Hooke has received far less credit for his insights and inventions than he deserves. Set against the brilliance of his contemporary, Isaac Newton, Hooke has tended to shine like a 60-watt light bulb. It didn't help that Newton, upon Hooke's death, set out to destroy his reputation. Newton “denied many of Hooke's contributions and did all that he could to obliterate them from history,” says science historian Michael Cooper, a leading Hooke scholar at City University of London.

    But Hooke is undergoing a remarkable rehabilitation. A clutch of new books about his life and achievements have appeared in the past few years or are about to be published. “Hooke is fashionable,” says physicist Robert Purrington of Tulane University in New Orleans. And as indicated at a conference* earlier this week to mark the tercentenary of Hooke's death, scholars are gravitating to him for a variety of reasons, from the philosophical underpinnings of his work to his bitter quarrels with Newton and subsequent downfall. “Hooke was one of the most prolific and inventive scientists of all time,” says Michael Nauenberg, a physicist at the University of California, Santa Cruz.

    Hooke made his mark early. As a student at Oxford in the late 1650s, he joined the laboratory of Robert Boyle, where, working with springs, he discovered that stress is directly proportional to strain—Hooke's law. He also devised an air pump and performed experiments on gases that led to the formulation of Boyle's law. Hooke so impressed his Oxford colleagues that in 1662 he was named Curator of Experiments at the newly formed Royal Society of London, tasked with demonstrating experiments at the society's weekly meetings.

    Newton's nemesis.

    A portrait by Guy Heyden, a former student at the London Institute, was the winning entry in a competition to mark the tercentenary of Hooke's death; Hooke's weight-driven equatorial mounting (top).


    In 1665, at the age of 30, Hooke published Micrographia, a bestseller based on his observations and intricate drawings of the natural world through his compound microscope. Along with the drawings, he made striking insights into the processes of nature. Based on his observations of the cliff shores of the Isle of Wight, where he grew up, Hooke believed that the biblical flood could not be taken literally. “He knew that erosion needed more time,” says Ellen Tan Drake of Oregon State University in Corvallis. Even more profound were his insights into extinction. In the mid-17th century, fossils and minerals were thought a trick of nature, formed by magic, although a handful of scholars believed that fossils were relics of Noah's flood. After examining seashore fossils, he wrote in 1667: “There have been many other Species of Creatures in former Ages of which we can find none at present.” He even anticipated Charles Darwin, suggesting that new species could arise from the pressure of environmental change.

    A year after Micrographia appeared, fires ravaged London; during the 1670s, Hooke, trained as a surveyor, worked with his best friend, architect Christopher Wren (far more famous than Hooke today), to rebuild the city. In his spare time, he drew up plans for many ingenious devices, including a quadrant mounted to follow a chosen star as it moves across the sky. Allan Mills and his team at the University of Leicester have recreated the key part of this weight-driven equatorial mounting—a reversible worm-and-wheel gear—and “it really does work,” says Mills.

    One area of intense interest among scholars is how greatly Hooke contributed to Newton's theory of gravitation. In a 1679 letter, Hooke sought Newton's help in solving the problem of planetary motion. Hooke had determined the physical principles of celestial mechanics, but it appears that he did not know how to calculate the general orbital motion in a central field of force, says Nauenberg. Asking Newton for help turned out to be Hooke's “capital mistake,” Nauenberg says.

    Newton went on to solve the problem, proving that gravitational force drops off with the square of the distance between planet and sun, one of the cornerstones of his landmark 1687 treatise Principia. Hooke “had neither the leisure, the focus, nor the mathematical skills to have done what Newton did,” argues Purrington. Nevertheless, Hooke accused Newton of misappropriating his ideas. In response, says Nauenberg, “Newton vehemently denied that he learned anything from Hooke.” His health in decline after habitual use of cannabis, poppy water, and caffeine, Hooke could muster little energy to counter that impression. To this day, Nauenberg says, “Hooke's fundamental insights into orbital dynamics and his important influence on Newton's work are generally neglected in physics textbooks.”

    Earning Newton's enmity proved to be disastrous for Hooke. “None of the thousands of instruments and models he constructed or the fossil specimens he collected survived Newton's presidency of the Royal Society,” notes Drake. As further insult, Hooke's only known portrait was reputedly destroyed after his death by Newton supporters. The Royal Institution of Chartered Surveyors held a competition to create a new portrait; the winner was announced at the meeting. It has taken 3 centuries—and a hardy band of revisionist scholars—to coax Hooke from the shadows.

    • *“Hooke 2003,” sponsored by the Royal Society and Gresham College, 7 to 8 July.


    Evidence for 'Pentaquark' Particle Sets Theorists Re-Joyce-ing

    Charles Seife

    Three quarks for Muster Mark? Every physicist's favorite Finnegans Wake passage might need a little updating. Several experiments around the world seem to have created an exotic particle containing five quarks rather than the two or three that make up all other quarky matter. If true, this new particle, dubbed the theta-plus (Θ+), might help physicists banish the last remaining shadows in quantum chromodynamics (QCD), the theory that describes quarks and the forces that bind them together.

    QCD does not forbid five-quark particles. But all known quarky matter is made up of three-quark ensembles known as baryons or quark-antiquark pairs known as mesons, and years of looking for bizarre four- and five-quark ensembles left scientists empty-handed and puzzled. “Where are the collections of quarks not organized into [three-quark baryons] or mesons?” asks Terrance Goldman, a physicist at Los Alamos National Laboratory in New Mexico.

    Now scientists at three laboratories think they finally have spotted a five-quark beastie. The first experiment, at the SPring-8 accelerator facility near Osaka, zaps a carbon target with high-energy light. A second, at the Jefferson National Accelerator Facility (JLab) in Newport News, Virginia, sends light into deuterium or hydrogen targets. The third, at the Institute of Theoretical and Experimental Physics (ITEP) in Moscow, smashes mesons into xenon nuclei. In each case, researchers hope jolted quarks inside atomic nuclei will fleetingly recombine into new species of particles that will leave their signature on the more conventional baryons and mesons that come into being when they decay. All three groups report that the debris from the collisions point back toward Θ+ particles.


    Jolted by collisions, quarks inside atomic nuclei recombined into bursts of particles that appear to include exotic five-quark specimens.


    “The fact that all the labs are reporting similar results is a relief,” says Takashi Nakano, who heads the Japanese effort. “I have been feeling much better since I heard about JLab and ITEP results. But I cannot be 100% sure for a while until we get more experimental evidence.” Goldman is similarly cautious. “It looks to be a very strong case. One is tempted to believe these things, but it is still possible that there's an error,” he says.

    Ken Hicks, a physicist at Ohio University, Athens, who works on both the Japanese and American experiments, says it's possible that the new particle might be some sort of bound “molecule” made up of a two-quark meson and a three-quark baryon. He hopes that scattering electrons off the new particle—a tricky prospect that's beyond the reach of current experiments—will eventually give physicists a clear enough picture of the particle's shape to prove that the Θ+ is indeed a single particle rather than a composite.

    According to Goldman, figuring out the precise shapes of exotic particles like the Θ+ can “fill in the last major chink” in the armor of quantum chromodynamics. Physicists know that quark-matter particles aren't always spherical, yet they routinely ignore that change of curvature in their QCD calculations. If “pentaquark” states are not spherical, Goldman says, then physicists can finally figure out what their models were getting wrong and fill in the missing details.

    And although it might disappoint those who like the nice, neat three-quark rule, physicists are pleased that quarks are finally showing their quirky side.


    Laser Labs Race for the Petawatt

    Robert F. Service

    Four years after the world's most powerful laser was dismantled, about a dozen similar beams are set to fire for the record books. Their work will affect research on everything from fusion to astrophysics.

    A few hundred joules of energy doesn't easily impress. It's gone almost as soon as you flip on a light switch. And it isn't enough to brew your morning pot of coffee. But this paltry amount of juice can still do plenty—just ask Michael Perry. On 23 May 1996, Perry and his colleagues at Lawrence Livermore National Laboratory in California packed this energy into a single laser pulse and created a burst of light the likes of which had never been seen before. By compressing the pulse so that it lasted just 500 femtoseconds (or 0.0000000000005 seconds), Perry's team created a pulse that packed a whopping 1.25 petawatts of power. That's 1.25 quadrillion watts, more than 1200 times that produced by all the power plants in the United States at the time. Forget for a moment that those power plants turn out their power continually. For those 500 femtoseconds, Perry and his team could have done some serious damage, or science, if they'd cared to. They did both.
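    The petawatt figure is just pulse energy divided by pulse duration; a minimal sketch of the arithmetic (the 625 J pulse energy is inferred from the stated power and duration, consistent with "a few hundred joules" above):

```python
# Peak power = pulse energy / pulse duration.
energy_joules = 625.0        # inferred from the stated power and duration
duration_seconds = 500e-15   # 500 femtoseconds
peak_watts = energy_joules / duration_seconds
print(peak_watts / 1e15, "petawatts")  # 1.25 petawatts
```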

    Over the course of the next 3 years, Livermore researchers used their petawatt laser to split and fuse atoms, accelerate electrons to near light speed, create pressures 300-billion-fold higher than those found at sea level, and produce intense pulses of x-rays, gamma rays, and energetic protons. But the petawatt laser's life was short. It was built as an add-on to Livermore's laser fusion testbed called Nova, which was decommissioned in 1999 to make way for the bigger and better National Ignition Facility (NIF), now being built at Livermore. When Nova went down, Livermore's petawatt went with it. After the laser's demise, key pieces of it were shipped off to the United Kingdom and Japan. The world hasn't seen a petawatt pulse since.

    Now the wait is almost over. Nearly a dozen laser facilities around the globe either have finished or are nearing completion of petawatt lasers. Last year, for example, researchers at the Rutherford Appleton Laboratory in Oxfordshire, U.K., finished upgrading their Vulcan laser from 100 terawatts to a petawatt of power, and they are gearing up to fire petawatt pulses. Another facility at the Japan Atomic Energy Research Institute's Kansai Research Establishment in Kyoto fired an 850-terawatt pulse last year—a mere 15% shy of the petawatt goal—and is poised to go higher. Other lasers in France and the United States are in the midst of upgrades, and researchers say they'll be turning out petawatts by the middle of next year if not sooner.

    All this power stands to transform high-field laser science. When fired at targets, the petawatts will re-create conditions that were likely present at the birth of the universe and at the edge of black holes, thereby giving rise to a new discipline of laboratory astrophysics. They're already breathing new life into hopes for laser-driven fusion. Small petawatt lasers may even produce biomedical advances, including rapid-fire ultrashort x-ray pulses capable of imaging the dance of atoms in proteins as they fold, as well as beams of protons useful for cancer therapy.

    High beam.

    Kansai's Ti:sapphire laser is the most powerful of its breed.


    “This is a very, very exciting time in high-field science,” says Gérard Mourou, a laser physicist who runs a team building a petawatt laser at the University of Michigan, Ann Arbor. Barry Walker, a physicist at the University of Delaware in Newark, agrees. “A lot of countries are convinced that a lot of discoveries are just over the horizon,” he says. And because breaking barriers in physics often leads to Nobel Prizes, the race is on to get the new machines running and begin searching for novel effects.

    Power hungry

    Laser makers have been racing to build ever-more-powerful beams since Theodore H. Maiman of Hughes Research Laboratories made the first optical laser in 1960. Maiman and other laser pioneers staked their hopes on the quantum-mechanical behavior of light. Unlike a typical flashlight that sends out light in all directions, lasers stimulate a cascade of photons at specific frequencies all packed into a tight stream. A laser's power measures not how much energy is in a pulse, but the rate at which that energy is delivered. To increase the power, then, researchers can either jack up the amount of energy in each pulse of a given length or compress a steady amount of energy into shorter and shorter pulses.

    In the early days of laser development, researchers made quick progress on both fronts. New laser materials allowed them to whittle down the duration of pulses, and light amplifiers—which increase the number of photons in a laser pulse—enabled them to hike pulses' overall energy. By the late 1960s, laser makers regularly turned out pulses with a gigawatt, or a billion watts, of power. But then progress stalled, particularly with the small, tabletop lasers accessible to most optics researchers. Beyond a gigawatt, the light flowing through a laser's amplifier grew so intense that it essentially vaporized the material. Researchers could lower the intensity by dispersing the beams across bigger mirrors and lenses, but that approach would make lasers too big and expensive for most optics labs.

    In 1985, Mourou—then at the University of Rochester in New York—and colleagues broke the logjam with a novel scheme for boosting a laser's power called chirped pulse amplification (CPA). The technique starts with firing a short pulse of high-intensity light at a diffraction grating that spreads out the pulse over time as much as 10,000-fold. Like a rubber band that thins as it is stretched, the stretched pulse's intensity drops enough for it to pass through conventional amplifiers. Another pair of diffraction gratings then recompresses the pulse into a final strongly amplified short pulse (see diagram). The technique yielded a dramatic rise in laser power throughout the late 1980s and 1990s, with individual pulses skyrocketing from a gigawatt to a terawatt. By the mid-1990s, CPA and advances in lenses, mirrors, and gratings were incorporated into Livermore's Nova, making it possible to create the petawatt. As a result, laser intensities—the amount of power that could be zapped at a given area of a target—surged from about 10^15 watts per square centimeter (W/cm^2) in the mid-1980s to 10^21 W/cm^2 from the Livermore petawatt, a record that still stands today. “We were able to dramatically increase the intensity of pulses and cross new frontiers in laser physics,” Mourou says. “It produced a revolution in the field.”
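The bookkeeping behind chirped pulse amplification can be sketched in a few lines; the seed energy and final energy below are illustrative assumptions, not figures from the article:

```python
# Illustrative CPA bookkeeping: stretch, amplify, recompress.
energy_in = 1e-9            # joules: a nanojoule seed pulse (assumed)
t_short = 500e-15           # seconds: compressed pulse duration
stretch = 10_000            # temporal stretching factor from the first grating

t_stretched = t_short * stretch
peak_power_stretched = energy_in / t_stretched   # low enough to amplify safely

energy_out = 625.0          # joules after amplification (assumed final energy)

# Recompression restores the short duration at the full amplified energy.
peak_power_out = energy_out / t_short            # ~1.25e15 W: a petawatt-class pulse
```

Stretching the pulse 10,000-fold cuts its peak power, and hence the intensity inside the amplifier, by the same factor, which is what keeps the amplifier from vaporizing.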

    The biggest revolution came in the mid-1990s, when laser intensities first topped 10^18 W/cm^2, the intensity needed to create a condition known as relativistic optics. As waves of laser light pass through a target, the electric component of their electromagnetic field whips electrons in the target back and forth at nearly the speed of light. Just as Einstein had predicted, laser researchers found that as electrons approach light speed, their mass rises. At 10^18 W/cm^2, electrons have twice their normal “rest” mass, a number that jumps to 1000 times at an intensity of 10^21 W/cm^2. Laser researchers only dreamed of producing such an effect in the 1960s, Mourou says.
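The "heavy electron" figures follow from special relativity: mass grows by the Lorentz factor γ = 1/√(1 − v²/c²), so a given mass increase fixes how close to light speed the electrons must be moving. A minimal sketch:

```python
import math

def speed_for_mass_factor(gamma):
    """Fraction of light speed at which relativistic mass is gamma times the rest mass."""
    return math.sqrt(1 - 1 / gamma**2)

print(speed_for_mass_factor(2))     # ~0.866 c: doubled mass
print(speed_for_mass_factor(1000))  # ~0.9999995 c: the thousand-fold regime
```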

    Heavy electrons are just the beginning. At relativistic intensities, the light's magnetic field component—normally a bit player in the way it affects electrons— begins to play a major role as well, propelling electrons rapidly forward through the material. High-intensity light also has a dramatic effect when it strikes an ionized gas, or plasma, of electrons and positive ions: It instantly pushes the electrons forward, leaving the heavier ions behind. This separation of charges creates a secondary electric field that pulls the ions behind the light at relativistic speeds like a water skier trailing behind a boat. The phenomenon, known as “laser wake-field acceleration,” can be used to accelerate charged particles to hundreds of millions of electron volts in the space of a millimeter.
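To see why millimeter-scale acceleration is remarkable, compare accelerating gradients; the conventional-accelerator figure below is a typical textbook value assumed for illustration, not taken from the article:

```python
# Laser wake-field gradient vs. a conventional radio-frequency accelerator.
energy_gain_eV = 200e6      # electron volts gained (hundreds of millions, per the article)
distance_m = 1e-3           # over about a millimeter

wakefield_gradient = energy_gain_eV / distance_m   # volts per meter, ~2e11 V/m
rf_gradient = 50e6          # ~50 MV/m, typical of conventional RF cavities (assumed)

print(wakefield_gradient / rf_gradient)  # wake fields win by a factor of thousands
```

That gradient gap is why a tabletop plasma can do in a millimeter what a radio-frequency machine needs meters to do.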

    Booster shot.

    By spreading out, amplifying, and then recompressing a laser pulse, chirped pulse amplification achieved unheard-of power.


    That's still well below the billions of electron volts that stadium-sized particle accelerators reach routinely. But it's still potentially very useful, says Jean-Paul Chambaret, a high-field laser researcher at the National Institute of Advanced Technologies in Palaiseau, France. For one thing, a high-powered laser beam colliding with a high-energy electron beam produces high-energy photons called gamma rays. And researchers at several linear particle accelerators are working on laser add-ons to help them create twin gamma ray beams that can then be collided to liberate pairs of particles and their antimatter twins, a reaction that conventional accelerators can't produce.

    Petawatt-scale lasers should also enable researchers to create another hard-to-find substance, called a dense electron-positron plasma. Such plasmas are thought to play a role in gamma ray bursts. Conventional accelerators can't produce them, because their colliding beams contain too few particles. But because high-intensity lasers create plasmas by vaporizing targets made up of 10^23 atoms per cubic centimeter, they will likely offer researchers their first good look at dense electron-positron plasmas. That effect, plus others such as the ability of petawatts to generate pressures equivalent to those found in the heart of neutron stars, puts the new lasers on the doorstep of creating a new laboratory branch of astrophysics. “We can reproduce in laboratories what is happening in the heart of stars,” Chambaret says. He and others are pursuing more practical dreams as well. High-intensity lasers might someday generate rapid-fire x-ray pulses that can be used to create movies of proteins as they fold and unfold, a long-sought goal in x-ray crystallography. Researchers also hope to use their machines to create beams of accelerated protons that can be used directly for cancer therapy or to produce short-lived isotopes for medical diagnostics.

    Perhaps the biggest hopes for high-intensity lasers rest with laser-driven fusion. Facilities such as Nova and NIF have spent decades pursuing the dream of training multiple beams of high-powered lasers on a fuel-filled pellet to ignite a fusion reaction. That effort faces numerous hurdles, including the challenge of imploding the target with near perfect symmetry to produce high enough temperatures to trigger fusion. In the late 1990s, Nova tested a novel twist in laser fusion: using a petawatt laser to instantly bore into the heart of a pellet and unleash an ultrahigh-intensity shock wave that triggers a “fast” fusion ignition. Initial experiments showed that the idea had potential, and last year a higher-energy test in the U.K. proved even more promising. Several of the new petawatt lasers are aiming to continue the breakthroughs in this area.

    With these and numerous other applications taking shape, Chambaret and others say that high-field laser physics stands to benefit greatly from the coming class of petawatts. “This is now becoming a field in its own right, like high-energy physics,” says Todd Ditmire, who is building a petawatt laser at the University of Texas, Austin. Adds Chambaret: “Some of the applications that will arise, we don't even know yet.”

    Building boom

    All this scientific potential has set off an international petawatt building boom, with new machines either recently completed or under construction in the United States, France, Germany, the United Kingdom, and Japan. All these high-powered lasers face the same tradeoff: They can fire either intermittent long pulses (each about 400 femtoseconds) that pack hundreds of joules of energy, or a staccato of short pulses (about 20 femtoseconds apiece), each with a couple of tens of joules. The long-pulse petawatts, typically built around a core of glass doped with the rare metal neodymium, are the best at delivering the large amounts of energy needed for fusion research. But the extra equipment needed to reach that high energy raises the price tag for these machines to $10 million or more and causes their footprint to swell to hundreds of square meters, a combination that makes them primarily the domain of central government research labs such as Livermore.
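The tradeoff falls straight out of power = energy / duration. Holding the power at one petawatt, a sketch of the two regimes:

```python
petawatt = 1e15  # watts

# Long-pulse (Nd:glass) regime: ~400-femtosecond pulses.
long_pulse_energy = petawatt * 400e-15    # 400 J: "hundreds of joules"

# Short-pulse (Ti:sapphire) regime: ~20-femtosecond pulses.
short_pulse_energy = petawatt * 20e-15    # 20 J: "a couple of tens of joules"

print(long_pulse_energy, short_pulse_energy)
```

Same peak power either way; only the energy budget, and with it the size and cost of the amplifiers, differs.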

    On the flip side of the energy tradeoff are the short-pulse systems that contain a crystalline core made from titanium-doped sapphire (Ti:sapphire). With less energy needed per pulse, these lasers require only about 100 square meters—the size of an average lab—and can cost less than $10 million each. And because these lasers can still generate ultrahigh intensities, they remain well suited for research on everything from x-ray science and particle acceleration to materials physics and medical applications. The United States, France, and Japan are all building university center-scale Ti:sapphire petawatt lasers, and dozens of universities around the globe are building smaller terawatt-scale systems. That has many university laser researchers salivating at the prospects of competing with the large-scale facilities. “We're going to be able to do big-time science at small universities,” says Jeff Squier, a high-field laser physicist at the Colorado School of Mines in Golden.

    First, though, comes the rush to boost the lasers' ultrahigh-intensity fields. At a conference last month,* Victor Yanovsky, a laser physicist at the University of Michigan, Ann Arbor, reported that his team had used a deformable mirror similar to those on modern adaptive-optics telescopes to create a sharply focused pulse of 10^21 W/cm^2. But others at the conference pointed out that Yanovsky, Mourou, and colleagues had estimated the intensity indirectly, by measuring a low-intensity beam, and they want to see more definitive proof of the results. Mourou shrugs off the controversy; one way or another, he says, many groups around the world will soon be producing laser fields of 10^21 W/cm^2 and beyond—intensities that will propel researchers through the ultrarelativistic-optics regime and into realms where they can investigate subatomic particles and the fundamental forces that glue atoms together (see sidebar). If so, the high-field laser researchers will be in for plenty of intense times ahead.

    • *23rd Annual Conference on Lasers and Electro-Optics and the 11th Annual Quantum Electronics and Laser Science Conference, Baltimore, Maryland, 1 to 6 June.


    How High Can They Go?

    1. Robert F. Service

    With laser research groups around the globe poised to break the petawatt barrier, how much higher can laser power go? The answer from at least two prominent laser experts: plenty.

    Last year, Toshiki Tajima of the University of Texas, Austin, and Gérard Mourou of the University of Michigan, Ann Arbor, sketched a scheme for boosting laser power outputs to an exawatt or even a zettawatt of power, 1000 and 1,000,000 petawatts respectively. In the 7 March 2002 issue of Physical Review Special Topics—Accelerators and Beams, the pair said that the generation of small tabletop petawatt lasers now being built can be expected to increase laser intensities from the range of 10^20 W/cm^2 to 10^23 W/cm^2. But such lasers will top out at that point, because any higher laser intensity will destroy their components.
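The SI prefixes make the proposed jump concrete; a quick check of the conversions:

```python
# Each SI prefix step here is a factor of 1000.
peta = 1e15   # petawatt
exa = 1e18    # exawatt
zetta = 1e21  # zettawatt

print(exa / peta)     # 1000: an exawatt is a thousand petawatts
print(zetta / peta)   # 1000000: a zettawatt is a million petawatts
```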

    To go higher, Tajima and Mourou believe it should be possible to link together an array of these tabletop lasers. Such lasers are built around a core of sapphire crystal doped with titanium. Ti:sapphire lasers are adept at producing short, power-packed pulses. But they can't generate the initial laser light they work with. Rather, they must be fed, or pumped, with laser light from other laser amplifiers, typically ones made from neodymium-doped glass (Nd:glass). They then take this laser energy and pack it more tightly into shorter packets of energy.

    Onward and upward.

    Spurred by the advent of chirped pulse amplification (CPA), laser intensities show no sign of slowing their increase.


    Massive Nd:glass laser systems, such as the National Ignition Facility (NIF) in the United States and the Laser Megajoule in France, are already being built for laser fusion studies (Science, 18 August 2000, p. 1126). And Tajima and Mourou predict that rerouting one of these massive Nd:glass laser systems to feed all its energy to a matrix of Ti:sapphire lasers could yield pulses with at least an exawatt of power. “It shows the field has some room to grow,” Mourou says.

    Other experts are cautious. “It's not technologically out of the question,” says Todd Ditmire, who helped build the first petawatt laser at Lawrence Livermore National Laboratory in California and is now at the University of Texas, Austin. “If you took the entire NIF and used it to pump a Ti:sapphire, you could get to 1 exawatt. But talking about zettawatt lasers is pure flight of fancy in my opinion,” he says.

    Assuming that such high laser power could be reached, the resulting laser field intensities that hit targets can be expected to rise as high as 10^28 W/cm^2, a number certain to produce revolutionary physics, Tajima and Mourou argue. At that intensity, the electric field of a laser should be high enough to accelerate electrons to more than a petaelectron volt (10^15 electron volts), an energy range that dwarfs what is possible in even the largest particle accelerators today. It could replicate astrophysical conditions associated with black holes, gamma ray bursts, and high-energy cosmic rays. And it could produce beams of high-energy protons, energetic particles called pions, and neutrinos. “It's still too early to say” exactly what would come from such an intense laser, Ditmire says. Nevertheless, he adds, “it's fun to think about the limits of possibility.”
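A standard plane-wave relation, not given in the article, connects a beam's intensity I to its peak electric field, E = √(2I/ε₀c). At the projected intensities the field is staggering; a sketch:

```python
import math

eps0 = 8.854e-12    # vacuum permittivity, F/m
c = 2.998e8         # speed of light, m/s

def peak_field(intensity_W_per_cm2):
    """Peak electric field (V/m) of a plane wave with the given intensity."""
    intensity_SI = intensity_W_per_cm2 * 1e4   # convert W/cm^2 to W/m^2
    return math.sqrt(2 * intensity_SI / (eps0 * c))

print(peak_field(1e28))   # ~2.7e17 V/m at the projected intensity
print(peak_field(1e21))   # ~8.7e13 V/m at today's record intensity, for comparison
```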


    Can Well-Timed Jolts Keep Out Unwanted Exotic Fish?

    1. Erik Stokstad

    In a desperate bid to prevent the spread of invasive fish, researchers have erected an unprecedented barrier that's nearing its first real test

    The water of the Chicago Sanitary and Ship Canal may look tranquil, but underneath the surface at Romeoville, Illinois, a battle is brewing. Some 40 kilometers downstream, Asian carp are steadily advancing toward Lake Michigan. Scientists fear that if these voracious fish get there, they could eventually upset ecosystems in all the Great Lakes. To block the attack, a consortium of researchers is testing an electric barrier that they hope will repel the carp when they arrive, likely later this summer.

    Asian carp will be the first challenge to the barrier, but not the last. Troublemakers are lined up on both sides of the barrier: The carp and other species further downstream are trying to breach the Great Lakes, and exotic fishes in the lakes could gain access to the entire Mississippi River system. “This canal is a choke point,” says David Lodge of the University of Notre Dame in Indiana, who is not involved in the project. And that strategic location makes the barrier “vitally important.” Despite limitations, it's also a promising model for how to prevent the flow of aquatic invaders between water bodies that were once naturally isolated from each other, Lodge says. As the fish draw nearer, researchers are still trying to figure out exactly how well the barrier will work—and how to make it stronger.

    Choke point.

    The barrier controls passage between the Great Lakes and the Mississippi River system, via the Illinois River.


    Fish dislike electric fields, a fact that biologists have exploited for years to guide, stun, and collect them. In the United States, small electric barriers keep grass carp in ponds. They also prevent sea lampreys from invading streams that flow into the Great Lakes. “But [a barrier] this big is something new,” says Phil Moy, a fisheries biologist at Wisconsin Sea Grant at Manitowoc, who chairs the Chicago barrier's advisory panel. A consortium of federal, state, and university researchers hatched plans for the barrier in the 1990s, as an invasive fish called the round goby was making its way south along the shore of Lake Michigan. In 1996, as part of the National Invasive Species Act, Congress authorized the Army Corps of Engineers to build the experimental barrier. Smaller than originally envisioned and designed with only a 3-year life span, the barrier was switched on in April 2002. Researchers have been testing it ever since.

    Located about 50 kilometers southwest of downtown Chicago, the barrier consists of a series of thick metal cables that span the bottom of the 50-meter-wide canal. The electrodes could easily carry enough current to stun fish, but that's a bad idea, biologists say, because unconscious exotics might float downstream through the barrier and wake up on the other side. So the field gradually strengthens over a 15-meter-long stretch of the canal, giving the fish ever nastier warning shocks and a chance to swim away.

    Unfortunately, the prototype came too late for the round goby, which passed through the Chicago canal several years before the barrier's completion. But Asian carp are fast approaching. After escaping from aquaculture farms in southern states in the 1980s, these exotics have been steadily wending their way up the Mississippi. The carp could harm native fish because their diets overlap, and invasive populations have tended to explode, says John Chick of the Illinois Natural History Survey (INHS), who is studying the biology of the invaders. By the mid-1990s, they reached the Illinois River. Two species of Asian carp—bighead and silver carp—are now closest to the barrier.

    To see how well the barrier will work, fisheries biologist Mark Pegg of INHS has been running tests at a fish hatchery. He and his colleagues place bighead carp on one side of an electric barrier, then monitor their location 6 hours a day for 3 days. When the barrier is off, the carp swim up and down the channel. When it's energized, they don't venture across. Buoyed by that success, Pegg nonetheless cautions that the experiment has used only moderately sized carp; younger and smaller fish are less sensitive and may be able to dart through the electric field as it pulses. Pegg also warns that the hatchery raceway is much narrower and shallower than the canal and lacks complicating factors such as ship traffic.

    On patrol.

    Scientists track experimental fish to see how well an electric barrier will repel Asian carp (bottom).


    John Dettmers of INHS and Richard Sparks of the University of Illinois, Urbana-Champaign, have been testing surrogate fish, the naturalized common carp, against the real barrier. They outfitted 72 fish with antennae and placed them downstream of the barrier. Radio and acoustic receivers continuously monitor their position. Most of the carp don't probe the barrier, they say, and since November only one has crossed it. That happened on 3 April, at the same time a barge chugged past the barrier. When traffic is heavy, up to 20 barges pass through the canal in a day. Dettmers speculates that the fish might have been pulled along by the strong wash from the propellers as the barge maneuvered, or that the steel barge somehow weakened or altered the electric field. After the fish escaped, the researchers cranked up the strength of the electric field by 50%; no tagged fish have crossed since.

    However effective the barrier proves to be against the Asian carp, researchers have long argued that it is not enough. For starters, they say, a second barrier is needed to catch any fish that might slip through, to allow maintenance, and to make the system more failsafe in case of freak weather or power outages—as happened in April. (Funds for a second barrier are included in the reauthorization of the invasive species act, now before Congress.)

    Another drawback with the current iteration is that electric barriers deter only fish. They won't do anything to prevent the movement of fish larvae, insects, algae, or zooplankton, for example. In May, 63 scientists gathered in Chicago to brainstorm ways to improve the barrier. Ideas included adding a curtain of bubbles or sound that would help deter fish, stripping oxygen from the water or adding nitrogen to create a zone lethal for all animal life, or even completely filling part of the canal—an idea under discussion for a smaller canal at Lake Champlain, but unlikely for heavily trafficked waterways.

    Biologists stress the importance of blocking all the routes of entry into the Great Lakes or the river—a “daunting task,” says Dettmers. The barrier, he notes, must be part of a comprehensive plan to fight invasives. And it's a good investment, Lodge adds: The $2 million spent on the barrier so far will protect a multibillion-dollar fishing industry in the Great Lakes. Whatever the technological and political challenges in constructing barriers, he says, they're nothing compared to the nearly impossible task of eradicating invaders that get loose.


    Turning Sweet on Cancer

    1. Joe Alper

    A focus on how cancer cells make the complex sugars that dot their surfaces suggests novel chemical tricks that can stop metastasis and perhaps tag cancer cells in the body

    Call it sweet-talk. One of the key ways cells communicate with each other is by sticking sugars onto a variety of proteins on the cell surface. These carbohydrate groups help negotiate the cell's relationships with its external environment; they can determine, for example, whether an interaction with another cell or protein is standoffish or ends up in a tight embrace. Cancer cells speak a different sugar dialect than do normal cells, a difference noted 35 years ago and one that researchers have spent 2 decades trying to parlay into ways of selectively targeting cancer cells for destruction. But after some tantalizing results in the early 1990s failed to translate into clinical success, sugars went out of favor as drug targets du jour. Now, armed with new chemical tools for studying how cells construct and use the complex sugars on their surfaces, investigators are again exploring sugar-related processes in the war on cancer.

    It's become clear over the past decade that glycosylation, the biochemical process of putting sugars onto proteins and other molecules, is “critically important to many of the signaling pathways involved in turning a normal cell into a cancer cell,” explains Harvard Medical School biochemist Norbert Perrimon, who studies the role that polysaccharides play in signal transduction. “If you were able to inhibit specific glycosylation reactions, you might be able to alter these pathways and turn off the cancer cell.” Adds Ken Irvine, a developmental biologist at Rutgers University in Piscataway, New Jersey, who's been studying a key cancer pathway that turns out to be regulated by a sugar-containing protein: “What [sugars] are providing is a completely different way of targeting cancer cells.”

    One promising approach, says James Paulson, a glycobiologist at the Scripps Research Institute in La Jolla, California, involves diverting specific glycosylation pathways into a metabolic dead end. Jeffrey Esko, a glycobiologist at the University of California, San Diego, whom Paulson cites as the main innovator in this area, hit upon an idea for doing this after studying how cells put together multisugar chains, particularly those that contain a type of sugar called sialic acid.


    Complex sugars on a cancer cell's surface may enable wayward malignant cells to stick to blood platelets and migrate to distant sites in the body.


    A sialic acid-rich carbohydrate known as sialyl Lewis X juts out from many cells, especially cancer cells, and binds to molecules known as selectins that are found on the surfaces of platelets and endothelial cells. This binding enables cancer cells to spread, or metastasize, beyond their point of origin. Ten years of experimental data from numerous groups worldwide have shown that patients whose cancer cells express sialyl Lewis X—about 25% to 35% of patients with breast, colon, thyroid, and gastric cancers—have a much poorer prognosis for survival.

    Esko and his co-workers established that specific two-sugar units, known as disaccharides, serve as primers for cells to start making sialyl Lewis X. By modifying these disaccharides with various chemical groups and adding the modified primers to cell cultures as decoys, the researchers found that they could shunt at least some of a cancer cell's carbohydrate-forming reactions away from the pathway that makes sialyl Lewis X on proteins. Although the cells still made some sialyl Lewis X, they bound less avidly to selectin-containing cells. The reason, explains Esko, is that the selectins must bind to multiple sialyl Lewis X chains simultaneously, “so you only have to knock its level down by a factor of 2 or so to have a big effect on binding.”

    In subsequent experiments, published last month in Cancer Research, Esko showed that tumor cells treated with one of the decoys failed to form lung tumors when they were injected into immune-compromised mice. He obtained the same result when the decoys were administered by infusion pump directly into the animal, rather than as a pretreatment. Moreover, for reasons that are still unclear, the injected tumor cells were more susceptible to attack by immune system cells.

    Sweet tricks.

    UCSD's Jeff Esko and UC Berkeley's Carolyn Bertozzi are pioneering sugar trickery in the cancer field.


    Additional experiments have not found any adverse immune system response in animals treated with the decoy, a concern because sialyl Lewis X is thought to play a role in various inflammatory responses. “So far, the data look good, but we need to improve the solubility and other physical properties of the decoys themselves in order for them to be potential anticancer drugs,” says Esko. Paulson, who co-directs the Consortium for Functional Glycomics—an academic collaboration funded by the National Institute of General Medical Sciences—has seen the data from Esko's lab and is surprised at his colleague's reticence. Although the approach could still fail for several reasons, “the fact is, it stops the cancers from metastasizing and it does so with relatively small amounts of compound. I'm impressed.”

    Tagging cancer

    Decoys are just one of the tricks that glycobiologists are developing in an attempt to fight cancer. Chemical biologist Carolyn Bertozzi of the University of California, Berkeley, is pioneering efforts to exploit the extraordinary turnover of sugar-coated molecules characteristic of cancer cells. Her goal is to slip modified sugars onto the surfaces of cancer cells. These modified sugars contain chemical tags that, with the right touch, could serve as a homing beacon for both diagnostic and therapeutic applications. Paulson and others call her work “elegant.”

    Bertozzi and her colleagues start with modified mannosamine analogs, sugars that cells naturally turn into sialic acid. When injected into mice, these sugars wind up on the surface of cells. The modified sugars contain an activated nitrogen group called an azide, which can then react with various phosphorus-containing chemicals that the researchers administer several days later. “We can do this in animals with no apparent ill effects,” says Bertozzi.

    Bertozzi's group is hoping to use this reaction to label tumor cells for in vivo imaging—a Holy Grail for cancer researchers. The idea is to administer the nitrogen-containing sugars and then inject a compound containing a reactive tag that is visible by a standard imaging technique, such as magnetic resonance imaging. The tag would then get incorporated on the surface of any cell with the modified sugar. “Since tumor cells have a higher metabolic flux, you get turnover of glycoproteins on the cell surface and the modified sialic acids are preferentially incorporated there, more so than on most healthy cells,” explains Bertozzi. If the approach works in animals, in experiments that should begin shortly, it could provide an in vivo signpost singling out cancer cells over all others. “That's a big if,” says Perrimon of the planned studies, “but if Bertozzi can pull this off, that would be a breakthrough.”

    Although Esko's and Bertozzi's results are still preliminary, pharmaceutical interest in tinkering with the ways in which cancer cells use sugars is once again heating up, says Paulson. One of his consortium colleagues, he reports, just raised $19 million to start a biotech company to capitalize on sugar-altering methods. The time may finally be ripe for sugars to bring some sweetness to cancer drug development.


    New Attention to ADHD Genes

    1. Kathryn Brown

    Researchers are trying to tease apart the genetic and environmental contributions to childhood's most common mental disorder

    “Passionate, deviant, spiteful, and lacking inhibitory volition.” That's how one pediatrician, George Frederick Still, described children with symptoms of attention deficit hyperactivity disorder (ADHD). The year was 1902, and Still puzzled over his young patients in The Lancet. What was their disorder, exactly—and what caused it?

    A century later, scientists would still like to know. As skyrocketing numbers of children are diagnosed with ADHD and prescribed drugs, researchers are falling under increasing pressure to explain the disorder (Science, 14 March, p. 1646). Is ADHD the result of faulty brain wiring? Which genes are to blame? And how much does environment matter? Emerging studies, harnessing genome scans and other high-tech tools, promise new insights. But as Judith Rapoport, chief of child psychiatry at the National Institute of Mental Health (NIMH), puts it, “there's been no home run yet.”

    Like any complex disorder, ADHD is a one-two punch of susceptibility genes and environmental risks. Together they cause severe inattention, hyperactivity, or both, says clinical psychologist Stephen Faraone of Harvard University. “My hope is that once we've discovered those genes, we'll be able to do a prospective study of kids at high versus low genetic risk,” Faraone says. “That's when you'll see environmental factors at work.” Eventually, he adds, environmental changes could play an important role in treating some ADHD patients.

    In the family

    Since the 1800s, doctors have labeled some children as overwhelmingly distracted or fidgety. Depending on the diagnosis of the day, they suffered from “childhood hyperactivity,” “hyperkinetic syndrome,” or “minimal brain dysfunction.”

    In the 1980s, the American Psychiatric Association defined ADHD in the Diagnostic and Statistical Manual of Mental Disorders (DSM) for the first time. According to the current, revised DSM definition, a person with ADHD has been severely inattentive (forgetful, careless, distracted, etc.) or hyperactive/impulsive (restless, impatient, aggressive, etc.) for at least 6 months. To qualify as ADHD, those symptoms must emerge before 7 years of age, be maladaptive and inconsistent with developmental level, and impair social or work routines in at least two settings, usually home and school.

    According to NIMH, ADHD is the most commonly diagnosed mental disorder in childhood, estimated to affect 3% to 5% of school-age children. The proportion of children diagnosed with ADHD has risen steadily over the past 15 years, but researchers argue over whether this represents a real increase, overdiagnosis, or better recognition of ADHD after years of underdiagnosing the condition.

    Skeptics still question whether ADHD is an authentic disorder and not simply a pathological label for normal, if exasperating, childhood behavior. But most scientists who study the condition are convinced. Over the past decade, more than 10 studies of twins in far-flung locations have suggested that ADHD has a strong genetic component. Heritability for ADHD, the proportion of variation in the condition attributable to genes, ranges from 65% to 90%, comparable to schizophrenia and bipolar disorder, Faraone says.
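
The twin-study logic can be sketched numerically with Falconer's classic formula, which doubles the gap between identical- and fraternal-twin correlations. The correlation values below are hypothetical, chosen only to show how estimates in the reported range arise; they are not figures from any ADHD study.

```python
# Illustrative use of Falconer's formula, h2 = 2 * (r_MZ - r_DZ):
# identical (MZ) twins share all their genes, fraternal (DZ) twins
# about half, so doubling the gap between their trait correlations
# approximates the genetic share of trait variation.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Broad heritability estimate from MZ and DZ twin correlations."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin correlations for an ADHD-like trait:
h2 = falconer_heritability(r_mz=0.80, r_dz=0.42)
print(f"Estimated heritability: {h2:.0%}")  # prints "Estimated heritability: 76%"
```

Real twin analyses use structural equation models rather than this back-of-the-envelope formula, but the underlying comparison is the same.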

    In fact, researchers know more about ADHD genes than those behind several other complex behavioral disorders, such as Tourette's syndrome, asserts molecular geneticist Cathy Barr of Toronto Western Hospital. “We've made good progress, replicating studies on several genes,” Barr says. “At the very least, this new research contributes to the idea that ADHD is biologically based—that there's something here.”

    That “something” likely includes the neurotransmitter dopamine. Paradoxically, stimulants, including methylphenidate (Ritalin), calm rather than excite children with ADHD. Researchers have long suspected that such drugs work by indirectly regulating dopamine levels in the brain. Based on that hunch, they are hunting genes that affect dopamine communication, notably a receptor (DRD4), a transporter (DAT), and a protein called synaptosomal-associated protein 25 (SNAP-25) that helps trigger the release of neurotransmitters from nerve cells. Linkage and association studies have tied variants of each gene to ADHD symptoms. But more research is needed to explain the underlying biochemistry.

    Some researchers even suggest that ADHD may be too much of a good thing. Last year in the Proceedings of the National Academy of Sciences, for instance, Robert Moyzis of the University of California, Irvine, and his colleagues reported that one variety of the DRD4 gene associated with ADHD—the so-called seven-repeat allele, or DRD4 7R—appears to have been selected for in human evolution, suggesting that it supported an adaptive trait.

    “Kids with this gene version may have inherited faster reaction times or different attention spans, and now we're calling this a disorder,” Moyzis says. “Maybe all you need to do is steer those kids into a different educational situation.”

    Scanning scores.

    Researchers led by Susan Smalley of UCLA are scanning whole genomes for ADHD genes. One such gene may lie on chromosome 16, roughly 20 to 30 centimorgans from the tip of the chromosome's short arm. Here, three curves represent samples of sibling pairs with ADHD, with the strongest genetic linkage (shown by high peaks) in this chromosomal region. The banded chromosome 16 picture (bottom) shows the possible ADHD risk gene, illustrated by the black band near region 16p13.


    One thing seems clear: No matter how these purported ADHD genes affect dopamine, they cannot cause the disorder by themselves. So far, scientists estimate that each gene confers a very low added risk—roughly 1% to 3%—of developing ADHD. How are other neurotransmitters involved? Some scientists are investigating genes that regulate norepinephrine and nicotine levels in the brain. At Washington University in St. Louis, Missouri, for instance, molecular geneticist Richard Todd and his colleagues report that twins with ADHD often share a form of nicotinic acetylcholine receptor alpha 4, although the link is preliminary.

    Scanning the future

    Meanwhile, a few researchers are betting that critical ADHD genes, contributing far greater risk, remain undiscovered. They've begun using genome scans to hunt for these susceptibility genes without prior candidates. Genome scans compare hundreds of known DNA markers between pairs of siblings who share the disorder. At any given DNA region, siblings are expected to inherit the same parental copy 50% of the time; regions that affected pairs share significantly more often than that may contain risk genes.
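
That affected-sib-pair reasoning can be made concrete with a toy calculation. The function and the sample counts below are invented for illustration; they are not the multipoint linkage statistics the actual scanning studies used.

```python
# Toy version of the affected-sib-pair test behind genome scans.
# Under the null hypothesis, a sibling pair inherits the same parental
# copy of a marker region 50% of the time; a one-sided binomial tail
# probability asks how surprising an observed excess of sharing is.
from math import comb

def excess_sharing_pvalue(shared: int, total: int, p0: float = 0.5) -> float:
    """One-sided P(X >= shared) for X ~ Binomial(total, p0)."""
    return sum(comb(total, k) * p0**k * (1 - p0)**(total - k)
               for k in range(shared, total + 1))

# Hypothetical marker in a 270-pair sample where 160 pairs share
# the region, well above the 135 expected by chance:
p = excess_sharing_pvalue(shared=160, total=270)
print(f"One-sided P-value for excess sharing: {p:.4f}")
```

Real scans test many markers at once and must correct for searching the whole genome, but the core question at each locus is the same: is sharing above the 50% baseline?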

    Susan Smalley, a geneticist at the University of California, Los Angeles, recently led the first genome scans of siblings sharing ADHD. “We're operating on the idea that a couple of genes may add 10% to 25% of ADHD risk,” Smalley says. In the May issue of the American Journal of Human Genetics, Smalley's group described scanning the genomes of 270 sibling pairs with ADHD. They found hints of ADHD genes on chromosomes 5, 6, 16, and 17.

    “What's intriguing is that several of these gene regions overlap with those implicated in autism and dyslexia,” Smalley says. She suspects that these disorders may share a neurological glitch that disrupts the brain's “executive function” system—neural networks that govern tasks such as problem-solving, planning, and attention. Faraone calls the study a “very impressive” step toward isolating promising genes.

    In the same journal issue, medical geneticist Richard Sinke of the University Medical Center in Utrecht, the Netherlands, and colleagues reported an ADHD genome scan with fewer children. It mostly highlighted different chromosomal regions but overlapped with the site that Smalley's team found on chromosome 5. Now the two teams are analyzing their data together.

    Environmental risks, researchers predict, will be even harder to pin down than genes contributing to ADHD. They've begun linking ADHD symptoms to lifestyle factors, from maternal smoking during pregnancy to chronic family conflict. But because such adversities boost the risk of many disorders, the links are hard to interpret.

    Still, genes promise an analytical starting point. In the next 10 years, Faraone and others hope to use genetic tests to identify preschoolers at high or low risk of ADHD. Tracking both sets of children, they could look for specific environmental factors that trigger ADHD symptoms. “How do genes work in different environments?” muses Smalley. “How do genes lead to impairment? That's the next step.”


    Everything You Wanted to Know About Children, for $2.7 Billion

    1. Jocelyn Kaiser

    Researchers are planning a major study of mothers and children; after 2 years they've narrowed the possible objectives of the study down to 70

    Two years from now, the U.S. government hopes to launch the most ambitious study of American children ever. The plan is remarkable: Researchers aim to enroll 100,000 pregnant women and follow their children from birth to their 21st birthday, measuring many factors along the way—from infections during pregnancy to pollutant exposures to signs of psychosocial growth. It would be a sort of Framingham Heart Study, which has tracked residents of a Massachusetts town for over 50 years, only bigger, like “40 Framinghams across the nation,” says study director Peter Scheidt of the National Institute of Child Health and Human Development (NICHD). And like that famous study, proponents say, the National Children's Study would yield a gold mine of data on how lifestyle and other exposures contribute to disease.

    There is a hitch, though: The project—which could cost $2.7 billion over 25 years—is still struggling to get off the ground. After over 2 years of talks, researchers in a score of planning groups are now trying to define its scope and objectives. Their efforts have been hobbled, observers say, by a congressional mandate that is almost impossibly broad. Originated during the Clinton Administration, the study would investigate many facets of a child's “environment”—from chemicals to parenting—and learn how they affect disease and normal development. And it is being designed not by one agency, but by hundreds of federal and academic scientists at many institutions—a process that is laudable but, organizers admit, unwieldy.

    Social and biomedical scientists are still wrangling over what hypotheses to study and how subjects should be recruited. The work is mired in “turf battles over what's going to be the heart and soul of the study,” says pediatrician and epidemiologist Lynn Goldman of Johns Hopkins University in Baltimore. Even if scientists can pin down the design, supporters still have to convince the White House and Congress to come up with the money. “It's important that we do this,” says NICHD director Duane Alexander. “We'll be able to answer many questions about proposed influences on childhood development that have never really been testable before.”

    Long term.

    NIH plans to enroll 100,000 pregnant women and study their children for 21 years.


    Federal scientists began pondering a large, long-term study of children in the mid-1990s amid concerns raised by advocacy groups that children are more susceptible than adults to toxic chemicals. A Clinton Administration task force led by the Environmental Protection Agency (EPA) and the Department of Health and Human Services (HHS) formed a panel to scope out a big study looking at toxicants and more. Congress passed a bill in 2000 instructing NICHD to work with EPA, the Centers for Disease Control and Prevention, and other agencies to “conduct a national longitudinal study of environmental influences (including physical, chemical, biological, and psychosocial) on children's health and development.”

    Although popular, the congressional mandate has proved a challenge to put into practice. NICHD began by forming 22 working groups of federal and outside scientists to look at everything from fertility studies to collecting biological samples. Last year, it added an advisory committee that includes community and health activists. The process is “trying to be very democratic,” says Johns Hopkins epidemiologist Jonathan Samet, and is distinctly different from the one that produced Framingham, a narrowly focused study of heart disease proposed by heart experts. The process also differs from the one that led to another big National Institutes of Health (NIH) study, the Women's Health Initiative, which reported last year that hormone replacement therapy carries unexpected health risks. Although the Women's Health Initiative is also a complex set of overlapping studies, it was planned by NIH staff.

    The National Children's Study, by contrast, is being designed with input from 300 scientists in assorted fields. They have managed to narrow the study's sweep to five “outcomes” of interest, based largely on the number of people affected by key diseases: birth defects and preterm birth, neurobehavioral disabilities, injuries, asthma, and obesity and hormonal disorders such as diabetes. (Others, such as childhood cancer, are too rare to study in a sample of 100,000.) But planners have struggled for a year to rein in a ballooning number of proposed hypotheses—from air pollution's impact on asthma risk to breast-feeding's role in obesity—holding them to about 70.

    Tensions boiled over at last month's meeting of the study's advisory committee. In a letter from one working group, epidemiologist Nigel Paneth of Michigan State University in East Lansing and NIH asthma researcher Peter Gergen wrote that planning is at an “impasse.” The problem, they said, is that “no hypotheses or group of hypotheses has emerged as the central driving theme.” Many of those selected so far “do not seem to justify the … study. Some are highly specific, but not of great significance, while others are broad, but as yet unfocused,” the letter said.

    At the meeting, advisory committee members also clashed with federal scientists over how to recruit volunteers—whether through large medical centers or, at the other extreme, through household addresses, yielding a diverse, probability-based sample representative of the U.S. population. The latter approach, some feel, could compromise the medical aspects of the study. “We don't want to dumb down the protocol [just] to do it in a way that's representative,” epidemiologist Mark Klebanoff of NICHD told the committee, which nevertheless voted in favor of a representative sample.

    The underlying problem, researchers say, is that two different communities—social and medical scientists—often disagree. To uncover links between exposures and disease, medical researchers prefer to test very specific hypotheses, for example, whether folic acid in a mother's diet reduces neural tube defects. They also try to select high-risk subgroups to monitor intensively. “You can't monitor everybody,” Gergen explains, because that would get too expensive. Medical researchers also might have trouble collecting certain biological samples, such as placentas, if volunteers come from community hospitals and not medical research centers.

    But social scientists are interested in normal development, and they want findings to apply widely. Patients recruited through research centers tend to reflect the ethnic mix of the local population, not the whole country, and they tend to have medical care already. This could make it difficult to measure, for example, whether racial disparities affect access to care, says economist Robert Michael of the University of Chicago. Besides, he says, social and medical factors can't be separated: “Is obesity medical, or social? It's partly each.”

    On that point, there's wide agreement. Building a picture of how the social and physical environment contributes to specific diseases would set this children's study apart from others, such as several under way in Europe (see sidebar). These tend to collect data first and generate hypotheses later, and they may not capture key factors that influence a disease, says Samet. European populations also tend to be less ethnically diverse, and their exposures to pollutants are less closely monitored. The U.S. study hopes to go into homes to test for toxins and measure exposures before a woman is pregnant. The U.S. approach is “obviously unprecedented,” Samet says.

    No one expected that the U.S. children's study would be easy to design, says Don Mattison, adviser to NICHD director Alexander: “It is messy.” Mattison believes NICHD is almost over the hump, however, and may have a protocol ready in a year. The institute is now hiring more permanent staff to speed along what has been a largely volunteer effort. In addition, a subgroup of study planners will meet this fall with outside biostatistics experts to decide on the sampling strategy. “My sense is that we are converging,” says Phil Landrigan of the Mount Sinai School of Medicine in New York City, an advisory committee member.

    Looming on the horizon is one of the toughest questions: Who will pay? So far, four agencies have been paying for planning and pilot studies. Study planners hoped to receive $26 million in 2004 to ramp up toward enrollment, but they may get only $12.5 million, according to Scheidt. That would mean a delay of up to 6 months. Scheidt says the study needs $150 million “probably somewhere in the HHS budget” for enrollment to begin in 2005; the annual price tag would then rise to over $200 million for several years before tapering down to about $90 million.

    Reaching that level depends on another big unknown: Will President George W. Bush back the project? Twice, he has renewed the children's health task force started by Bill Clinton. Optimists see this as a promising sign. “I will be surprised if it's not funded in some form,” says Johns Hopkins's Samet. “There's so much interest,” says Alexander, both in the United States and beyond; Canada, he notes, is pondering a smaller version of the U.S. study. “I'm really extremely hopeful that we're … going to get the funding that we need.”


    The Epidemiologist's Dream: Denmark

    1. Lone Frank*
    1. Lone Frank is a science writer in Copenhagen.

    If the planners of a U.S. study of children's health could work in an ideal world, it might be Denmark. Epidemiologists there finished enrolling a cohort of 100,000 pregnant women into a mother-and-child research project last September and expect to finish collecting data from the children over the next year. The entire survey—which is large for this country of 70,000 annual births—is to be completed in 2005 for about $15 million, a tiny fraction of what the cost would be in the United States.

    The Danes didn't design their Better Health for Mother and Child cohort study to answer specific questions or conduct long-term follow-up, as the Americans plan to do (see main text). Instead, they aim to create a databank that generations of researchers can mine and use as a starting point for studies of how medications, infections, nutrition, and even psychological factors affect pregnancy and child health.

    Physicians have recruited volunteers among women making their first pregnancy visit. Participants give two blood samples during pregnancy and cord blood when the baby is born. The samples are saved for later use, including possibly for genetic studies. The mothers also answer a detailed questionnaire concerning nutrition; in an 18-month follow-up, they give information on their health and environmental exposures. The public health system is funding the study, with support from private and public foundations.

    “Because the Danish population is probably the world's best registered, Denmark is the ideal place for such studies,” says epidemiologist Mads Melbye, a steering group member from Statens Serum Institute in Copenhagen. Each citizen has a personal identification number that can be used to track data in centralized health care records, disease registries, and a population registry. Even centralized school records may be used. “It's an epidemiologist's dream,” says Mark Klebanoff of the U.S. National Institute of Child Health and Human Development, who says tracking subjects is one of the costliest aspects of long-term U.S. studies.

    Norway, which has a system like Denmark's, is launching a mother-child study that will pool data with the Danish group's. Both benefit from streamlined management. It's difficult to get things done with too many decision-makers, says Melbye: “Running such a large study has taught us many things, but the chief lesson is that it is essential to put a very small group of people in charge.”

    Results are already beginning to trickle out of the Danish study. For example, one group published an article in The Lancet last November that found no support for the consensus view that a fever early in pregnancy increases the risk of miscarriage. That's just the beginning: Denmark's scientific ethics committee has so far given the green light to more than 70 research protocols based on the mother-child study.