News this Week

Science  14 Mar 2003:
Vol. 299, Issue 5613, pp. 173

    Judge Turns Rochester's Golden Patent Into Lead

    David Malakoff

    The University of Rochester's dream of reaping a billion-dollar windfall from a drug patent has suffered a serious setback. A federal judge last week ruled invalid the university's sweeping claim to some lucrative anti-inflammatory drugs, dismissing the patent as “little more than a research plan.” The university will appeal the ruling, which legal experts say could rein in inventors making similarly broad claims based on basic research findings.

    The 5 March decision* comes more than a decade after Rochester applied for a patent based on discoveries by cancer researcher Donald Young and two colleagues. Young's team found and cloned the gene that produces cyclooxygenase-2 (COX-2), an enzyme that promotes inflammation. In its 1992 patent filing, the university described a technique for identifying compounds that inhibit COX-2 and suggested that they could be used to treat pain without causing the side effects, such as upset stomach and internal bleeding, that can accompany aspirin and other widely used painkillers. The university finally won a patent in April 2000—years after several drug companies had already developed COX-2 inhibitors and earned billions of dollars selling them.

    Within hours of receiving the patent, University of Rochester officials sued drug giants Pharmacia and Pfizer, the makers of the top-selling COX-2 drug Celebrex, claiming that the school was entitled to significant royalties (Science, 21 April 2000, p. 410). Some university officials predicted that the patent would become the most valuable intellectual property ever held by a U.S. university, generating $1 billion or more over the patent's 17-year lifetime for the inventors, their department, and the school. (The current record holder, a patent on basic gene-engineering techniques, has earned the University of California about $250 million.) To argue its case, the University of Rochester hired attorney Gerald Dodson of Morrison and Foerster in San Francisco, a veteran of high-stakes battles over academic patents.

    But U.S. District Judge David Larimer didn't buy the university's claim that its scientists had paved the way for the drug companies. In a sometimes colorful 32-page opinion, the Rochester, New York, jurist ruled that although the university's patent does describe a process for finding COX-2 inhibitors, it never describes a specific “invention,” meaning a specific compound or drug. “It means little to ‘invent’ a method [of painkilling] if one does not have possession of a substance that is essential to practicing that method,” he wrote.

    Pharma comes up big.

    Philip Needleman and others at Pharmacia deserve credit for developing Celebrex, says New York judge.


    Larimer compared the university's patent to one that a 14th-century English monarch granted for a “philosopher's stone” that could turn lead into gold. “While the Court does not mean to suggest that the inventors' significant work in this field is on par with alchemy, the fact remains that without [a] compound … the inventors could no more be said to have possessed [a] complete invention … than the alchemists possessed a method of turning base metals into gold.”

    The ruling confirms Pharmacia's view that the patent was “invalid on its face,” says company general counsel Richard Collier, who added that the University of Rochester had “no role” in developing Celebrex, which grossed $3 billion in sales last year.

    “This was a big, but not surprising, decision,” says patent attorney Rochelle Seide of Baker Botts in New York City. She notes that U.S. patent officials currently frown on granting similar “reach through” patents, which lay broad claims to biochemical pathways or mechanisms in the absence of specific products. “We probably wouldn't even see this case filed today,” she says, adding that she would be surprised if the university prevailed in its appeal.


    Star-Crossed Comet Chaser Eyes New Target

    Govert Schilling*
    *Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    UTRECHT, THE NETHERLANDS—It may be grounded, but at least Rosetta, Europe's groundbreaking mission to chase and land on a comet, now has its sights set on a new target: comet Churyumov-Gerasimenko. Originally due for a launch in January to comet Wirtanen, the $1 billion Rosetta was put on ice due to concerns over the Ariane 5 launcher (Science, 24 January, p. 486). According to project scientist Gerhard Schwehm of the European Space Technology Centre in Noordwijk, the Netherlands, the Rosetta team has now thrown its weight behind the new rendezvous, with a launch date in February 2004.

    Far and away.

    Rosetta will take 10 years to catch up with its comet.


    Rosetta's new itinerary still must be approved by the European Space Agency's Science Programme Committee, which will meet in mid-May. But Schwehm says flying to Churyumov-Gerasimenko is the safest option. One alternative was to stick with Wirtanen but launch a year later in January 2004 with the more powerful Russian Proton rocket. But the team decided there was not enough time to adapt the Proton, says Schwehm. All Ariane 5 launchers were grounded after the failure of an upgraded version last December, but the regular Ariane 5 is expected to fly again later this year.

    The new target, discovered in 1969 by two Ukrainian astronomers, orbits the sun every 6.6 years and last visited the inner solar system in August 2002. Like Wirtanen, it's a small, not particularly active comet, making it suitable for a soft landing. Rosetta will catch up to the comet in late 2014 and later will release a small lander to touch down on the icy surface. Churyumov-Gerasimenko is larger than Wirtanen and thus has a stronger gravitational pull. “In the next few weeks, we will study the landing scenario in detail,” Schwehm says.


    Resistant Staph Finds New Niches

    Martin Enserink

    Huge, painful boils and abscesses that must be cut open and drained before they can heal: Those are the scary symptoms of a major outbreak of a drug-resistant microbe among hundreds of jail inmates and gay men in the United States. A recent surge in cases has epidemiologists scrambling to understand what's going on—and wondering whether a well-known pathogen is on the verge of becoming a much bigger problem.

    Experts on the microbe, methicillin-resistant Staphylococcus aureus (MRSA), discussed the latest developments last week at a meeting in San Antonio, Texas—only to conclude that they have more questions than answers. But already, says Peter Ruane, a physician at Tower Infectious Diseases Medical Associates in Los Angeles, the outbreak suggests that MRSA “may become much more widespread in the general population.”

    For many years, MRSA has been an important cause of infections mainly in hospitals and nursing homes, where it's often resistant to all antibiotics except vancomycin, considered the last resort. Recently, epidemiologists started seeing more cases of MRSA among people who had no specific connection to hospitals—for instance, among Native Americans, athletic teams, intravenous drug users, and schoolchildren.

    Initially, some researchers speculated that those “community-acquired” microbes might have “escaped” from hospitals. But a genetic comparison of community- and hospital-acquired MRSA strains by Patrick Schlievert of the University of Minnesota, Twin Cities, and colleagues, published in January, suggests that the community variety arose independently and not long ago, when a wild strain picked up a so-called cassette chromosome—a mobile genetic element—containing a gene for methicillin resistance.

    Like regular staph, MRSA can be carried on the skin or in the nasal cavity, where many people never notice it. But sometimes MRSAs can cause severe skin and soft tissue infections and, when they reach the lungs, pneumonia—with fatal consequences if treatment comes too late. Toxic shock syndrome is another potentially serious result. In contrast to hospital strains, community-acquired MRSA is usually susceptible to a range of antibiotics, making treatment relatively easy once it's diagnosed. MRSA infection is not a reportable disease in the United States, so firm numbers are hard to come by. But Schlievert suspects that there are thousands of cases every year in this country and probably hundreds of deaths; five to 10 children die from MRSA each year in the Minneapolis metropolitan area alone.

    On the move.

    Methicillin-resistant Staphylococcus infections are increasingly occurring outside hospitals.


    Now, community-acquired strains of the microbe seem to be moving into higher gear. In 2002, more than 900 people in Los Angeles County jails became infected, L.A. health authorities announced in January—by far the biggest single outbreak ever recorded. Late last year, Los Angeles became the first city to record cases among gay men—at least 50 so far, says Ruane. Many, but not all, were HIV-infected, but it's unclear whether this made them more susceptible. Boils and abscesses turned up on hands, legs, chests, buttocks, and genitalia, apparently infecting even intact skin. Since then, the infection has cropped up in gay communities in cities such as San Francisco, Boston, Atlanta, and Washington, D.C., suggesting that the microbe has found a new niche in which it can travel fast.

    Researchers still don't know whether all these cases are related or how they were spread; nor is it clear to what extent the apparent jump in the number of cases might be due to increased reporting as a result of publicity. “But everybody seems to agree there's something new going on,” says Ruane.

    Adding to the concerns are two cases of aggressive MRSA infections that occurred recently in gay men in the Netherlands, says Wim Wannet of the National Institute of Public Health and the Environment in Bilthoven. If the two strains prove to be genetically identical to the U.S. strains, it could mean that the outbreak has crossed the Atlantic.

    One reason the current strain is more ferocious than most hospital-acquired strains may be that it produces a particular combination of toxins—although researchers disagree on which ones are important. But whatever the roots of their pathogenicity, epidemiologists fear that if aggressive MRSA strains become firmly established in the community, the bugs might pick up resistance to additional antibiotics, says Scott Fridkin of the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta. Another worry, says Schlievert, is that the strains will move into hospitals, replacing the more benign strains now residing there.

    As a step toward finding answers, Matthew Boulton, Michigan's state epidemiologist, has proposed that resistant Staph infections be reported to CDC, as are about 50 other diseases. If the Council of State and Territorial Epidemiologists, the body that advises CDC on this matter, passes Boulton's proposal this summer, he says, “we would at least get a much better idea what's going on here.”


    A Thermometer Beyond Compare?

    Adrian Cho*
    *Adrian Cho is a freelance writer in Grosse Pointe Park, Michigan.

    AUSTIN, TEXAS—A person who has one thermometer knows the temperature, physicists sometimes quip, but someone who has two is never sure. That adage may not remain true for long, however, now that researchers have developed a type of electronic thermometer that needs no calibration.

    A thermometer is only as accurate as its scale, which is set by comparison with better thermometers and ultimately with several natural reference points. By international agreement, researchers at national standards institutes set the universally accepted scale using the point at which water, ice, and steam can peacefully coexist—defined as 273.16 kelvin—and other references. They then interpolate between these points with a variety of elaborate thermometers. From 3.0 to 24.5561 kelvin, for example, the standards czars set the temperature scale by tracking the increasing pressure of a fixed volume of helium. Between 13.8033 and 1234.93 kelvin, they follow the increasing electrical resistance of a platinum wire. “It's a kind of Frankenstein monster of all sorts of scales put together,” says Lafe Spietz of Yale University. “They're never going to turn this into a real thermometer that you're going to have in the lab.”

    Thermometer manufacturers use the elaborate standards to set the scales for the practical thermometers used in research. If two lab thermometers disagree, it's impossible to tell which is right without resorting to the standards again.

    How cool?

    A new thermometer might replace several reference standards.


    But a tiny gadget called a tunnel junction could change that, say Spietz, Robert Schoelkopf, and colleagues at Yale. The junction consists of two bits of aluminum sandwiching a layer of aluminum oxide only a few atoms thick. Electrons cannot pass through the oxide freely but must burrow through it one by one. As they do, the current jitters up and down. This jitter is called “shot noise,” and it increases in a precisely predictable way as researchers ramp up the voltage across the junction. The way the noise changes with voltage depends only on the temperature and on fundamental constants such as the charge of the electron, Spietz told scientists at a meeting of the American Physical Society* here last week. That means researchers can determine the exact temperature by simply tracing the variation of the shot noise. Spietz and colleagues showed that the device could track temperature from less than a degree above absolute zero to room temperature.
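    The article doesn't spell out the relationship, but the voltage dependence Spietz describes matches the standard shot-noise expression for a voltage-biased tunnel junction of conductance G (a textbook result, not taken from the article itself):

```latex
% Current-noise spectral density of a voltage-biased tunnel junction.
% S_I depends only on temperature T, bias voltage V, the junction
% conductance G, and the fundamental constants e and k_B:
S_I(V,T) = 2\,e\,G\,V \coth\!\left(\frac{eV}{2 k_B T}\right)
% Limiting cases:
%   eV \ll k_B T :  S_I \to 4 k_B T G          (Johnson-Nyquist thermal noise)
%   eV \gg k_B T :  S_I \to 2 e G V = 2 e I    (pure shot noise)
```

    Because the crossover between the two limits occurs at a voltage scale of k_BT/e, tracing the noise against voltage yields the temperature directly from the values of e and k_B, with no calibration against any reference point.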

    The new thermometer could potentially replace the elaborate collection of reference points, says Samuel Benz of the National Institute of Standards and Technology in Boulder, Colorado. “This is a technique that allows you to avoid that interpolation,” he says. But first the researchers “need to push down the uncertainties,” Benz says. The device is accurate to within 0.1%, but the researchers think they can make it many times more accurate.

    *March Meeting 2003, 3–7 March.


    How to Make Sense of Sequence

    Jennifer Couzin

    As the final bases are mapped onto the human genome in the next month, a familiar refrain is growing louder: What comes next? At an unusual workshop last week, representatives of the National Human Genome Research Institute (NHGRI) sought to answer that question for potential grantees gathered at the institute's home base in Bethesda, Maryland.

    In late February, NHGRI invited applications for its newest genome project: the Encyclopedia of DNA Elements, or ENCODE. Now that the sequence's 3 billion letters have been spelled out, ENCODE's goal is to locate and identify all of the genome's components. These include DNA that codes for proteins and DNA that doesn't, as well as elements that mediate gene expression.

    Because ENCODE is a novel venture for NHGRI, and because its details are still malleable, the institute took the unusual step of inviting interested applicants to ask questions and offer feedback to help shape the project. As the meeting opened, the sense that NHGRI was undertaking a daunting mission pervaded the room. “Everyone's looking really glazed at the moment, like ‘Are we really going to do this?’” said institute director Francis Collins, surveying an audience of about 70.

    In focus.

    A new initiative aims to identify genes (as depicted in this array) and other features of the human genome.


    NHGRI plans to commit $36 million to ENCODE over the next 3 years. The project will initially focus on just 1% of the genome. Most of the money will be parceled out among five to 15 groups, each of which may use computer-based analyses and lab experiments to identify all the functional elements in that same 1%. When finished, the teams will compare notes to determine which technologies proved the most efficient and accurate. NHGRI envisions then scaling up the project to annotate the other 99% of the genome.

    Despite uncertainty over how grants will be awarded and how ENCODE will proceed, participants say the venture is the first opportunity to dig deep into the human genome. “In an understated way, I think this is an extraordinarily significant event,” said John Stamatoyannopoulos of the Dana-Farber Cancer Institute in Boston. “They could spend $1 billion on this; [it's] much bigger than the genome in terms of scope.”

    ENCODE's organizers hope that industry will participate in the venture. A minority of the funding—$6 million—is intended to spur academics and private companies to develop technologies that can identify and verify the genome's components. ENCODE may also include international members; Tim Hubbard, head of genome analysis at the Wellcome Trust Sanger Institute in Hinxton, U.K., said that Sanger will “definitely participate.”

    Coordinating ENCODE will be a challenge for NHGRI. For one, the institute will have to settle on quality standards soon. They're still up in the air for this type of data, and the project requires that findings be quickly released into the public domain. But the most difficult job for NHGRI and the eventual grantees, who will be announced in September, lies in making sense of the mass of interpretations of the DNA sequence. “We're looking for things we've never found before,” says Eric Green, chief of the genome technology branch at NHGRI.


    For Precarious Populations, Pollutants Present New Perils

    Paul Webster*
    *Paul Webster is a writer in Moscow.

    The biggest ever study of Arctic pollutants paints a picture of an ecosystem under siege—with potentially grave consequences for denizens of Earth's northernmost reaches. For the first time, long-standing villains such as pesticides, polychlorinated biphenyls (PCBs), and mercury have been linked to weakened immune systems and developmental deficits in Inuit children. “We believe we're beginning to see early human effects,” says oceanographer David Stone, northern research director for the Canadian government. And efforts to clamp down on the release of these pollutants may not be enough: The study has identified a slew of other compounds that could pose a long-term threat to humans and wildlife.

    The findings, presented at a symposium on the second Canadian Arctic Contaminants Assessment Report (CACAR II) last week in Ottawa, are the latest fruits of Canada's 10-year, $38 million Northern Contaminants Program (NCP). In 1997, CACAR I found that a bevy of pollutants—including the pesticides DDT and chlordane, as well as PCBs and other industrial chemicals—are building up in the Arctic. Because these compounds degrade less readily in frigid conditions, levels in the Arctic are among the highest in the world. Thus, for example, the Canadian government estimates that three-fourths of Inuit women have blood PCB levels that are up to five times higher than those deemed safe. Experts fear they are seeing only the tip of the iceberg. “We do not know what the effects of long-term exposure will be on the human population and on the ecosystem,” says Peter Johnson, chair of the Canadian Polar Commission.

    CACAR II, perhaps the last installment of the acclaimed NCP, ratchets up the level of concern. Particularly disturbing are early findings from a study in Nunavik, Quebec, funded by NCP and the U.S. National Institutes of Health. Toxicologist Éric Dewailly of the University of Quebec Medical Center (CHUQ) tracked infections in 199 infants from birth to 12 months. His team found that the risks of two infections—upper respiratory and gastrointestinal—were significantly elevated in babies whose mothers had the highest blood levels of DDE, a pesticide.

    Hardscrabble life.

    Canada's Inuit are vulnerable to pollutants building up in the Arctic.


    Another study, led by CHUQ developmental psychologist Gina Muckle, found subtle deficits in memory—the ability to recognize objects—in infants whose mothers had higher levels of PCBs. Heavy metals such as lead and mercury, meanwhile, appeared to lengthen the amount of time the babies needed to remember information and to reduce their ability to remember while distracted. “Older studies looked at one contaminant at a time,” says toxicologist Deborah Rice of the U.S. Environmental Protection Agency. By examining several contaminants as well as dietary factors, she says, Muckle's approach “represents a real step forward.” But although the findings are intriguing, Stone cautions that “much more research is required.”

    That could be hard, as scientists are aiming at a moving target. Although CACAR II notes that the levels of certain pollutants, including DDT and chlordane, are waning in parts of the Arctic, previously unidentified compounds are on the rise. The study has revealed “a huge sweep of new contaminants” washing into Arctic ecosystems, says Lars Otto Reiersen, Oslo-based director of the Arctic Monitoring and Assessment Program. Of greatest concern, perhaps, is a rapid rise in polybrominated diphenyl ethers (PBDEs), a group of chemicals used as fire retardants in electronics that have been linked to thyroid defects in lab rats. Between 1997 and 2002, PBDE levels doubled in Arctic char, says chemist Derek Muir of Environment Canada, and PBDEs are starting to accumulate in mammals, from whales to humans. Although PBDEs were banned this year in Europe, Muir notes, their use in North America remains unchecked.

    Perfluorinated acids are another class of chemicals showing up in the Arctic. These compounds, used in firefighting foams, herbicides, and paints, are classified as cancer promoters and don't appear to degrade at all in the environment. CACAR II flags the rapid buildup of one in particular, perfluorooctane sulfonate, in polar bears. “It has stealthily achieved high levels,” says Muir, who adds that concentrations are approaching levels known to cause weight loss in monkeys; that could prove to be a mortal blow for some individuals in a species whose bulk allows it to survive on the ice.

    Despite the emergence of new threats, it appears unlikely that Canada will extend NCP, which is set to expire at the end of this month. Stone says that officials have assured him that they are “committed to finding the funds” to continue human health studies in the Arctic, but most other studies seem destined for the chopping block. An “NCP lite” was not palatable to some high-level delegates at the Ottawa conference. “The government is walking away from its key environmental and health research program in northern Canada,” asserts Terry Fenge, research director for the Inuit Circumpolar Conference, which promotes indigenous interests. Loss of the NCP, adds Reiersen, “will be a real pity.” The pollutants, on the other hand, are not going away any time soon.


    Pentagon Wants Out From 'Green' Rules

    David Malakoff

    Even as they prepare for a war in Iraq, Pentagon strategists are also drawing up political battle plans on the home front. The Department of Defense this week asked Congress to exempt the nation's armed services from a host of environmental requirements said to be a growing hindrance to combat readiness. But the push is meeting resistance not just from scientists but also from some Bush Administration officials.

    “There are problems, but the evidence doesn't justify the kinds of changes they are proposing,” says Brock Evans of the Washington, D.C.-based Endangered Species Coalition, an advocacy group. Even Environmental Protection Agency chief Christine Todd Whitman has raised doubts about the move, telling a Senate panel last month that she wasn't aware of any environmental laws that target military training.

    All sides agree that the Pentagon—the third-largest landowner in the United States—controls some of the nation's most ecologically sensitive real estate. Its 10 million hectares of military bases are home to an estimated 300 endangered species, ranging from fragile flowers and fairy shrimp to showy woodpeckers and heavily armored desert tortoises. The bases also include some of the last large tracts of undeveloped land in heavily populated areas, such as southern California.

    But as urban sprawl has hemmed in some bases, neighbors have begun to complain about everything from noise to pollution. And species displaced by development have taken up residence on Pentagon lands, requiring the military to take evasive maneuvers. Navy forces, for instance, schedule some beach exercises around the time birds nest.

    Military planners say the regulations have become too burdensome. So last week they unveiled a 15-page package of changes to five major environmental laws. An amendment to the Endangered Species Act would bar federal biologists from designating regulated “critical habitat” on bases, for instance, if the military already has its own management plan. Pentagon officials say such designations could hamper activity on up to 60% of some bases. Another change would exempt military vehicles and armaments used in training exercises from certain clean air and waste rules.

    The Pentagon also wants changes to a marine mammal protection law, which it says has hampered efforts to field new sonar systems and conduct naval exercises. One change could allow the military to drop required monitoring studies that have produced useful information about human interactions with whales and seals, says Nina Young of the Ocean Conservancy in Washington, D.C. Young was scheduled to testify this week at a House hearing on the topic.

    The Pentagon failed to win approval last year for a similar package of changes, although it did win modifications to a migratory bird law. But with war looming, conservationists fear that lawmakers will be more pliant. Bill Schlesinger, head of the Nicholas School for the Environment at Duke University in Durham, North Carolina, urges caution: “This isn't the time to make changes that could have some very long-term environmental consequences.”


    Who Pushed Whom Out of the Last Ice Age?

    Richard A. Kerr

    By most accounts, the North Atlantic called all the shots as the world staggered out of the depths of the previous ice age 20,000 years ago. When warm water surged back into the North Atlantic, the world began to melt out of its deep freeze. When the flow of warm water shut down, the world shivered again. Only after several such reversals did the world break the ice's hold. But now there's an upstart challenging the North Atlantic as a dominant global climate shifter.

    On page 1709 of this issue, modelers Andrew Weaver and Oleg Saenko of the University of Victoria, British Columbia, glaciologist Peter Clark of Oregon State University in Corvallis, and geophysicist Jerry X. Mitrovica of the University of Toronto propose that the Southern Ocean, which encircles Antarctica, played a pivotal role in deglaciation. At one point 14,700 years ago, they suggest, a pulse of meltwater from Antarctica into the Southern Ocean influenced events 10,000 kilometers away, triggering the biggest climate shift of the past 25,000 years. The Southern Ocean may have shifted climate worldwide before that and could yet again, says glaciologist Richard Alley of Pennsylvania State University, University Park, but the evidence isn't yet strong enough to topple the north from its climatic dominance.

    As told in ice drilled from the Greenland Ice Sheet, the story of deglaciation has two somewhat repetitive chapters: a slight warming followed by a climatic reversal back to glacial cold, and an abrupt and massive warming followed by a final cold reversal before modern warmth gained and kept the upper hand. Close on the heels of the central massive warming—called the Bølling-Allerød warming—came a huge gush of meltwater into the North Atlantic from North America's ice sheet, according to the prevailing interpretation of the record. Half a million cubic meters of meltwater filled the seas every second for several hundred years, the equivalent of five Amazon Rivers, driving sea level up 20 meters.

    For more than a decade that order of events, which suggests that the Bølling-Allerød warming caused the pulse of meltwater, has relied on a single data point. In 1989, the meltwater-induced rise in sea level—as recorded by the depth below present sea level of a now-dead coral offshore of Barbados that once grew near the surface—was dated to 14,235 years ago, during the Bølling-Allerød.

    Weaver and his colleagues think that the meltwater pulse came earlier, however. They point to several recent findings, including work published by paleoceanographer Markus Kienast of Woods Hole Oceanographic Institution in Massachusetts and his colleagues in the January issue of Geology. Kienast reported that a sediment core from the South China Sea indicates that an abrupt sea surface warming, as gauged by a temperature-sensitive compound produced by plankton, occurred 14,700 years ago. That temperature spike coincided precisely with a sharp rise in sea level, as recorded in the amount of organic matter washed off the land. The implication: The meltwater pulse occurred just as the Bølling-Allerød warming began, not when it was well along.

    A big thaw.

    Antarctic ice may have melted 14,700 years ago and indirectly warmed the far North Atlantic.


    Last year in Science, Clark, Mitrovica, and colleagues raised a different challenge to the traditional deglaciation story. When they examined the way meltwater drove up sea level at different places around the globe, they found that North America's ice sheet bordering the far North Atlantic did not produce most of the meltwater pulse, as most had assumed. Instead, the Antarctic ice partially collapsed to make a substantial, perhaps dominant, contribution of meltwater.

    Those findings led Weaver and his colleagues to propose a more intriguing deglaciation story: Perhaps the meltwater pulse caused the North Atlantic warming, rather than vice versa. To find out, Weaver and his colleagues ran a model of ocean circulation, adding slugs of meltwater to the surface ocean. When they put the meltwater in the Southern Ocean, between South America and Antarctica, it turned on the warm-water supply and warmed the North Atlantic, just as Weaver believes happened 14,700 years ago. The meltwater slowed the sinking of surface water near Antarctica that otherwise would slide northward at mid-depths in the ocean and dampen the flow of warm water into the far North Atlantic. Oddly enough, the melting in the south might have been in part the result of manipulation by the north, where cooling can send a signal through the ocean triggering warming in the south.

    Scientists are giving the new version of the deglaciation story a mixed reception, depending on their specialty. Modeler Thomas Stocker of the University of Bern, Switzerland, says the new work suggests that “paleoclimate science has clearly matured from a qualitative science, in which mechanisms were postulated by hand-waving arguments, to a science which is quantitative and utilizes cleverly designed numerical models.” Paleoceanographer Konrad Hughen of Woods Hole finds that “Barbados is still a thorn in [Weaver's] side. Barbados is probably the best evidence for when the meltwater pulse occurred.” Alley, however, is encouraged. The inversion of the meltwater story “gets us away from control in the north or control in the south toward a coupled system” in which climate signals can bounce back and forth through the ocean. “My guess is they're pointing us in the right direction.”


    The Medication Merry-Go-Round

    Kathryn Brown

    With more children than ever taking psychiatric drugs, researchers find the scientific questions hard to swallow

    At age 5, when his world began to collapse, John* struggled to gain control. Every morning, he kissed his dad at the door exactly eight times, then touched the painted star on the wall. He avoided his kindergarten teacher's gaze. He bossed friends around, then sank into safe silence.

    But the spiral of mental illness had started—and after 2 years of special diets, sticker rewards, therapy, and inconclusive diagnoses, John began what his mother calls “the medication merry-go-round.” Over the past 6 years, doctors have prescribed 20 different medications for John, who has bipolar disorder, or manic depression. Some of the drugs have soothed John's symptoms; others have not.

    But all share one trait: uncertainty. Most drugs prescribed to children—including those for schizophrenia and bipolar and anxiety disorders—have not been tested in young children. Pursuing the simplest route to market, drug companies usually seek U.S. Food and Drug Administration (FDA) approval for adult use of medications only. Still, doctors are prescribing psychiatric drugs to children at ever-increasing rates. According to recent estimates, 3 million to 4 million Americans under 18 take psychiatric drugs—at least double the number a decade ago. Even toddlers have become patients.

    This trend alarms researchers. Although these drugs can work well, they say, pediatric patients deserve better science. How do mind medications affect growing children? And what happens when children take them for years? “We don't know as much as we'd like,” concedes Benedetto Vitiello, head of child and adolescent studies at the National Institute of Mental Health (NIMH) in Bethesda, Maryland. Adds Yale University child psychiatrist James Leckman: “In the absence of strong science, I think we're taking some risks.”

    Slowly, that science is building. In fiscal year 2002, NIMH spent $36 million on child and adolescent “psychopharmacology,” from basic drug biology to clinical trials comparing drugs and therapy—triple the budget of 5 years earlier. Today, more than 20 NIMH-sponsored trials are under way or recruiting patients.

    Meanwhile, FDA and the U.S. Department of Health and Human Services (HHS) are trying to inspire—or require—drugmakers to step up pediatric studies. And in a small yet growing number of labs, scientists are doing brain imaging and animal research to unravel the biology underlying psychiatric disorders and drugs. “We owe it to our kids to learn how these drugs affect their brains,” Leckman says. “This is a start.”

    Therapeutic orphans

    There's much to learn. About 40 years ago, pediatrician Harry Shirkey described young patients as “therapeutic orphans.” In 1970, FDA decreed that “drugs used for children must be tested in children.” But the agency has limited power. And today, FDA estimates that 70% to 80% of all drugs prescribed to children, including medications for mental disorders, are used “off-label”: for treatments that have not been approved by FDA. In scattered studies, researchers agree, most psychiatric drugs appear safe for most children. But that's where the science ends and each patient's personal experiment begins.

    On the menu.

    An estimated 4 million American youths swallow stimulants, antidepressants, or other psychiatric drugs as part of their diets.


    The number of those experiments is soaring. In a January study published in the Archives of Pediatrics and Adolescent Medicine, pharmacoepidemiologist Julie Magno Zito of the University of Maryland, Baltimore, and her colleagues reviewed data on roughly 900,000 youths enrolled in a health maintenance organization (HMO) in the Northwest and in two Medicaid programs in the Midwest and mid-Atlantic.

    From 1987 to 1996, the researchers found, psychiatric drug use tripled among patients in the HMO and in the midwestern Medicaid program; it doubled for those in the mid-Atlantic program. By 1996, about 6% of the patients were taking stimulants, mood stabilizers, and other mind medications. That finding mirrors an earlier study in which Zito reported that even the youngest patients, ages 2 to 4, tripled consumption of psychiatric medicines between 1991 and 1995.

    “If you look across the past 30 years, we have gone from very little use of emotional medication in kids to having almost the same rate as occurs in 20- to 45-year-old adults,” Zito says. More than ever before, pediatricians and family practitioners—not just psychiatrists—are dispensing these drugs. And although worldwide numbers are sketchy, psychiatric drug use among children is rising in Canada, the United Kingdom, and Australia as well.

    But which doctors will prescribe what—and why—can be hard to guess. A surprising number of children in the United States are still taking older, less effective tricyclic antidepressants, for instance, rather than newer alternatives, Zito and her colleagues found. And in various parts of the country, doctors behave differently, at least when it comes to prescribing stimulants, such as Ritalin, to treat attention deficit/hyperactivity disorder (ADHD). In work reported in the February issue of the journal Pediatrics, researchers at Express Scripts Inc., a pharmacy benefit management company based in Missouri, reviewed commercial insurance prescription claims for a national sample of roughly 180,000 children in 1999. Stimulants ranged from just 1.6% of pediatric prescriptions reviewed in Washington, D.C., to 6.5% in Louisiana. Children in the South were almost twice as likely to receive stimulant prescriptions for ADHD as were those in the West, according to the study.

    Numbers offer little insight into a doctor's thinking. But the increase in prescriptions likely reflects a mix of trends, from better diagnoses and higher expectations of child behavior at school to psychiatry's current emphasis on biology and limited medical insurance benefits for behavioral therapy. It's unclear whether doctors are over- or underprescribing psychiatric medications to children, although researchers suspect both, depending on the drug (Science, 4 August 2000, p. 721). “We don't know whether the right kids are getting the right meds,” says Michael Jellinek, chief of child psychiatry at Massachusetts General Hospital in Boston.

    Risks versus rewards

    In fact, researchers don't know enough about the “right meds,” period, from function to dose to risks. They do know that children are not just little adults: Children respond to medication differently than do adults. The question, for each child and drug, is how.

    Children have less body fat, more body water, and faster metabolism than adults do, so they absorb drugs at different rates. But when doctors have little pediatric information on a drug, they often estimate the starting dose for a child based on adult studies. Children may end up taking too little or too much medication.
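The guesswork described above can be made concrete with a minimal sketch. Everything here is hypothetical and for illustration only — the function name and the numbers are not from the article or from any clinical guideline — but it shows the naive linear scaling the article alludes to, and why it can misfire:

```python
def naive_pediatric_dose(adult_dose_mg, adult_weight_kg, child_weight_kg):
    """Linear mg-per-kg scaling of an adult dose.

    A deliberately naive estimate: it ignores children's faster
    metabolism and different body composition, which is why, as the
    article notes, it can land too low or too high.
    """
    return adult_dose_mg * child_weight_kg / adult_weight_kg

# Hypothetical numbers, for illustration only (not a clinical guideline):
print(naive_pediatric_dose(100.0, 70.0, 21.0))  # → 30.0 mg
```

A child who clears a drug faster than an adult would need proportionally more than this estimate to reach the same plasma concentration — exactly the pattern the chlorpromazine and haloperidol data below describe.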


    Research on the antipsychotics chlorpromazine and haloperidol, for instance, shows that children actually need higher doses of these drugs per kilogram of body weight than adults do to reach the same plasma concentrations. Hormonal changes around puberty further complicate drug dosage. What's more, some drugs that work well in adults, such as oral tricyclic antidepressants, don't appear to help children.

    A bigger worry is the harm psychiatric drugs might do. In January, for example, FDA sounded a note of caution when it approved Eli Lilly's Prozac (fluoxetine) to treat depression and obsessive-compulsive disorder (OCD) in children ages 7 to 17. Prozac is one of several popular selective serotonin reuptake inhibitors (SSRIs), drugs that maintain high levels of serotonin in the brain. After a 19-week clinical trial, FDA reported, children who took Prozac had grown an average of 1.1 centimeters less and gained 1.1 kilograms less than those who took a placebo. “We don't know if this is a temporary effect or will become more accentuated over time,” says Thomas Laughren, who heads FDA's team that evaluates psychiatric drugs. “That's one of the problems with the use of drugs in kids: We don't know the long-term risks.”

    Prozac is the first FDA-approved drug for depression in children. Eli Lilly has agreed to do postmarketing, or phase IV, studies of the drug in children and animals to better assess its toxicity and long-term effects. But the developmental questions extend to other drugs as well, including some ADHD-fighting stimulants such as Adderall, which are amphetamines that can suppress appetite, causing children to eat less and grow more slowly. Other medications do just the opposite: Risperdal (risperidone), an atypical antipsychotic, causes some children to gain so much weight that doctors halt medication.

    And those are just the obvious effects. Doctors can't tell whether a drug is quietly damaging a child's brain. As adults take aspirin for chronic pain, children may take stimulants or antidepressants indefinitely. “We're talking about years of exposure,” says Vitiello. Adds Leckman, “We're constantly hearing about how plastic the brain is, and these drugs transform the environment of developing brains.”

    But doctors see little alternative. Despite the risks, medication often promises real rewards to children with moderate to severe conditions. In some cases, an unmedicated child struggles to learn, read, or relate to other children—crucial childhood experiences that shape a young brain for life. “If you've got a child who's got severe depression, and a medication that I don't fully understand can alleviate that depression and allow the child to develop, I'm going to take that risk,” says Jellinek. “The risks are abstract, and the suffering is right in front of me.”

    Parents such as Peg Nichols agree. “I don't know how my son's medication may affect his brain and body in the years to come,” says Nichols, head of communications for Children and Adults With Attention Deficit Disorder, a nonprofit group promoting research on ADHD. “But I do know that 5 years ago, he couldn't read. Now he can. He also writes, plays on a baseball team, and has friends. He can't do that without special intervention, including medication.” Still, adds Nichols, “I'd like to know [that] the medication is being researched, that it's safe and effective. It's bad enough to have the disorder, but to deny kids the right to safe treatment options is very disturbing.”

    Critical questions

    With that in mind, a growing number of researchers are putting psychiatric drugs to the test in clinical trials, brain scans, and the occasional animal study. At least 2 dozen clinical trials, mostly funded by NIMH and the National Institute of Child Health and Human Development, are enrolling thousands of children, from preschoolers to adolescents (Science, 17 November 2000, p. 1280).

    “We're trying to get at some of the critical questions, comparing medications to each other and to psychotherapy, to learn what's safest and most effective in children,” says Vitiello. The effort is full of hurdles, from the practical—getting squirmy children to hold still during brain scans—to the ethical, such as medicating some children but not others.

    Over the next several years, these trials promise insight into a range of conditions and their treatments, including ADHD, OCD, schizophrenia, and depression. At Duke University and the University of Pennsylvania, for instance, the Pediatric OCD Treatment Study (POTS) is testing about 120 children ages 7 to 17 with OCD. The children receive the SSRI Zoloft (sertraline), cognitive behavior therapy, a combination, or a placebo pill. Preliminary results could be released within a few weeks.

    To uncover the biology behind mental disorders and the drugs used to treat them, other researchers have turned to brain imaging. With magnetic resonance imaging (MRI) scans, scientists can spot brain differences between normally developing children and those with a mental illness or between children taking a psychiatric drug and those who are not. Early imaging studies, for example, suggest that children with ADHD tend to have a smaller cerebellum than normal, and severely anxious children have an unusually active amygdala, a brain region associated with emotion and fear.

    Child psychiatrist Bradley Peterson of Columbia University in New York City calls recent technological advances in MRI scans “revolutionary” for psychiatry. “Every few years, some new technological leap broadens the tools we use to understand brain structure and function in children,” says Peterson, who studies Tourette's syndrome (TS).


    Usually emerging at about age 6 or 7, the symptoms of TS, such as involuntary tics, often improve with age. And Peterson's research hints at why. Using MRI, he and his colleagues have found that children with mild TS have a strikingly larger prefrontal cortex—a brain region involved in both memory and behavior—than children with severe symptoms. By contrast, brain scans of adults whose symptoms persist reveal an unusually small prefrontal cortex. “We suspect that in a child with TS, the brain actually grows tissues to help reduce symptoms, enlarging the frontal cortex,” says Peterson. Perhaps, he adds, this brain compensation hasn't happened in adults whose symptoms linger.

    Brain scans also hint at how psychiatric drugs work. David Rosenberg, a child psychiatrist at Wayne State University in Detroit, Michigan, is using MRI to study childhood OCD and depression. In several MRI studies, Rosenberg and his colleagues found that after taking a common SSRI, Paxil (paroxetine), improved OCD patients had a smaller thalamus. By contrast, children with OCD who benefited from behavioral therapy alone showed no thalamic changes on brain scans. “Medication and therapy appear to act on different neural circuits,” Rosenberg suggests. His lab has begun experiments to pinpoint how medication, therapy, or their combination changes brain activity in depressed and obsessive-compulsive children.

    Although preliminary, Rosenberg's findings highlight the delicate risk-versus-benefit calculations doctors must make. Patients who don't need SSRIs to relieve OCD symptoms, he says, could suffer from the drugs. “Doctors don't have blood or brain tests to diagnose OCD, and all too often, we fear, they prescribe medication for something that just looks like OCD,” Rosenberg says. “This research suggests that after just 12 weeks of medication, the size of the thalamus decreases significantly. If you treat someone whose thalamus is just fine, that could be a problem.” Schizophrenia and other conditions, for instance, may involve an abnormally small thalamus.

    So far, at least one imaging study seems reassuring. Last October in the Journal of the American Medical Association, F. Xavier Castellanos, director of New York University's Institute for Pediatric Neuroscience, and his colleagues compared a decade of brain scans of children with ADHD and those without the disorder. Among their questions: Do ADHD stimulants affect normal brain development?

    The preliminary answer, Castellanos says, is no. On average, children with ADHD did have slightly smaller brains—and a 6% smaller cerebellum, a structure that helps control motor coordination—than normal children. But those brain differences showed up in both untreated ADHD children and those taking stimulants. “These are terrific techniques, because we're finally able to quantify something about the brain in vivo,” Castellanos says. His lab is planning a follow-up study to explore more subtle stimulant effects in the brain.

    In contrast to imaging inroads, child psychiatrists have barely tapped another classic source of drug data: animal studies. Drug companies must test a medication's toxicity in animals before testing it in humans, according to FDA rules. Beyond that, however, few researchers have used rat or mouse models to test psychiatric drug effects over long periods or at different developmental stages.

    In a rare exception, psychiatrist Normand Carrey of Dalhousie University in Halifax, Nova Scotia, and his colleagues recently asked whether young and mature rats would respond similarly to the SSRI Zoloft and the less commonly prescribed tricyclic antidepressant Norpramin (desipramine). For 2 weeks, the researchers gave 168 prepubescent, pubertal, and postpubertal rats Zoloft, Norpramin, or a plain salt solution. Next, the team ran a “neuroendocrine challenge test”: giving each group a drug known to drive up serotonin or norepinephrine levels and, with them, measurable hormone levels. One group was given only this challenge drug, without any pretreatment.

    The rats did, indeed, respond differently by age, according to the study published last August in the Journal of the American Academy of Child and Adolescent Psychiatry. After receiving serotonin-boosting Zoloft, adult rats reacted only mildly to the challenge drug, slightly increasing serotonin levels further. By contrast, younger rats treated with Zoloft still responded to the challenge drug with a serotonin spike. Why? Carrey speculates that immature brains can't process the extra serotonin made available by Zoloft.

    Something similar may happen in humans, he adds. Researchers have noted that children who take Zoloft, Paxil, and other SSRIs tend to become highly agitated more often than adults do. “It could be that kids' existing serotonin levels are so high that they're more prone to agitation,” Carrey suggests. Alternatively, their young brains may not yet have developed the wiring needed to handle such a serotonin boost. “These hormone provocation tests are very gross measures,” he cautions. “There's a real need for more research.”

    Changing labels

    In the policy arena, FDA has finally found a way to inspire more drugmakers to test medications in children: money. In 1997, the Food and Drug Administration Modernization Act (FDAMA) included a “voluntary pediatric exclusivity provision,” permitting FDA to ask a drug company to test its patented medication in children. If the company complies, it wins six additional months of patent protection on that drug.

    Since the exclusivity provision kicked in 5 years ago, the labels of more than 40 drugs—including Luvox (fluvoxamine) for OCD and BuSpar (buspirone) for anxiety—have added pediatric information to guide doctors on dosage and side effects. Sometimes, as with BuSpar, the revised label warns pediatricians away from drugs not proven safe or effective in children.

    FDA has tried to go further. In 1998, the agency declared a “pediatric rule” policy giving it power to require companies to study drugs in children. But a lawsuit—filed by the Competitive Enterprise Institute, the Association of American Physicians and Surgeons, and Consumer Alert—challenged FDA's legal authority to do so. Last fall, a federal court agreed, overturning the pediatric rule.

    Yet the fight continues. Although FDA accepted the court's ruling, two outside organizations—the American Academy of Pediatrics and the Elizabeth Glaser Pediatric AIDS Foundation—stepped in as new defendants to appeal the decision. The case will likely drag on for months.

    Meanwhile, HHS is trying another tack: pushing Congress to enact the pediatric rule into law. Legislators are expected to propose such a bill this month. At the same time, HHS and FDA have pledged up to $93 million through fiscal year 2004 for tests of a dozen older drugs often given to children, including lithium, long used to treat bipolar disorder.

    Amid the growing concern over psychiatric drugs, safety has become a selling point. In January, when Eli Lilly released Strattera—a drug that treats ADHD by maintaining high levels of norepinephrine in the brain—the company touted its safety in four pediatric clinical trials. Slowly, the medical maze facing children with mental disorders is becoming easier to navigate, notes Duke child psychiatrist John March, who co-chairs the POTS trial. Still, he adds, researchers desperately need data on long-term drug risks and medication combinations. “Until we have that data,” March says, “we're conducting an experiment with the lives of American children.”

    • *Identity kept private at family's request.


    Liquid-Mirror Telescope Set to Give Stargazing a New Spin

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Biggest of its breed, the new Large Zenith Telescope harnesses simple physics to probe space at a fraction of the cost of conventional models

    In an isolated building deep in a forest east of Vancouver, Canada, lab-coated technicians maneuver sealed containers of toxic liquid, their faces covered by breathing masks to avoid inhaling the hazardous vapors that slowly damage the human nervous system. No, this is not a group of terrorists preparing a poison gas attack in North America. It's the Liquid Mirror Observatory, where a revolutionary 6-meter telescope with a surface of liquid mercury is about to peer into the heavens for the first time. One of only a handful of such liquid-mirror telescopes in existence, the Large Zenith Telescope (LZT) will be by far the largest. In fact, it will be the third-largest optical telescope in North America. Yet hardly anyone knows of its existence.

    “We don't have a large number of people,” explains project director Paul Hickson of the University of British Columbia in Vancouver, Canada, “so our PR budget is comparably low.” In fact, the project's whole budget was a pittance. Building the telescope and observatory cost a mere $500,000—a fiftieth as much as a conventional 6-meter telescope. Little wonder that Hickson is now planning a giant successor: an array of liquid-mirror telescopes with an effective diameter of 50 meters, which could be completed in 8 years for less than $100 million.

    The savings stem from the novel way telescopes such as the LZT collect and focus starlight. Astronomical telescopes traditionally do that with curved mirrors, which must be painstakingly ground and polished into a parabolic shape with nanometer accuracy at great expense. In the LZT, the laws of nature take care of the mirror's curvature: the balance between gravity and centrifugal force pulls the surface of any rotating liquid into a parabolic shape. Use a reflective liquid, such as mercury, and it becomes easy and cheap to spin the universe into focus.
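The underlying physics is textbook material: a liquid spun at angular velocity ω settles into the paraboloid z(r) = ω²r²/(2g), whose focal length is f = g/(2ω²). As a back-of-the-envelope sketch using the spin rate of roughly seven revolutions per minute quoted for the LZT (the focal length is derived here, not stated in the article):

```python
import math

# Sketch, not from the article: surface of a spinning liquid is the
# paraboloid z(r) = omega^2 r^2 / (2 g), with focal length f = g / (2 omega^2).
g = 9.81                              # gravitational acceleration, m/s^2
rpm = 7.0                             # LZT spin rate quoted in the article
omega = 2 * math.pi * rpm / 60.0      # angular velocity, rad/s

focal_length = g / (2 * omega ** 2)   # derived estimate, m
print(f"focal length ≈ {focal_length:.1f} m")  # ≈ 9.1 m
```

Note how sensitive the figure is to the spin rate: halving ω quadruples the focal length, which is why the rotation must be controlled so precisely.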

    Although the idea dates back to Isaac Newton, the first small liquid-mirror telescope was not built until 1909. Even then, the technology didn't work well until Ermanno Borra of Laval University in Quebec City, Canada, improved the design in a landmark paper published in 1982. Borra built a 1.5-meter prototype, and in the late 1980s, Hickson decided to start working on a 2.7-meter liquid-mirror telescope. “I had to build it in a garage,” he says. “It was too big for my lab.” A slightly larger follow-up effort formed the centerpiece of NASA's Orbital Debris Observatory in New Mexico, and in 1994 Hickson started work on the LZT, which is due to see its first light later this month. “I've been a bit skeptical because the technology started off looking pretty clunky,” says Tim Hawarden of the Royal Observatory in Edinburgh, U.K. “But they now seem to have got most of the bugs out, so no tech-skepticism is allowed any more.”

    The only way is up.

    Thirty liters of mercury form the surface of North America's third largest scope.


    The 6-meter mirror consists of a segmented parabolic dish, shaped to an accuracy of about a tenth of a millimeter, made of expanded polystyrene and polyester in an aluminum and Kevlar frame and filled with 30 liters of mercury. The dish spins about seven times per minute, a rate controlled precisely enough to spread the mercury into a 1-millimeter-thick parabolic layer. The outer edge of the mirror moves at just over 2 meters per second. “If you stand next to the mirror, you feel a bit of wind,” says Hickson, “but you don't hear anything at all. It's very quiet.”

    The wafer-thin mercury layer is essential to prevent surface ripples due to wind. In developing their scopes, however, Hickson and his team found that it was impossible to pour out a 1-millimeter-thick layer, because surface tension breaks the mercury into droplets. Their solution, repeated every few weeks when the mercury is removed for cleaning, is to fill the dish with 60 liters of mercury to make a thicker layer and then slowly drain half of it away through a hole in the center.
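The quoted figures hang together arithmetically. Treating the dish as a flat disk (a rough approximation — the real surface is a shallow paraboloid, and this is my check, not the project's own calculation), a 6-meter dish spinning at about seven revolutions per minute with 30 liters of mercury gives:

```python
import math

# Rough consistency checks on the article's figures (flat-disk approximation).
radius = 3.0                          # 6-meter mirror, m
rpm = 7.0                             # about seven revolutions per minute
mercury_litres = 30.0                 # mercury in the dish

omega = 2 * math.pi * rpm / 60.0      # angular velocity, rad/s
edge_speed = omega * radius           # m/s; article: "just more than 2 m/s"

area = math.pi * radius ** 2                            # dish area, ~28.3 m^2
thickness_mm = mercury_litres / 1000.0 / area * 1000.0  # mean layer depth, mm

print(f"edge speed ≈ {edge_speed:.1f} m/s")        # ≈ 2.2 m/s
print(f"layer thickness ≈ {thickness_mm:.1f} mm")  # ≈ 1.1 mm
```

Both results match the article's numbers, and the thickness calculation also makes clear why the team starts with 60 liters: twice the volume means a layer about 2 millimeters deep, thick enough for surface tension not to tear it into droplets before half is drained away.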

    Liquid-mirror telescopes suffer from one big limitation. Because the dish of rotating liquid must lie flat, they can look in only one direction: straight up. But that's fine for studying objects such as remote galaxies, distant supernova explosions, or pieces of space junk: After all, they are found all over the sky. “For cosmology, liquid-mirror telescopes look very attractive,” says Hawarden. The next generation of liquid-mirror telescopes will have movable secondary mirrors so they can see up to 4 degrees away from the zenith. Depending on latitude, that modification will bring a few percent of the whole sky into the telescope's view. Nevertheless, says Torben Andersen of Lund University, Sweden, “my guess is that [the successor of the LZT] will remain a ‘niche’ telescope, but that it has the potential of becoming a very good one.” Gerry Gilmore of Cambridge University, U.K., agrees: “[These telescopes] can do a very few things well but can never be general purpose.”

    Still, Hickson has big plans for his niche scopes. Together with a number of institutes in New York state and Australia, the University of British Columbia is now trying to raise up to $8 million to build a 10- or 12-meter liquid-mirror telescope. Even that will still be just a prototype for their ultimate goal, the Large-Aperture Mirror Array (LAMA): 18 identical telescopes in a 60-meter-wide circular array, to be sited in either Chile or New Mexico. LAMA is expected to cost between $50 million and $100 million. “It's a lot of money,” says Hickson, “but it's still only 10% of the cost of the currently planned 30-meter telescopes” such as the California Extremely Large Telescope and the Giant Segmented Mirror Telescope (Science, 8 November 2002, p. 1151; 20 December 2002, p. 2311).

    Although distant galaxies and supernovas will be the staple diet of both the LZT and LAMA, the proposed array is also ideally suited to hunt and study extrasolar planets crossing in front of their parent stars and maybe even to measure the composition of their atmospheres. Says Hawarden: “It's likely to be so much cheaper [than other extremely large telescopes] that it might get going faster and cream off most of the good stuff.”


    New Rule Triggers Debate Over Best Way to Test Drugs

    1. Jennifer Couzin

    The U.S. Food and Drug Administration (FDA) is sidestepping human efficacy tests on a controversial drug that wards off the effects of chemical attacks

    Plans to protect U.S. military forces against the effects of nerve gas attacks are focusing attention on a new government policy that exempts some drugs from being tested for efficacy on humans. Critics worry that the policy will turn people into guinea pigs, but government officials and drug manufacturers see it as a safe way to approve products that would otherwise never reach the market.

    Last month, the U.S. Food and Drug Administration (FDA) exercised for the first time its power to approve critical drugs and vaccines based on evidence of their effectiveness in animal tests rather than on extensive human studies. This so-called animal rule permits scaled-back human trials, especially for products to counter biological and chemical weapons. It's meant to apply when full-scale clinical testing would expose humans to a life-threatening disease such as the plague or smallpox or when the size or cost of the trial would be impractical. Its adoption was spurred by the 11 September terrorist attacks and the anthrax-filled letters that followed.

    Under the policy, FDA may permit researchers and companies to skip human efficacy studies but not human safety tests. FDA officials emphasize that the rule is not a shortcut—it mandates expanded studies in animal models—nor is it to be applied broadly. “The reason you're doing this is to save someone's life,” says Robert Temple, director of one of FDA's offices of drug evaluation, of the animal rule's use. Adds Roy Gulick, who oversees HIV clinical trials at Cornell University's Weill Medical College in New York City and directs FDA's advisory committee on antiviral drugs: “Animals may react differently” than humans to a drug, but “animal data is better than no data at all.”

    Although the rule was generally seen as a step forward when it was adopted 8 months ago, FDA stirred the pot on 5 February by choosing to apply it first to a drug called pyridostigmine bromide (PB). FDA said that PB could be used by military troops facing an imminent threat from the nerve gas soman. First approved in 1955 to treat the neuromuscular disorder myasthenia gravis, PB was given in tablet form to an estimated 250,000 Gulf War troops to protect against soman, which was never used by Iraq. Later, as troops fell mysteriously ill with what came to be known as Gulf War syndrome, PB emerged as one of many possible causes (Science, 2 February 2001, p. 812).

    Having PB fall under the new rule “might lead to confusions about safety and effectiveness,” says Beatrice Golomb of the University of California, San Diego, who has studied the drug extensively. “There are substantial residual concerns” about the drug, she adds. The American Legion, which represents 2.8 million veterans, has urged Congress to investigate FDA's approval process. “They might as well have said FDA approves gasoline as a mouthwash,” says Steve Robinson, executive director of the National Gulf War Resource Center in Silver Spring, Maryland. Still, even its toughest critics acknowledge that PB (which must be taken several hours before toxin exposure) offers the only known protection against lethal doses of soman.

    Risky business?

    U.S. Army soldiers like these, conducting a military exercise in Kuwait last month, may receive a nerve gas drug that can't be tested in humans.


    PB's role in Gulf War syndrome is not the only thing on the table. Animal studies suggest that PB can cause some disabling side effects, including brain damage and muscular and reproductive problems, according to Mohamed Abou-Donia, a neurotoxicologist at Duke University in Durham, North Carolina. Over the last year, Abou-Donia has found that when animals are exposed to PB in combination with either pesticides, such as DEET insect repellant, or stress—both of which can affect the nervous system—the drug kills brain cells and testicular cells that produce sperm.

    FDA's Temple says that the agency approved PB for military use after a thorough review of tests in several animal species, many of them by the military. Temple discounts as unreliable the animal studies that show that pesticides can combine with PB to cause neuron damage.

    The debate extends beyond the drug itself. Several observers also criticize FDA for failing to get an outside opinion before approving PB. “You need tough peer review” for products under the animal rule, says Arthur Caplan, a medical ethicist at the University of Pennsylvania in Philadelphia, who believes that FDA should have consulted advisers. Advisory committees are used only when FDA is unsure how to proceed, counters Temple, which was not the case with PB.

    One unresolved issue is where to draw the line in forsaking human efficacy tests. Robert Haley, chief of epidemiology at the University of Texas Southwestern Medical Center in Dallas, says he and his colleagues faced a tough decision, for example, when they recently prepared to test a new tularemia treatment. Although tularemia can kill, UT Southwestern researchers felt that they could safely infect volunteers with the bacteria, gather immune-response data, and treat them with existing antibiotics if the experimental antibiotic failed. The university's institutional review board initially balked and contacted Caplan for advice. But after Caplan gave his cautious support, the university approved the trial and submitted its request for funding to the National Institutes of Health, where it awaits review.

    Another group of tularemia researchers—at DynPort Vaccine Co. in Frederick, Maryland—takes an opposing view. The scientists believe it wouldn't be ethical to test their tularemia vaccine for efficacy in humans, says Virginia Johnson, vice president of regulatory affairs. DynPort plans to study efficacy only in animals and seek FDA approval under the animal rule.

    Meanwhile, a handful of companies are quietly exploring the rule to see if it also can be applied to products that neutralize toxins unlikely to be used as weapons. One AIDS vaccine developer, for example, has inquired about testing that vaccine under the rule.

    Karen Goldenthal, who oversees applications for vaccine research and approval at FDA, says she flatly rejected the request. But she acknowledges that FDA is considering extending the rule beyond bioweapons. Acambis, a Cambridge, Massachusetts-based company, wants to apply the animal rule not just to its work on a new smallpox vaccine but also to vaccines that would protect against West Nile virus and Japanese encephalitis. Acambis's West Nile vaccine has been tested in mice, hamsters, monkeys, and horses; next month, company officials will meet with FDA to lobby to put the vaccine under the animal rule. Because West Nile is rare, “it would take field trials involving several million people” to test a vaccine, says Thomas Monath, the company's chief scientific officer.

    There's no consensus on how the rule applies to cases like this, in which the threat to human subjects is not the main concern. “You don't want to remove the strong incentive to do difficult human observational studies,” says Haley. But Monath argues that some products can't make it to market without something like the animal rule: “There are many situations where it would just simply have been impossible to use the regular recipe for getting a drug or vaccine licensed.”

    Both concur that safety testing in humans should reduce the number of unpleasant surprises once a product is approved. Still, the animal rule demands a new calculus of benefits and risks that no one quite knows how to perform.


    Impending War, Dam Hinder Iraqi Preservation Efforts

    1. Andrew Lawler

    The looming conflict in Iraq is hurting attempts to rescue more than 60 important archaeological sites, including the spiritual center of the ancient Assyrians

    After a lengthy siege, the foreign invaders stormed the sprawling city on the banks of the Tigris River, sweeping aside street barricades and beheading defenders before pillaging and burning the city. No, that's not a worst-case scenario for Pentagon war planners. Instead, it's a description of how Ashur, the ancient capital of Assyria just north of modern-day Baghdad, came to a fiery end in 614 B.C.

    Today, Ashur is again imperiled, this time by water. The threat posed by a nearby dam under construction is complicated by an anticipated U.S. invasion, wrangling between Iraqi ministries, and the country's desperate need for water and irrigated land. “In Iraq, the political situation turns archaeological emergencies into catastrophes,” says John Russell, an archaeologist at the Massachusetts College of Art. The crisis has drawn the attention of the U.S. State Department, which is organizing a special panel to advise a new regime in Baghdad on cultural heritage matters, assuming that the government of Saddam Hussein falls. But preserving Ashur may be a low priority for a new government keen on getting the country back on its feet.

    Ashur was the first capital and longtime spiritual center of the Assyrian empire before the Medes, people from the Iranian plateau, put an end to the city. But it's only in the past few years that archaeologists have begun to piece together the first hard evidence of those final days. In excavations that began in 2000, Heidelberg University archaeologist Peter Miglus has uncovered stark evidence of that destruction, including beheaded skeletons, street barricades, arrowheads, burnt rooms, and an unfinished tunnel that was likely built by the besiegers to circumvent the high walls. Miglus says that his team also found evidence that palace rooms and basements had been converted to grain-storage rooms, capable of holding enough barley to feed approximately 20,000 people for a month. “That's completely new,” says Russell, who adds that the stockpiled food suggests that the attack was not a surprise.

    Miglus suspended his excavations last fall because of the political uncertainties. But work continues on the Makhool Dam downstream, a project that will create a lake a kilometer wide and 45 kilometers long (Science, 22 March 2002, p. 2189). The dam is designed to ensure water and electricity for farmers in the region and millions of Baghdad residents suffering from the effects of a prolonged drought and of Turkish dams blocking the Tigris upstream.

    The dam is scheduled to be completed by 2006, sending waters within a few meters of Ashur's bluffs and partially covering lower parts of the city. A rising water table would destroy much of what remains underground, including cuneiform tablets, mud-brick structures, and statues. After inviting a UNESCO delegation to view the site last fall, Iraqi Culture Minister Yousif Hammadi pledged to protect Ashur from the new lake. “It's clear they want to save Ashur,” says Arnulf Hausleiter, a member of the UNESCO team and archaeologist at the Free University in Berlin who has dug at the Assyrian site.

    Muddy waters.

    A new dam would turn the Tigris River (right) into a vast lake that would inundate parts of Ashur as well as dozens of unexcavated sites.


    One way to do that is with a coffer dam surrounding the entire site. But the 1.5-kilometer-long structure would likely cost as much as the Makhool Dam itself, says Lucio Cavazza, a Rome-based engineer who was part of the UNESCO team. A cheaper alternative, he adds, is to use Ashur's bluffs as part of a protective system, built in part of impermeable materials that would limit the rise of the water table.

    Geological and soil studies are needed to plan such a protective dam. But the Ministry of Irrigation was unwilling to work with UNESCO engineers and scientists during their recent visit. “They didn't tell us anything,” complains Cavazza, adding that there was “an absence of any collaboration on their part.” He found no evidence that the necessary geological or soil work had been done to assess what kind of protection system would best suit Ashur. “They should have already started this,” he says.

    When Cavazza visited the Makhool Dam, where construction of the foundations is well under way, he was shown a schematic drawing but not permitted to wander the site. And the situation has since grown more dire. UNESCO officials have put a hold on additional visits, and Western officials see time running out on Ashur as work on the dam continues. They also fear that the Iraqi government could not afford the costs associated with saving the city.

    The Ministry of Culture is still hoping international support can turn the tide, however. Last fall, Iraq asked the United Nations to name the city as a World Heritage Site, which requires a clear plan to preserve the ruins for researchers and the public. UNESCO's Giovanni Boccardi says his organization is willing to help with planning and executing protective measures for Ashur, but he adds, “we're not the World Bank; we don't have millions of dollars to spend.” A more likely role, he says, is for UNESCO to help mobilize international support for both Ashur and for salvage excavations in the region.

    Along with threatening Ashur and drowning dozens of small villages, the lake would inundate more than 60 known important archaeological sites in Assyria's heartland, including a former capital city—Kar-Tikulti-Ninurta—that has never been extensively surveyed. There could be many more, adds Hausleiter, who visited some of the endangered sites in November. Iraqi teams are working at 10 sites, says Donny George, research director for the State Antiquities Board of Iraq. But although foreign researchers were actively involved in preserving other sites threatened by previous dams, the prospect of war has made the government reluctant to seek their assistance in the Makhool project.

    The dilemma has caught the attention of the U.S. State Department, which is busy planning a post-Saddam Iraq. “Yes, we are aware of the seriousness of this issue,” says State Department spokesperson Greg Sullivan. The department is organizing a panel on antiquities and preservation of Iraq's cultural heritage as part of the Future of Iraq Project (Science, 31 January, p. 643). Sullivan says the panel, which would include “mostly Iraqi” expatriates as well as some “international experts,” would work with a subcommittee responsible for water and agriculture issues to “frame the issues” around the Makhool Dam and Ashur and propose a series of options.

    An extended conflict would likely halt dam construction. But once fighting ended, Ashur and the dam could pose a thorny problem for any U.S.-backed regime in Baghdad. Building the dam would provide jobs for an impoverished region, and completing it could temporarily lessen the area's severe water problems. At the same time, a large community of Iraqis in the United States who claim descent from the ancient Assyrians supports efforts to protect Ashur. Should there be new rulers of Iraq, they may have to decide whether to save the ancient city—or pick up where the Medes left off some 2700 years ago.


    Tailor-Made Vision Descends to the Eye of the Beholder

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    Designed to help telescopes see clearly through Earth's turbulent atmosphere, adaptive optics can also reveal, and correct, subtle flaws in the human eye

    For nearly 200 years, optometrists have boiled the eye's optical quality down to two numbers. The “sphere” describes how near- or farsighted the eye is, and the “cylinder” measures the degree of astigmatism. If the lens of the eye were a simple geometric object, those numbers would provide all the information needed to compensate for its failings. In fact, real lenses in living eyes are much more complicated than that, with imperfections that make each eye as unique as a fingerprint—and a good deal more changeable. For the most part, eye doctors have ignored the higher order aberrations of the eye, which go by obscure names such as coma, trefoil, and spherical aberration.
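The sphere, cylinder, and the higher order aberrations named above are conventionally expressed as coefficients of Zernike polynomials, the basis that aberrometers fit to the measured wavefront. A minimal sketch of the idea, using the standard Noll-normalized forms of a few low-order terms (the coefficient values in the example are invented for illustration):

```python
import math

# A few standard Zernike polynomials on the unit pupil (Noll normalization).
# "Sphere" corresponds to defocus, "cylinder" to astigmatism; coma, trefoil,
# and spherical aberration are the higher order terms the article mentions.
def zernike(term, rho, theta):
    """Evaluate one Zernike term at polar pupil coordinates (rho, theta)."""
    if term == "defocus":              # Z(n=2, m=0)
        return math.sqrt(3) * (2 * rho**2 - 1)
    if term == "astigmatism":          # Z(n=2, m=2)
        return math.sqrt(6) * rho**2 * math.cos(2 * theta)
    if term == "coma":                 # Z(n=3, m=1)
        return math.sqrt(8) * (3 * rho**3 - 2 * rho) * math.cos(theta)
    if term == "trefoil":              # Z(n=3, m=3)
        return math.sqrt(8) * rho**3 * math.cos(3 * theta)
    if term == "spherical":            # Z(n=4, m=0)
        return math.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1)
    raise ValueError(f"unknown term: {term}")

# An eye's wavefront error is a weighted sum of such terms; an aberrometer
# measures the wavefront and fits the weights.
def wavefront(coeffs, rho, theta):
    return sum(c * zernike(name, rho, theta) for name, c in coeffs.items())

# Illustrative eye: mild defocus plus a touch of coma (made-up coefficients).
w = wavefront({"defocus": 0.5, "coma": 0.1}, rho=1.0, theta=0.0)
```

The two numbers of a classical prescription capture only the first two rows of this expansion; everything below them is what conventional spectacles leave uncorrected.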

    In the past 3 years, however, higher order aberrations have seized the optometric spotlight. Borrowing technology astronomers use to unscramble starlight, optometrists can now measure dozens of different types of aberrations with a single instrument, called an aberrometer, that is no bigger than a lunch box. The technique, an offshoot of the adaptive optics used in telescopes, may help eye surgeons deliver on laser eye surgery's frustratingly elusive promise of perfect vision for all (see sidebar).

    Many vision scientists, however, are more excited by the possibility of turning adaptive optics around—peering into the eye itself with an instrument that corrects the aberrations in the lens. “My feeling is that the potential benefit of looking into the eye will greatly outweigh applications that go the other way,” says Larry Thibos of Indiana University Bloomington's School of Optometry. Already, researchers have used the pinpoint resolution of adaptive optics to see cells in the living eye for the first time, including individual cone cells in the retina and individual white blood cells moving through the eye's capillaries. “I don't want to oversell it, because we aren't there yet,” says David Williams of the University of Rochester in New York, “but this shows tremendous potential for understanding the whole spectrum of retinal diseases”—from colorblindness to glaucoma.

    Although Williams pioneered adaptive optics for retinal imaging in the United States, the field's leading technical wizard is probably Austin Roorda of the University of Houston, Texas. As a postdoctoral fellow with Williams, he produced false-color images that not only showed the eye's cone cells but also distinguished among the three types, which are sensitive to blue, red, and green light, respectively (see figure). Since moving to Houston, Roorda has developed the world's first—and so far only—scanning laser ophthalmoscope with adaptive optics that can image the retina at different depths. Each layer of the retina tells its own story. Glaucoma targets the ganglion cells and their nerve fibers; diabetes causes “microaneurysms” in the blood vessels; and the causes of colorblindness may become visible in the photoreceptor layer.

    Eye spy.

    Adaptive-optics views of the retina, such as this false-color image, show individual photoreceptors and even reveal their color specialization.


    Achieving these results took some doing. “The eye is designed to make it difficult to see single cells,” Roorda says. Lenses, he explains, have evolved to focus light onto at least two photoreceptors at a time, the minimum number needed to avoid transmission errors. And what's true of light going into the eye is also true of light coming out: With conventional optics, nothing smaller than about two photoreceptors can be made out clearly.

    Adaptive optics, however, can do better. Astronomers developed the technique to peer through the “lens” of Earth's atmosphere, which rumples the wavefronts of incoming light from distant stars, making the stars twinkle and blurring their telescope images. In the 1980s, astronomers learned to compensate by making segmented mirrors that move as fast as the atmosphere does, tilting the various pieces separately to catch stray photons and refocus them to a point. In the eye, some aberrations, due to the contraction of muscles and the pulsing of blood through the retina, change over time. Others are permanent, such as astigmatism and spherical aberration—the same imperfection that crippled the Hubble Space Telescope until it received its version of “LASIK surgery” in 1993. By applying adaptive optics to both types of imperfections, Roorda says, he can improve the lens's resolution fourfold—enough to see single retinal cells.
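The sense-and-correct loop described above can be caricatured in a few lines. This is entirely a toy model with made-up numbers, not telescope or ophthalmoscope control code: each mirror segment is nudged toward canceling the wavefront error the sensor reports, and the residual shrinks on every pass, as in a real closed loop run with a gain below 1.

```python
import random

# Toy adaptive-optics loop: a segmented mirror flattens a static aberration.
random.seed(1)
n_segments = 16
aberration = [random.uniform(-1.0, 1.0) for _ in range(n_segments)]  # "microns"
mirror = [0.0] * n_segments    # correction currently applied per segment
gain = 0.5                     # fraction of the measured error removed per pass

def residual():
    # What the wavefront sensor sees: aberration minus applied correction.
    return [a - m for a, m in zip(aberration, mirror)]

def rms(values):
    return (sum(v * v for v in values) / len(values)) ** 0.5

before = rms(residual())
for _ in range(20):            # closed loop: sense, then nudge each segment
    for i, err in enumerate(residual()):
        mirror[i] += gain * err
after = rms(residual())        # residual error after 20 iterations
```

In the eye the aberration is not static, so a real system repeats this loop continuously, fast enough to track muscle contraction and pulsing blood flow.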

    Roorda's primary interest at the moment is blood flow, which has been implicated in both glaucoma and diabetes. “Glaucoma is poorly understood,” he says. “Does the high pressure in the eye directly attack the ganglion cell's nerve fiber, or does it pinch off the blood vessels that nourish these fibers? The mechanism is not known, and yet drug companies are developing drugs that alter blood flow.” A close-up look at the retina might reveal what is going wrong and enable doctors to diagnose glaucoma much earlier.

    At Rochester, graduate student Heidi Hofer and postdoctoral researcher Joe Carroll are following up Roorda's earlier work by studying the cone cells of colorblind people. Together with Jay and Maureen Neitz of the Medical College of Wisconsin in Milwaukee, they are trying to arbitrate between two competing theories of colorblindness. All red-green colorblind people are functionally missing one class of cones, either red or green. But the mystery has been whether the receptors are physically absent, or whether they are intact but all of the same type. Hofer's and Carroll's work is still preliminary and unpublished, but they believe they are seeing evidence that each theory describes part of the colorblind population. “By pure luck, the first two retinas we saw fell into the two groups,” Carroll says.

    Costing more than $100,000 and as big as an entire desktop, Roorda's adaptive-optics ophthalmoscope is still far too unwieldy and expensive for clinical use. To bring the size down, he says, the device needs mirrors smaller than its current 46-millimeter behemoths. A flexible mirror only 3 millimeters wide is being tested at the University of Rochester, which, with five other institutions, has received a $10 million grant from the National Eye Institute to develop a user-friendly version of the ophthalmoscope. But Williams emphasizes that the future of adaptive optics is not tied to any single optical instrument. “Adaptive optics is an enabler,” he says. “It can enhance any of the technologies we have today.”


    Coming Soon: 'Wavefront Eye Surgery'?

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    Since it was introduced in the 1990s, laser-assisted in situ keratomileusis, better known by its acronym LASIK, has become one of the most popular—and heavily marketed—forms of elective surgery. In 2000, its peak year to date, more than a million people jumped at the chance to enjoy a life without spectacles.

    Most patients were delighted with LASIK, but a minority, variously estimated at 3% to 10%, had disastrous results. “My windows to the world [have been] ruined,” one patient wrote on the Web. “I have traded my God-given lousy vision, which I had for 30 years, for man-made lousy vision that I will probably have for the rest of my life,” wrote another. Doctors wondered why many of these patients tested normally for focus and astigmatism and could read the 20/20 line on an eye chart, yet they complained of poor night vision, ghosts, and haloes around bright objects.

    Now, adaptive optics is helping to finger the culprit in many of these cases. Although the surgery corrected the eye's principal aberrations—sphere and cylinder—it dramatically worsened higher order aberrations. When doctors bounced light off the retina, the wavefronts that marked the leading edge of the rebounding rays—which should have come back flat as a pancake—instead returned shaped like potato chips, three-cornered hats, or sombreros.

    Flat improvement.

    Distortions due to off-center LASIK (top) were ironed out by wavefront surgery.


    In retrospect, the reasons are painfully obvious. Surgeons had no way to see during the operation what they were doing to the wavefront. “They were flying blind,” says Larry Thibos, a vision researcher at Indiana University, Bloomington. On top of that, the eye's own healing process would often change the shape of the cornea. “I frequently state that we measure at 0.01 microns, treat to 0.25 microns, but heal at 5 microns,” says John Doane, an eye surgeon at Discover Vision Centers in Kansas City, Missouri.

    LASIK systems equipped with aberrometers and “closed-loop” tracking systems that react automatically to the eye's motions are just beginning to hit the market. Alcon Laboratories of Fort Worth, Texas, received the first Food and Drug Administration approval for a wavefront-guided LASIK system last October. A second company, Visx Inc. of Santa Clara, California, is expected to receive approval for a similar system this month. “Visx has two-thirds of the market share in the U.S.,” says Doane. “Once [their system] has been out for 3 months, we'll know if wavefront technology is working.”

    Don't expect to see failed LASIK procedures mentioned in the advertising, though. Instead, the technique will be sold as “custom LASIK,” tailored to the individual aberrations of each patient's eyes and perhaps offering the possibility of 20/16 or 20/12 super-vision. But ophthalmologists say that the real advantage of custom LASIK is the opportunity to prevent or correct surgery-induced aberrations. According to Sanford Feldman, a surgeon at One-to-One LASIK in San Diego, California, “Almost every LASIK surgeon has at least a couple [of] patients he or she would like to make better, and everyone is hoping that wavefront ablation will be the magic wand we can wave over these patients to make them happier.”

    Connecting the Dots to Custom Catalysts

    1. Adrian Cho*
    1. Adrian Cho is a freelance writer in Grosse Pointe Park, Michigan.

    They may not glitter, but tiny dots of gold possess a surprising knack for speeding along chemical reactions. Surface science may soon explain why—and lead to tailor-made catalysts

    Like true love, the shine on your gold wedding ring will last forever. That's because gold is the least reactive of all metals. So whereas copper and silver combine with oxygen to turn dull green and grayish black, gold resists all chemical suitors and retains its purity and luster.

    Shrink it, however, and gold shows a wanton streak. When they are spread on certain surfaces, nanometer-sized specks of gold eagerly join in a chemical ménage à trois with oxygen and toxic carbon monoxide to convert the latter into relatively harmless carbon dioxide. Tiny dots of gold will also play matchmaker to oxygen and a hydrocarbon called propylene to produce propylene oxide, a key ingredient in polyurethane foams. Those reactions and others could make gold catalysts surprisingly valuable in combating pollution, synthesizing chemicals, and even prolonging the life of hydrogen fuel cells.

    Yet chemists still aren't sure how gold nanoparticles work their catalytic tricks. To find out, they're employing a battery of cutting-edge surface-science techniques to control the size of the dots to within a single atom, determine the precise shapes of the dots, and even watch individual molecules stick to them. “You have state-of-the-art control over the surface, so you can do a definitive experiment,” says Charles Campbell, a chemist at the University of Washington, Seattle. Coupled with powerful new theoretical methods, the surface-science approach promises to lay bare gold's curious chemical legerdemain within a few years.

    The same techniques are elucidating the inner workings of countless chemical interactions involving surfaces adorned with platinum, palladium, ruthenium, silver, copper, nickel, and other metals. Researchers hope to pin down general principles that govern catalysis on surfaces, or heterogeneous catalysis, and use them to design catalysts for specific chemical reactions, says Wayne Goodman, a chemist at Texas A&M University in College Station. “The challenge is to understand how this chemistry is working and then project it onto other types of chemistry,” Goodman says.

    Good as.

    Gold nanoparticles, ripe for catalysis, adorn a titanium oxide substrate.


    A golden mean

    To be a good catalyst, a substance must steer a middle course. It must bind with other molecules tightly enough to help them come together but loosely enough so that they can react with one another. For decades, chemists thought that gold couldn't make a good catalyst because it doesn't bind molecules that land on its surface.

    Gold becomes considerably more sticky, however, when it's spread in tiny dots on certain metal-and-oxygen compounds. That's what Masatake Haruta, a chemist now at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, discovered in the late 1980s. Haruta and colleagues found that gold dispersed on substances such as titanium oxide could catalyze the oxidation of carbon monoxide to carbon dioxide. They have since shown that gold nanoparticles will catalyze a surprising variety of chemical reactions.

    Haruta and others deposit the catalytic metal in the nooks and crannies of a high-surface-area material, such as a powder or porous pellets, then study its ability to speed various chemical reactions. They may also study the structure of the concoction by viewing small samples with electron microscopes and other tools. Although this approach has produced the vast majority of real-world heterogeneous catalysts, it reveals little about how individual molecules interact with the tiny metal dots.

    To probe these interactions, researchers have mounted a three-pronged attack, using the powerful tools of surface science that have emerged in the past 20 years. First, they fashion tiny slabs that serve as precisely controlled mockups of the real-world catalysts. Using the latest materials science technologies, they can control the atom-by-atom architecture of the underlying metal-oxide surface, as well as the size and distribution of the gold particles in these model catalysts.

    Second, researchers employ exquisitely sensitive devices to study their models. The most powerful of these is the scanning tunneling microscope (STM), in which researchers pass a minuscule finger only a few nanometers wide over the sample's surface. A tiny current flows between the finger and the sample. By adjusting the height of the finger to keep the current steady, researchers can map out the surface one atom at a time. Researchers are also perfecting ways to use the STM, electron microscopes, and other probes in the presence of reacting gases, says Poul Hansen of Haldor Topsøe A/S, a leading catalyst manufacturer in Lyngby, Denmark. “We can actually start to observe phenomena at the atomic scale under realistic conditions,” Hansen says.
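The constant-current trick rests on the exponential sensitivity of the tunneling current to the tip-sample gap. A toy sketch (illustrative constants, not real instrument parameters) of how servoing the tip height to hold the current at a setpoint makes the recorded height trace the surface, atom-scale bumps included:

```python
import math

# Toy constant-current STM. Tunneling current decays exponentially with the
# tip-sample gap: I = I0 * exp(-2 * KAPPA * gap). Values are order-of-magnitude
# illustrations only.
KAPPA = 10.0                                  # inverse decay length, 1/nm
I0 = 1.0                                      # current at zero gap, arb. units
SETPOINT = I0 * math.exp(-2 * KAPPA * 0.5)    # target current at a 0.5-nm gap

def current(tip_z, surface_z):
    return I0 * math.exp(-2 * KAPPA * (tip_z - surface_z))

def scan(surface, gain=0.01, steps=200):
    """Move across the surface, adjusting tip height to hold the setpoint."""
    tip_z, trace = surface[0] + 0.5, []
    for surface_z in surface:
        for _ in range(steps):                # simple proportional feedback
            ratio = current(tip_z, surface_z) / SETPOINT
            tip_z += gain * math.log(ratio)   # too much current -> retract tip
        trace.append(tip_z)                   # recorded height = topography
    return trace

# A 0.2-nm bump (roughly one atomic step) should reappear in the trace.
surface = [0.0, 0.0, 0.2, 0.0, 0.0]
trace = scan(surface)
```

Because the current changes by roughly an order of magnitude per angstrom of gap, even a single-atom step produces an unmistakable feedback response, which is what gives the STM its atom-by-atom resolution.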

    Last but not least, theorists are employing a scheme called density functional theory to calculate how molecules will stick to the nanoparticles and interact. All of chemistry revolves around swapping electrons, and density functional theory forecasts how atoms and molecules will rearrange themselves and bond as the electrons they share shift to minimize their energy. All it takes is a few hundred computers and about a teraflop of computing power.

    On golden bonds

    Even with these tools, researchers haven't quite figured out why gold nanoparticles are such good catalysts. Haruta and others suspect that the chemistry gets licentious at spots where gold meets metal oxide, perhaps because the underlying surface readily releases highly reactive individual oxygen atoms. For a given amount of gold, the total number of such “perimeter sites” grows as the size of the particles shrinks, just as 16 pounds of marbles have a lot more surface area than a single bowling ball. The increase may explain why gold nanoparticles are much better catalysts than bulk gold is, and why they behave differently on different metal oxides.
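The marbles-versus-bowling-ball scaling is easy to check numerically. A back-of-the-envelope sketch, using my own toy geometry (equal hemispherical particles of a fixed total volume; the numbers are illustrative), showing that the total gold-oxide perimeter grows as 1/d² as the particle diameter d shrinks:

```python
import math

# Fixed total volume of gold split into N equal hemispherical dots of
# diameter d sitting on the oxide: N scales as 1/d**3 while each dot's
# contact perimeter scales as d, so total perimeter scales as 1/d**2.
def total_perimeter(total_volume_nm3, diameter_nm):
    r = diameter_nm / 2
    particle_volume = (2 / 3) * math.pi * r**3   # hemisphere on the support
    n_particles = total_volume_nm3 / particle_volume
    return n_particles * math.pi * diameter_nm   # circular contact line each

V = 1e6                               # nm^3 of gold, identical in both cases
p_small = total_perimeter(V, 3.0)     # dispersed as 3-nm dots
p_big = total_perimeter(V, 30.0)      # lumped into 30-nm particles
ratio = p_small / p_big               # should be (30 / 3)**2 = 100
```

Shrinking the dots tenfold multiplies the perimeter a hundredfold, which is consistent with the observation that catalytic activity appears only for nanometer-scale gold.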

    Other researchers argue, however, that the secret lies with the geometry of the gold nanoparticles. Gold consists of layer upon layer of atoms packed into a hexagonal pattern, like billiard balls crammed into a rack. Ordinarily, a lumpy gold nugget shows one of these inert flat planes on just about every surface. Gold nanoparticles, however, have a smooth surface only on top, says Manos Mavrikakis, a chemical engineer at the University of Wisconsin, Madison. The sides of the nanoparticles consist instead of jagged “step edges,” along which one layer of atoms juts out from under the next. Atoms in these step edges have fewer gold neighbors than do those in the smooth surface, and density functional calculations show that they're stickier, Mavrikakis says. And, like the number of perimeter sites, the number of step edges increases as the nanoparticles shrink.

    A third theory holds that the basic physical properties of gold change when it is dished out in tiny dollops. For example, Texas A&M's Goodman and colleagues used an STM to measure the electrical conductivity of gold nanoparticles on titanium oxide. They found that when the particle diameter shrank below roughly 3 nanometers and the thickness dropped to two atomic layers, the particles stopped passing electricity as freely as metals normally do. “By all indications, these particles are not metallic,” Goodman says. “They've lost their metallic properties and have become more like individual molecules.” If so, researchers must look beyond the foibles of a few atoms on the sides and edges of a nanoparticle and consider the complicated behavior of the particle as a whole.

    The smallest nanoparticles may obtain their catalytic power by soaking up electrical charge, argue physical chemist Ulrich Heiz of the University of Ulm, Germany, and physicist Uzi Landman of the Georgia Institute of Technology in Atlanta. Using a mass spectrometer, Heiz selected gold clusters containing a specific number of atoms, eased them onto a magnesium oxide surface, and measured the tiny amount of carbon monoxide they oxidize. Only clusters with eight or more atoms catalyzed the reaction, he found. These nanoparticles absorb a key extra electron from the surface, Landman calculated, and that electron helps cleave a captured oxygen molecule into two individual oxygen atoms.

    Researchers may soon get a front-row view of how the dots work, thanks to the STM virtuosity of Flemming Besenbacher, a physicist at the University of Aarhus in Denmark. Besenbacher and colleagues have developed techniques that allow them to see not only the atom-by-atom details of a surface, but also oxygen and carbon monoxide molecules bound to it. The researchers can scan a small area as often as once a second to make movies of it changing. They have “filmed” oxygen and carbon monoxide interacting on titanium oxide, and they've recently added gold nanoparticles to the cast. Besenbacher suspects that no single theory will explain all the properties of the particles. “I would not be surprised if there were two or three effects that were playing together,” he says.

    Many models.

    Chemists have proposed several possible mechanisms to explain why ordinarily inert gold turns into a powerful catalyst when shrunk to nanoparticle size.


    Worth its weight?

    Gold nanoparticles have already found some odd niches. For instance, they're used in Japan to break down foul toilet odors. Whether they will prove their worth in industrial production lines, however, remains to be seen. Dow Chemical is studying their potential to profitably turn propylene into propylene oxide, although that may be as much a question of economics as of chemistry.

    Gold nanoparticles may also help keep the pep in the hydrogen fuel cells that might power cars of the future. Within a fuel cell, a platinum catalyst converts hydrogen and oxygen into water and energy. But carbon monoxide gums up the catalyst and ruins the cell. Gold nanoparticles might excel at getting rid of such contamination, especially as they can oxidize the carbon monoxide without burning the hydrogen at the same time.

    For gold nanoparticles to become truly practical, however, researchers must overcome the particles' inherent tendency to clump together, especially when heated or exposed to reactive gases. Known as sintering, this process causes the particles to lose their catalytic powers, even as they work their magic. Researchers are exploring ways to avoid sintering by modifying the underlying support. One strategy might be to strand each gold nanoparticle on its own nano-island of a relatively sticky substance to discourage it from wandering—an approach that merges cutting-edge surface science and nanotechnology. “Those two things will come together, I promise you,” says the University of Washington's Campbell. “It will just take a while.”

    As for the grander goal of designing catalysts from first principles, some chemists are wary of predictions that tailor-made catalysts are right around the corner. “Don't forget, you can find that kind of statement in the literature 50 years ago,” says Bernard Nieuwenhuys, a physical chemist at Leiden University in the Netherlands. Others, however, think that recent progress augurs a chemical revolution. Jens Nørskov, a physicist at the Technical University of Denmark in Lyngby, has discovered a general relationship governing the catalysis of a whole class of reactions on specific surface features and has used density functional theory to help custom-design two catalysts—one for making ammonia and another for making hydrogen from methane. “We're beginning to see the first molecular design of catalysts,” Nørskov says. “We're on the verge of something big.”

    Whatever its industrial future, gold's uncanny knack for catalysis has already provided a fascinating scientific puzzle. For most researchers, nothing could be more precious.

  16. Water Splitting Goes Au Naturel

    1. Joe Alper*
    * Joe Alper is a writer in Louisville, Colorado.

    In living organisms, enzymes called hydrogenases harness plentiful metals to turn water into hydrogen and vice versa. Stripped of their proteins, they may show chemists a surprising shortcut to producing the fuel of the future

    In his recent State of the Union address, President George W. Bush touted hydrogen as the automotive fuel of the future, one that will reduce U.S. dependence on imported oil while cutting greenhouse-gas and other noxious emissions. “A single chemical reaction between hydrogen and oxygen generates energy, which can be used to power a car producing only water, not exhaust fumes,” he said in requesting $1.2 billion for research on making an affordable hydrogen-powered automobile. Although the president got his chemistry correct, he was putting the cart before the horse; as yet, there is no energy- and cost-efficient method for making hydrogen, let alone burning it as a fuel.

    Today, petrochemical plants generate millions of tons of hydrogen for a variety of industrial purposes, largely from methane. They do so in a process that uses expensive platinum catalysts, requires more energy than it makes as hydrogen, and produces 3 tons of carbon dioxide for each ton of hydrogen. “You might as well stick to burning petrol in a car as make hydrogen this way,” says chemist Chris Pickett of the John Innes Centre in Norwich, U.K. What's needed, he says, are better catalysts, ones that don't require rare metals such as platinum and can make hydrogen from water, rather than natural gas. For inspiration, he and others are turning to the most efficient producers of hydrogen on Earth: bacteria, including the Escherichia coli that live in the human gut.
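The industrial route alluded to here is conventionally steam methane reforming followed by the water-gas shift. The outline below is a standard textbook summary added for illustration, not detail drawn from the article:

```latex
% Steam reforming of methane over a metal catalyst
\mathrm{CH_4 + H_2O \;\rightarrow\; CO + 3\,H_2}
% Water-gas shift converts the CO, releasing more hydrogen
\mathrm{CO + H_2O \;\rightarrow\; CO_2 + H_2}
% Net reaction: every methane consumed yields one CO2
\mathrm{CH_4 + 2\,H_2O \;\rightarrow\; CO_2 + 4\,H_2}
```

The net reaction makes clear why carbon dioxide is an unavoidable byproduct of hydrogen made from natural gas, and why catalysts that start from water instead are so attractive.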

    Over a billion years ago, ancient bacteria evolved the ability to use iron and nickel to make hydrogen from water and then oxidize it as fuel, incorporating these metals into the enzymes now known as hydrogenases. “If we could mimic the hydrogenases, we would change the economics of hydrogen production tremendously,” says Pickett. That has been easier said than done, however, as few of nature's catalysts have turned out to be as surprising and confounding to chemists as hydrogenases. As described by chemist Thomas Rauchfuss of the University of Illinois, Urbana-Champaign, “over the few billion years that nature has been refining the hydrogenases, she's come up with a few clever tricks that we've only recently figured out.” Chemists are making progress, however, and they are using those tricks in developing synthetic catalysts that come tantalizingly close to success. “We haven't yet created a catalyst that you could use to make hydrogen commercially, but we're on the right track,” says Rauchfuss.

    A tough nut to crack

    Present throughout the microbial world, as well as in some green algae, the hydrogenases play a variety of roles in energy metabolism and fermentation, largely in environments where oxygen is scarce, such as the mammalian digestive system and subsurface soil. Although the first hydrogenase was discovered in 1930, it wasn't until the mid-1990s that researchers learned to crystallize samples of the two major types of hydrogenase—a major hurdle—and use these crystals to determine the three-dimensional structures of these proteins. Even then, the exact nature of their metal-containing catalytic centers, or active sites, remained controversial, largely because investigators had a hard time believing what the data were telling them. “These enzymes are unlike any that we see in nature,” explains Edward Stiefel, a chemist at Princeton University who investigates metal-containing enzymes and has followed the hydrogenase field vicariously for many years.

    Two features make the hydrogenases unique in nature—and a surprise to chemists. First, their active sites contain two metal atoms connected by chemical bonds. These can be either one atom each of iron and nickel or two atoms of iron, depending on the particular hydrogenase. And second, the metal atoms are surrounded by carbon monoxide and cyanide, molecules previously associated only with dead creatures, not thriving bacteria. Although such features are not unusual in laboratory-produced catalysts, they were hard to fathom in an enzyme. “We actually sat on the data for a long time, until we were certain that it was correct and that we had interpreted it correctly,” says Kimberly Bagley, a biophysical chemist at Buffalo State College in New York. In 1999, Bagley published work done in her lab proving that carbon monoxide and cyanide were essential components of these enzymes' active sites.

    Once their disbelief wore off, chemists began asking what roles the carbon monoxide and cyanide played in catalysis and how the metal atoms accepted and gave up electrons, the key to efficiently splitting water into hydrogen and oxygen and vice versa. The first clue came from the physical structures of the proteins surrounding the active sites. Although the protein structures of the nickel-iron and iron-only versions of hydrogenase differ markedly, each one sports tunnels that shuttle electrons and molecular hydrogen between the surrounding environment and one of the two metal atoms buried deep within the enzymes, indicating which is the business end, or terminal end, of the active site.

    Both forms of hydrogenase also contain several sulfur atoms—part of the side chain of the amino acid cysteine—projecting into the active site, where they can form chemical bonds with the iron and nickel atoms. Such sulfur-metal bonds are common features of many different kinds of industrially important catalysts, and their presence showed chemists that the hydrogenases weren't doing something radically out of the realm of traditional inorganic chemistry. “Chemists aren't going to have to start from scratch as they try to make catalysts,” says crystallographer John Peters of Montana State University in Bozeman, who, along with Juan Fontecilla-Camps of the French basic research agency CNRS in Grenoble, France, determined the structure of an iron-only hydrogenase.

    Coming soon.

    New catalysts may help power fuel-cell cars like this Mercedes, set to go on sale later this year.


    The next insight came from spectroscopic studies showing that the hydrogenases make hydrogen in a way totally different from platinum catalysts. Both routes combine two protons and two electrons into one molecule of hydrogen, but they do so in a different order. Platinum adds one electron to each proton, making two neutral atoms of hydrogen that join together. In contrast, the hydrogenases stick both electrons on the same proton, making a negatively charged hydride that then reacts with a positively charged proton to make hydrogen (see figure, below). “This was another surprise, since metal hydrides are not seen in nature,” says Marcetta Darensbourg, an inorganic chemist at Texas A&M University in College Station. “But from the data, there is no doubt that this is what's happening in these enzymes.”
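The two routes can be written out as simple half-reactions. This is a schematic summary of the mechanisms the article describes, not notation taken from the original sources:

```latex
% Platinum route: one electron per proton, then the neutral atoms recombine
\mathrm{H^+ + e^- \;\rightarrow\; H} \quad (\times\,2)
\mathrm{H + H \;\rightarrow\; H_2}

% Hydrogenase route: both electrons go onto one proton, forming a hydride,
% which then reacts with a second proton
\mathrm{H^+ + 2\,e^- \;\rightarrow\; H^-}
\mathrm{H^- + H^+ \;\rightarrow\; H_2}
```

The hydride intermediate is the distinguishing step: it is what the enzymes' unusual ligand-stabilized iron centers make possible.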

    Darensbourg's research indicates that the combination of sulfur atoms, carbon monoxide, and cyanide—the so-called ligands that surround the metal atoms in the active site—works together to help move electrons from one metal atom to protons. For example, in the iron-only active site, these ligands appear to stabilize a “torqued geometry, in which the iron-iron bond is twisted about its axis,” Darensbourg explains. The twist appears to keep the two iron atoms in an unusual electronic state, in which the terminal iron—the one closest to the electron and hydrogen tunnels leading to the active site—has one more electron than its partner does. Theoretical calculations by physical inorganic chemist Michael Hall and his colleagues at Texas A&M first suggested that the unusual ligands coordinated to the iron were key to forming such a mixed oxidation state and transferring the necessary electrons and protons to the metal atoms. Exactly how this transfer occurs is still under investigation.


    A hydrogenase (top) funnels in protons from water, along with electrons, and combines them into hydrogen molecules (bottom).


    Spectroscopic studies show, too, that at least one of the carbon monoxides in the active site moves between the two iron atoms as the reaction occurs. Both the Darensbourg and Pickett labs have shown that this flip-flop creates a transient chemical intermediate in which the terminal iron atom is bonded to one of the carbon monoxides. This chemical arrangement puts the iron atom in a high-energy state, which can easily transfer two electrons to a single proton, a process known as reduction, to make a hydride.

    Less is known about how the nickel-iron enzyme works. “Certainly, something different is going on in the active site of the nickel-iron and iron-iron enzymes, and we think it has something to do with both the metal atoms and the arrangement of the ligands around it,” says Peters. “We need to better understand those differences in order to help those investigators who are trying to use these enzymes as models for making synthetic catalysts.”

    Yet even without that information, chemists are making substantial progress in creating catalysts that mimic what the enzymes' active sites do, but without all the surrounding protein. Rauchfuss, for example, has been working with some success on catalysts based on the iron-only active site. He and his co-workers started with an iron-iron core surrounded by carbon monoxide, cyanide, and a third molecule containing both sulfur and nitrogen. Adding a small amount of acid—a source of protons—led to some hydrogen gas production, but that reactivity died quickly.

    Reasoning that they needed to tweak the electronic properties of the iron atoms in the catalyst, the Illinois researchers replaced one of the cyanides with an analogous phosphorus-containing ligand. The resulting compound was “a rugged and efficient hydrogen-production catalyst,” says Rauchfuss. Its only drawback was that it required too much energy to drive the reaction, making it unsuitable for commercial development. Still, he says, “this is the first time we've gotten steady, significant hydrogen production from one of our model systems, so it's an encouraging first step.” His group is now attempting to substitute other ligands for some of the carbon monoxides and cyanides in order to better reproduce the active site's geometry around the iron-iron center.

    Meanwhile, Pickett's team is trying to reproduce the mixed iron oxidation states seen in the enzyme. The team's approach is to create molecules containing an iron-iron combo in which the carbon monoxide ligands are free to swing back and forth between the iron atoms in a way that mimics what's going on in the enzyme active site. One such system is able to copy the enzyme's function, but only at temperatures below −40°C. Still, Pickett is encouraged: “I think we're on the right track, because we're getting the right geometry and electronic state.” Pickett says his team will soon report an improved version of the catalyst.

    Eventually, chemists hope to create stable bimetal species that they can coat onto the surface of an electrode, in order to feed electrons to or harvest them from the catalyst. For making hydrogen, they hope to scale up the classic high school electrolysis experiment: running electricity through two electrodes stuck in a beaker of water to produce hydrogen at one electrode and oxygen at the other. Today, electrolysis uses too much electricity to be an economical source of hydrogen. But coating the hydrogen-producing electrode with a hydrogenase-derived catalyst might be just the ticket for making it cost-effective. Similarly, catalysts based on the nickel-iron hydrogenase could improve the hydrogen-oxidizing process that's central to hydrogen fuel cells, the presumed power plant for President Bush's car of the future.
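The electrolysis experiment mentioned above splits water at two electrodes. In acidic solution, the half-reactions are as follows (standard textbook chemistry, added here for illustration):

```latex
% Cathode (reduction): protons pick up electrons to form hydrogen
\mathrm{2\,H^+ + 2\,e^- \;\rightarrow\; H_2}
% Anode (oxidation): water gives up electrons, releasing oxygen
\mathrm{2\,H_2O \;\rightarrow\; O_2 + 4\,H^+ + 4\,e^-}
% Overall
\mathrm{2\,H_2O \;\rightarrow\; 2\,H_2 + O_2}
```

A hydrogenase-derived coating on the cathode would catalyze the first half-reaction, lowering the voltage, and hence the electricity, needed to drive it.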

    Last year, chemist Fraser Armstrong and his colleagues at Oxford University demonstrated that this approach works using purified nickel-iron hydrogenase. In work that Darensbourg calls elegant, the Oxford team showed that the enzyme adsorbed onto a graphite electrode was as good a hydrogen-splitting catalyst as platinum is. “It was a great proof of principle,” says Darensbourg. “It adds to the excitement of the field.”

    It's been only 5 years since chemists gained a clear picture of the unique active sites in the hydrogenases, but “we now have a good road map that synthetic chemists can use to make catalysts,” Darensbourg says. “This is now a doable problem.”
