News this Week

Science  11 Nov 2005:
Vol. 310, Issue 5750, pp. 952
  1. PUBLIC HEALTH

    Pandemic or Not, Experts Welcome Bush Flu Plan

    1. Jocelyn Kaiser

    The Bush Administration's proposed flu plan, calling for $7.1 billion to help prepare the nation for a deadly influenza pandemic, is generally winning plaudits from public health experts—but not necessarily because they think a pandemic is imminent. Even if no such disaster materializes, they say, the plan will finance a much-needed overhaul of the nation's regular flu vaccine infrastructure.

    When he announced the initiative last week, President George W. Bush noted growing concerns that the H5N1 avian influenza now spreading west from Asia could acquire the ability to be transmitted from human to human. In caveats sometimes lost in general press accounts, Bush and other officials emphasized that H5N1 might not morph into a pandemic strain. A plan is needed, they say, to combat the emergence of any superstrain of human influenza, an event that has happened three times since 1900 and that many think is inevitable in the next few years.

    The biggest chunk of the money, $2.8 billion, would be spent on what Bush called a “crash program” to speed cell-based vaccine technology. The goal is to be able by 2010 to manufacture a new vaccine for all Americans within 6 months should a pandemic strike. Another $2.5 billion would be used to stockpile existing vaccines and antiflu drugs. The two other components of the strategy are global surveillance and helping federal and state agencies prepare (see table). The funds would be appropriated all at once but spent over several years.

    The existing method for manufacturing flu vaccines is outmoded and slow. Virus used to make flu vaccine is grown in eggs, which have to be ordered in advance, and the entire process takes 9 months. The Administration's plan aims to accelerate the production of flu vaccines by growing seed viruses in cell culture instead of eggs. A half-dozen companies are working on cell-based flu vaccines, and one, Sanofi Pasteur, has already received $97 million from the Department of Health and Human Services (HHS). Bringing them to market and building capacity to make pandemic vaccine for all Americans will take 5 years, the Bush plan says. The challenges involve finding a cell line in which the virus grows well and optimizing it for growing high yields in 100,000-liter fermenters. “A lot of it is empirical,” says Gary Nabel of the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland. Any vaccine would also have to go through clinical trials for safety and efficacy, and the production process must meet regulatory standards.

    Precautionary principle.

    President Bush is calling for $7.1 billion to prepare the nation to deal with a possible influenza pandemic, which some experts think is inevitable.

    CREDITS (TOP TO BOTTOM): SOURCE: WHITE HOUSE; EVAN VUCCI/AP PHOTO

    Although “it's arguable” whether cell-based technology will shave much off the production time, it will allow “surge capacity” to make larger quantities, says Bruce Gellin, director of the National Vaccine Program Office at HHS, because companies won't be limited by the available supply of eggs. The target is 600 million vaccine doses, two per person.

    The plan also calls for stockpiling available vaccines and drugs. The government has already funded two companies to manufacture an experimental human H5N1 vaccine. Depending on how much the virus changes, this vaccine might offer some protection should H5N1 acquire the ability to infect people easily. About $1.5 billion is slated for HHS to buy 40 million doses (enough for 20 million people) by 2009 and for the Defense Department to buy vaccine as well. Bush also wants Congress to pass legislation to shield vaccine companies from lawsuits. Some Democrats oppose that step, but “you will never get companies to make hundreds of millions of doses” of vaccine without it, says immunologist Paul Offit of the University of Pennsylvania.

    Another $1 billion would buy enough of the antiviral drugs Tamiflu (oseltamivir) and Relenza (zanamivir) for 25% of the population. It is unclear how well Tamiflu would work against H5N1, HHS Secretary Michael Leavitt notes, but it is the only stopgap measure until a vaccine is ready. Similarly, the 25% figure is arbitrary—“pulled out of a hat,” says modeler Ira Longini of Emory University in Atlanta, Georgia. However, he says, because only one-third of the population would probably get sick overall, “for purely therapeutic use, 25% would probably be enough.” The Infectious Diseases Society of America in Alexandria, Virginia, has recommended a stockpile covering 40% of the population.

    Many other countries plan to stockpile Tamiflu as well, and it's unclear whether Roche, the only manufacturer, can meet demand. Some lawmakers are calling for Roche to allow other companies to license its Tamiflu technology. Roche said last month that it would work with other companies to meet the orders. It says it can fill the U.S. order by summer 2007 and plans to build its first U.S. plant.

    Outmoded.

    Existing methods to make flu vaccine, which involve growing the virus in eggs, are too slow; the plan would support a cell-based alternative.

    CREDIT: AVENTIS PASTEUR

    Another $251 million will be spent on helping other countries build their capacity to detect and respond to outbreaks. And $644 million will help federal agencies and states prepare.

    A separate, 396-page HHS plan, released on 2 November, describes the country's broad public health strategy and spells out who would receive scarce supplies of vaccines and antivirals. Health care workers, the elderly, others at high risk for influenza, and pregnant women would be among the first in line. If certain groups, such as young adults, prove to be more vulnerable to a pandemic strain, the list would be revised, says epidemiologist Arnold Monto of the University of Michigan, Ann Arbor, who advised HHS in developing the plan. The plan also updates draft scenarios, released in August 2004, that predict health care costs alone to the United States of $181 billion for a moderate pandemic. In a severe pandemic, 10 million Americans could be hospitalized, and 1.9 million could die.

    The Administration's plan quickly drew congressional fire. In several hearings, lawmakers complained that it would provide insufficient money to states, which are expected to help pay for the antiviral drugs. Others have expressed concern that the Department of Homeland Security will lead the response, rather than HHS, which has the appropriate public health expertise.

    The plan arrives as some experts are questioning whether the likelihood of a devastating pandemic is being exaggerated. Offit, for example, suggests that H5N1, which has sickened 120 people over the past 8 years and killed about half, would have spread from human to human by now if it were going to happen. But he and others are praising the Bush plan anyway because it will help reduce the toll from seasonal influenza. “We're seduced into this tsunami mentality,” Offit says. But if you add up annual deaths from influenza, he says, the numbers quickly approach pandemic estimates. Longini agrees: “I'm glad all this is happening, but not because of pandemic flu.”

  2. HURRICANE KATRINA

    Levees Came Up Short, Researchers Tell Congress

    1. Eli Kintisch

    When the levees protecting New Orleans gave way under the onslaught of Hurricane Katrina, the most common explanation at the time was that they simply weren't built to withstand a storm of such ferocity. But several teams of engineers told a Senate panel last week that poor design or construction bears much of the blame.

    The preliminary reports, from research teams supported by the National Science Foundation (NSF), the American Society of Civil Engineers, and the state of Louisiana, paint a clear picture of how the city's vital flood-protection system failed miserably. “If the levees had done what they were designed to do, a lot of the flooding would not have happened,” said civil engineer Raymond Seed of the University of California, Berkeley.

    The U.S. Army Corps of Engineers built most of the New Orleans flood-control system in the 1960s, including levees to withstand a Category 3 storm. Katrina blew in as a Category 4, packing winds up to 217 kilometers per hour. But although the storm surge on the city's east side, closer to the hurricane's eye, sent water over the tops of the levees, downtown areas to the west faced winds and surges consistent with a Category 1 hurricane, with a recorded peak of 139 km/h. Two teams of independent civil engineers presented field evidence that the western levees gave way before water reached their tops, supporting early speculation along those lines. “This was a preventable disaster,” said Ivor van Heerden, a hurricane expert at Louisiana State University (LSU) in Baton Rouge who is investigating the disaster for the state.

    Through the breach.

    Engineers say the 142-meter gap in the 17th Street levee was caused by a damaged foundation. It took 5 days to close.

    CREDIT: U.S. ARMY CORPS OF ENGINEERS

    The engineers told legislators that they couldn't determine whether the failures occurred because of poor design or bad construction. But their data are sure to play a role in any political recriminations and the expected surge of civil suits. “Many of the widespread failures throughout the levee system were not solely the result of Mother Nature,” said Senator Susan Collins (R-ME), chair of the Senate Committee on Homeland Security and Governmental Affairs, which conducted the 2 November hearing. “Rather, they were the result, it appears, of human error in the form of design and construction flaws.” The corps says that it's too early to draw any conclusions.

    On the night before the hurricane struck, LSU scientists issued forecasts that New Orleans would flood (Science, 9 September, p. 1656). But they assumed that the levees would not fail and predicted water only in the eastern areas. Van Heerden, using computer models, and the engineers, citing observations, concluded that water reached only 3.7 m up the 4.3-m levee walls lining the 17th Street and London Avenue canals. Independent modelers, led by civil engineer Joannes Westerink of the University of Notre Dame in Indiana, report similar initial results. “The water should be able to be filled chockablock to the top of the wall,” said coastal engineer and team member Tony Dalrymple. “[The levees] didn't fill, [but] they failed anyway.”

    The 17th Street Canal burst through its banks at about 10:30 Monday morning, possibly after water penetrated, eroded, or lubricated the soil below the walls. “It's kind of like a layer cake, and the whole thing slid,” says civil engineer Thomas Zimmie of Rensselaer Polytechnic Institute in Troy, New York, a member of the NSF effort. The levee became a bulldozer as the embankment slid 14 meters laterally, lifting and shoving trees, a shed, and a fence as water rushed in all around.

    Washout.

    An investigative team with the state of Louisiana has proposed three ways in which the 17th Street levee in New Orleans was undermined and breached during Hurricane Katrina.

    CREDIT: ADAPTED FROM DIAGRAM BY I. VAN HEERDEN

    Evidence found at the London Avenue Canal, which breached at about 9:30 a.m., suggested that sand deep below the concrete levee wall had become saturated and unstable, causing the levee to tip. Soil movement, the corps acknowledged in a prepared statement, “could have been a factor” in the breaches. Those failures led to flooding in areas including Lakeview, Gentilly, and downtown. Other breaches, caused by overtopping, led to more inundation.

    A fundamental factor in the strength of a levee—especially in swampy soil—is the depth of metal sheet piling driven deep below the levee as an anchor. Documents suggested that the sheet piling at the London Avenue Canal went about 5 meters down—half the depth found in other areas. But engineers said they lacked definitive data.

    Those testifying proposed several low-cost improvements including filling gaps between levee sections, more consistent construction standards, and a national board to inspect levees. Van Heerden also called for strengthening the levees to withstand a Category 5 storm, a more expensive fix. A joint report on the levee system by the corps and other federal agencies is due out in July 2006.

  3. ITER

    Fusion Leaders Make a Diplomatic Choice

    1. Daniel Clery

    CAMBRIDGE, U.K.—A Japanese diplomat has been chosen to head the International Thermonuclear Experimental Reactor (ITER) project, the world's most expensive scientific collaboration. Meeting in Vienna this week, representatives of the six international partners in the project—China, the European Union (E.U.), Japan, Korea, Russia, and the United States—tapped Kaname Ikeda to lead the $12 billion fusion project, which aims to build a reactor to recreate the sun's power source.

    Ikeda, currently Japan's ambassador to Croatia, has a degree in nuclear engineering and has held numerous positions in the Atomic Energy Bureau of Japan's Science and Technology Agency, the Ministry of International Trade and Industry, and the National Space Development Agency. “He has wide experience and seems to be an excellent choice,” says Chris Llewellyn Smith, head of U.K. fusion research.

    Since choosing a site earlier this year (Science, 1 July, p. 28), ITER negotiators have been drawing up an international agreement. Although this delicate process may continue well into next year, an E.U. source says that construction could begin at Cadarache in France as soon as a few weeks from now.

    However, delegates in Vienna failed to agree on the inclusion of India as a partner in the project. India had asked to join in July, but sources say that some ITER partners do not want India to have a prominent role because of its failure to sign the Nuclear Non-Proliferation Treaty.

  4. ASTROPHYSICS

    Surprise Neutron Star Suggests Black Holes Are Hard to Make

    1. Govert Schilling*

    Black holes may be harder to create than previously believed, according to an unexpected discovery made with NASA's Chandra X-ray Observatory. Researchers have long thought that any star more than 25 times the mass of our sun will end its life as a black hole. But Chandra's finding suggests that even a star of 40 solar masses may fail to create one. Because such massive stars are extremely rare, that raises the question of how stellar black holes form at all. “It's a surprising find,” says Gertjan Savonije of the University of Amsterdam, the Netherlands.

    When a massive star exhausts its nuclear fuel and goes supernova, it blasts its outer mantle into space. The remaining core collapses into a small, dense neutron star or a black hole: a region of space where gravity is so strong that not even light can escape it. Astrophysicists used theories of stellar evolution to peg 25 solar masses as the threshold above which anything will end up as a black hole. “There's a lot of guesswork involved,” says Savonije, because stars blow off varying amounts of gas into space during their lifetimes. Even so, the figure gave astronomers some idea of what to expect when viewing stellar corpses.

    Confounding fate.

    The stellar giant that produced this neutron star should have ended as a black hole, researchers say.

    CREDIT: NASA/CXC/UCLA/M. MUNO ET AL.

    So a team led by Michael Muno of the University of California, Los Angeles, was surprised to find the pulsating x-ray emission of a neutron star in a young, massive, compact star cluster known as Westerlund 1. In such clusters, the stars are thought to have been born all at the same time, in this case, about 4 million years ago. More-massive stars have shorter lives because they burn more fiercely, so the progenitor of the neutron star (designated CXO J164710.2-455216) must have been one of the most massive stars in the cluster. As there are 35-solar-mass stars still around in the cluster, the researchers guess the neutron star progenitor must have been at least 40 solar masses. In a paper accepted for publication in Astrophysical Journal Letters (arxiv.org/abs/astro-ph/0509408), Muno and colleagues conclude that some of the most massive stars do not become black holes as predicted.

    Why not? Frank Verbunt of Utrecht University in the Netherlands says that if the progenitor star had been part of a binary system, its companion could have siphoned off enough mass to keep the giant star from collapsing into a black hole. “Recent evolutionary calculations show that in a binary scenario, you almost always end up with a neutron star,” Verbunt says.

    Muno agrees that such a scenario is possible. But if the neutron star is still sitting in the cluster, its bloated binary partner should still be around too—yet, Muno says, infrared observations of the neutron star reveal no binary companions more massive than our sun. “It's still important to consider other reasons why some extremely massive stars won't collapse into black holes,” he says.

    • *Govert Schilling is an astronomy writer in Amersfoort, the Netherlands.

  5. NATIONAL SCIENCE FOUNDATION

    Board Suggests How to Thrive Under Stress

    1. Jeffrey Mervis

    When money's tight, it's important to let the experts call the shots—but make sure they aren't too conservative. That advice comes from the governing body of the U.S. National Science Foundation, which has drafted a long-term plan for running NSF without the promised doubling of its budget. The National Science Board's (NSB's) prescription: Give project managers more leeway, and don't let grants to large centers erode support for individuals.

    CREDIT: NSF

    Coincidentally, the board issued its plan 1 day before a key legislative spending panel approved a surprisingly generous 2006 budget for NSF. It's generous only in comparison to the president's requested 2.4% boost and earlier congressional action, however: The 3% increase will barely keep the agency ahead of inflation.

    Senator Kit Bond (R-MO) originally requested the report as chair of NSF's appropriations panel. Although his panel no longer has jurisdiction over the agency, NSB chair Warren Washington said the board wanted to reexamine NSF's policies anyway after concluding that current economic conditions had destroyed hopes of a 5-year doubling of NSF's budget spelled out in a 2002 reauthorization. “It's still a big disappointment to me that it hasn't happened,” Washington says, noting that NSF's budget would be about 50% larger by now if the doubling had begun on schedule. “But in the meantime, we wanted to emphasize what NSF needs to do to keep the country's basic science enterprise strong.”

    One essential step, according to the draft plan, “2020 Vision for NSF” (www.nsf.gov/nsb; NSB 05-142), would be to strengthen the hand of program officers in choosing from among a surfeit of good research proposals. It's part of the board's hunger for more “transformative” research (Science, 8 October 2004, p. 220): experiments with the potential to radically change a field rather than simply add incrementally to what is known. “Often a program officer will get a mixed set of reviews,” says Washington. “We shouldn't allow one negative review to kill a proposal if the program officer thinks that it's worth taking the risk.”

    The board also weighed in on the never-ending debate about the proper balance between the number, size, and duration of grants. It suggested that NSF should fund a larger and more diverse pool of researchers, even if it means suboptimal funding of individual grants and fewer large awards. And it challenged NSF administrators to “increase the impact” of its science education programs, a portfolio that the Bush Administration has tried to cut sharply in the past 2 years, by doing a better job of applying new research findings to the classroom.

    The board's ideas are “perfectly consistent with our initiatives,” says NSF Director Arden Bement. Program officers already have the ability to seed novel ideas, he notes. But Bement says he'd also welcome congressional authority to add staff, to relieve the growing workload and allow program officers to be even more creative.

    Last week's budget action, by House and Senate conferees, would give NSF an additional $164 million, for a total of $5.64 billion. NSF's research account would grow by $155 million, to $4.38 billion, and its education programs would drop by only $36 million, to $805 million, rather than by the $104 million cut requested by the Administration. The bill would fund all new research facilities requested in 2006 except for the high-energy RSVP project at Brookhaven National Laboratory in Upton, New York.

  6. U.S. HIGHER EDUCATION

    Schools Cheer Rise in Foreign Students

    1. Yudhijit Bhattacharjee

    The number of foreign students enrolling in U.S. graduate programs has gone up this fall for the first time since 2001. Educators attribute the 1% increase over last year, documented in a survey by the Council of Graduate Schools (CGS), to improvements in visa processing and see it as the reversal of a trend that began after the 2001 terrorist attacks. But they remain concerned that the United States may still be losing its attractiveness as a destination for students from around the world.

    The increase, reported by 125 institutions that responded to a survey of 450 schools, comes in spite of a 5% decline in international applications compared to last year. The number of students from China and India—the two largest sending countries—has risen by 3%, and enrollments from the Middle East are up by 11%. Engineering enrollments, a top draw among international students, are up 3%, and the physical sciences recorded a 1% rise.

    Renewed welcome.

    Faster U.S. visa processing has boosted graduate enrollments from the biggest pools of international talent and the Middle East.

    SOURCE: COUNCIL OF GRADUATE SCHOOLS; PHOTOS.COM

    “Visa problems—the main factor that hobbled enrollments in recent years—are clearly being addressed,” says CGS president Debra Stewart, who on a recent visit to the U.S. consulate in Beijing learned that approvals of student visa applications had shot up to more than 80% from less than 50% a year ago. “Things are moving in a good direction.”

    But Heath Brown, CGS's director of research and policy analysis and the author of the study, finds it troubling that a smaller fraction of international students—38% compared to 43%—chose to enroll after being accepted. “It's possible that some of these students are going to other countries,” says Brown. Stewart says one way for the United States to stay ahead in the global competition for talent would be to make it easier for foreign students with advanced U.S. degrees to gain permanent residency.

    John Martin of the Federation for American Immigration Reform in Washington, D.C., thinks that would be a bad idea. “We think the increased enrollment of foreign graduate students, particularly in science and math, has discouraged American students from pursuing scientific and engineering careers,” he says. “Foreign graduate students who end up with science and engineering jobs in this country tend to hold down salary increases in those fields, so it's natural for American students to pursue fields like law in which they see greater economic rewards.” As a consequence, Martin says, the United States is becoming more and more dependent on foreign scientific talent.

    Unlike Martin, most university officials are hoping this year's increase will turn out to be the beginning of an upward trend. Sherif Barsoum of the Office of International Education at Ohio State University in Columbus is particularly encouraged by the numbers from the Middle East. “The post-9/11 perception of U.S. campuses being unfriendly to foreign students seems to be fading,” he says.

  7. EPIDEMIOLOGY

    Russian Cancer Study Adds to the Indictment of Low-Dose Radiation

    1. Richard Stone

    A Cold War environmental calamity appears to be the cause of a spate of cancers in the Russian heartland. A landmark study this month by U.S. and Russian scientists blames excess cancers in the Ural Mountains on chronic exposures to radioactivity leaked from a weapons plant a half-century ago.

    The study is the latest blow to the notion that there is a threshold of exposure to radiation below which there is no health threat (and there might even be a benefit). The results add weight to last summer's report from the U.S. National Research Council, which backed the hypothesis that radiation is risky even at the smallest doses (Science, 8 July, p. 233). Although that conclusion had been inferred from Japanese atomic bomb survivors, the Russian study—along with a recent report revealing an elevated cancer risk in nuclear workers around the globe—provides the strongest direct evidence yet of chronic, low-dose health effects.

    Both sets of findings indicate that workplace radiation standards are correct in erring on the safe side. In 1991, the International Commission on Radiological Protection (ICRP) set a workplace limit of 20 millisieverts (mSv) per year, averaged over 5 years, a limit that assumes there is no safe level. “This is an endorsement of the precautionary approach as a tool for radiation protection,” says Lars-Erik Holm, director general of the Swedish Radiation Protection Authority and ICRP chair.

    Low-dose risks.

    A study of cancers in Muslyumovo (inset) and other villages near the radionuclide-laden Techa River points up the importance of limiting exposure to radiation in the workplace.

    CREDITS: REUTERS; (INSET) COURTESY OF A. AKLEYEV

    The new data come from villagers downstream from the Mayak weapons complex in the southern Urals, victims of the struggle for nuclear supremacy. From 1949 to 1956, they were exposed to a steady stream of plutonium production byproducts released into the Techa River. After the Soviet breakup, U.S. experts—including atomic bomb radiation expert Dale Preston, now at Hirosoft International Corp. in Eureka, California, and epidemiologist Elaine Ron of the U.S. National Cancer Institute—joined forces with colleagues at the Urals Research Center for Radiation Medicine in Chelyabinsk to scrutinize the health of 25,000 people who lived in 41 villages along the Techa between 1950 and 1952, when radioactivity releases climaxed, and nearly 5000 people who moved to these communities between 1953 and 1960.

    The biggest challenge has been getting a handle on individual radiation doses, which remain uncertain. The team has measured strontium-90, the most common downstream radioisotope, in teeth from scores of subjects and conducted whole-body counts of strontium and cesium-137. They have at least one strontium measurement for more than a third of the villagers.

    According to death certificates, 1842 villagers died from solid tumors other than bone cancer, the prevalence of which would have been skewed by strontium-90. And 49 died from leukemia, not counting chronic lymphocytic leukemia (CLL), which is not thought to be triggered by radiation. The researchers attributed deaths above the background rate to radiation—46 from solid cancer (2.5%) and 31 from leukemia (63%). The risks increase with estimated dose, the team reports in the November issue of Radiation Research. “People were hoping that the risks would be a lot lower,” says Lynn Anspaugh, now at the University of Utah, Salt Lake City, who helped with the study's dosimetry.

    The figures, although alarming, are in line with the largest study of nuclear power workers ever carried out. A team led by Elisabeth Cardis of the International Agency for Research on Cancer in Lyon, France, pooled data on more than 400,000 plant workers in 15 countries. In this group, 6519 have died from solid cancers and 196 from non-CLL leukemias. The finding suggests that between 1% and 2% of the deaths may be due to radiation, the team concluded in the 29 June issue of the British Medical Journal.

    It's an “impressive study,” says Holm, although he and others flagged a shortcoming: Smoking may account for a large share of deaths attributed to radiation. In the study, the risk of smoking-related tumors—primarily lung cancers—is much higher than for other solid cancers. Cardis points out that the paper acknowledges smoking as a confounding factor.

    “Although smoking may play a role in the increased risk of all cancers excluding leukemia, it is unlikely to explain all of the increased risk observed,” she says. Future publications will address concerns about the study's methods, she says. Although the Cardis study has been challenged, exhibit B in the low-dose indictment, the Techa River study, provides corroborating evidence. The two studies come to “practically the same conclusion,” says Peter Jacob of the Institute of Radiation Protection in Neuherberg, Germany. That means that the 20-mSv standard is unlikely to budge, despite arguments from industry that it is too stringent. The Russian results are “a setback for those who hope for a relaxation of the standards,” says Anspaugh. The United States is one of the few nations that does not use the ICRP standard; it permits exposures up to 50 mSv per year.

    In practice, most nuclear industry workers are exposed to far less radiation than the ICRP limit. That's a good thing: The average lifetime dose in the Cardis power plant study was only 19.4 mSv, with less than 0.1% of workers receiving more than 500 mSv. Calculations in the Techa study suggest that the vast majority of villagers received less than 50 mSv of lifetime accumulated dose to the stomach. In light of ongoing efforts to refine these estimates, says Urals center director Alexander Akleyev, cancer risks should be viewed as “preliminary.” Jacob agrees: “This is not the final word,” he says.

  8. BIOGEOGRAPHY

    Is Everything Everywhere?

    1. John Whitfield*
    1. John Whitfield, a science writer based in London, is the author of the forthcoming book In the Beat of a Heart: The Search for a Unity of Nature.

    Researchers have dug up some surprising evidence that casts doubt on the long-held belief that microbes are impervious to geographic constraints

    How would the world look if marsupials, instead of being confined to the sanctuary and prison of Australia, had been forced to confront every other carnivore, tree-climber, burrower, and grazer on the planet? Our ideas about ecology, evolution, and history would be quite different had they been derived from studying animals that had crossed oceans, mountains, or deserts to seek out suitable environments. Polar bears in the Antarctic? Penguins in Alaska? Chimpanzees in Amazonia? Kangaroos on the Serengeti?

    The picture seems far-fetched. Yet for about a century, microbiologists have believed that the organisms they study are unhindered by geographic boundaries, traveling the world and thriving wherever they find their preferred environment—be it hot springs, freshwater ponds, or rotting fir trees. That view gives researchers who study microbes a rather different perspective on the world. As the Dutch biologist Lourens Baas-Becking put it in 1934: “Everything is everywhere; the environment selects.”

    Or maybe not. In the past few years, many microbial ecologists have come to believe that microbes are not infinitely mobile. Baas-Becking's dictum is really only “an assumption,” says Jessica Green of the University of California, Merced. “It's based on a confusion of hypotheses for facts.”

    DNA studies have given us a more detailed picture of microbial diversity that, some argue, demands a more nuanced view of microbial ecology. Those nuances have spawned a debate over what the DNA data actually show, and how a molecular view of microbial diversity can be compared with our species-based view of plant and animal ecology. Answering those questions, in turn, will help scientists better understand the crucial role played by microbes in keeping our ecosystem livable.

    On Priest Pot

    Bland Finlay of the Centre for Ecology and Hydrology (CEH) in Dorset, U.K., has spent a quarter of a century building up evidence to support Baas-Becking's view of microbial ubiquity, much of it gathered in a small lake in northern England called Priest Pot. For example, he has found that a mere 25 microliters of sediment from Priest Pot contains 40 of the world's 50 known species of the protozoan Paraphysomonas. What's more, each species' abundance in the sample matches its abundance worldwide. Everywhere he goes, Finlay finds identical ciliates: “There's no convincing evidence for endemic species,” he says. “I see the same ones in Scotland, New Zealand, and central Africa.”

    Lake effect.

    Bland Finlay has plumbed Cumbria's Priest Pot for a quarter-century of discoveries involving ciliate protozoa.

    CREDITS: (TOP) T. J. BEVERIDGE/VISUALS UNLIMITED; (BOTTOM GROUP) B. J. FINLAY/NERC

    The main cause of microbes' ubiquity is their vast populations, says Finlay. Although a specific ciliate is extremely unlikely to make a long journey, there are so many of them that some inevitably will hitch a ride via wind, water, a bird's foot, or a clump of floating vegetation. Many can tolerate a wide range of environments—salt- and freshwater, for example—and they have an astonishing ability to hunker down in harsh environments until their moment arises.

    Cultured in its native conditions, a gram of Priest Pot sediment yields 20 species of ciliate protozoan. But when Finlay's team tested that sediment in the lab under different conditions—altering salinity, temperature, illumination and so on—it found 137 species. And the total keeps rising. He thinks those findings argue strongly for the idea that the lake contains not only all the species adapted to its conditions but also a “seed bank” of many others that have arrived and survived, but not thrived. Everything seems to be everywhere, even if it is not immediately obvious. “There is no biogeography for anything smaller than 1 millimeter,” he says.

    But Green believes that our understanding of microbial diversity is too sketchy to support such statements. “[Finlay] has shown that there's similarity at certain points, but the sampling effort on a global scale isn't enough to make these sweeping generalizations,” she says. Green is one of a wave of researchers using molecular studies to probe the patterns in microbial diversity. The ability to sequence DNA samples from the environment has allowed scientists to detect far more than the 1% of microbes that can be cultured in the laboratory. It has also revealed how they vary from place to place.

    Studying ascomycete soil fungi in the Australian desert, Green and her colleagues have found that the genetic differences between fungi from different locations increase with distance. Others have found that archaea of the genus Sulfolobus living in the hot springs of Yellowstone and Lassen U.S. national parks, for example, are more similar to each other than to those found on Russia's Kamchatka peninsula. “We are beginning to see biogeographic patterns in microorganisms,” says Claire Horner-Devine of the University of Washington, Seattle, lead author of a study of New England salt-marsh bacteria with similar results. “There will be organisms that are global and can get anywhere, and you'll also find ones that don't have those ranges.”

    Biologists studying plants and animals realized 150 years ago that the number of species found in a patch of habitat climbs as the area of the patch increases, a biogeographic pattern called the species-area relationship. A CEH team led by Christopher van der Gast recently argued that the same held for the bacteria living in oil-filled sump tanks in engineering machines and in water-filled tree holes in Amazonia. This seems to contradict the “everything-is-everywhere” view, in which the relationship between a place's area and the microbial species it contains is essentially flat.
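
    In the standard ecological shorthand, the species-area relationship is a power law; as a rough sketch in generic notation (not figures drawn from these particular studies):

    S = c A^z

    where S is the number of species found in a patch of area A, and c and z are empirically fitted constants (z is typically around 0.2 to 0.35 in plant and animal surveys). The “everything-is-everywhere” view amounts to predicting a z near zero for microbes, because sampling a larger patch should turn up essentially no new species; a clearly positive z, as the sump-tank and tree-hole results suggest, would count against it.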

    One complication is that limited dispersal is not the only thing that could create geographic variation in microbes. A big challenge is to separate the effect of environmental heterogeneity—which everyone accepts will cause biological differences—from divergences caused by dispersal. Finlay and his colleague Tom Fenchel of the University of Copenhagen, Denmark, have argued that van der Gast's tree-hole study found more diversity in larger sites because larger sites are environmentally more heterogeneous, not because they are easier to disperse into or harder for populations to go extinct in.

    Digging in.

    Jessica Green pursues Nitrosomonas bacteria at two high-altitude locations in Chile.

    CREDITS (TOP TO BOTTOM): (INSET) DENNIS KUNKEL MICROSCOPY INC.; PETRA WALLEM; JENNIFER DUNNE

    “The next frontier is to figure out whether the patterns are due to environmental selection or to evolution and diversification,” says Jennifer Hughes of Brown University. She says a handful of published studies so far show geographical patterns when environmental differences are controlled for.

    Phenotype matters

    More vexing is the issue of how to make sense of the molecular data themselves. Some believe that microbes seem ubiquitous because our view of them is blurry. Many studies assign microbes to different species if their ribosomal DNA is less than 97% identical. If that were done with animals, Green points out, all primates from humans to lemurs would likely be lumped into one category—creating a group with far more cosmopolitan distribution and habits than any of the species erected by traditional naturalists. What's needed, she says, is a study that would detect whether and how the patterns in microbial diversity compare with those seen in plants and animals—at scales from a cubic centimeter to intercontinental. She aims to do this for the bacteria in Mediterranean-type ecosystems in Chile, California, and South Africa.

    But Fenchel believes that simply comparing DNA sequences misses biological reality. Microbial species tend to be very old, he says, and have accumulated a lot of “neutral” genetic variation that has no evolutionary effect. If you look hard enough, he argues, every individual will be different.

    Fenchel favors classifying microbes by what they look like and what they do. “The molecular data are super, but you shouldn't forget the phenotype—and some of the molecular chaps do,” says Fenchel. “A couple of years ago, people thought genetic analysis was the bees' knees and that it would clear up all the questions,” adds Finlay. “I don't think this is true at all.” And many microbiologists believe that the ability of distantly related bacteria to swap DNA may further confuse our picture of their diversity and distribution.

    The debate about microbial biogeography is about more than how many bacteria can dance on the head of a pin. Microbes support the visible living world and provide trillions of dollars' worth of ecosystem services for free, cleaning air and water and keeping soil fertile and healthy. They are a critical component of efforts to restore degraded ecosystems. As pathogens, they help regulate the populations of plants and animals, and their absence may be one factor behind the success of invasive species.

    To understand these processes, says Horner-Devine, we must understand microbes' ecology and how they will respond to stresses such as climate change and pollution. “To know how we're affecting these communities, we need to know what the patterns in spatial and temporal variations are,” she says. Such knowledge will help build a biology that applies to all life on Earth.

    “Comparing microorganisms with plants and animals will highlight where we see patterns and processes that could be the same for all domains of life,” says Horner-Devine. “That would be pretty phenomenal.”

  9. PROFILE: SUSAN GREENFIELD

    The Baroness and the Brain

    1. John Bohannon*
    1. John Bohannon is a writer in Berlin, Germany.

    Best known for her popular writing, neuroscientist Susan Greenfield has launched a new center at Oxford to investigate consciousness

    OXFORD, U.K.—Chatting amicably around a long oval table sit a couple of dozen researchers interested in how the brain works. This is the first gathering of the Oxford Centre for the Science of Mind, an ambitious project involving people with a diverse set of skills and interests. Today's first order of business is to choose a keynote speaker for a conference on consciousness next year. All eyes turn to a commandingly tall woman with leonine features, Director Susan Greenfield, as she throws out a suggestion: “How about the Dalai Lama?” There are chuckles around the room, but it soon becomes clear that Greenfield is serious—and that she could probably make it happen.

    A neuroscientist at Oxford University for 30 years, a politician, and a celebrity, Greenfield rose through the academic ranks like a bottle rocket, but she didn't stop there. Over the past decade, she has become a household name in the United Kingdom, the author of 10 popular science books, the host of a TV series about the brain, and the first woman director of London's Royal Institution, a 200-year-old venue for the public understanding of science. Along the way, she has been tapped as a scientific adviser by both the U.K. and Australian governments. In 2001, she became a lifetime member of the U.K. House of Lords with real decision-making power.

    “She has been immensely energetic and effective,” says Martin Rees, an astrophysicist and Master of Trinity College at the University of Cambridge, U.K., “expounding and debating scientific ideas and issues to a wider range of audiences than most scientists ever reach.” But critics say Greenfield's ascendancy has been fueled by self-promotion rather than published research. They grumble that she appears to have left real science behind without delivering on the promise of her early ideas.

    Science rock star

    When it comes to the media, most scientists are shy creatures, preferring the snail's pace of peer-reviewed journals to the glamour—or terror—of a 30-second TV interview. Not Greenfield. She comes alive in the spotlight. “I get a terrific kick out of engaging with the public,” she says. “As an academic, you just sit there with all the ideas and have very little influence, … but I'd rather see my ideas translate to policy that makes a difference in people's lives.”

    Big agenda.

    Dubbed Britain's “14th most powerful woman” by the press, Susan Greenfield is a skilled attention getter—here in an appeal for new high-tech ventures.

    CREDIT: PHOTO BY ALAN BURLES, AS PUBLISHED ON http://www.spaceforideas.uk.com/

    Greenfield's early career gave no clue that the neuroscientist, now 55, would become “the 14th most powerful woman in the U.K.” and one of “the 300 most influential people in the world,” as two British newspapers have ranked her. Her research has centered on a workhorse molecule of the nervous system called acetylcholinesterase (AChE).

    “She first made a name for herself with a very bold idea about AChE,” says Hermona Soreq, a neuroscientist at the Hebrew University in Jerusalem, Israel; namely, that the enzyme might be a link between several neurodegenerative diseases—Alzheimer's, Parkinson's, and possibly also motoneuron disease—“but not as an enzyme.” Whereas an enzyme's job is to catalyze a chemical reaction—AChE splits the neurotransmitter acetylcholine into choline and acetic acid—Greenfield proposes that AChE does more: It may also interact with proteins to stimulate neuron growth during development, and this pathway may become deranged in the adult brain, she believes, leading to neuronal death and other symptoms shared by neurodegenerative diseases. “If her idea turns out to be true, it would be an amazing breakthrough,” says Jean Massoulie, a neuroscientist at the National Center for Scientific Research in Paris, France. But, he adds, “in my view, it is still not proven that AChE even has nonenzymatic roles.”

    Everything changed for Greenfield in 1994 when she was invited to give the annual Royal Institution (RI) Christmas lecture on television, the first woman to do so. Soon after that lively presentation on brain function, she says, “one thing just led to another.” She began writing regular columns for newspapers, weighing in on hot topics such as whether marijuana should be legalized—Greenfield believes not—and producing popular books about the brain. Greenfield became a familiar face on television. She even appeared in the U.K. tabloid magazine Hello!

    In 1998, Greenfield was tapped to be director of the RI—again, the first woman so honored—running Britain's oldest institute for showcasing science. In 2001, a committee of U.K. politicians appointed her a member for life of the House of Lords as part of an effort to include nonpolitical experts in the legislative branch. Now known as the Baroness of Otmoor in the County of Oxfordshire, Greenfield can vote on laws, although she says her “most important contribution there is to take part in debates.”

    But Greenfield is interested in more than talk; she wants to put ideas into action. One of her initiatives, called the Science Media Center, offers briefings for journalists on scientific issues and rallies researchers willing to be interviewed on short notice. “It makes a tremendous difference,” says David King, a chemist at the University of Cambridge and the U.K. government's chief science adviser, particularly with fast-breaking news, such as the current threat of an avian influenza pandemic, in which disinformation can cause panic.

    Greenfield is now working on a plan to establish a Science Peace Corps in the United Kingdom modeled on the U.S. Peace Corps. Scientists would spend a year or two in the developing world, broadening their horizons while sharing their expertise.

    Meanwhile, Greenfield, who is single, says she still maintains a research laboratory at Oxford, when she isn't flying around the world to collect honorary degrees—28 so far—or achievement awards. Her day begins at 5 a.m., but still, she says, “life is too short.”

    At odds with her peers

    Widely admired by the public, Greenfield nevertheless gets mixed reviews from her scientific peers. Although she has become one of the United Kingdom's high profile “science ambassadors,” says King, she has taken an unusual path. “A good comparison,” he says, “is Lord [Robert] May,” an Oxford biologist who was also appointed to the House of Lords in 2001. “Everyone considers him to be one of the most important epidemiologists in the world, but when people are asked about Susan's background, they falter.”

    Greenfield's new venture into the field of consciousness research is raising more hackles. Her Oxford Centre for the Science of Mind (OXCSOM) has received $2 million in start-up funding from the U.S.-based Templeton Foundation (Science, 21 May 1999, p. 1257), and she could receive a further $10 million next year. Greenfield admits she has never done an experiment involving consciousness, although she has described her theory for how the activity of neurons creates individual minds in her popular books, which she describes as “the work I am most proud of.”

    In a nutshell, Greenfield argues that consciousness is generated by “highly transient assemblies of brain cells that wax and wane in size, from one moment to the next,” and the larger the assemblies, the higher the level of consciousness. She uses the analogy of a stone dropped into a pond, with associations between neurons rippling out from a “trigger.”

    Greenfield gave a speech about her idea at the annual meeting of the Association for the Scientific Study of Consciousness, held at the California Institute of Technology (Caltech) in Pasadena in June. “It went extremely well,” she told Science after the meeting, but some in her audience painted a different picture. Patrick Wilken, a psychologist at Otto von Guericke University in Magdeburg, Germany, and one of the conference organizers, says people complained that Greenfield's lecture was insubstantial—for example, some felt that “talks like this lower the perception of consciousness as a serious field of academic study.” Christof Koch, a Caltech neuroscientist who chaired the meeting, calls Greenfield “an excellent public speaker” but says her talk had “very little science” and focused more on metaphors than testable hypotheses.

    CREDIT: COURTESY OF S. GREENFIELD

    Greenfield calls the assessment unfair and claims she is being “held to a different standard” from others, perhaps “because I'm a neuroscientist and most of the others were cognitive scientists.”

    Wilken disagrees. A decade ago, “there were a number of researchers asserting [that they could] solve the problems of consciousness without having a great deal of data to back up their claims,” he says, but “things have moved a long way since then, and people who make statements like this today without having let their ideas go through the normal scientific practice of peer review are generally ignored.”

    But Greenfield plans to get data to back her ideas with the help of OXCSOM. One of its research aims is “to test Susan's theory,” says John Stein, an Oxford neurophysiologist and one of OXCSOM's core group of researchers, although “obviously we won't solve the problem of consciousness in a matter of months.” In line with the religious interests of the Templeton Foundation, which bankrolls OXCSOM, its initial focus is on “the physical basis of beliefs.”

    For example, Oxford neuroscientist Irene Tracey is investigating whether religious beliefs affect pain tolerance. The pain is delivered to volunteers in the form of heat or a chili paste applied to the arm. Subjects who identify themselves as “deeply religious” use rituals to cope, such as praying, whereas nonreligious subjects just grit their teeth. Meanwhile, she uses functional magnetic resonance imaging to observe patterns of brain activity during the ordeal.

    Capturing the brain's reaction is the easier part of the experiment, she explains, because it is readily detected. But to determine “how deep” beliefs are or “how much” pain is experienced, she must rely on reports from the subjects themselves. That subjective aspect is both a pro and a con. Although it can make comparisons very difficult without carefully chosen controls, it is also “exactly the aspect that we're trying to figure out,” she says. “Pain is an incredibly flexible phenomenon, depending on your perceptions, expectations, and degree of self-awareness,” all ingredients of consciousness. And on the practical end, determining the mechanisms that might dampen pain for a believer could lead to better therapies for everyone.

    Whether grappling with slippery concepts such as belief will bring us closer to understanding consciousness is an open question. “But even if the project fails in its ultimate aim,” says Erik Myin, a philosopher of consciousness at the University of Antwerp, Belgium, it could reveal how to convert such “big questions” into ones that can be scientifically validated.

    But judging Greenfield on her own research may be missing the point. “She's gutsy and an inspiration” to younger scientists, says King. And among the public, “her ability to communicate that science is fun and creative” and that “you don't have to be a boring fuddy-duddy wearing tweed skirts” is vital, says Stein. He says he can measure her impact every year in “the number of girls applying to do medicine or neuroscience who've said they've been enthused by Susan's lectures or books.” Even if she doesn't crack consciousness, he says, Greenfield has already made an enormous contribution.

  10. EVOLUTION

    Ancient DNA Yields Clues to the Puzzle of European Origins

    1. Michael Balter

    DNA from prehistoric farmers adds fuel to a long-simmering debate over the ancestry of living Europeans; divergent male and female histories may help explain the contradictory data

    In 2000, archaeologists uncovered a well-preserved male skeleton at an early farming site at Halberstadt, northwest of Leipzig, Germany. The skeleton was lying on its left side, its legs and arms tightly flexed, with three pottery bowls buried with it. The man had belonged to a central European culture called the Linearbandkeramik (LBK), characterized by large longhouses and distinctive pottery featuring sweeping striped designs. The LBK people, the first farmers known to occupy central Europe, arose in modern-day Hungary and Slovakia about 7500 years ago and within 500 years had spread as far west as France and as far east as the Ukraine.

    Dead end?

    This prehistoric farmer from Germany had a DNA variant that is very rare today, suggesting that his farming techniques may have spread much farther than his genes.

    CREDIT: LANDESAMT FÜR ARCHÄOLOGIE SACHSEN-ANHALT, HALLE/SAALE, GERMANY

    For decades, researchers have studied the LBK culture for clues to how farming spread across Europe, an issue that is key to tracing the origins of Europe's now 700-million-strong population. The archaeological evidence shows that farming was introduced into Greece and southeast Europe from the Near East more than 8000 years ago, then spread west and north to the Atlantic Ocean. But did the farmers themselves move across Europe, in a migration of people and their genes? Or was the chief movement one of culture, as Paleolithic hunter-gatherers—whose ancestors arrived on the continent as long as 40,000 years ago—adopted farming?

    Many studies over the past 2 decades have sought to test these hypotheses, often focusing on the DNA of modern Europeans in an attempt to trace their heritage. But the data have been conflicting. Now, a paper on page 1016 of this issue offers the first direct look at the DNA of early farmers themselves, including a sample from the Halberstadt skeleton. Anthropologist Joachim Burger and graduate student Wolfgang Haak of Johannes Gutenberg University in Mainz, Germany, and their colleagues found that many LBK farmers carry a mitochondrial DNA (mtDNA) type rarely found today, implying that they left little genetic legacy in living Europeans. The new data clash with some earlier studies, including Y chromosome analyses of living Europeans, which suggest that early farmers with roots in the Near East made a deep imprint on the European genome.

    Because the Y chromosome is inherited through the male line, and mtDNA is passed down through women, some researchers now think that different genetic destinies of men and women could reconcile the data—and perhaps even the European origins debate. “A simple explanation for the difference is that indigenous hunter-gatherer females intermarried with [early] farmers,” says Alexander Bentley, an anthropologist at the University of Durham, U.K.

    The model of farming spread by migration, called demic diffusion, was formally proposed in 1984 by archaeologist Albert Ammerman and geneticist Luigi Luca Cavalli-Sforza. It postulates that large numbers of colonizing farmers spread across Europe, mating with some of the hunter-gatherers already there and displacing the rest through rapid local population growth. These growing populations then provided colonizers for still more movements west and north. Many early and some recent studies have supported the idea. For example, a widely cited 2002 paper by geneticist Lounès Chikhi of Paul Sabatier University in Toulouse, France, tracked Y chromosome variation in living Europeans and concluded that indigenous hunter-gatherers contributed less than 50% of the genes of modern Europeans; most genes, Chikhi reported in the Proceedings of the National Academy of Sciences, came from the colonizing farmers.

    But other researchers argued that there was little evidence that early farmers had undergone the kind of explosive population growth required by demic diffusion. Archaeologist Marek Zvelebil of the University of Sheffield, U.K., proposed an alternative model in which some colonization took place in certain areas—perhaps including the LBK region in central Europe—but that farming then spread mostly via local adoption rather than further movements of the original colonizers. A number of recent genetic studies have supported this model. For example, one study concluded that less than 25% of the mtDNA gene pool of modern Europeans could be traced to incoming early farmers (Science, 10 November 2000, p. 1080).

    To try to get around this stalemate, Burger and Haak's team zeroed in on the mtDNA of ancient Europeans. The team tried to extract mtDNA from 57 individuals buried at 16 early farming sites, most from the LBK culture, and dated between 7000 and 7500 years ago. They succeeded with 24 of the skeletons. Moreover, the team found that six of the 24 skeletons had an mtDNA variant, called haplotype N1a, that is now very rare worldwide. Thus an apparently widespread mtDNA variant in early European farmers has left almost no trace on living Europeans, a finding the authors interpret as support for the cultural diffusion model.
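    The numbers behind that inference are simple. The back-of-the-envelope Python sketch below shows why the contrast is striking; the 6-of-24 count comes from the study, but the modern sample size and carrier count are illustrative assumptions, not data from the paper.

```python
from math import comb

# Figures reported in the article: 6 of the 24 sequenced LBK skeletons carried N1a.
ancient_carriers, ancient_n = 6, 24
ancient_freq = ancient_carriers / ancient_n
print(f"N1a frequency among the sampled early farmers: {ancient_freq:.0%}")  # 25%

def prob_at_most_k(k: int, n: int, p: float) -> float:
    """Binomial probability of observing at most k carriers among n people."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Illustrative assumption (not from the paper): survey 1,000 modern Europeans and
# find at most a couple of N1a carriers. If early farmers had left a large maternal
# legacy and the variant had stayed anywhere near its ancient frequency, that
# outcome would be astronomically unlikely.
print(prob_at_most_k(2, 1_000, ancient_freq))  # effectively zero
```

    The real argument is more involved, because genetic drift and later migrations can also erode a variant's frequency, which is precisely the objection raised below.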

    Indeed, proponents of cultural diffusion hail the results. “It really does seem that [early farmers] must have left far fewer descendants than one might expect, given the apparent archaeological impact of the LBK at the time,” says Martin Richards of the University of Leeds, U.K. Agrees Zvelebil: “This is a very important step forward. … It bypasses all the problems of extrapolation from modern DNA.” But he cautions that to completely prove its case, the team should extract ancient DNA from early farmers in the Near East to see if they also have high frequencies of the N1a haplotype, as well as from hunter-gatherer skeletons in the LBK region to see if they have low frequencies.

    Those who favor demic diffusion aren't yet convinced, however. “The authors are rather impatient in drawing conclusions,” says Cavalli-Sforza, who thinks the results can't be properly interpreted without knowing the farmers' Y chromosome sequences too. Chikhi adds that the authors have not entirely ruled out the possibility that today's low N1a frequencies are due to chance loss of the variant. And ancient DNA pioneer Svante Pääbo of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, warns that ancient DNA studies of modern humans are notoriously unreliable because of the problem of contamination with living people's DNA. “In our experience, results from the majority of ancient human samples are irreproducible,” he says.

    All this leaves researchers trying to sort out the conflicting data. Most studies of mtDNA have supported the cultural diffusion hypothesis, whereas the Y chromosome data seem to favor a movement of people themselves. The idea that colonizing farmers married local hunter-gatherer women might resolve the conflict between the mtDNA and Y chromosome data and also explain the team's results, argues Bentley. “Intermarriage of farmers with indigenous women would reduce N1a in subsequent generations,” he points out. Cavalli-Sforza agrees that intermarriage is a possibility, noting that men may have mated with more than one woman and that the typical LBK longhouse often had three or four hearths. “Polygynic families are a very reasonable explanation” for this architectural arrangement, he says.

    For Zvelebil, the contradictory results probably indicate that neither large-scale migrations nor cultural diffusion can explain everything that happened in Europe during the adoption of agriculture. Rather, he suggests, the contribution of each of these processes probably varied from region to region: “Our prehistory was far more complicated and fascinating than either of these models allow for.”

  11. DRUG RESEARCH

    Trying to Catch Troublemakers With a Metabolic Profile

    1. Gunjan Sinha*
    1. Gunjan Sinha is a writer in Berlin, Germany.

    Drug discovery and toxicity research are just two areas that could benefit enormously from the use of new “metabonomic” techniques

    Only one in 10 potential drug compounds ever makes it to market; the others are rejected as either too risky or ineffective. Companies have dreamed of making screening processes more efficient—and now researchers may have a way to do it. They're developing a technique based on “metabonomics,” using metabolic profiles to identify toxicities rapidly and analyze the likely effects of unknown compounds. The strategy got a boost last month when several companies that had previously backed researchers at Imperial College London—including Bristol-Myers Squibb (BMS) and Pfizer—signed up to extend the work.

    Metabonomics—the study of metabolic changes in urine, serum, or tissue after an organism has been exposed to a drug or other stressor—is decades old in concept. But measurement tools have become more sophisticated, making it possible to analyze data from multiple, small samples and make associations at high speed.

    “It is a very powerful technology,” says Bruce Carr, director of pharmaceutical candidate optimization at BMS, who has been collaborating with Imperial College researchers. Although studies suggest that companies already catch 90% of adverse effects before a drug application is submitted to the U.S. Food and Drug Administration (FDA), he believes metabolic profiling might help detect them earlier because it gives a snapshot of an organism's entire physiology.

    Toxic signature.

    The CLOUDS program creates a set of references based on animals' responses to liver-toxic (blue) and kidney-toxic (red) compounds over time.

    CREDIT: ADAPTED FROM FIGURE COURTESY OF TIM EBBELS/IMPERIAL COLLEGE

    The teamwork began 5 years ago when six pharmaceutical companies including BMS and researchers at Imperial College formed the Consortium for Metabonomic Toxicology (COMET). Their goal was to develop a database of known toxins and their metabolic signatures from animal tests, to which experimental drugs with unknown toxicity could be compared. Rats were dosed at a separate facility with more than 100 toxic compounds, one per animal. The Imperial College researchers scanned urine and serum samples through nuclear magnetic resonance (NMR) spectroscopy and other technology to generate metabolic profiles of the animals.

    The COMET researchers then used a computer program they had developed to assess which organs were affected. Called Classification of Unknowns by Density Superposition (CLOUDS), the software compared the NMR data—typically a characteristic pattern of peaks corresponding to known or unknown metabolites—to an existing database of profiles. Tissue samples went to histology researchers for confirmation of the NMR findings. Preliminary results, published in Analytica Chimica Acta in 2003, demonstrated that the method could group samples accurately by affected organ: liver toxicity with up to 77% accuracy, for example, and kidney toxicity with up to 90% accuracy.
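    The details of CLOUDS belong to the consortium, but the core idea of classification by density superposition can be sketched. In the minimal Python example below, each reference spectrum contributes a Gaussian kernel, the kernels are superposed into one density per toxin class, and an unknown profile is assigned to whichever class gives it the higher density. The class names, kernel width, and synthetic “spectra” are illustrative assumptions, not the consortium's data or code.

```python
import numpy as np

def class_density(profile, references, bandwidth=1.0):
    """Superpose Gaussian kernels centered on each reference profile and
    evaluate the resulting density at the unknown profile."""
    diffs = references - profile                      # shape: (n_refs, n_peaks)
    sq_dist = np.sum(diffs**2, axis=1)
    return np.mean(np.exp(-sq_dist / (2 * bandwidth**2)))

rng = np.random.default_rng(0)
n_peaks = 50  # toy stand-in for binned NMR peak intensities

# Illustrative reference libraries: profiles from animals dosed with known
# liver-toxic and kidney-toxic compounds (synthetic data for this sketch).
liver_refs  = rng.normal(loc=1.0,  scale=0.3, size=(30, n_peaks))
kidney_refs = rng.normal(loc=-1.0, scale=0.3, size=(30, n_peaks))

# An unknown compound whose profile happens to resemble the liver-toxic group.
unknown = rng.normal(loc=1.0, scale=0.3, size=n_peaks)

densities = {
    "liver":  class_density(unknown, liver_refs),
    "kidney": class_density(unknown, kidney_refs),
}
print(max(densities, key=densities.get))  # -> "liver"
```

    In practice the kernel bandwidth and the preprocessing of the spectra (binning, normalization) would have to be tuned carefully, which is part of what the consortium's validation against histology was for.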

    More significant, however, was the program's ability to crunch data taken at intervals in long-term studies. Researchers analyzed urine and serum samples at various times from the moment an animal was dosed with a toxin through its recovery. The program used probability calculations to assign the effects seen in each animal to the most likely toxin class and to identify when the compound caused the toxicity.
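    That longitudinal analysis can be pictured as computing class probabilities at each sampling time and watching when the toxin class overtakes the controls. The short sketch below is schematic only: the timepoints and density scores are invented, and the real studies worked from NMR profiles of urine and serum.

```python
def class_probabilities(densities):
    """Normalize per-class density scores (as in the sketch above) into probabilities."""
    total = sum(densities.values())
    return {c: d / total for c, d in densities.items()} if total > 0 else densities

# Hypothetical density scores for one animal at successive times after dosing.
timepoints = [8, 24, 48, 96, 168]  # hours post-dose (illustrative)
scores = [
    {"control": 0.90, "liver": 0.05, "kidney": 0.05},
    {"control": 0.40, "liver": 0.55, "kidney": 0.05},  # toxicity emerging
    {"control": 0.10, "liver": 0.85, "kidney": 0.05},  # peak effect
    {"control": 0.30, "liver": 0.65, "kidney": 0.05},
    {"control": 0.80, "liver": 0.15, "kidney": 0.05},  # recovery
]

for t, s in zip(timepoints, scores):
    probs = class_probabilities(s)
    label = max(probs, key=probs.get)
    print(f"{t:>4} h: most likely class = {label} (p = {probs[label]:.2f})")
```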

    The analysis is “much more sophisticated” than any other screening tool now available, says Jeremy Nicholson, head of biological chemistry and COMET project director at Imperial College, because it can more readily identify biochemical changes that may cause pathology. (Nicholson has helped launch a spinoff in London, called Metabometrix, to commercialize similar technology for medical diagnostics.)

    Existing toxicology research methods can only examine toxic effects on one tissue type at a time. A gene-expression study, for example, yields data from a single time point in a single tissue. Moreover, changes in gene expression may not mean a net biological change. The body's homeostatic mechanisms may compensate by degrading or modifying gene transcripts. By contrast, “urine and plasma give the metabolic interaction of all tissues,” Nicholson adds.

    “Metabonomics has a big role to play in toxicology research,” says Ian Blair, a professor of chemistry at the University of Pennsylvania. “Once you have a signature of toxicological response, you could use that as an assay for many things.”

    After the consortium published the initial results, each member developed its own database and technology in-house. Companies have been mum on details but have confirmed that they are building larger databases against which they can compare new compounds. Several companies also employ metabonomics to screen animals prior to an experiment to ensure that they are normal. Researchers say the technology could also be applied to clinical trials to correlate drug response to individual metabolism.

    Drug companies aren't the only ones interested in metabolic profiling. In 2003, the U.S. National Institutes of Health awarded $35 million to a consortium of 18 institutions to identify, characterize, and quantify human cellular lipid metabolites. And recently Nicholson formed a coalition of scientists to establish standards for the field.

    COMET's success has prompted several companies to join COMET II at Imperial College, says Nicholson. He's heading the project, which is scheduled to launch this month. The goal this time, however, is to create a “multiomics” platform that combines data from many sources, including gene and protein arrays, to reveal biochemical mechanisms.

    That will be no easy task. Spectral data from a single urine sample contain thousands of peaks, the majority of which are unidentified metabolites. But the analytical tools to assign identities to the peaks are already emerging, says Nicholson. For example, his colleagues at Imperial College have a paper in press at Analytical Chemistry describing software that can combine data from NMR spectroscopy and mass spectrometry—similar yet complementary metabonomic techniques. And Nicholson says his colleagues have already developed a prototype system that integrates data from gene, protein, and metabolic profiles.

    But whether the technology will actually help make drugs safer remains to be seen, says David Jacobson-Kram, head of pharmacology and toxicology at FDA's Office of New Drugs: “Technology can help to some extent, but perhaps our expectations are unrealistic.” One potential application touted by the technology's supporters is to metabolically profile drug side effects. “Is there a metabolic profile characteristic of suicidal ideation?”—a side effect of several antidepressants—Jacobson-Kram asks. “That's a stretch.”

    Nevertheless, the field is young, Blair says, and the number of papers it is producing these days suggests it has begun a growth spurt.

  12. PALEONTOLOGY

    Tyrannosaurus rex Gets Sensitive

    1. Erik Stokstad

    Its supersized smell organs have been scaled back a bit, but new studies show that the tyrant lizard's sensory apparatus was indeed fit for a king

    MESA, ARIZONA—With its powerful jaws and serrated teeth, Tyrannosaurus rex had fearsome tools for catching and eating prey. The lumbering carnivore also had some top-of-the-line sensory equipment, paleontologists reported here last month at the 65th annual meeting of the Society of Vertebrate Paleontology (SVP).

    You can't hide.

    Tyrannosaurus rex's keen senses of smell, hearing, and vision helped it take down prey or snap up carrion.

    CREDIT: JIM ZUCKERMAN/CORBIS

    The new insights come from studies of bony clues to the brain, ears, and eyes of T. rex. They suggest that the “tyrant lizard king” had an acute sense of smell—although perhaps not as acute as some recent studies had suggested—a knack for listening as well as keeping its eyes fixed on prey, and depth perception to rival modern birds of prey. To most paleontologists, it all adds up to a talented predator. “The more we look at T. rex, the more sophisticated it is,” says Philip Currie of the University of Alberta in Edmonton, Canada.

    T. rex was first unveiled and named 100 years ago by the legendary paleontologist Henry Fairfield Osborn of the American Museum of Natural History in New York City. Ever since, T. rex has been famous for its staggering dimensions—ranging up to 12 meters and perhaps as much as 7 tons—and its highly modified skeleton. Several of those distinctive features, such as the shrimpy arms and the bone-crushing teeth, led some researchers to propose that T. rex was primarily a scavenger. That case was bolstered in 1999, when computed tomography (CT) scans revealed that the T. rex named Sue had enormous olfactory bulbs (Science, 9 June 2000, p. 1728)—a specialization that would presumably have helped it catch the scent of a dead dinosaur in the distance.

    To some paleontologists, the gargantuan olfactory bulbs were difficult to swallow. One group of researchers—François Therrien of the Royal Tyrrell Museum of Palaeontology in Drumheller, Canada, and Farheen Ali and David Weishampel of the Johns Hopkins University School of Medicine in Baltimore, Maryland—decided to see what the olfactory bulbs in Tyrannosaurus's closest living relatives, birds and crocodiles, could reveal about their long-gone cousin.

    In both groups, the olfactory bulbs rest against a trough on the upper part of the braincase and are bounded toward the front of the head by a septum. By locating bony traces of this septum in T. rex braincases, Therrien and colleagues could more accurately estimate the position of the olfactory bulbs. “This would have limited their size to no more than that of a plum,” Therrien says. Paleontologists had thought the olfactory lobes extended further forward inside the head. “I think they're probably right,” says Christopher Brochu of the University of Iowa in Iowa City, who had studied Sue while at the Field Museum in Chicago.

    Still, T. rex did very well by its nose. The relative size of the bulbs—their width compared with the width of the cerebral hemispheres—was the highest of seven dinosaurs examined, including other tyrannosaurids and smaller, more birdlike dinosaurs.

    Therrien speculates that the acute sense of smell could have been used to track prey, locate putrefying carcasses, or help males patrol territory for rivals. Greg Erickson of Florida State University in Tallahassee cautions that it's not straightforward to link organ size to sensory acuity, as sense of smell is also determined by features such as the density of neurons. “We need more [comparative] studies … so we can make sense of this,” he says.

    In the meantime, another study described at the meeting matched Therrien's results on the size of the olfactory bulbs. Lawrence Witmer and Ryan Ridgely of Ohio University College of Osteopathic Medicine in Athens put several T. rex skulls into a CT scanner. By examining features preserved in the skull, including fossilized evidence of nasal tissue in front of the olfactory bulbs, Witmer found that the bulbs were roughly walnut-sized.

    Witmer's study extracted clues to other sensory abilities. For example, he resolved the bones that surrounded the so-called cochlear duct of the inner ear, which helps turn sounds into nerve signals. The length of these bones, relative to the overall dimensions of the skull, suggests that T. rex may have had better hearing than other theropods did.

    Acute.

    CT scans of T. rex's brain (blue) reveal sizable olfactory bulbs (red arrow) and an inner ear (red) with long, delicate canals for balance and cochlear duct for hearing.

    CREDIT: L. M. WITMER AND R. RIDGELY/OHIO UNIVERSITY COLLEGE OF OSTEOPATHIC MEDICINE

    The inner ear can also reveal aspects of an extinct animal's posture and sense of balance (Science, 31 October 2003, p. 770). Thanks to the CT scans, Witmer could resolve the bony labyrinth of the inner ear with its trio of semicircular canals, oriented at right angles to one another. These once contained fine hairs that sensed the motion of fluid, helping the brain know how the body was oriented and which way it was moving. In modern creatures, the larger the loop of the semicircular canals, relative to head size, the more agile they tend to be.

    T. rex turned out to have surprisingly long canals. “You might not expect a large animal to have quick movements,” Witmer says. He suspects that the primary purpose of the canals was not gymnastics but helping T. rex keep its head and eyes fixed on prey. That's not all the canals reveal. In modern animals, the orientation of the lateral canal relative to the skull correlates with how they tend to hold their heads while alert. T. rex apparently kept its head dipped down about 5° to 10°. For tall animals with long snouts, such as T. rex, tilting the head downward can help them better see what's directly ahead.

    Kent Stevens of the University of Oregon, Eugene, has come to a similar conclusion about T. rex's vision, which again places it at the top of its class. He reconstructed the visual abilities of T. rex and six other predatory dinosaurs by working with sculpted reconstructions of their heads. After placing a sheet of glass in front of the busts, he stood eye to eye with the dinosaurs and shined a laser at each fake pupil. This allowed him to map onto the glass the entire area from which the laser glinted off the pupils, tracing their visual field.

    T. rex, with its forward-facing and widely separated eye sockets, turned out to have great binocular vision and, likely, depth perception. When T. rex dipped its head about 10°—similar to the angle of the alert posture that Witmer estimated—it would have maximized the width of its binocular field of view at 55°, as good as that of hawks, Stevens says. That's not quite as wide as the binocular fields of highly birdlike dinosaurs such as Troodon, but it exceeds that of other adult tyrannosaurids. The research, which Stevens presented at an SVP meeting several years ago, is in press at the Journal of Vertebrate Paleontology.

    To Stevens, T. rex's depth perception, hearing, and sense of smell all point in one direction: a top predator. In contrast, Jack Horner of the Museum of the Rockies in Bozeman, Montana, is sticking with his idea of where T. rex got its meals. “I think this olfactory business is very supportive of the T. rex-as-scavenger hypothesis.” Others say it's more likely that T. rex wasn't a picky eater. “If it can smell a carcass a mile away, it can also smell a herd of hadrosaurs from a mile away,” says James Hopson of the University of Chicago in Illinois. “I don't think it would have preferred one over the other.”
