News this Week

Science  31 Oct 2003:
Vol. 302, Issue 5646, pp. 758

    NIH Roiled by Inquiries Over Grants Hit List

    1. Jocelyn Kaiser

    The National Institutes of Health (NIH) is scrambling to “justify” about 200 approved or funded projects to the House Committee on Energy and Commerce after being questioned about controversial research topics such as sexual behavior. In the past 3 weeks, NIH officials have pored over a list of awards drawn up by a conservative group and forwarded to NIH by a committee staffer, contacted grantees, and worked to allay lawmakers' qualms.

    Behavioral researchers are particularly concerned. “Anything that identifies or targets individual investigators because of the subject matter of their research is unacceptable,” says Judith Auerbach of the American Foundation for AIDS Research. This is not the first time Congress has questioned research involving sexual behavior, says Alan Kraut, director of the American Psychological Society, but the scale seems unprecedented. “These are important areas of public health that have to be studied,” says Kraut. “I think there's reason to worry.”

    Representative Henry Waxman (D-CA) goes further, expressing “outrage” and calling the list “scientific McCarthyism” in a 27 October letter to Tommy Thompson, secretary of the Department of Health and Human Services (HHS), which oversees NIH. Waxman's letter alleges that HHS staff helped compile the list and calls for an investigation. HHS spokesperson Bill Pierce denies that HHS employees were involved: “There's no hit list as far as the department is concerned.”

    After Waxman's letter was released, Ken Johnson, spokesperson for the House Commerce Committee, told Science that the panel “has not asked NIH to investigate these specific grants.” He said the list was given to the committee by the Traditional Values Coalition, a conservative advocacy group in Washington, D.C. The committee staffer who passed it on “exercised poor judgment,” Johnson says. “We don't know where the list came from or how it was compiled or whether it's accurate, for that matter.”


    Rep. Ferguson's questions led NIH to review a list of 198 projects drawn up by the Traditional Values Coalition.


    If NIH has the jitters, there may be good reason. Last spring, NIH project officers advised some researchers to reword their grant abstracts to make them appear less controversial, apparently in response to questions from congressional aides who had found them in CRISP, the NIH grants database (Science, 18 April, p. 403). In July, the House came close to defunding four research grants on sexual behavior, voting 212-210 against a measure sponsored by Patrick Toomey (R-PA) (Science, 18 July, p. 289). At a hearing on 2 October, several Commerce Committee members grilled NIH Director Elias Zerhouni about other projects. Michael Ferguson (R-NJ), referring to a list of 10 projects compiled by House Republicans, asked him to “provide us just a written explanation for the medical benefit that is hoped to be derived from these studies.” Before the hearing, committee and NIH staffers also had a “limited discussion” of the Traditional Values Coalition's list, says Johnson. NIH later requested the list. “They asked for it,” Johnson says. (According to NIH spokesperson John Burklow, NIH wanted the list Ferguson had but was given the coalition's much longer list.)

    Titled “HHS Grant Projects,” the document lists project titles for 198 projects sponsored by nine NIH institutes. The spreadsheet includes 2000–2003 award amounts, which, as Waxman's letter notes, are not included in CRISP. Many studies involve HIV risk in populations such as drug users and adolescents. A few notations hint at the listmaker's disapproval—for example, the observation, “endorses sexual behavior and condom use among teens.” A project to prevent HIV among Russian drug users carries the note: “Gives credence to intravenous drug activity.” Curiously, in addition to the names of the 157 principal investigators (PIs) for these projects, the list includes about 20 researchers, some of whom have studied homosexuality; for these it says, “Nothing found on HHS search.” Traditional Values Coalition executive director Andrea Lafferty says her staff got the data from the Internet. “To have taxpayers' money piddled away on some of this stuff is outrageous,” she says.

    NIH has already begun contacting researchers on the list to compile summaries of their studies. One PI, who asked not to be identified, said he was given 5 days to write up his latest findings. He worries about “a broader attack on behavioral research.” Wendy Auslander, a social scientist at Washington University in St. Louis, Missouri, says an NIH staffer simply asked her to approve an e-mailed summary of her study of preventing HIV in foster children as part of a “justification for a large number of grants.” “I didn't panic,” Auslander says, but “it's very odd.”

    Burklow says researchers shouldn't panic. “We stand behind the peer-review process,” he says. NIH's information-gathering, he says, is meant “to explain to members of Congress the public health importance of this research.”

    Although Johnson says the Commerce Committee is not conducting an investigation, some scientific societies are concerned that the grants will come up during a reauthorization of NIH expected next spring. They are forming a coalition to inform lawmakers about the value of behavioral research. The Toomey amendment caught everyone off-guard, says Karen Studwell, legislative director for the American Psychological Association; the community hopes to be better prepared next year.


    Warmer Ocean Could Threaten Antarctic Ice Shelves

    1. Jocelyn Kaiser

    When two chunks of ice the size of a small country broke off the Antarctic Peninsula's Larsen Ice Shelf in 1995 and 2002, experts scrambled to figure out how it had happened. The pat answer, global warming, was too simple: Some parts of Antarctica are cooling, and the soaring air temperatures along the peninsula that seemed to have triggered the collapse have not yet been convincingly linked to a worldwide pattern. But as a model for what could happen elsewhere in Antarctica as temperatures rise, it was crucial to understand the Larsen Ice Shelf's demise.

    Now a detailed look at the Larsen's shrinking ice is challenging conventional wisdom about the collapse. Andrew Shepherd, a glaciologist at the University of Cambridge, U.K., and co-workers analyzed satellite data to produce the first estimate of how quickly the Larsen Ice Shelf is thinning. They report on page 856 that since 1992, the thinning has been too fast for rising air temperatures to explain. They conclude that the shelf must be melting due to warmer ocean waters below. If so, the rest of the Larsen is doomed, and other Antarctic ice shelves could be more endangered than had been thought. “We need to understand the potential for other ice shelves to be hit,” says glaciologist David Vaughan of the British Antarctic Survey (BAS) in Cambridge, U.K.

    Researchers were stunned when the Larsen's two roughly Luxembourg-sized northern sections abruptly shattered, each breakup taking just a few weeks. Suspicion fell on air temperatures on the peninsula, which for a half-century have risen by 0.5°C a decade, 10 times faster than the global trend. Glaciologist Ted Scambos of the National Snow and Ice Data Center in Boulder, Colorado, and co-workers have suggested that summertime pools of meltwater on the Larsen Ice Shelf eventually force crevasses apart and lead to collapse.

    Cold discomfort.

    The Larsen Ice Shelf is imperiled from above and below.


    But Shepherd's study suggests that although air temperatures may strike the final blow, they can't be the whole story. He and colleagues at the U.K.-funded Centre for Polar Observation and Modelling and in Argentina quantified the thinning using satellite radar measurements of the shelf's height corrected for tides. They found that the shelf thinned by up to 18 meters between 1992 and 2001. One possible explanation, that summer meltwater seeps into the snow and refreezes into denser ice, fell through when the team calculated that the shelf doesn't receive enough solar energy to drive that process. “The temperature is just not enough to provide that amount of melt,” Shepherd says.

    That suggests melting from below. Although there are no long-term ocean temperature data from the Larsen shelf, deep waters farther out in the Weddell Sea have been warming over the past 3 decades, Shepherd's team notes. Also, a BAS ship sailing near the Larsen shelf in 2002 detected midlevel temperatures warm enough to melt ice at that depth. At the current melting rate, the Larsen will reach the breaking point within this century, Shepherd's team predicts.

    Others caution that the findings are not that solid; the satellite data are imprecise, Scambos notes, and there is no direct evidence that ocean waters off the Larsen have warmed. But if the results hold up, the study will serve as a wake-up call that larger ice shelves on mainland Antarctica may also be vulnerable to ocean warming, says oceanographer Stan Jacobs of the Lamont-Doherty Earth Observatory in Palisades, New York. These larger shelves hold back massive aboveground ice sheets that would dramatically raise global sea level if they melted. “It's a good model for what could happen,” Jacobs says.


    Isotopic Data Pinpoint Iceman's Origins

    1. Constance Holden

    The renowned Alpine Iceman, known as Ötzi, has proven to be an extraordinarily rich source for researchers interested in life during the late Neolithic era. They know Ötzi's age, his health, his mitochondrial DNA sequence, what he ate, and how he died. But where did he live?

    Plant matter in his intestine had suggested that the Iceman spent his final days in an area south of where he was found in 1991, sticking out of melting snow near the mountainous border between Italy and Austria. Now researchers have used isotopic signatures from teeth and bones to pinpoint his origins to a few valleys in southern Tyrol; they report their findings on page 862. They say Ötzi probably did not stray more than 60 kilometers from his birthplace until his death more than 5000 years ago.

    Wolfgang Müller of Australian National University in Canberra, who began the research 3 years ago while at the Swiss Federal Institute of Technology, and an international team cleverly used various parts of Ötzi's body and three kinds of isotopes to trace his whereabouts during his lifetime. “It's a marvelous paper,” says Henry Schwarcz, a geologist at McMaster University in Hamilton, Ontario. “This multidisciplinary approach simply hasn't been done in other sites.”

    The researchers arrived at their findings by comparing isotope signatures from dozens of soil and water samples with those from tiny pieces of Ötzi's tooth enamel and thighbone. Dental enamel is fixed at the time the tooth is formed, so the three teeth the scientists examined contain the signature of trace elements in food ingested when Ötzi was about 3 to 5 years old, Müller explains. Bone, however, is remineralized with ingested substances every 10 to 20 years, giving a clue to the Iceman's whereabouts in adulthood. And tiny pieces of mica in Ötzi's intestine yielded data about the hours just before his death.

    Close to home.

    Ötzi probably spent his whole life within 60 km of his childhood home in Eisack Valley.


    To link Ötzi to particular places, the researchers took advantage of “the geological and topographical complexity of the area,” explains Schwarcz. Their method wouldn't have worked “if [Ötzi] had been found in the middle of Iowa.” For example, ratios of the stable isotopes of oxygen in rainfall vary with altitude and geography. High-altitude inland areas, such as those north of where Ötzi was found, are depleted in the heavier oxygen isotope (18O), which drops out first as clouds travel from the Atlantic Ocean. More southern, Mediterranean-fed rainfall carries higher levels of 18O, and Ötzi's teeth matched these southern values, indicating that he lived in those valleys when his teeth formed. But his thighbone values lay between those and the values from the area where he was found, indicating that as an adult he spent time farther north at a higher altitude than his native valley.

    In addition, the Alpine mountains around the Iceman are so geologically complex that they include at least four different rock types, each of which has a distinct ratio of radioactive isotopes of strontium and lead in the rock and soil. Because food reflects the isotopic composition of the soil in which it was grown, the team could narrow Ötzi's childhood origins down to several southern valleys.

    Putting together these two types of isotope data, the researchers have zeroed in on the Eisack Valley (see map) as a good candidate for his childhood home; an archaeological site at a village called Feldthurns there has revealed a megalith from the same era.

    Finally, the researchers characterized Ötzi's later stomping grounds with what Schwarcz calls the most “unusual and ingenious” aspect of the research: argon-argon dating of 12 tiny pieces of white mica that may have come from the grindstone of the wheat and barley he had eaten shortly before his death. The age distribution of the mica pieces—between 95 million and 300 million years—is consistent with that of a small area west of the Eisack Valley, lower Vinschgau, in the Etsch Valley.

    Thus Müller's group concludes that Ötzi grew up in the Eisack Valley, then as an adult spent time in the mountains of lower Vinschgau before setting off on his final journey to the Ötz valley.

    Jurian Hoogewerff of the Institute of Food Research in Norwich, U.K., however, questions the conclusions drawn from the oxygen isotopes. He wonders whether the north-south variation seen today existed 5000 years ago. He saw no such variation in a 2001 study of medieval Tyrolean skulls. Still, other researchers are impressed by what Paul Budd of the University of Durham, U.K., calls the “diverse biochemical data.” The study “sets new standards for isotopic life history reconstruction,” Budd says.

    Although Italy and Austria competed for access after the mummy was discovered, it was determined in 1991 that Ötzi was found 93 meters into the Italian side of the border, and he has been housed at a new museum in Bolzano, Italy, since 1998. The new data confirm that Ötzi did indeed spend his entire life in what is now Italy.


    Galaxy Maps Support Theory That the Universe Is Flying to Pieces

    1. Charles Seife

    When, earlier this year, the Wilkinson Microwave Anisotropy Probe (WMAP) team revealed the most detailed pictures of the infant universe, other cosmologists had to scramble to catch up (Science, 14 February, p. 991). Although there are other ways to gauge the fundamental properties of the universe that don't rely on microwave measurements, they were much less accurate than WMAP. Now, two papers released by a consortium of galaxy hunters have put WMAP's rivals back in the race. Cosmologists at the Sloan Digital Sky Survey (SDSS) have released their first “power spectrum” of the clustering of nearly a quarter-million galaxies and have provided an independent means of estimating how much dark energy the universe holds and how fast the cosmos is expanding. To nobody's surprise—and some theorists' disappointment—its conclusions match those of WMAP.

    “WMAP placed the ball in the nonmicrowave court. We have to bring everything else up to the same level, and that's what we feel Sloan has done,” says Max Tegmark, a physicist at the University of Pennsylvania in Philadelphia and member of the Sloan team. With the increased precision of SDSS's galaxy-mapping data, Tegmark says, cosmologists can discard any one line of evidence for the standard cosmological model—supernova data, galaxy clustering, or microwave measurements—and still be forced to reach the same conclusions. “It's really, really encouraging that any one data set is expendable,” he says.

    Cosmic lace.

    By analyzing the clustering of galaxies (dots on map at left), SDSS astronomers confirmed that dark energy is expanding the universe.


    SDSS is about one-third of the way toward its goal of mapping a million galaxies in a large section of the sky. The survey, which is scheduled to end in 2006, was designed to give astronomers a handle on the way galaxies cluster in space, a pattern that reveals the hidden influences (such as dark matter and dark energy) that drive cluster formation. Because galaxy surveys such as SDSS and its rival 2dF analyze clusters of galaxies (which are fairly young, cosmologically speaking), they provide a much more recent snapshot than microwave-background fluctuations do. But it takes a lot of galaxy-cluster observations to develop a decent picture.

    Broadly speaking, the SDSS team uses a mathematical technique similar to the method microwave astronomers use: They generate what is known as a power spectrum, a bumpy graph that represents the abundance of different-sized features in the sky (Science, 31 May 2002, p. 1588). And just as the WMAP team did, the SDSS team can use its power spectrum to derive cosmological parameters such as the amounts of ordinary matter, exotic matter, and dark energy in the universe.

    Two papers, both available on the arXiv preprint server, do just that with the first quarter-million galaxies. There are few surprises. “I think the news here is that there has been a convergence,” says Sloan team member Michael Strauss, a physicist at Princeton University. “It's in exact agreement with what the WMAP team was telling us; it's all pointing in the same direction.”

    Even after the researchers exclude all of WMAP's data from the joint analysis of cosmological parameters, they get the same results that the WMAP team has been getting: The universe is about 14 billion years old, flat, and dominated by dark energy.

    “The cosmic model stands tall,” says Tegmark. “It's depressing if you would like to see everything go down in flames.”


    DOE Told to Make Its Science More Visible

    1. David Malakoff

    A blue-ribbon review panel says the Department of Energy (DOE) needs to improve higher-level management of its $3.3 billion science program and do a better job of promoting its scientific efforts to stand any chance of gaining more resources.

    The new draft report* “gives [Secretary of Energy Spencer Abraham] some of the ammunition he'll need” to fight for improvements, says Michael Lubell, head of government affairs for the American Physical Society in Washington, D.C. But “the question is how much he'll pick it up and run with it,” says one university lobbyist. Without Abraham's backing, science advocates say, the recommendations will end up joining a pile of past proclamations that are now gathering dust.

    The advice comes from a 14-member panel, stocked with academic and industry leaders, that Abraham created last December. Led by Massachusetts Institute of Technology President Charles Vest, the task force received an earful of complaints about DOE. DOE's science budgets have suffered from “the department's historically poor reputation as badly managed, excessively fragmented, and politically unresponsive,” the panel said. “The depth of criticism and concern was shocking.”

    So was ignorance about DOE's science programs, which account for 40% of all federal funding in the physical sciences and nearly 20% in mathematics, computing, engineering, and environmental science. “Outside of the research community itself, science is rarely recognized as an essential component of the DOE mission,” the report notes. The panel laid part of the blame on DOE's lackluster efforts to communicate with the public and Congress.

    The panel, which reports to the Secretary of Energy Advisory Board, says DOE needs an undersecretary for science to serve as a high-level advocate within the department. DOE also needs to make greater use of peer review in selecting research projects, the panel added, and assign a higher priority to repairing crumbling facilities at its two dozen national laboratories.

    One way to raise DOE's scientific profile, the panel says, is through “three major, highly visible research initiatives.” One should address energy issues, another should focus on advanced computing, and the third should produce “a frontier research facility for the pursuit of basic science.” Although the panel offered no specifics, observers say existing initiatives could fill the bill. The Bush Administration's hydrogen energy program, for instance, has a basic research component that could be beefed up, and DOE science chief Ray Orbach is already pushing a supercomputing initiative. A new “frontier” facility could include a next-generation accelerator sought by particle physicists.

    DOE is seeking public comments on the report before it goes to Abraham. Its recommendation for a new undersecretary slot is already part of a massive energy bill before Congress. But the bill's prospects are uncertain, given opposition from the White House and lukewarm support from key lawmakers.


    In a First, Infected Mice Recover From Prion Disease

    1. Jennifer Couzin

    Prion diseases are thought to herald certain death. No therapy can slow or stop the progression of “mad cow disease” or other ailments linked to these misfolded proteins, which destroy swaths of neurons and punch spongy holes in the brain. By the time the earliest symptoms appear, such as forgetfulness and unsteadiness, brain tissue is already badly disfigured.

    Now on page 871, a team of scientists appears to have upended this principle, reversing disease in afflicted mice even while prions stay put. The researchers did so not by targeting the prions themselves but by eliminating the healthy protein from which prions originate. The work shrinks the list of suspects that are capable of destroying a mouse brain afflicted by prion disease.

    “We are getting closer to understanding how the damage occurs,” says Adriano Aguzzi, a neuropathologist at the University of Zürich, Switzerland, whose views have changed: He reveals that he once derided reversing prion disease as “science fiction.”

    John Collinge, head of the Prion Unit at the Medical Research Council in London, and Giovanna Mallucci, along with their colleagues, first created a mouse strain that enabled them to test whether eliminating the normal prion protein, called PrP, could alleviate disease. PrP is present throughout the body; when it morphs into a prion form called PrP-scrapie, it infects nearby healthy PrP.


    Despite prion deposits (arrows, inset), transgenic mouse brains plugged spongy holes.


    By crossing two different transgenic mice, Collinge's group got an unusual and potentially illuminating combination. The animals were normal at birth, but at 12 weeks they churned out an enzyme that disabled the PrP gene in neurons alone.

    A few weeks after the mice were born, Collinge's group injected them with prions. By the time the infected animals were 12 weeks old, their brains contained spongy holes. Prions hadn't yet infiltrated the animals' neurons, however, and the mice weren't showing symptoms. Collinge posits that this stage may mirror early disease in humans, marked by subtle cognitive and motor changes that cannot be assessed in mice.

    Within days after the PrP gene in their neurons shut down, the mice depleted their supply of the normal PrP protein. Remarkably, more than a year later, these nine mice “live a normal life,” says Collinge. Control mice succumbed to prion disease.

    Two observations about the recovering animals struck the researchers. The spongiosis long viewed as irreversible disappeared. And non-neuronal brain cells, called glia, which still produced PrP, contained wads of prions.

    Apparently, PrP can contort into prions “in the cell right next door, and it's not hurting the neurons,” says Susan Lindquist, a prion researcher and director of the Whitehead Institute at the Massachusetts Institute of Technology in Cambridge.

    The work adds to the growing body of evidence, say Lindquist and others, that the prion form of PrP—at least, when it's in non-neuronal brain cells—might not be the poison it's viewed as. Lindquist says the work establishes that attacks on neurons are key to mouse spongiform disease. But because prions don't appear toxic elsewhere in the brains of the engineered mice, what harms neurons may be something else entirely. Aguzzi thinks this could be a still-unidentified form of PrP that may fleetingly appear as normal PrP morphs into the scrapie form.

    This idea extends to related disorders. A number of neurodegenerative diseases, such as Parkinson's and Alzheimer's, are marked by abnormal protein aggregates much like those seen in prion diseases. Recent work suggests that such abnormalities might be markers of destruction rather than causes. The results from Collinge's team suggest that researchers should focus not “on the … end product, but on something upstream,” says Suzette Priola, a virologist at Rocky Mountain Laboratories in Hamilton, Montana.

    Whether or not such a strategy works, the study offers a glimmer of hope. “In the past, neurodegenerative diseases were a death sentence,” says Ai Yamamoto, a neurobiologist at Memorial Sloan-Kettering Cancer Center in New York City. Now “there's a possibility, at least in mice, that you can recover.”


    NRC Backs Ecosystem-Wide Changes to Save Klamath Fish

    1. Robert F. Service

    In one of the bitterest water disputes in the American West, federal biologists have tried to save three species of endangered fish by restricting how much water farmers can remove from waterways in the Klamath Basin along the Oregon-California border. But rebuilding fish populations will require a broader set of remedies, according to a report released 21 October by the National Research Council (NRC). Among the fixes, the report recommends considering tearing out dozens of small dams, restoring wetlands, altering logging practices, and refilling long-drained lakes.

    The recommendations echo proposals that have long been discussed, but the report could provide a new impetus for action. Both Congress and the Bush Administration have indicated that they are inclined to follow NRC's guidance. And farmers, whose federal project takes about one-third of the irrigation water removed from Upper Klamath Lake, like the message that cutting their share won't solve the problem. “We agree with the council that the recovery of the [fish] cannot be achieved through actions primarily focused on the Klamath [Irrigation] Project,” says Interior Secretary Gale Norton.

    But critics say wide-ranging actions—although commendable—might not come in time to save imperiled fish stocks. “There are a lot of great ideas here,” says Steve Pedery of WaterWatch of Oregon, a Portland-based environmental group. “But most of the solutions they advocate are going to take 15 to 20 years to implement.” In the meantime, Pedery and others say that federal agencies should buy out some farmers to decrease water demand.

    The new report caps a 2-year study by an NRC committee that was marked by controversy throughout. An interim report drew widespread fire from researchers and environmentalists. It concluded that there was no scientific evidence that turning off the irrigation spigot would help two populations of endangered suckers in Upper Klamath Lake, or that increasing the flow of water in the Klamath River below the lake would benefit threatened coho salmon. Farming advocates and politicians seized on the report as “proof” that past fisheries management decisions were based on “junk science” (Science, 4 April, p. 36).

    No simple solution.

    Reducing irrigation isn't enough to save fish, an NRC panel contends.


    The new report rejected that view: “The listing agencies have been criticized for using pseudoscientific reasoning. … The committee disagrees with this criticism.” At the same time, the committee stuck by its previous assessment that there is no strong correlation between water levels and fish kills in the lake.

    The basin's problems are legion. Algae blooms choke the lake every summer, fed by nutrients from natural sources as well as runoff from farms and ranches. The blooms trigger conditions that strip the water of oxygen and suffocate the suckers. But changing farming and ranching practices upstream will have little immediate impact, because lake sediments are already saturated with nutrients, the panel concluded. Instead, fisheries agencies should improve spawning and rearing habitat in hopes of bolstering the number of fry.

    The committee's top priority, removing the 3-meter-high Chiloquin dam on the Sprague River, would restore access to 90% of the suckers' spawning grounds above Upper Klamath Lake. NRC also suggests that fisheries managers add what amounts to giant fish-tank bubblers in the lake to provide oxygen refuges during algae blooms. Although largely untested, “this is worth trying,” says William Lewis Jr., a limnologist at the University of Colorado, Boulder, who chaired the 12-member committee.

    Improvements to the tributaries of the Klamath River should benefit the coho. Among the suggestions: rewriting logging rules to prevent erosion and temporarily closing a hatchery that releases millions of fish that compete with wild coho.

    The committee estimates that it will cost up to $35 million to set up a broad-based research program and carry out small-scale habitat improvements such as installing screens to repel fish from irrigation canals. Major changes, such as tearing down dams, will likely cost much more. Says Douglas Markle, a fisheries biologist at Oregon State University in Corvallis: “Making any of that happen is the rub in all of this.”

    Meshing the NRC panel's views with those of other committees could be another obstacle: Two new reports appear to be partially at odds with NRC's. One, released this month by researchers assembled by the Oregon governor, concluded that higher water levels in Upper Klamath Lake result in more healthy juvenile suckers and greater availability of spawning areas. A draft report from the U.S. Fish and Wildlife Service (USFWS) leaked to the Eureka Times Standard reportedly blames low water flows in the Klamath River for a fish kill last year that claimed 33,000 Chinook salmon. The NRC panel said it didn't have enough evidence to make that link, although it was not given access to the USFWS data. About the only conclusion that all parties agree on is that saving endangered fish in the Klamath Basin won't be easy.


    Search for SARS Origins Stalls

    1. Martin Enserink,
    2. Dennis Normile*
    1. With reporting by Ding Yimin.

    The outbreak was contained effectively. But some researchers worry that key questions about the source of the deadly virus are not being pursued

    It all started so well. When severe acute respiratory syndrome (SARS) erupted last spring, sickening thousands around the globe, a dozen labs quickly teamed up to find out what caused the devastating disease. The World Health Organization (WHO), sometimes maligned for its suffocating bureaucracy, appeared invigorated as it took the lead in pinpointing the new virus and containing the outbreak (Science, 11 April, p. 224)—efforts that paid off in early July, when the organization declared the world free of SARS.

    But today, research appears to have stalled in what many consider the key step in preventing a recurrence of SARS: determining where the virus came from. Research in China—the only place where answers to most of these questions can be found—has lacked coordination, WHO officials say, and ambitious plans to launch an international endeavor to hunt for the virus's origins have largely fallen flat. As a result, there has been virtually no progress during the last 4 or 5 months, says Klaus Stöhr, the WHO virologist who until August coordinated a network of SARS research labs.

    The slowdown stems from several sources, including the intricacies of Chinese research policy and difficulties in starting international collaborations. In Geneva, meanwhile, WHO staff involved in SARS say they are stretched thin, and some key officials, including Stöhr, have moved on.

    The stalemate worries many researchers, because without a full understanding of the origins of SARS, effective control is much more difficult. “We won't feel secure that SARS is not coming back until we have an understanding of where it came from,” Robert Breiman of the Centre for Health and Population Research in Dhaka, Bangladesh, told a meeting of the Institute of Medicine last month in Washington, D.C.

    “The animals are the same, human activity is the same, the farms are the same, and the virus is still around,” says microbiologist Guan Yi of the University of Hong Kong. “There is no reason to doubt the virus will be back.”

    Still searching.

    Studies have fingered the animal trade as a conduit for SARS, but its natural host is unknown.


    With no way to prevent a resurgence of the disease, public health authorities' best hope is to recognize the first new cluster of cases early on and implement the same strict control measures that helped stem the original outbreak. But they are handicapped by a lack of reliable tests that can identify the disease. In this area, too, researchers say progress has been slow. Indeed, Stöhr says that is one reason he left the SARS job last August.

    Cats and ferrets

    Type the letters “SARS” into PubMed today, and you'll find an avalanche of more than 1000 papers published since last February. But disappointingly few deal with the origin of SARS. Researchers have known since last May that the virus once lurked in masked palm civets and other animals sold in a market in Shenzhen, a city in mainland China just across the border from Hong Kong. The results of that study, carried out by Guan's team and researchers at the Center for Disease Control and Prevention (CDC) in Shenzhen and published in Science this month (10 October, p. 276), suggest that the virus used the animal markets as a steppingstone to humans.

    That idea was reinforced by another study by the Guangdong CDC, written up in a short note in the 17 October Morbidity and Mortality Weekly Report. It showed that of 508 animal traders whose blood was sampled during the outbreak last May, 13% tested positive for so-called IgG antibodies to the SARS virus, versus 1.2% to 2.9% in control groups. Those who handled primarily civets were most likely to have antibodies, followed by traders who dealt mostly in wild boars and muntjac deer.

    But researchers know little beyond that. Civets, for instance, may not be the actual reservoir but may have acquired the virus from some other animal. In a study published in Nature this week, Albert Osterhaus and his group at Erasmus University in Rotterdam, the Netherlands, report that domestic cats and ferrets can also be easily infected with the coronavirus, and they transmit the virus to cagemates. That suggests that the virus is more promiscuous than most, says Osterhaus, and may be carried by a range of animals.

    But, despite a brisk start last spring, there's been little headway in identifying those animals. For instance, in a project coordinated by China's Ministry of Agriculture, researchers have tested samples from more than 70 wild and domesticated animal species for the virus. But given the lack of specificity of existing tests, says Stöhr, he is skeptical of the findings, which pinpointed several unlikely species, including reptiles. More specific tests are urgently needed, he says.

    Kong Xiangang, director of the Harbin Veterinary Research Institute, says that systematic and extensive sampling of animals is still going on in Guangdong. But Stöhr insists there's no indication that China has launched the broad sampling effort that's needed.

    WHO had hoped to help out with intensive international collaboration. During a visit to China this summer, Stöhr told Science he was discussing the entry into China of four international scientific teams, which would work with local researchers to track the virus. But the negotiations have dragged on.

    Back to flu.

    Klaus Stöhr headed WHO's SARS lab network but says he stepped down because of a lack of support.


    Part of the problem in China is a lack of central direction, Stöhr says. At least three ministries are involved in the hunt, the labs are often intensely competitive, and the Ministry of Health—which Stöhr thinks should be in charge—plays second fiddle when it comes to animal issues. Moreover, the animal farming industry has actively opposed anything that could affect its trade.

    Nor has China responded to a report by a group of international experts that recommended, among other things, that China start regulating the animal trade. The panel, which consisted of Chinese infectious-disease researchers as well as experts from WHO and the United Nations Food and Agriculture Organization, delivered its report in September; many researchers think a response is long overdue, especially because China has allowed civets and other animals back in the markets (Science, 22 August, p. 1031).


    Researchers are also stymied in their efforts to identify the most reliable diagnostic tests for SARS. A false alarm this summer, when an outbreak of the common cold at a Canadian nursing home was mistaken for a return of SARS, provided disturbing hints of the cost and disruption that can result from misdiagnosis (see sidebar).

    Scientists at the Robert Koch Institute in Germany have done some work to study the sensitivity and specificity of different tests. But Stöhr says the job is a task cut out for WHO. He says he quit his post as SARS research coordinator in part because he did not receive enough support to undertake it. “I have spoken up loud and clear,” says Stöhr, who went back to coordinating influenza preparedness efforts and was absent from last week's meetings in Geneva. Guénaël Rodier, director of communicable diseases surveillance and Stöhr's boss, argues that money problems at WHO are perennial and says he preferred that Stöhr return to influenza work because that poses a more worrisome threat than does SARS.

    It's understandable that WHO has lost a bit of its steam, says John Mackenzie, an Australian virologist who was hired by WHO to take on part of Stöhr's job for 5 months. SARS put an enormous burden on an already understaffed organization. “It was so intense, there were people who slept in the office,” he says. “It just took some time to recover.” But after three meetings in Geneva last week, the organization is back in business, he says. For one thing, progress was made on how to evaluate SARS tests, and researchers agreed on a protocol to have positive SARS tests verified by an independent lab.

    Still, keeping up the agency's newly found prominence in battling emerging infections will require more money in the long run, says Rodier: “If we are going to be the world's fire brigade, we need more investments.”


    Unexplained False Alarm May Hold Lessons

    1. Martin Enserink

    What went wrong in Winnipeg this summer? Many SARS researchers are wondering how Canada's flagship National Microbiology Laboratory could have misdiagnosed an outbreak of the sniffles in a nursing home as a reemergence of severe acute respiratory syndrome, triggering alarm bells worldwide. The answer may be relevant for other labs trying to distinguish new SARS victims from cold and influenza sufferers this winter.

    The outbreak in the Kinsmen Place Lodge in Surrey, near Vancouver, occurred in July and August, just as the world was celebrating the disappearance of SARS. From the outset, the clinical picture was confounding: Respiratory symptoms among most of the 150 patients were much milder than in typical cases of SARS. But researchers at the Winnipeg lab reported finding bits of genetic material that matched only the SARS coronavirus, as well as antibodies against one of the virus's proteins, in tissue samples.

    At the time, lab director Frank Plummer said the findings suggested that either SARS had continued to spread surreptitiously—although perhaps in a milder form—or that they had discovered a very closely related virus. Patients were isolated, and employees who had been in contact with them were ordered to stay home.

    Testing, testing.

    Frank Plummer says he does not have a good explanation for his lab's results.


    But on 22 August, health authorities in British Columbia (BC) dismissed the episode as a false alarm. Tests at the BC Centre for Disease Control, as well as at the U.S. Centers for Disease Control and Prevention in Atlanta, Georgia, found no evidence of SARS; instead they fingered another human coronavirus, OC43, which causes the common cold and is only distantly related to SARS.

    So far, the Winnipeg lab has not recanted or clarified its statements, although Plummer says the lab will soon release a full account of its work. Plummer now agrees that the nursing home was never hit by SARS. But with the strict safeguards in place, “it seems impossible” that his results were caused by lab contamination, he says. It's still possible that the patients were infected with another virus that nobody else has found, he says: “I simply don't have a good explanation.”

    Privately, some researchers attribute the episode to an embarrassing series of mistakes, but not knowing the details, many are reluctant to judge the lab publicly. Virologist Albert Osterhaus of Erasmus University in Rotterdam would like an open discussion: Knowing the full details should at least help other labs avoid creating another scare when the cold and flu season hits soon, he says.


    Methuselahs in Our Midst

    1. Kevin Krajick*
    1. Kevin Krajick is the author of Barren Lands: An Epic Search for Diamonds in the North American Arctic.

    Scientists and tree lovers are discovering old-growth trees—and clues to the past—in places where they were long thought to be lost

    MIDDLEBURGH, NEW YORK—At the edge of a windy escarpment, towering 245 meters above this rural valley where Revolutionary War militias and British troops once clashed, arborist Fred Breglia is admiring a view that probably has changed little in 300 years. In the flat bottomlands down by the Schoharie Creek are wooden houses, the steeple of the Middleburgh Methodist church, and wide ancestral fields of corn and cows, sloping up to stands of second-, third-, or fourth-growth timber; people have been cutting every tree in sight here for centuries. But as Breglia has found, they missed a few. On the precipice and the steep talus at its base, he has discovered a strip of gnarled red cedars rooted into the rock, many no more than 20 or 25 centimeters across—and up to 500 years old. Twisty old chestnut oaks nearby run to 400 years and more. “This place has literally never been touched. It's too hard to get at, and the trees aren't worth logging,” he says.

    The site is not unique. Other people are now finding scores of places in eastern North America where ancient trees have somehow been protected or hidden from the logger's saw. A few of these elder trees are big, but many, like the red cedars, survived in part because they grew slowly in marginal places and lack the size to match their age. Located on up to 800,000 hectares in scattered bits—perhaps 0.5% of the primeval forest, according to Robert Leverett, a seasoned amateur old-tree searcher in Holyoke, Massachusetts—they offer scientists a new window on past climate, pollution, forest ecology, and human history. “The first big breakthrough was just to show you could find these sites,” says Edward Cook, head of the dendrochronology lab at Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York. “Now we know they have important and interesting things to tell us.”

    Discovering survivors

    Until recently, many professionals believed that the East's native forests were gone by 1830, when the region was basically a giant sheep pasture. Except for a few remnants in parks, it was assumed that “old growth” was only in the cathedral-like Western groves of sequoias and redwoods. Then researchers such as dendrochronologist David Stahle of the University of Arkansas, Fayetteville, began coring Eastern trees for climate reconstructions, and they turned up surprises. In 1985, Stahle documented what are still the East's oldest known living trees: stands of 1700- to 2000-year-old bald cypresses in swamps along North Carolina's Black River. Loggers had bypassed them because they were gnarly and often hollow.

    Cook has found dozens of other sites with presettlement hemlocks, oaks, and other species. And recently, dendrochronologist Peter Kelly of the University of Guelph, Canada, extracted an as-yet-unpublished 2767-year tree-ring chronology from living white cedars in Ontario that are up to 1050 years old, plus well-preserved dead ones that stretch back 3900 years. The diminutive trees survived because, like the Middleburgh cedars, they are on inaccessible cliffs (Science, 12 March 1999, p. 1623). Kelly says the rings suggest that very hot summers, like those increasingly seen in the Northeast, may retard the cedars' already slow growth.

    Other examples illustrate the corners of the landscape where ancient trees might survive: Small lake islands in lower Ontario and Quebec harbor large conifers aged at up to 800 years; the world's oldest known pitch pine germinated in 1617 on a rocky ridge at Mohonk, an upstate New York mountain resort long held by a conservation-minded family; a humble-looking 687-year-old tupelo stands in a backyard swamp near Concord, New Hampshire; and 200- to 400-year-old longleaf pines thrive on northern Florida's Eglin Air Force Base, which maintains the pines' fire-dependent habitat with blazes started by bombs.

    Hanging on.

    Fred Breglia (left) and Neil Pederson check out a dead cliffside cedar near Middleburgh, New York.


    Few sites are true “virgin” forest. Rather, they're the leftover scraps—rarely more than 30 hectares and selectively logged or otherwise disturbed. But a few bigger tracts remain: parts of New York state's Adirondack and Catskill forest preserves, set aside in the 19th century; Great Smoky Mountains National Park, which is up to 25% old growth; and some 3800 square kilometers of post oaks up to 400 years old, spread across rugged uplands in Texas, Oklahoma, and Kansas. “We used to get excited about trees that were 200 years old, but this has changed our whole concept of what is old,” says Gary Walker, a biologist at Appalachian State University in Boone, North Carolina, whose paper on 1000-year-old cedars on ledges along the Obed River of eastern Tennessee is in press at Southern Naturalist.

    The East can't match the West, however: Living bristlecone pines in Nevada and California, the world's oldest known trees, reach 5000 years. Plant ecologist Charles Cogbill of Hubbard Brook Experimental Forest near North Woodstock, New Hampshire, says that most common Eastern species such as sugar maples and red oaks max out at 350 to 400 years, perhaps because many have canopy structures that make them inherently more susceptible to damage than, say, sequoias are. And the East has insects, pathogens, ice storms, and hurricanes aplenty that chip away at trees until they die not so much of old age as of the thousand cuts that time inflicts. Still, bald cypresses, cedars, and eastern hemlocks can reach 500-plus years, says Cogbill, although no one knows the maximum possible ages for most species, nor why one outlives another.

    Amateur tree lovers have played a surprisingly large role in spotting these survivors. Twenty years ago, computer analyst Leverett turned his passion for hiking into a systematic weekend search for old trees, and he's now recognized by scientists and amateurs alike as a sort of guru of Eastern old growth, having found dozens of sites. He is co-author of the forthcoming Sierra Club Guide to the Ancient Forests of the Northeast.

    Among his dozens of emulators is Breglia, head horticulturist at the Landis Arboretum in Esperance, New York, who spotted the Middleburgh cedars by squeezing through a fracture in the bedrock to get below the cliff face. He found himself next to a rough-barked cedar that had apparently survived being blown over at least twice and had developed a wild U-shaped main stem with numerous extra trunks. A cross-section of a nearby dead tree revealed 500 rings.

    The long view

    Once the amateurs discover ancient trees, the scientists swoop in. One recent day Neil Pederson, a doctoral candidate at Columbia, scrambled alongside Breglia through the talus at Middleburgh. Pederson, who is studying how climate change affects Eastern forests, says that the oaks on this site are some of the oldest anywhere and that only a series of ledges near Franklin, West Virginia, holds similarly old red cedars. Researchers compare tree rings to local weather records to explore the effects of climate on trees, then try extending the record back into the uncharted past by using the rings alone. A group headed by Daniel Druckenbrod, a graduate student at the University of Virginia, Charlottesville, has refined drought records by coring trees at the estates of U.S. Presidents James Madison and Thomas Jefferson, then comparing rings to the men's daily weather diaries, written when the trees were young.

    Magic forest.

    Few elder trees are as big as this ancient beech in Cook Forest, Pennsylvania.


    Pederson's data so far suggest that some species further north are growing faster with the increased warmth, but that others near their southern range limits, such as white spruce, may be experiencing heat stress. This is the first good support for theoretical models predicting that the composition of many Eastern forests could change significantly in the next 100 years, as some species migrate north 100 to 250 kilometers. “You need long chronologies to tease out these different effects,” Pederson explained as he cranked hard on a hand borer to extract a core from a battered 65-centimeter-diameter chestnut oak. “Holy moley. Holy moley!” he cried, pulling out the core and inspecting it. “This is in the 300-, 400-year range!”

    Global-warming studies are only one application. Cook believes that most eastern species are in fact more sensitive to moisture than temperature, and since the 1970s he has assembled a widening chronology of droughts using tree rings. Early studies reached back to about 1700 and showed that the Dust Bowl of the 1930s was the worst recorded. However, recently Cook, Stahle, and others have found enough older trees to extend the record back another 200 years and more, and they've shown that at least two earlier episodes were far worse, with some regions seeing little rainfall for up to 5 years. Stahle asserts that one such episode may have caused the mysterious disappearance of the earliest English colony on Roanoke Island, Virginia, in the 1580s (Science, 24 April 1998, p. 564). “Getting the really old trees completely changes the picture,” says Cook. “It's scary, because it means this could happen again, and municipal water systems today couldn't possibly stand droughts of that magnitude.”

    Stahle and his colleagues, along with geologist Roy Van Arsdale of the University of Memphis, Tennessee, have used damaged trunks and suppressed growth in old trees to demonstrate that the devastating New Madrid, Missouri, earthquakes of 1811-12 were the worst of the last 500 years. Some trunks cracked, then largely ceased growing for 50 years, probably due to shaking of their roots, says Malcolm Cleaveland, a geographer at the University of Arkansas, Fayetteville.

    Some believe that the trees' greatest scientific value today is their genetic material. Alan Gordon, an emeritus scientist at the Forest Research Institute in Sault Ste. Marie, Canada, argues that loggers have long “high-graded” forests, taking the tallest, straightest trees and leaving twisted, smaller ones to reseed. That might select against straightness and height—bad news for loggers—and remove genes that allow species to adjust to fluctuating climate or new diseases. “It's a sad tale,” says Gordon, who advocates collecting seed from dwindling old trees. Lee Frelich, director of the Center for Hardwood Ecology at the University of Minnesota, Twin Cities, is beginning a project to test for gene loss in New England.

    Splendid isolation.

    These ancient hemlocks and white pines in the Mohonk Preserve in upstate New York were too small and remote to be worth logging and have also been protected by their owners.


    Meanwhile, people cut down more survivors each year. No laws protect old trees, and probably only half are in parks or otherwise off limits, according to the Kentucky-based nonprofit Eastern Old Growth Clearinghouse. The Southern Appalachian Forest Coalition, composed of 12 regional groups, has hired teams to scout U.S. Forest Service lands, where it says the government has failed to recognize old growth and continues to log it. A 2000 report by the organization maps 15,400 hectares of old growth in western North Carolina alone—four times what the Forest Service has recognized. Rob Messick, a coalition researcher, says this has caused the Forest Service to halt most old-growth timber sales in the last few years, but elsewhere the threat continues. Recently one of the Landis Arboretum's neighbors mowed down 12 hectares of 300- to 400-year-old hemlocks and maples in a steep ravine. And the Middleburgh escarpment trees, recently publicized in the local paper, are not safe either. While clambering up a series of ledges to reach the top, Breglia chanced upon a fresh stump. It was a 15-centimeter-diameter cedar, neatly cut off at breast height to expose hundreds of tiny reddish rings—the apparent booty of souvenir hunters. “Oh, no,” moaned Breglia, adding an expletive. “Maybe we shouldn't have told anybody about this place.”


    Peering Into Ancient Ears

    1. Erik Stokstad

    New views of fossil ears, aided by CT scans, are helping reveal how extinct animals walked, swam, and flew—and perhaps one day even what they heard

    ST. PAUL, MINNESOTA—Long before birds took to the wing, the skies were ruled by pterosaurs. Some of these reptile kin eventually achieved wingspans of 10 meters, and others sported enormous crests on their head. But how well did they fly? Could they hunt by dive-bombing fish? The ability to peep inside rare fossils and examine their inner ears is now helping paleontologists answer these and similar questions. “We can exploit the structure of the inner ear to tell how an extinct animal may have led its life,” says Lawrence Witmer of Ohio University College of Osteopathic Medicine in Athens.

    This week in Nature, Witmer and colleagues unveil the first detailed look inside the heads of pterosaurs. By using computed tomography (CT) scans, they've shown that the balance organs of pterosaurs are even larger than those of birds. That's strong evidence for agile flying. Also much bigger is the part of the brain that keeps the retinas locked on target, suggesting that pterosaurs were likely more than eagle-eyed at hunting fish and other fast prey. “It's one of the most exciting pieces of work on pterosaurs in recent years,” says David Unwin of the Museum of Natural History in Berlin, Germany.

    The excitement about inner ears stretches far beyond pterosaurs. Following up on early successes with hominids, paleontologists have been filling CT scanners with a menagerie of fossils. At the annual meeting of the Society of Vertebrate Paleontology (SVP), held here 15 to 18 October, researchers described the inner ears of tiny extinct primates, the oldest known birds, and early whales. These fossilized balance organs, they report, are yielding insights into the behavior and evolution of extinct animals. Eventually, some researchers hope, other structures in the ear may also reveal the range of sounds that ancient animals heard—evidence that may help them deduce how the creatures interacted with their surroundings and with one another.

    Balancing act

    To envision how deftly long-dead animals moved, scientists have traditionally studied limb proportions, muscle attachments, and joints. Limbs, however, are just levers; the inner ear is an animal's guidance system, the mechanism that helps the brain keep its bearings. Three fluid-filled loops, called the semicircular canals, are oriented at right angles to one another. When the head moves, the fluid lags behind and triggers sensory hairs lining the canals. The brain relies on these accelerometers to control the eye and neck muscles, so that a sprinting cheetah can keep its gaze fixed on its prey. “All these mechanisms work in concert,” Witmer says. “We can look at the inner ear and get a picture of that.”

    In living animals, agility tends to go hand in hand with a large-looped semicircular canal. Gazelles, for example, have larger canals relative to their body size than hippos do. Biologists noted this intriguing pattern decades ago by filling the inner ears of dead animals with epoxy and then dissolving the skull. For paleontologists, the disadvantages of demolishing a fossil usually kept the inner ear off limits. Recently, however, CT has given researchers a more detailed, and sometimes unprecedented, look inside the skull (Science, 9 June 2000, p. 1728). “It's fair to say that in the last 10 years, there's been a renaissance,” says Witmer.

    Driving the boom is a flood of comparative data on modern animals. Paleontologists Fred Spoor of University College London and Alan Walker of Pennsylvania State University, University Park, have scanned more than 60 species of primates, along with other mammals. Thanks to a so-called micro CT scanner, the researchers can study specimens whose inner ears would have been too small to see with standard scanners. Initial results confirm that inner ears can distinguish fast, agile primates such as bush babies from slower relatives like lorises.

    Armed with such measurements, paleontologists can visualize long-extinct creatures as living, moving animals. In their latest paper, for example, Witmer and colleagues use the scans to infer how pterosaurs held their heads. Their conclusion is based on knowing that one of the three rings, the so-called lateral canal, works best when an animal holds it parallel to the ground. The canals reveal that an advanced pterosaur known as Anhanguera typically bent its head down (see figure). The head posture, Unwin says, will help researchers figure out how the crest on the pterosaur's snout affected its flying.

    Cretaceous ace.

    The large inner ear and related brain region (lower left, red) of this pterosaur imply aerial prowess and a lowered head.


    Witmer also showed that a part of the brain that receives input from the balance organs of the ear, called the flocculus, was surprisingly big—even larger than that of modern birds, relative to their body size. That suggests a highly refined sense of equilibrium, he says. The flocculus might have kept tabs on the muscle fibers in the pterosaur's skin-covered wings. Such “smart wings” might have allowed pterosaurs to keep a more stable gaze on fish and other prey, Unwin speculates.

    Paleoposture doesn't end with the pterosaurs. Witmer hopes to use CT scans to help answer the controversial question of how sauropods held their massive necks: whether they could raise them to nibble on treetops or whether, as some paleontologists believe, they kept their necks stretched parallel to the ground and grazed on low-lying vegetation.

    Paleo-ears can tackle bigger questions as well. Some researchers are using them to study evolutionary transitions: how and when ancient animals changed to adapt to new conditions and new ways of life. When, for example, did our ancestors start to walk gracefully on two legs? In a pioneering CT study in 1994, Spoor and colleagues showed that the modern inner ear of humans appeared in Homo erectus, implying modern running and jumping. Examining a more primitive relative, Australopithecus, they found that its ears were more like those of great apes, less suited for fast, agile two-legged gaits. The results show that humans became full-fledged bipeds some 1.8 million years ago.

    Ears may also say something about how birds evolved from nonavian dinosaurs. At the SVP meeting, Angela Milner of the Natural History Museum in London and Tim Rowe of the University of Texas, Austin, unveiled the first CT scan of the skull of Archaeopteryx, the oldest known bird. Although the rest of the skeleton shares features with dinosaurs, Milner and Rowe showed that the skull's semicircular canals are clearly birdlike.

    Also at the meeting, Justin Sipla and Justin Georgi, graduate students at the State University of New York, Stony Brook, reported that they and Spoor had scanned 37 species of modern birds and three of crocodilians, the closest living relatives of dinosaurs apart from birds themselves. The semicircular canals of birds were larger, relative to body size, than those of crocodilians. Less agile or terrestrial birds, such as the ostrich, had canals like those of crocodilians. Sipla and Georgi hope to use the technique to study the transition from dinosaurs to birds and the evolution of flight.

    Whorled series.

    Balance organs of agile bush babies (left) are larger than those of more sluggish whale ancestors. Canals shrank even more relative to body size as whales entered the sea. (Left to right: land-dwelling Ichthyolestes, marine Indocetus, modern dolphin.)


    Similar studies are shedding light on the evolution of marine mammals. Spoor and Hans Thewissen, an anatomist at Northeastern Ohio Universities College of Medicine in Rootstown, have examined the locomotion of whales and their ancestors by scanning semicircular canals of fossil and modern whales. The canals of whales and dolphins are much smaller relative to body size than those of all other living mammals. (The canals of a blue whale are about the same size as ours.) Their compactness makes them less sensitive and may allow cetaceans to swim acrobatically without getting seasick, Thewissen says. The pair reported in the 9 May 2002 issue of Nature that the canals shrank quickly and early during the transition back to the sea, implying that the shift was much more rapid than the rest of the skeleton suggests.

    Thewissen and his postdoc Sirpa Nummela are also studying how whale ears evolved from those of land mammals to ones that could hear underwater—and eventually use sonar. “This is a very exciting evolutionary story,” Thewissen says.

    Listening to the past

    In using CT scans to study the evolution of hearing, Thewissen and Nummela are pushing the envelope. Some researchers hope to go further—in effect, to use structures in the ear to give fossils hearing tests that would reveal which sound frequencies a living animal could have heard. Making the leap from ear to hearing, however, is harder than it sounds. Researchers know that the acuity of an animal's hearing corresponds crudely to the size of its eardrum. A handful hope to wring more information from the ossicles, tiny bones that transmit sound from the eardrum to the fluid-filled cochlea. The size and shape of the ossicles may reveal whether an animal usually perceived high or low frequencies.

    In a pilot project, Mary Silcox of the University of Winnipeg, Canada, has used micro CT to scan 20 contemporary mammals, including primates, rodents, and marsupials, measuring variability in relative lengths of ear ossicles. “If we could figure out what they could hear, we'll get information we never had before,” Silcox says. One might be able to estimate the distances over which animals were communicating, she says, because lower frequencies travel farther.


    Hidden inside skull bone, ossicles transmit sound; semicircular canals keep the head oriented.


    Paleontologists, however, are not sure how much CT scans will teach them about hearing. Despite repeated attempts to link the two, “the correlation between ossicle size and hearing frequency is extremely tenuous,” warns Thewissen. And modeling the acoustic properties of ossicles could prove fruitless without better information about how the nervous system processes the signals they send. “We always struggle with how far we can take this,” Witmer says.

    CT studies of the anatomy of hearing have yielded at least one intriguing earful. It comes from Ichthyostega, a 365-million-year-old four-legged creature famous as a transitional fossil between fish and land vertebrates. While scanning this rare fossil, Jenny Clack of the University of Cambridge, U.K., and colleagues stumbled across some bizarre anatomy that they interpret as a strange kind of ear. The ear is unlike anything else paleontologists or biologists have discovered. The ossicle called the stapes, for example, is extremely thin and platelike, rather than short and blocky as it is in all other known animals. By analogy with the ears of some modern aquatic frogs, the researchers speculate that the ear was suited for underwater hearing—implying that Ichthyostega spent more time in the water than paleontologists had believed. Because Ichthyostega's ear is so weird, they argue in the 4 September issue of Nature that early land-dwelling animals evolved a wild variety of “experimental” ear types, only one of which has survived, with modifications, to the present day.

    Paleontologists are just beginning to flesh out the evolutionary history of the ear. Figuring out the details of ancient ear anatomy will require new fossils as well as better technology. And large suites of comparative data are needed to decipher behavior. With paleontologists mining away, the ear should soon be giving up more of its riches.


    Scientists Counting on Census to Reveal Marine Biodiversity

    1. David Malakoff

    A progress report on the 3-year-old Census of Marine Life shows how new talent and technology are unlocking the secrets of the deep

    Every new sea-floor sample brings both delight and despair to marine biologist Yoshihisa Shirayama. The University of Tokyo researcher is thrilled to find previously unknown nematodes—the microscopic worms that are his specialty—wriggling under the microscope. But their presence is also a sobering reminder that scientists have never seen—much less described—the vast majority of marine species. Just a few hundred of an estimated 1 million marine nematode species have names, for instance, Shirayama says: “It is a very big surprise to find a [known] species.”

    Last week, Shirayama and more than 100 other marine scientists gathered in Washington, D.C., to discuss an ambitious initiative to fill in the blanks. Now in its third year, the Census of Marine Life (CoML) is harnessing new technologies to document the global diversity, distribution, and abundance of ocean animals (Science, 2 June 2000, p. 1575). The ultimate goal: a detailed series of online atlases that will help researchers visualize where marine organisms once lived, where they are now, and where they might be found in the future.

    At the meeting, CoML organizers reported significant progress. Scientists have so far named 210,000 marine species, they concluded in a new “baseline” report,* and information on more than 25,000 is now accessible online through CoML's Ocean Biogeographic Information System (OBIS). But 10 times as many marine species may remain unknown. And although the census doesn't expect to tally them all by its 2010 deadline, organizers say they will need additional resources and an influx of talent to make a significant dent.

    Beyond bar codes

    The census was born after marine scientists realized that new tools—from sensors that can track individual fish to genetic “bar code” readers that might speedily identify species—could revolutionize efforts to survey the briny deep. With help from the Alfred P. Sloan Foundation in New York City, researchers formed a global network. Early projects have included using cameras to count tiny Atlantic crustaceans and satellite tags to track wandering Pacific tuna and salmon. Organizers also enlisted historians to unearth fishing records and other documents that would help scientists reconstruct what some marine populations looked like up to 500 years ago. More than 300 scientists in 53 nations now participate, an unusually large effort for the field. “We're demonstrating a new, international, large-scale approach to studying marine ecosystems,” says deep-sea biologist J. Frederick Grassle of Rutgers University in Piscataway, New Jersey.

    Scientists can't wait for the new data. Shirayama, for instance, says it's “reasonable” to assume that there are a million unknown marine nematodes—and 100 million mystery species “is possible.” In contrast, scientists have so far described fewer than 100 nematode species in Japanese waters, he notes.

    Biologists, meanwhile, expect the current tally of 15,300 marine fish to top out at more than 20,000, with at least 2000 new species by 2010. Taxonomists in other areas are describing about 1700 new marine animals and plants each year, says Ron O'Dor, chief scientist for the Washington, D.C.-based census. Organizers hope to pick up the pace as census takers enter virgin territory. So far, CoML projects have explored marine environments such as rugged volcanic peaks along the Mid-Atlantic Ridge, featureless sediments of the abyssal plain, hydrothermal vents, and coastal waters. Although not comprehensive, such sampling projects contribute to the bigger picture, notes the Sloan Foundation's Jesse Ausubel.

    Researchers at the meeting offered up a half-dozen new projects that could enlarge the image. One would take a closer look at the Arctic Ocean, the world's smallest but least explored marine body. An international team that includes Russian, Canadian, and U.S. researchers wants to use a new generation of automated submarines to sample along a transect crossing the Canada Basin, which they believe holds some of the least disturbed bottom waters in the world. Other plans range from sampling towering seamounts and submerged canyons to surveying tiny marine bacteria and plankton. “We seem to be going after littler and littler organisms as the census goes on,” says Anne Bucklin of the University of New Hampshire in Durham.

    But every project needs a financial backer. Bucklin's group, for instance, hopes to use “ships of opportunity,” such as ferry boats, in a cost-saving effort to tally new plankton species for about $2000 apiece. “It's not hard to convince some [captains] to slow down so you can tow a net,” she says. Still, the project could cost $15 million over 7 years. The census has raised about $70 million so far from various private and public sources, including $20 million from Sloan. But organizers say it could take $1 billion to complete their work.

    Money will be of little use without an adequate labor force, however. Although there are currently more than 500 fish taxonomists, for instance, Shirayama is one of the few able to classify marine nematodes. Even if the nematode experts could work 10 times as fast as the fish experts, “thousands of years would pass by the time they named most of the species,” notes O'Dor. And although the latest genetic technologies may help scientists detect new species, “someone still has to describe them,” he says.

    For Shirayama, the census should mean reinforcements for his labors. And, eventually, it could produce guidebooks that will make information-hungry marine biologists weep with joy.


    Fast Friends, Sworn Enemies

    1. Elizabeth Pennisi

    Organisms that work together, researchers are finding, sometimes have a falling out

    Even as the bride and groom walk down the aisle, they—or at least their guests—know that marital bliss can be short-lived. Wrinkles can appear in the smoothest relationships and turn lovers into adversaries. Biologists are now realizing that the same holds true in symbiotic relationships. What starts out as a mutually beneficial arrangement can turn into a commensal one, in which just one partner benefits. In the worst case, one symbiont begins to parasitize the other. But sometimes the partners work through adversity to restore balance in their alliance.

    A new awareness of the complexity of these interactions is shaking up the ecology and evolutionary biology communities, which are used to thinking of interspecies interactions as stable. “We've been stuck classifying these things as mutualist, commensal, or parasitic, but we've come increasingly to understand how variable [these relationships] are,” explains Angela Douglas, a symbiologist at the University of York, U.K. Given the range of behaviors covered, the word “symbiosis” needs redefining, says Douglas Zook of Boston University: It should be applied to any interactions that use one or both partners' resources.

    The more biologists look, the more symbiosis they see. Forests thrive only when fungi blanket their roots. Corals rely on photosynthesizing algae. Gut symbionts help humans and other animals digest food. “Symbiosis is a major phenomenon extending across all kingdoms,” says Zook.

    The idea that different organisms live and work together dates to 1868, when German botanists Albert Bernhard Frank and Heinrich Anton de Bary independently developed the symbiosis concept. The term applied to any association between different organisms, including parasitism. Later, mutualism (both parties profit) and commensalism (one benefits but not at the expense of the other) joined parasitism as subsets of symbiosis. For decades, all were included under that single umbrella term. In the early 1900s, however, biologists decided the word symbiosis should apply only to relationships in which both partners benefit, and that's what most textbooks teach today.


    Sometimes helpful fungi can stifle reproduction in their benefactor by spreading along the outside of the stem (bottom).


    Only recently have researchers begun a wholesale investigation of how these relationships change over time. Plant pathologists have made a few key observations in studying grasses and microscopic fungi that live between their cells. Others have noticed that pathogens in one species or individual are partners in another. All this can lead to complex relationships that sometimes involve more than just two species. Moreover, the cause of a relationship switch is not clear-cut. But environmental factors can play a role, such as food shortages, new hosts, alterations in the chemical milieu, or changes in the local community.

    Dynamic relationships

    Fungi that live inside grasses can be fickle. These endophytes provide protection and stamina to the grass, deterring insects and livestock and making the grass drought-tolerant and disease-resistant. In return, the grass provides sustenance. But in August at the Fourth International Symbiosis Society Congress, held in Halifax, Nova Scotia, Christopher Schardl of the University of Kentucky, Lexington, reported that endophytes sometimes abort seed production.

    Endophytes spread by inhabiting seed-bearing stalks. Schardl and his colleagues found that the “friendliness” of the fungus can vary stalk by stalk, depending on the fungus's mode of reproduction. Those that follow the asexual route remain within the stalk, spreading to the next generation of grasses by hiding out in the developing seeds. They are the friendly sort. But the same organism may reproduce sexually on the outside of other stalks, leading to the production of fungal spores. These fungi rob the plant of nutrients it needs for its own reproduction. In some cases, they choke off seed development entirely. “The [fungi] clearly span the range between mutualistic and antagonistic,” Schardl notes.

    The stable grass-fungi relationship can be disrupted by a fly. The insect relies on the fungus for its own reproduction, laying eggs on developing fungal fruiting bodies. As the fly travels from stalk to stalk, it transfers fungal spermatia, allowing for cross-fertilization, which benefits the fungus. The fly's larvae also benefit because they do best on fungi that have been fertilized. In this way both the fly and fungus maximize reproduction, but the plant may lose out. When flies are not present, sexual reproduction may become too inefficient for the fungus, and thus, over time, an amicable relationship with the grass is restored.

    Nutrient supplies can likewise upset the balance between certain plants and their endophytes. When the fungi take more than their usual carbon allotment, they can overrun and kill a plant host. Conversely, fungi that are normally aggressive carbon-takers can't spread on nursery trees, likely because these trees get nutrients from the nursery and don't need the fungi. All in all for these two species, there's no alliance, just détente.

    Colonizing a different host, meanwhile, may enable a microbe with a history of hostility to develop a friendly collaboration. Colletotrichum magna is a plant pathogen that attacks cucumbers, watermelons, and squash. Yet when Regina Redman, a geneticist at the U.S. Geological Survey in Seattle, Washington, infected tomatoes with this fungus, the plants thrived, producing bigger fruit and resisting diseases. The fungus's lifestyle “depends on the interaction with the plant's genotype,” she says.

    Complex alliances.

    The fate of bark beetle larvae, shown in their burrows with nourishing fungi, depends on a mite hitchhiking on adult beetles.


    Fungal genes, too, can make a difference in how species interact. Redman created mutants by knocking out genes at random in C. magna, then testing each mutant's interactions with its native plant. Some 200 mutations rendered the pathogens harmless or even beneficial. In recent work, Redman knocked out one of the genes responsible for harmful effects in five more Colletotrichum species. In each case, the genetic alteration tamed the pathogen.

    Geographic location affects alliances as well. It shapes the relationships among the members of a threesome: a southern pine beetle, a mite that lives on the bark beetle's body, and a fungus carried by the mite. The mite can be helpful or harmful to the beetle depending on the type of fungus it carries, says Matthew Ayres of Dartmouth College in Hanover, New Hampshire, who described the complex relationship at the August meeting.

    The bark beetle dumps fungi into the host tree, where they help kill it, and lays eggs in the excavated burrows. The beetle larvae feed off two types of fungi. In return, the beetle provides a safe haven for spore growth: sacs behind the beetle's head that cull all fungi except the beneficial ones.

    But mites that hop onto the beetle sometimes bring with them a less generous fungal partner. Called the blue stain fungus, the interloper disrupts the beetle's reproduction by shoving out the fungi-nourishing beetle larvae. As a result, “virtually all the larvae die,” says Ayres. By aiding the blue stain fungus, the mites shift their relationship with the bark beetle from a positive or neutral one—in which it carries beneficial fungi or none at all—to an antagonistic one.

    Ayres and Kier Klepzig, a research entomologist at the U.S. Forest Service in Pineville, Louisiana, are now studying this three-way interaction at a different latitude. In Mexico, the beetle and the mites get along well. The researchers find no blue stain fungus, and instead the mites ferry beneficial fungi.

    Thwarting cheaters

    The mites that interact with pine beetles have no control over the circumstances that make or break a relationship. But in other cases, the partners themselves take steps to maintain harmony, keeping undercurrents of tension in check. Take legumes and rhizobia, the nitrogen-fixing bacteria that help feed them. Soybeans supply the bacteria with nutrients and a safe place to live. In return, the rhizobia expend an exorbitant amount of energy fixing nitrogen for the plant's consumption.

    Theorists have calculated that rhizobia would do much better as freeloaders, curtailing nitrogen fixation and devoting more energy to their own growth. Researchers wonder why the microbes don't evolve ways to sell the soybean short. “That question has been around for a long time,” says Frans de Bruijn, a microbiologist at INRA-CNRS in Toulouse, France.

    Telling experiment.

    Algae colonizing immature algae-free jellyfish (top) added color to their hosts (bottom) but sometimes caused harm as well.


    The reason is that the plant keeps close tabs on microbial productivity, and “cooperation is maintained through coercive measures,” E. Toby Kiers of the University of California, Davis, reported at the symbiosis meeting. Working with her adviser, crop ecologist R. Ford Denison, Kiers and her colleagues showed that soybeans cut off nutrient supplies to nitrogen-fixing bacteria whenever they begin to slack off. The plants can even selectively sanction the worst offenders.

    The researchers turned honest bacteria into cheaters. In the lab, Kiers deprived rhizobia of nitrogen, which they normally get from the air, and watched what happened as nitrogen fixation declined. It took just a few days to see a 50% reduction in the rhizobia's reproduction; the loss of key nutrients provided by the plants, including oxygen, caused these declines. Moreover, the plant seems able to turn off the oxygen spigot nodule by nodule, mounting surgical strikes against what it perceives as cheaters.

    Occasionally cheaters do get the upper hand. Joel Sachs, a graduate student at the University of Texas, Austin, has found this to be the case with the upside-down jellyfish Cassiopeia xamachana. It hosts an alga, Symbiodinium microadriaticum, whose photosynthetic activity supplies the jellyfish with carbohydrates.

    Working with Tom Wilcox of Long Key Tropical Research Center in Florida, Sachs raised algae-free jellyfish, then exposed them to algae from wild jellyfish. He tracked the well-being of both the host and its guests as they spread from parent to offspring (known as vertical transmission) or from jellyfish to jellyfish (horizontal transmission).

    Transmission mode shaped the partnerships over time. Declines in jellyfish growth and reproduction “revealed the evolution of exploitation in the horizontal treatment,” Sachs explained. Sometimes the cheater algae that spread from jellyfish to jellyfish got carried away. The algae robbed so much that they killed their host, compromising their own reproduction.

    In contrast, vertical transmission “selected for symbiont cooperation,” says Sachs. In each successive generation, jellyfish and alga survived better and were more prolific. As long as each took just enough from its partner, the relationship remained balanced and productive.

    Work in coral has suggested that these creatures eject algae that don't suit their purposes. Some researchers think that when the weather gets too warm, corals bleach because they kick out their algal symbionts and, possibly, take on others that can tolerate the heat better. If so, Sachs questions the long-term effectiveness of that strategy. Replacing the original algae with others from the surrounding water, a case of horizontal transfer, “could lead the coral to pick up cheaters.”

    Sachs is now exploring what jellyfish and possibly other hosts can do to curtail freeloading. But at least, he notes, work by his team and others is helping reshape current thinking about symbiosis. “A lot of biologists think the relationship [between two species] is static,” he points out. “But it's much more dynamic.” But then, what relationship isn't?

    In Sickness or in Health?

    1. Laura Helmuth

    Worried that their current diagnostic scheme isn't carving nature at its joints, psychiatrists are contemplating a massive overhaul. It may redraw the boundaries between certain mental illnesses, integrate genetics and neuroscience research, and codify the notion that sometimes there's no bright line between mental health and mental illness

    Before 1980, psychiatric diagnosis was locked in a tower of Babel. An old-school therapist, listening to a chaise-lounging patient discuss last night's dream, might label a case “neurotic.” This diagnosis implied a certain psychosexual developmental history and a course of treatment, such as psychoanalysis to uncover the repressed source of the patient's hang-ups. The problem was, another psychiatrist, one schooled in Kleinian theory, say, rather than Freudian, might come up with an entirely different diagnosis. And no one agreed on what “neurosis” meant in the first place. More-disabling disorders likewise lacked coherent diagnoses. A patient categorized as schizophrenic in New York would take on a diagnosis of affective disorder upon flying to London.

    Order emerged out of this chaos 2 decades ago when a group assembled by the American Psychiatric Association (APA) put together the third edition of the Diagnostic and Statistical Manual of Mental Disorders—fondly known as DSM-III. “It was a breakthrough,” says psychiatrist Michael First of Columbia University in New York City, who edited the manual's most recent update, the DSM-IV-TR. It systematically identified different mental illnesses according to lists of symptoms—with no mention of the underlying cause of disease. Does the patient suffer from sleeplessness? Check. Suicidal ideation? Check. Rack up enough symptoms and receive a diagnosis of major depression.

    The DSM-III was “instantly recognized for being able to produce reliable diagnoses that would be the same anywhere in the world,” says Darrel Regier, director of research for APA. It standardized clinical trials, made cross-cultural studies of mental illness possible, and gave epidemiologists a reliable tool. The modern DSM editions have sold about 2.5 million copies and been translated into 21 languages.

    Now psychiatrists, psychologists, and other researchers are talking about overhauling the DSM once again. Some proposals are relatively modest, such as adding a “personal health index” so therapists can quantify the strengths a patient can call upon. Some would redraw diagnostic boundaries. And still others point out that mental illness doesn't appear to be an all-or-none state: It might reflect reality better—and help outcomes measurements—if people could be diagnosed as falling along a continuum of disability. Finally, some researchers propose getting back to causality. With input from genetics, neuroscience, and behavioral science, it should be possible—not now, but soon—to define mental illnesses according to their pathophysiology and etiology.

    “We have a window of opportunity to digest key questions,” says psychiatrist David Kupfer of the University of Pittsburgh, Pennsylvania. The DSM-V won't be issued until 2010 at the earliest, so he and other scientists are evaluating research that could bear upon the next edition of the manual and recommending lines of research for the next few years. They've published a collection of white papers on research that should guide the next revision and scheduled an exploratory conference series that begins in February. “By the end of the decade, I think we could be at the point where we could make radical changes,” says Dennis Charney of the National Institute of Mental Health (NIMH) in Bethesda, Maryland.

    Others question that timetable. Wholesale changes in the DSM shouldn't be undertaken lightly, cautions Steven Hyman, former director of NIMH and current Harvard University provost. Tweaking diagnostic criteria changes who's eligible to participate in clinical trials, for instance, and can cause the apparent prevalence of a mental illness to spike or plummet. “To be honest, the science doesn't warrant changes in [the diagnosis of] many disorders,” he says. The upcoming conferences and subsequent DSM-V drafting sessions promise to be lively.

    Field guide.

    The DSM may be in for a major revision.



    When the framers of the modern DSM came up with lists of symptoms that define various mental illnesses, “the hope was that the categories described represented underlying diseases,” says First. “That turned out not to be the case.” After 20 years of unparalleled advances in psychiatric research made possible in part by the DSM, researchers and clinicians have decided that the DSM doesn't carve nature at its joints.


    Depression and anxiety, for instance, are separate diagnoses, each with its own checklist of symptoms. But most studies show a tremendous overlap between the two: People who meet the DSM criteria for one disorder almost always have many symptoms of the other. Obsessive-compulsive disorder (OCD), meanwhile, is currently classed with the anxiety disorders, but genetic and neuropathological studies suggest that it has little in common with them. And “antidepressant” drugs ease symptoms of depression, anxiety, OCD, and other conditions, suggesting that the diseases' underlying pathophysiologies don't respect DSM boundaries.

    There's a lot at stake for researchers in how diseases are labeled: Faulty diagnosis could make it harder to uncover the causes of and treatments for diseases. For instance, some people diagnosed with bulimia, an eating disorder characterized by bingeing and purging, are extremely self-critical and others have trouble controlling their impulses. But lumping the two types together in one treatment trial, says psychologist Drew Westen of Emory University in Atlanta, could obscure any benefit. And the DSM-IV-TR definition of depression likely encompasses so many biological underpinnings that hunts for a specific genetic cause bog down, Charney points out.

    First says that the public airing of psychiatry's reexamination of the DSM sends a message to the research community: “Don't get caught up with the existing categories.” They don't correspond to particular genes or brain pathologies, and “we know that the categories are not pure diseases,” he says.

    To improve the chances of tracking problems to a genetic source, Charney and others recommend focusing on traits that might cut across various disease categories, such as disorders in the reward system, or vulnerability to psychosis. Another good approach might be to focus on neurotransmitter systems that are known to respond to psychoactive drugs. Tweaking these channels of communication with drugs can calm mental illness, so it's likely that variations in these systems within the population can put someone at relatively greater or lesser risk.

    Anchoring genetic studies in patients' social environment could make it easier to sort out causes and effects—and break megadiagnoses such as depression into more tractable subtypes. Like diabetes or heart disease, depression emerges out of a combination of genes and environment. A study earlier this year showed that people with a short allele for a receptor for the neurotransmitter serotonin were vulnerable to depression if they experienced stress (Science, 18 July, p. 291). The study is “a model of where we want to go,” says Charney, because it allows researchers to define a certain subset of depression in a way that takes etiology into account.

    Different degrees

    Today's DSM diagnoses don't leave a lot of room for ambiguity. Patients with only four out of a checklist of nine symptoms are not depressed. Those with five symptoms are. This distinction “isn't very clinically useful,” says Westen.

    People who are suffering but don't meet full criteria for a particular diagnosis are categorized as “not otherwise specified” (NOS). As Westen points out, for any given disorder, 20% to 50% of patients fall short of the DSM criteria and get thrown into the NOS category. They can still receive therapy, and their psychiatrists bill insurance companies. (The insurance industry reimburses mental health care providers according to DSM categories, including NOS.) But this slop in the system is a problem “if you want evidence-based medicine,” Westen points out: People defined as NOS aren't eligible to participate in clinical trials.

    Not only do DSM categories seem too exclusive in some cases, they also put distance between syndromes that seem to be related. Some diseases that are defined as separate entities might lie along a continuum of disability, says NIMH's Wayne Fenton. Genetic studies suggest that schizophrenia, schizotypal personality disorder, schizoaffective disorder, and two other DSM diagnoses “may share substantial heritability,” he says. In this and other cases, he says, the data suggest that some disorders might be better described by the magnitude of dysfunction.


    Diagnoses are often imprecise; so are the individual symptoms on which they're based. Some so-called personality disorders, for example, are defined by the presence of symptoms such as “narcissism” or “grandiosity.” (Personality disorders describe people with various long-standing difficulties getting along with other people, regulating their emotions, or other symptoms that can be quite debilitating; they are grouped under a different DSM “axis” than more episodic conditions such as depression or schizophrenia.) But as Westen points out, anyone who's been on a bad blind date knows that narcissism and grandiosity are fairly common; it's the degree of imbalance that matters.

    Getting away from the DSM's all-or-none approach would have several advantages. “There's an increasing consensus that [using a continuum] would fit the nature of reality,” says Westen. It would allow clinicians to track a patient's progress better, perhaps to quantify a change from severe to moderate depression—logging a significant improvement in a patient's quality of life that wouldn't necessarily register on a DSM diagnosis. And these so-called dimensional diagnoses might reveal links to genes or neuropathology that are obscured by the current system.

    Change would come with a cost, however. “It would upend treatment if we abandon the current system,” says First. Evaluating every patient for the degree of depression, psychosis, or narcissism “would be impossible clinically,” he says. But he and others say that such an approach might work well for researchers, who could focus on a dimension of illness that cuts across several present DSM categories. Helping the research community establish such dimensions is “one short-term solution we are looking at,” First says.

    A dimensional approach might also bring some order to the DSM's confusing classification of personality disorders. The average patient with a personality disorder, Westen says, qualifies for a diagnosis in three to six of the 10 categories. The categories overlap so much with one another and with diagnoses such as depression that almost everyone thinking about how to change the DSM is ready to tear up the whole personality disorders axis and start fresh. “If any part of the DSM can morph to dimensional, it's probably that one,” says First.

    The DSM might also benefit from some self-diagnosis along a continuum, Hyman proposes. It could indicate the degree of support experts find for each diagnostic label, running from strong to iffy. Some DSM diagnoses, such as those for a panic attack or manic episode, “are pretty unassailable,” particularly if they have substantial backing from genetics or brain-imaging studies. Others, such as personality disorders, are a diagnostic mess—and this should be acknowledged, Hyman says.

    Access issues

    There are many complex diseases, but for the most part, “they're diseases of accessible organs,” Hyman says. Clinicians can inflate a blood pressure cuff or test a blood sample to diagnose heart disease or diabetes, “whereas for mental disorders, what we have is the DSM.” Fenton adds that “it's a fundamental problem in psychiatry that we have no biological markers of disease.”

    Regier says, “We hope that in the next version [of the DSM] we might have some greater understanding of the pathophysiology, biological markers, and other ways of reliably measuring” and defining psychiatric disorders. Researchers hope to use brain imaging for this purpose or, more feasibly in the near term, to incorporate electroencephalogram measures into diagnoses of sleep disorders.

    But the field needs more; it needs a big push from an effort like the Framingham Heart Study, Hyman says. This groundbreaking 10,000-subject longitudinal study helped establish risk thresholds for blood pressure. Guidelines to prevent early heart attacks are constantly being revised to reflect new results from the Framingham study and others like it. In mental illness, however, “we have no long-term follow-up studies,” Hyman says—and therefore no data to establish whether the mental health equivalent of a blood pressure reading of 140 over 90 mmHg is something to worry about.

    The data gap is particularly acute for childhood disorders, Hyman says. Many children go through low periods resembling depression, but it's very difficult to know what such signs of trouble portend. “For which kid do we just reassure the family [that the child will snap out of it], and for which is this likely the beginning of major depression?” Hyman asks. One conclusion might lead to medication; another, not. The DSM symptom checklist alone doesn't give an answer, but Hyman says that identifying genetic and environmental risk factors through longitudinal studies might do so: “It's as if no one had ever done studies of blood pressure.”

    There's another obscure zone in mental illness: Offsetting a person's risk factors are resilience factors—strengths that may be hard to observe but that people can marshal to fend off mental illness. “We are very interested in what protects people,” says Charney, pointing out that some people experience massive stress and trauma yet emerge relatively unscathed. Studying resilience might identify areas of strength a patient can work with, Westen says, “such as the ability to empathize, work to one's abilities, or enjoy recreational activities.” Adding a mental strengths checklist would make the DSM more useful for clinicians and make research on resilience more easily reproducible.

    Although it's not a Framingham study, there is a rich source of epidemiological data on mental illnesses that will be released soon and that should help inform the next DSM. NIMH's Fenton points out that the National Comorbidity Survey Replication, a household-to-household survey of 15,000 people, will be issuing results in the coming year. Among other things, it will provide “unprecedented epidemiological data on base rates of symptoms … and the relationships among symptom clusters and disability and service use,” Fenton says. It's tied to a worldwide survey that will allow cross-cultural comparisons, all possibly feeding into a new DSM.

    APA isn't making any promises or predictions yet about what the next DSM will look like. “In the next 3 or 4 years, we want to encourage research and look at the current state,” says First, and “provide alternatives” for the eventual DSM revision committees to work with. He'll launch an APA Web site early next year that will solicit complaints and suggestions from DSM users, whatever their research background or affiliation. “It'll be a forum to gripe, to say, ‘This needs to be changed, and here are data to support that,’” First says.

    Between the Web site, ongoing critiques from clinicians and researchers, and the 10 conferences aimed at exploring ways to change the DSM—inviting scrutiny from the National Institutes of Health and the World Health Organization as well as APA—the next few years will be stressful for psychiatry and its guiding manual. But if the field is resilient enough, it may someday reminisce on the days of personality disorders and symptom checklists with as much nostalgia as it now holds for couch-based medicine.

  15. Future Brightening for Depression Treatments

    1. Constance Holden

    Without fully understanding what causes depression or how the disease takes hold in the brain, researchers are racking up early successes with a wide variety of new treatment strategies

    Treatment for depression badly needs a lift. Antidepressants usually help only about 70% of the people who try them, a modest success rate that hasn't changed for decades. The reason: “Virtually all are variations on a theme established 40 years ago,” says Dennis Charney, head of the Mood and Anxiety Disorders Research Program at the National Institute of Mental Health (NIMH) in Bethesda, Maryland. The drugs increase the amount of serotonin or norepinephrine, or both, in the brain, usually by preventing the chemicals from being absorbed back into the neurons that released them. The persistence of this theme “is a bit disappointing after 50 years of intense neuroscience research,” says Florian Holsboer, director of the Max Planck Institute for Psychiatry in Munich, Germany.

    But a spate of research findings over the past few years are bringing a new sense of optimism to the field. Although the biology underlying depression is still cryptic (see previous story), the new findings have suggested a variety of strategies to perk up the depressed brain. And the hunt for new drug targets is unveiling depression's commonalities with a host of other diseases and conditions, including Parkinson's, Alzheimer's, Cushing syndrome, pain, and epilepsy. Treatment of the world's most common mental health problem—and one of the major causes of debilitation worldwide—may soon be entering a new era.

    Stress's toll.

    Creation of new cells (pink) in a rat hippocampus is suppressed by stress.


    A decade ago, the “monoamine” hypothesis still held sway. “We thought depression was caused by too little serotonin,” one of several neurotransmitters in the monoamine family, says Eric Nestler of the University of Texas Southwestern Medical Center in Dallas. And the notion was bolstered by the fact that drugs like Prozac, which boosts serotonin levels, were bringing relief to unprecedented numbers of patients. “Now we know that was naïve,” says Nestler. Indeed, although abnormally low levels of a serotonin metabolite have been found in the spinal cords of violent-suicide victims, scientists “have not been able to determine that depressed people have a deficiency” in serotonin, says Charles DeBattista, director of the Depression Research Clinic at Stanford University.


    Overtaking the monoamine hypothesis in recent years has been the stress hypothesis, which posits that depression is caused when the brain's stress machinery goes into overdrive. The most prominent player in this theory is the hypothalamic-pituitary-adrenal (HPA) axis. The hypothalamus produces a substance, corticotropin-releasing hormone (CRH), which stimulates the pituitary gland, which in turn triggers the release of glucocorticoids, stress hormones such as cortisol, from the adrenal glands atop the kidneys.

    Abundant animal research shows that stress hormones are bad for neurons. They decrease the amounts of key ingredients in the chemical broth that keeps neurons healthy and sprouting, such as brain-derived neurotrophic factor (BDNF). And long-term stress can reshape the brain. For instance, it shrinks the hippocampus, a subcortical structure involved in memory that is also a key site of antidepressants' actions.

    Bolstering the stress hypothesis is research by Charles Nemeroff of Emory University School of Medicine in Atlanta, Georgia, showing that there are critical periods in early childhood during which abuse or other emotional stress can permanently disrupt the HPA axis. Such trauma often leads to “hypersecretion” of CRH, a classic symptom of HPA-axis disruption, and is associated with depression in adulthood.

    Animal models, too, show that early stress, such as that induced by maternal deprivation, causes depression-like behavior, such as giving up on swimming in a forced swim test. Such animals also hypersecrete CRH. “Many of the established neurobiological findings in depression may indeed be due to early life stress,” when the young nervous system is still tender and impressionable, according to Nemeroff. His research has revealed that among adults who have been sunk in depression for 2 years or longer, 45% experienced abuse, neglect, or parental loss as children. “It just blew our socks off,” Nemeroff says.

    Depressed animals also have stunted neurogenesis, or birth of new neurons, in the hippocampus (Science, 3 January, p. 32). In the past few years, findings on neural growth and neurogenesis have led some researchers to hope that they have arrived at the Holy Grail of depression. They suspect that at the heart of the disease—and the reason stress is depressing—is suppression of the growth of new neurons in the hippocampus. Scientists led by Ronald Duman of Yale University found that neurogenesis in the adult rat hippocampus escalates in response to all antidepressant treatments, including electroconvulsive therapy (ECT)—one mechanism presumably being through stimulation of BDNF. And this year a team led by René Hen of Columbia University reported that if neurogenesis is blocked in rats, antidepressants don't work (Science, 8 August, pp. 757 and 805).

    Duman says the so-called neurotrophic theory ties in nicely both with what is known about the effects of stress on the brain and with findings that the hippocampus and other parts of the brain, especially the prefrontal cortex, are shrunken in chronically depressed patients. And neurogenesis, which takes weeks to create fully functioning neurons, “provides a ready explanation for the perplexing fact that antidepressant treatments typically require weeks to become effective” even though the drugs raise monoamine levels almost immediately, notes Barry Jacobs of Princeton University, one of the originators of the neurogenesis theory.

    Just how powerful this theory is depends on whom you talk to. Duman thinks “that's really the hottest thing going right now.” Jacobs adds that “the stress hypothesis has been waiting for neurogenesis … to explain how it works.” He notes that the neurotrophic theory encompasses not just the birth of new cells but also the growth of neural projections such as axons and dendrites and enhanced malleability of neural connections called synapses.

    But others are more cautious. “As soon as you get out of the hippocampus, everybody starts to fight,” says brain imager Helen Mayberg of the University of Toronto, Canada, because the most important brain areas for depression may be elsewhere. Nestler agrees that “many other areas of the brain figure at least as prominently” in depression. And even within the hippocampus things aren't so clear. “You don't see [hippocampal shrinkage] early in depression; … if it were part of underlying etiology, you'd be seeing it early,” says Mayberg. Holsboer is also skeptical. He says that so far scientists have established only correlation—and no cause-and-effect relationship—between neurogenesis in “a remote area of the brain” and antidepressants.

    While scientists continue to argue about the architecture of depression, however, enough has been discovered in recent years to give them a number of new levers to manipulate. Some of these strategies tie into the stress hypothesis or lend support for the role of the hippocampus, but others are agnostic with respect to the causes or key pathophysiology of depression.

    Ease the pain

    The betting, according to Charney, is that the first genuinely new antidepressant will be some compound that blocks the effects of a neuropeptide known as substance P. Discovered in the 1930s, substance P has been implicated in a number of diseases that involve chronic inflammation as well as in pain and anxiety. It is prevalent in brain regions such as the prefrontal cortex that are associated with emotional regulation, and it is released in response to stress. Substance P-containing fibers also innervate the hippocampus. Substance P is most famous for regulating pain signals to the spinal column, but it would fit into a variety of depression scenarios. There is some evidence, for example, that substance P antagonists stimulate neurogenesis in the hippocampus.

    As is often the case in antidepressant history, Merck, the company exploring substance P, was originally looking into its antagonists for a different purpose—in this case as possible painkillers. That didn't pan out, but Merck researchers decided to see if it had an effect on stress and depression. A clinical trial showed that the company's experimental drug, a substance P antagonist called MK-869, performed comparably to the selective serotonin reuptake inhibitor paroxetine (Science, 11 September 1998, p. 1640). High hopes that the drug would be the next blockbuster antidepressant were dashed by a “failed” trial—one in which the response to a placebo was so high that it swamped effects from both the experimental drug and the established drug.

    Merck is back in the game now, however, with phase III trials on the compound currently under way. Other companies are pursuing substance P antagonists as well. “This is probably the most around-the-corner novel antidepressant,” says Husseini Manji, chief of the Laboratory of Molecular Pathophysiology at NIMH.


    Compounds based on substance P antagonists may be further along, but many researchers are more excited by the potential for medication directed at the hyperactive HPA axis. Some are focusing on antagonists to CRH, the stress-induced hormone produced by the hypothalamus that eventually leads to production of cortisol and other glucocorticoids; others are seeking to block cortisol itself.

    CRH antagonists are likely to produce a range of effects. The hormone appears to have many functions in addition to stimulating the pituitary: CRH receptors are found throughout the cortex and in limbic structures such as the amygdala, which processes fear and other emotions. Indeed, animal studies have shown that CRH, when administered directly into the brain, produces not only behaviors analogous to anxiety and depression but also depression's “vegetative” symptoms: disrupted eating, sleeping, and sexual habits. Holsboer calls CRH “the driving force in precipitation of depressive symptoms.” He says that anxiety and depression are very closely linked and that CRH is a common factor: “Most people who have depression in later life had a hyperanxious personality when younger.”

    Among the compounds close to clinical trials is a drug developed at NIMH by stress researcher Philip Gold that inhibits CRH in vitro. The agency is currently doing toxicity studies with the drug, called antalarmin; “if these turn out well, the NIMH will initiate human clinical trials,” says Manji. Several companies are also trying to develop CRH antagonists. Neurocrine Biosciences in San Diego, California, began a clinical trial but abandoned it when subjects started manifesting elevated liver enzymes. Now it's starting toxicity testing with a number of other compounds, says Dmitri Grigoriadis, director of CRH development.

    But preliminary as the field may be, it's an “area of great excitement,” says Charney. He sees CRH antagonists as having potential to treat a broad spectrum of depression and anxiety disorders, including posttraumatic stress disorder. “There will be a lot of disappointed clinical neuropharmacologists like me if these turn out to not be effective,” he says.

    Progress report.

    Compared to nonresponders (bottom), patients responding to Prozac (top) show increases (red) in activity in the cortex and decreases (green) in limbic areas.


    Related drugs operate at the other end of the HPA axis by blocking receptors for the glucocorticoids (primarily cortisol) produced by the adrenal glands. A group at Stanford University headed by Alan Schatzberg has identified what it believes is a subgroup of depressed patients who would respond to treatment with glucocorticoid receptor antagonists. These are people diagnosed with psychotic depression, in which each episode is characterized by delusions such as that one's insides are rotting. “Research over the last 17 years has revealed that cortisol is extremely elevated in psychotically depressed patients,” says DeBattista.

    The drug they are testing is mifepristone, originally developed as a steroid to block the excess cortisol secreted by patients with Cushing syndrome. (It also blocks progesterone and is better known as RU-486, the abortifacient drug.) Cushing's, says DeBattista, has a psychiatric syndrome that closely parallels psychotic depression. In a preliminary trial with five psychotically depressed patients, four got better on the drug. DeBattista says the results of the first big double-blind trial should be published shortly. Because the drug blocks the receptor, rather than the production of cortisol itself, it blunts cortisol's effects without raising the risks that come with cortisol deficiency.

    Too much excitement

    There's been growing interest in drugs that modulate glutamate, the brain's main excitatory transmitter. Too much glutamate is toxic to neurons, but it's so pervasive that the danger of inducing dangerous side effects once loomed over the prospect of glutamate-based drugs. Now that many types of glutamate receptors have been discovered, there is hope that ways will be found to selectively manipulate this neurotransmitter (Science, 20 June, p. 1866).

    Of particular interest is the NMDA glutamate receptor. Substances that block it have been found to be powerful antidepressants; the only problem is that some also cause psychotic reactions. Because of that, says Ian Paul of the University of Mississippi Medical Center in Jackson, “drug companies are very, very tentative about trying to explore other types of NMDA receptor antagonists.”

    Nonetheless, NIMH is pushing ahead. Charney says the agency is doing a study now with memantine, an NMDA receptor antagonist used to treat dementia. Another drug of interest is riluzole, which reduces glutamate release and is used to treat amyotrophic lateral sclerosis (Lou Gehrig's disease). A preliminary trial has shown some effect with severely depressed patients, says Charney.

    Another drug candidate augments the activity of the glutamate AMPA receptor. “It's not clear why potentiating AMPA should be an antidepressant,” says Paul. Apparently it raises levels of BDNF, and animal studies have shown that the drug works well in a rat model of depression. AMPA potentiators have been used in clinical trials for Alzheimer's disease; NIMH is currently planning to try the drug out on depression.

    Growth industry

    Much more preliminary are ideas for drugs that directly stimulate nerve growth or neurogenesis. “It's a matter of finding the right molecule,” says Charney: one that will, among other things, pass through the blood-brain barrier. “We don't have one right now.” The main candidates at present are phosphodiesterase (PDE) inhibitors: drugs to inhibit the breakdown of cyclic AMP, part of a cascade that improves cell survival and plasticity. The cascade is known to be upregulated by antidepressants.

    A nonspecific PDE inhibitor called rolipram was tested in several large-scale, placebo-controlled clinical studies in the 1980s. But although the drug was effective as an antidepressant, it produced a bad side effect: nausea. Now, Charney says, several companies are tinkering with more selective PDE-4 inhibitors. They “look very good in depression, and perhaps learning and memory,” says Charney.

    The pleasure principle

    Although virtually all drugs directed at monoamines affect serotonin or norepinephrine or both, some researchers are intrigued by another neurotransmitter from this class: dopamine. Most investigators are dubious about its potential, noting that tinkering with the brain's reward system—which communicates by means of dopamine—could be a recipe for a drug of abuse. On the other hand, says Duman, that's what makes it interesting. “One of the key hallmarks of depression is anhedonia,” or an inability to experience pleasure. A drug called Merital (nomifensine) that blocks dopamine reuptake was briefly marketed in the 1980s. It was effective, but it was taken off the market because it caused a type of anemia in some patients.

    NIMH is now exploring another possibility: pramipexole, a dopamine receptor agonist used to treat Parkinson's disease. The drug showed antidepressant activity in rats, and it upregulates chemicals that have neurotrophic effects. “We're encouraged that this will work in some people,” says Charney. The NIMH intramural division is running a preliminary clinical trial with the drug.

    Targets of opportunity.

    Possible interventions for novel antidepressants include increasing nerve growth factors, modifying glutamate release, or attenuating stress hormones, among others.


    Rewiring diagrams

    Chemicals are not the only way to treat depression. Although many people still associate ECT with One Flew Over the Cuckoo's Nest, it is “the most effective antidepressant modality we have,” says Manji. The effects of ECT are so broad that it's not clear what the active ingredients are. In addition to inducing seizures, it jacks up serotonin levels, blocks the effects of stress hormones, and stimulates neurogenesis in the hippocampus, among other things.

    Now a family of other brain stimulation techniques—with more precise targeting and fewer side effects—may be in the offing. One is transcranial magnetic stimulation (TMS), which uses a magnetic field to induce current in the brain (Science, 18 May 2001, p. 1284). Another is deep brain stimulation, which involves planting electrodes in the brain and has been useful for some Parkinson's patients. A third is vagus nerve stimulation (VNS), originally developed to treat epilepsy.

    Harold Sackeim, head of biological psychiatry at Columbia University, thinks the field is blooming. Although in the past ECT has been thought to have general effects all over the brain, Sackeim says the evidence is now “irrefutable” that it has to be directed at specific circuits to be effective. When seizures are induced, they have to take place in the prefrontal lobe. Seizures cause release of inhibitory transmitters, primarily GABA, a phenomenon that fits with the fact that “many of our new anticonvulsants”—such as lamotrigine, which boosts GABA—“are increasingly used in depression,” says Sackeim. ECT-related therapies can also be effective—contrary to previous beliefs—without inducing seizures, provided the stimulation is in the right place, says Sackeim. Currently a company called Neuronetics in Malvern, Pennsylvania, is conducting a national trial in hopes of getting government approval of TMS for depression. It will compare TMS with “sham” TMS for 20 minutes a day, 5 days a week, for 6 weeks.

    Trials are also in progress with VNS, which has already been approved by the U.S. Food and Drug Administration (FDA) for treatment-resistant epilepsy. A stimulator is implanted in the chest and is connected to electrodes around the left vagus nerve in the neck. The stimulator gives a 30-second jolt every 5 minutes. Although the vagus nerve is traditionally associated with control over the heart and gut, Sackeim points out that it's also intimately hooked up with brain areas that—you guessed it—enhance neurotransmission of serotonin and norepinephrine. Results from clinical trials have not been overwhelming—only 30% of patients responded in the second trial—but Sackeim says improvements seem to grow with time. He's optimistic that “VNS may become the first nonpharmacological therapy for depression to be approved by FDA since ECT.”

    The first and still an effective treatment for depression, however, is decidedly low-tech. Repeated studies have shown that psychotherapy works as well as medication for mild depression. And in more serious cases, antidepressant drugs work best when they're coupled with psychotherapy. A study about to be published by Emory's Nemeroff shows that even people who were abused as children, and thus presumably have skewed HPA axes, recover best if they are treated with psychotherapy as well as drugs.

    Aside from seeing therapists, depressed people will still be popping monoamine-based antidepressants for the immediate future. But Charney promises more treatment options soon. “There has been far more success in treating this illness than understanding it,” he says. But thanks to advances in genetics, neurochemistry, and brain imaging, he says, “I view our field as at a threshold for major new discoveries”—discoveries that will set both diagnosis and treatment of mental illness on a firm scientific base.
