News this Week

Science  07 Nov 1997:
Vol. 278, Issue 5340, pp. 1004


    Apocalypse Not

    1. Gary Taubes


    The story of global climate change and disease is what a newspaper reporter would call great copy. It has dire predictions of pestilence and death, with the imprimatur of topnotch science. The plague is coming, and it's coming home to the developed world. The idea, as proposed by a handful of public health researchers, is that global warming and the attendant climatic extremes of floods and droughts, storms and heat waves, may play havoc with public health. As the heat rises, hundreds of thousands may die yearly from heat-related ailments, while disease vectors and their pathogens may be redistributed far and wide with apocalyptic results. “To the layman,” wrote Harvard physician Paul Epstein in The Washington Post, “it means a global spread of infections.”

    Epstein and others have predicted that deaths from malaria may increase by a million a year; that malaria, dengue, and yellow fever may move north into the United States and Europe; that cholera epidemics may intensify; and that emerging diseases such as hantavirus and Ebola may run rampant. Already, they say, bursts of warming from short-lived climate shifts like the Pacific warming called El Niño may have triggered disease outbreaks that offer an ominous preview of what is to come. If it is not the beginning of the end, it has certainly read like it: “Global Fever” was just one of the headlines, from the 8 July 1996 Time. It continued: “Climate change threatens more than megastorms, floods and droughts. The real peril may be disease.” Even Science was suitably concerned: “If the Mercury Soars, So May Health Hazards” (17 February 1995, p. 957).

    The salient word in all these stories, however, was “may.” These predictions are getting renewed attention with the approach of the December climate change summit in Kyoto, Japan, and leading infectious-disease experts have taken to criticizing them sharply. Duane Gubler, for instance, director of the division of vector-borne infectious diseases at the Centers for Disease Control and Prevention (CDC), calls the prognostications “gloom and doom” speculations based on “soft data.” Johns Hopkins epidemiologist D. A. Henderson, who led the international smallpox eradication program from 1966 to 1977, says they are based on “a lot of simplistic thinking, which seems to ignore the fact that as climate changes, man changes as well.” Henderson, Gubler, and others argue that breakdowns in public health rather than climate shifts are to blame for the recent disease outbreaks—and that public health measures will be far more important than climate in future disease patterns.

    Many of the researchers behind the dire predictions concede that the scenarios are speculative. But they say their projections play a useful role in consciousness raising. “What it does is serve notice on us; we need to be aware we're tinkering with fundamentals, and there could be a range of consequences for human health,” says Anthony McMichael of the London School of Hygiene and Tropical Medicine.

    The increasingly heated tone of the debate has prompted the CDC and the National Research Council (NRC) to begin putting together an expert panel that will try to set the discussion on a footing of solid science. Says Henderson, who has been chosen as a co-chair, “What's worrying is the question of credibility when sweeping predictions like these are being made.” The panel will also set an agenda for further research, he says: “Fundamentally, we would like to have a better understanding of the transmission of disease under different circumstances of temperature and climate, not necessarily because of global warming, but because it can be of value to us whenever we have climate fluctuations.”

    Heat or light?

    The current controversy has been building for at least 6 years, since climatologists began agreeing that the planet's temperature is rising (although they still do not agree on the cause of the warming to date, or on how much warmer the planet will get). In 1991, virologist Robert Shope, then director of the Yale Arbovirus Research Unit, pointed out in Environmental Health Perspectives that with rising heat, the Aedes aegypti mosquito, which transmits dengue fever and yellow fever, might move northward, while the life cycles of the mosquito and the virus might accelerate, which “could lead to epidemics in North America.” Cholera could also become epidemic in North America, Shope said, as changes in marine ecology favor the growth and transmission of the pathogen, which is “harbored persistently in the estuaries of the U.S. Gulf Coast.”

    In 1992, microbiologist Rita Colwell of the University of Maryland, College Park, with Epstein and Harvard biologist Timothy Ford, took that idea further. They suggested in a Lancet article that an El Niño warming of the tropical Pacific was at least partially responsible for a 1991 cholera epidemic in Latin America that affected a half-million people and killed nearly 5000. Over the next 3 years, the tide of concern rose inexorably. In January 1996, the predictions erupted into the press when The Journal of the American Medical Association published a paper by Epstein, Jonathan Patz, an expert in occupational and environmental medicine at Johns Hopkins, and collaborators, speculating about the effects of a 4-degree warming over the next century on a range of public health threats from malaria and arboviral encephalitis to cholera and toxic algae.

    The last step toward turning these speculations into what Gubler calls “gospel” came last year, when the United Nations' Intergovernmental Panel on Climate Change (IPCC), which is meant to offer scientists' consensus voice on climate change and its effects, included a chapter on public health impacts in an update of its landmark 1990 assessment. The public health chapter, written by a team led by McMichael and including Patz and Epstein, concluded that “climate change is likely to have wide-ranging and mostly adverse impacts on human health, with significant loss of life.”

    The growing official acceptance of these predictions has irritated some other public health experts. “What I find astounding,” says epidemiologist Mark L. Wilson of the University of Michigan, Ann Arbor, “is how little research is actually being done in this whole thing.” As a case in point, Henderson cites the IPCC's suggestion that by 2050, summer heat waves in the United States will regularly kill 3000 to 6000 persons each year. “They say, ‘Look at what happened in Chicago a year or two ago. We had all these deaths due to heat stroke. If the temperature rises, there will be an even greater problem.’ Well, good heavens, people adapt. One doesn't see large numbers of cases of heat stroke in New Orleans or Phoenix, even though they are much warmer than Chicago.”

    As for infectious diseases, says Wilson, the predictions suffer from many levels of uncertainty. No one disputes the influence of weather patterns: “There's reason to believe that if it's an extremely rainy spring, summer mosquito populations will increase, or if it's an extremely snowy winter, the tick populations in the spring might benefit.” But Wilson and his colleagues point out that no one knows just how patterns of temperature and rainfall will change in a warmer world, or how these changes will affect the biology of diseases and their vectors. Then there are variations in public health practices and lifestyles, which can easily outweigh any change in disease biology. Says Wilson, “What's biologically possible isn't necessarily epidemiologically likely or important.”

    Behind the fears about the spread of cholera, for example, is an untested hypothesis about the biology of the disease. Cholera is known to spread directly from humans to other humans through feces and through food and water; that's why cholera epidemics appear when public health and sanitation break down. The global warming predictions, on the other hand, are based on a transmission scenario that R. Bradley Sack, a Johns Hopkins cholera expert who is collaborating with Colwell, admits is speculative. It is at best, he says, “a highly attractive hypothesis.”

    Seawater temperatures are known to affect the spread of bacteria similar to the cholera agent, and the cholera organism is known to live in sea-borne plankton. Putting those two facts together, Colwell and Epstein argue that the potential dosage of cholera in seawater and hence in shellfish increases during plankton blooms, which in turn become more likely as the sea surface warms. The 1991 Peruvian outbreak is then cited as circumstantial evidence for this chain of events, because it spread extremely quickly and took place when an El Niño had warmed Peru's coastal waters.

    But experts at the CDC say that the Peruvian outbreak doesn't require any explanation beyond the conventional ones. “We had a powder keg ready to explode,” says CDC medical epidemiologist Fred Angulo, “an entire continent in which the sanitation and public water supplies and everything was primed for transmission of this organism once it was introduced,” probably by ships emptying their bilge water near fishing areas. Angulo adds that cholera has been introduced into the United States several times in the last few years; it did not spread, simply “because we have a public health and sanitation infrastructure that prevents it.”

    A lifestyle question

    For mosquito-borne diseases such as dengue, yellow fever, and malaria, the assumption that warming will foster the spread of the vector is simplistic, says Bob Zimmerman, an entomologist with the Pan American Health Organization (PAHO). Zimmerman points out that in the Amazon basin, over 20 species of Anopheles mosquitoes can transmit malaria, and all are adapted to different habitats: “All of these are going to be impacted by rainfall, temperature, and humidity in different ways. There could actually be decreases in malaria in certain regions, depending on what happens.”

    Similarly, in Sri Lanka, says CDC entomologist Paul Reiter, malaria outbreaks are associated with arid periods, when rivers dry up and leave pools and puddles in which the mosquitoes breed. “Heavy rainfall is just what's needed to get rid of their malaria,” he says, because the puddles become torrents and mosquitoes don't breed in running water. “It's so easy to be simplistic and intuitive in these things, and to miss the boat altogether.”

    Gubler adds that the evidence to date suggests that lifestyle and public health measures such as mosquito control far outweigh any effects of climate. Epstein, for instance, attributes Latin American dengue epidemics in 1994 and 1995 in part to El Niño and the more gradual rise in global temperatures, both of which might have favored the spread of the mosquito. But dengue experts at PAHO and the CDC say the epidemics resulted from the breakdown of eradication programs aimed at Aedes aegypti in the 1970s, and the subsequent return of the mosquito. Once the mosquito was back, they say, the dengue followed.

    Gubler is even more dismissive of claims by Epstein and others that these diseases may spread into the United States. He calls such predictions “probably the most blatant disregard for other factors that influence disease transmission.” The mosquito vectors of malaria, dengue, and yellow fever have been in the United States for centuries, but the epidemics they once caused have vanished due to mosquito control, eradication programs, piped-water systems, and changing lifestyles. “We have good housing, air conditioning, and screens that keep the mosquitoes outside, and we have television that keeps us inside,” says Gubler. “All of these decrease the probability that humans will be bitten by these mosquitoes.” Gubler and Reiter point to the 1995 dengue pandemic that rolled through Mexico only to die at the Rio Grande. There were more than 2000 confirmed cases in Reynosa, Mexico, and only seven across the river in Texas.

    Gubler adds that the Gulf states of the United States are several degrees warmer than the Caribbean during the summer. Both regions have the dengue vector, and yet the Caribbean has the disease and the Gulf states don't. “If temperature was the main factor, we would see epidemics in the Southern U.S. We have the mosquito; we have higher temperatures and constant introduction of viruses, which means we should have epidemics, but we don't,” he says.

    Neither McMichael nor Epstein disputes Gubler's argument that climate shifts have had minimal impact on disease patterns so far. Gubler “is on very sound ground when he says that if you look at shifts of dengue fever and malaria over the last decade, most if not all have to do with things other than climate,” says McMichael. “There is no clear signal from any recent past data that climate has been an important influence.”

    But Epstein says that Gubler's critique overlooks some worrisome signs. When Epstein and others feed data on the global warming that has taken place so far into their computer models of disease spread, he says, they find that the results match trends that are already apparent, such as the spread of mosquito vectors at higher latitudes. Gubler and others are “mixing up the present with the future,” McMichael adds. “What we're saying is if climatic changes do occur, given what we think we know about the influences of changing temperature and humidity on the distribution and biological behavior of mosquitoes, vectors, and infectious organisms, it's a perfectly reasonable prediction that there will be change in the potential transmissibility of these things.”

    The NRC panel will not be the only body trying to make sense of these disputes. Nancy Maynard, deputy director of science for NASA's Mission to Planet Earth program, says NASA has just started a subcommittee on global change and human health, hoping to “provide the strongest scientific basis for these relationships. We want to know the science underlying this.” PAHO, says Zimmerman, is also hoping to “establish a scientific agenda to define what studies are necessary to show the impact of changing climate and weather patterns on tropical diseases.”

    Still, Gubler worries that all the attention to global warming as a public health problem will distract the public from other priorities. “We should definitely do what we can” to reverse global warming, he says, “but we should also be thinking about directing resources toward public health measures to prevent the spread of disease—immunization, mosquito control, improved water systems, waste management systems. The most cost-effective way to mitigate the effect of climate change on infectious disease is to rebuild our public health infrastructure and implement better disease-prevention strategies.”

    Virologist Barry Beaty of Colorado State University in Fort Collins agrees: “You don't have to be a rocket scientist to say we've got a problem,” he says. “But global warming is not the current problem. It is a collapse in public health measures, an increase in drug resistance in parasites, and an increase in pesticide resistance in vector populations. Mosquitoes and parasites are efficiently exploiting these problems.”


    NIH Plans One Grant for All Sizes

    1. Eliot Marshall

    You've finished your postdoc, and now you are ready to apply for your own grant from the National Institutes of Health (NIH). But first, you have a decision to make: Do you want an R29 grant, a type custom-designed for new applicants, or a standard R01, which puts you in a competitive pool that includes Nobel Prize winners? It may sound like a no-brainer, but many young investigators are finding that the easier option can be a frustrating trap.

    Take cell biologist Kenneth Dunn of Indiana University in Indianapolis. He applied for an R29 because an adviser told him it was the surest route to success. But now that he's won the grant, Dunn is beginning to wonder. The R29's top payout of $70,000 a year means that, after salaries, Dunn will have at best $10,000 a year for supplies. “And I'm lucky,” he says, because reagents in his field are cheap.

    Next week, NIH's leaders are considering ending the agonizing R29-R01 dilemma simply by abolishing the R29 grant. The R29 was created 10 years ago as a low-budget alternative to the standard R01. It was designed to give new researchers easier access to the funding system, but NIH thinks the experiment has been a failure. The $70,000 per year it provides in direct costs over 5 years, NIH staffers say, is saddling good ideas with impossible budgets.

    Under the NIH's new proposal, everyone would compete for R01s, which have a $500,000 limit per year and pay on average more than $160,000 a year (see graph). New applicants would still get special status, however: They would be identified as newcomers on the cover of their application, and peer reviewers would be asked to give them a break. And, to ensure that the number of new entrants into the funding system at least remains steady, NIH may add more than $300 million to the budget for grants.

    Widening gap.

    While the average annual value of an R01 has increased steadily, R29s have fallen behind.


    The additional money will be needed, says Marvin Cassman, director of the National Institute of General Medical Sciences, because institutes would have to fund new grantees at the rate that veterans drop out—8% to 9% a year. Over a 5-year period, in effect, all the R29s would be converted to more expensive R01s. Using 1995 data, Cassman estimates that the added cost would be $55 million the first year, rising to $370 million in the fifth year.

    This plan was proposed last summer by a working group chaired by Cassman and Elvera Ehrenfeld, director of NIH's Center for Scientific Review, formerly the Division of Research Grants. It has been treated gingerly by NIH's top brass, however. The working group presented its report to NIH institute directors in July, and according to Ehrenfeld and Cassman, it was received favorably. But NIH made no decision.

    The proposal is “very sensitive,” explains working group member John Krystal, a Yale psychiatrist who strongly supports it, as does the other outsider on the panel, cell biologist Trina Schroer of Johns Hopkins University. But the NIH staff is wary that the plan will “increase everyone's anxiety,” says Krystal. As Dunn observes, “this may look awful at first blush” to postdocs who are leery of competing with senior investigators. And senior scientists who don't understand why winning an R29 is a kind of curse may also be confused. Dunn recalls, for example, that one senior colleague was dismissive of younger researchers' concerns about funding, noting that he himself had three R01s. As for Dunn, he fears it may sound ungrateful, but he agrees that the R29 is so stingy that ending it “sounds like a good idea.”

    Cassman is aware that this proposal “is not a trivial change,” in part because “it would require a significant increase in funding to new investigators” from all the institutes. The institutes seem to be inching toward making that commitment, however. A peer-review oversight group that advises Wendy Baldwin, NIH deputy director for extramural research, is hearing Cassman present the case for this change on 3 November, and the NIH institute chiefs will review it a second, and perhaps final, time at a meeting on 13 November.

    “This isn't a done deal,” says Baldwin. But she adds, “if I were betting, I would bet that it will be approved.”


    Universities Balk at OMB Funding Rules

    1. Eliot Marshall

    A smoldering dispute between major research universities and the U.S. government over the cost of doing research has flared up once again. The spark is a new set of rules, proposed by the White House, that would limit subsidies for research facilities. University lobbyists say the limits could hurt efforts to issue private construction bonds for new labs, and researchers worry that they could also increase the cost of animal research.

    The proposed regulation is part of an arcane but far-reaching document known as Circular A-21, drafted by the White House Office of Management and Budget (OMB). Its aim is to control the $3 billion in “indirect” or infrastructure costs paid by the federal government each year to educational institutions as overhead on research grants. In September, OMB proposed changes to A-21 and gave the universities until 10 November to comment on them.

    The strongest protest so far comes from the Association of American Medical Colleges (AAMC), which represents 125 U.S. medical schools and 86 professional societies. AAMC President Jordan Cohen sent a sharply worded critique to the White House on 28 October, urging that the proposals be “scrapped.” AAMC is particularly upset by what it views as an attempt to discourage the construction of expensive facilities. OMB's proposed rules would require universities to submit detailed justification if they seek reimbursement for buildings costing more than $10 million and if the construction costs are more than 125% of the median rate for gross square footage in their geographic region, as determined by a survey conducted by the National Science Foundation.

    Cohen's four-page letter says this demand for extra data assumes that universities are not now behaving reasonably—an assumption he finds outrageous. “Nowhere in the notice does OMB offer any evidence that educational institutions…have constructed any facility that is unreasonably costed,” writes Cohen. “We believe the OMB is proposing to create a burdensome system to solve a nonexistent problem.” Cohen says cutting-edge science cannot be done in “average” facilities, adding that AAMC is “astonished…that the proposal is presented without any credible, data-driven analysis modeling the impact of the new proposal on universities and schools of medicine.”

    Other groups representing research institutions—including the Association of American Universities and the Council on Governmental Relations (COGR)—are planning to submit letters as well. COGR's executive director, Milton Goldberg, predicts that their comments will be just as tough as AAMC's, and he confirms that some universities worry that the new rules could make it harder to raise money through bonds by undermining confidence in the universities' ability to recoup the cost of construction through federal payments.

    A different complaint comes from the Federation of American Societies for Experimental Biology (FASEB), which represents researchers rather than administrators. According to FASEB's public affairs officer, Howard Garrison, the group is primarily concerned about a new accounting rule for animal facilities. OMB has proposed treating animal centers as “specialized facilities,” which means that their costs would have to be paid directly from the grants of researchers who use the facilities and not charged as overhead across the entire university. Linda Cork, chair of comparative medicine at Stanford University School of Medicine, has estimated that this change could more than double the cost of animal studies. (See Policy Forum, Science, 2 May, p. 758.)

    After the public comment period ends next week, OMB will decide whether to revise its A-21 proposal or proceed immediately with implementation. If it chooses the second course, the dissent may soon grow louder.


    Storm Aborts Antarctic Drilling Project

    1. Richard Stone

    A fierce storm off the Antarctic coast has forced scientists to abandon work on an eagerly awaited drilling project weeks earlier than they had planned. They now must wait at least another year for long-sought data on key geologic events that shaped the frozen continent. “It's real sad that they had to quit so early,” says Rosemary Askin, a project scientist at Ohio State University in Columbus. But there was some good news: Before they aborted the drilling, researchers retrieved sediment from a period never before sampled in the region.

    For years geologists have searched for Antarctic sediment dating from 30 million to 145 million years ago, a time during which the vast Antarctic ice sheet is thought to have formed and the Transantarctic Mountains pushed up. These layers may hold clues to the forces that transformed a lush landscape teeming with dinosaurs and other life-forms into an icy wasteland. And the information might yield insights into how shifts in today's climate might alter the environment—particularly how warming might melt Antarctic ice and raise global sea levels.

    Finding accessible sediment from that period has been no easy task, because 95% of Antarctica's landmass is covered by a kilometers-thick ice sheet. In the 1980s, however, geologists bouncing sound waves off submerged sediment about 20 kilometers off Antarctica's Cape Roberts pinpointed ancient strata 150 to 500 meters beneath the surface of the southwest corner of the Ross Sea. The sediments, 1500 meters thick, are estimated to span a period ranging from 30 million to 100 million years ago.


    A severe spring storm has forced an end to drilling off Cape Roberts.

    Jumping to exploit the find, several dozen researchers from Australia, Germany, Italy, New Zealand, the United Kingdom, and the United States set out to build a special drilling platform. They couldn't use a drill ship because sea ice extends too far into the austral summer to make that an option, and conditions in the winter are too harsh for any drilling operations. So project engineers designed a rig that could be rolled onto a 1.5-meter ice sheet in early September and be used until the ice starts to break up, which usually occurs in late November.

    The platform was set to debut last year. However, late-winter storms in 1996 forced researchers to postpone the project. This year, drilling had been under way for just 9 days when an unseasonable storm bore down on the Ross Sea on 22 October. The 2-day storm, says project chief scientist Peter Barrett of Victoria University of Wellington, New Zealand, was “more severe than any [on record] from this time of year.” Abetted by 3 weeks of temperatures that were about 10 degrees Celsius warmer than usual, the storm swells ravaged the outer fringe of the weakened sea ice and sent fissures snaking to within a kilometer of the rig. If the storm had passed just 50 to 100 kilometers further east, “there is a good chance the sea ice would have survived nicely,” says program manager Scott Borg of the National Science Foundation, co-sponsor of the $4.3 million project. Instead, the 20 drillers and support staff worked around the clock to dismantle the 50-ton drilling platform and haul it to the base camp near shore, an operation completed early on 26 October. “The illusion of man triumphant over nature is ripped away by the winds and cold here,” says project scientist John Wrenn of Louisiana State University in Baton Rouge, who studies microfossils. Many scientists who had just reached the camp will now have to return home early.

    But the storm-shortened season was not a complete loss: Researchers were able to recover 113 meters of core tentatively dated at 17 million to 22 million years old. Although the sediment is several million years younger than indicated by acoustic studies, it represents a period never before sampled near the Antarctic ice sheet. “I'm delighted with what we have recovered,” says Wrenn. Because sediment analyses should help fill a gap in Antarctica's paleoclimatic record, adds Borg, “this core is expected to be very valuable from a scientific perspective.”

    The premature end to the drilling season, however, casts doubt on the scope of future work. While the project is funded for two field seasons, project scientists acknowledge that there's no way to sample the remaining 1350 meters of valuable sediment layers next season alone. That will leave Barrett and others to sort out over the coming months whether they can squeeze money out of project backers for a third season or whether they must settle for fulfilling only part of their goal. And of course they will keep a wary eye on the weather. Says Askin, “We'll keep our fingers crossed for next year.”


    Making the Most of a Short Life

    1. Gretchen Vogel

    The front-page picture last month of the “pistol star”—perhaps the brightest star ever seen in our galaxy—was one of a string of striking images this year from the Hubble Space Telescope. But it was among the first from the Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS), one of two new instruments astronauts installed on the telescope in February. For NICMOS scientists, the splash of publicity was a welcome respite from the headaches that the instrument has caused.

    Brightest star in the galaxy.

    This image was one of the first from NICMOS.


    Problems with the instrument's cooling system have forced one of its three cameras out of focus and cut its life expectancy by more than half—from 4 1/2 years to less than 2 years. The efforts to complete as much science as possible during NICMOS's shortened lifetime are disrupting observing schedules on other instruments. And NASA is even planning to move the telescope's secondary mirror for a few weeks next January to sharpen some NICMOS observations—an adjustment that astronomers say carries a small, but real, chance of leaving other instruments permanently out of focus.

    Soon after NICMOS was installed, NASA engineers discovered that the solid nitrogen coolant, which keeps ambient heat from obliterating the infrared radiation NICMOS is designed to observe, had expanded so that it was touching its casing. The resulting “thermal leak” is heating the nitrogen so quickly that engineers predict it will all sublimate into space by late next year, leaving the instrument's sensors blinded. The expanded ice has also pushed out of focus the detector for the third camera and its multiobject spectrometer, a tool that separates incoming light into a spectrum, revealing an object's speed and what it is made of.

    Engineers say they might be able to install a cooling pump in 1999 (Science, 23 May, p. 1183), but Hubble managers are not counting on such a save. The Space Telescope Science Institute (STScI) in Baltimore, which controls Hubble operations, has set aside almost half of the telescope's orbits next year for NICMOS observation—double the original allotment. That means some long delays for astronomers who want to use the Space Telescope Imaging Spectrograph (STIS)—the other new instrument installed in February—the Wide Field Planetary Camera 2 (WFPC2), and the Faint Object Camera.

    While most of those affected say they understand and even support the shuffling, the delays are frustrating, says astronomer Jeff Linsky of the University of Colorado, Boulder, who hopes to use STIS to probe the anatomy of young stars. “We put in our proposals a long time ago,” he says. “[NASA] invested $125 million in STIS, and we have seen very little so far.” He estimates that his observations will end up a year behind schedule. Douglas Richstone of the University of Michigan, Ann Arbor, who plans to use STIS to take a census of black holes, estimates that his observations are 6 months behind.

    STScI officials acknowledge that the delays will be painful. “There are people who applied for time a year ago and who will have to wait another year for their data,” says Andrew Fruchter of STScI, a member of the WFPC2 group. “After that amount of time, a conception can become scientifically stale.”

    Efforts to sharpen the focus of the multiobject spectrometer camera will also disrupt other observations, at least temporarily. In late January, NASA engineers will send a command for Hubble to move its secondary mirror a fraction of a millimeter to bring the detector into focus. But even that tiny shift is enough to blur the vision of the other instruments, and they would be handicapped if the mirror could not be moved back to its original position. “That scares me,” Richstone says. “Suppose the motor fails. It's not astronaut serviceable.” But Fred Walter of the State University of New York, Stony Brook, chair of the Space Telescope User Committee that advises STScI, says NASA has made bigger adjustments before. “It's only a small risk that you'll be stuck at a bad focus,” he says. “The engineers are confident it will come back.”

    NICMOS users, on the other hand, are thrilled at the chance to use the third camera to look at the composition of Pluto's moon Charon and of the star-forming regions of the Milky Way and other galaxies. Almost half of the 3-week set of observations will be devoted to taking another look at the Hubble Deep Field, a region of the sky that WFPC2 probed nearly 2 years ago, revealing some of the faintest and most distant objects ever seen. The expansion of the universe stretches the light from distant galaxies into longer—redder—wavelengths, and by viewing the Deep Field in the infrared, NICMOS may be able to probe even deeper into the outer reaches of the universe.

    But the coolant troubles will still limit that observation. Scientists had hoped to get deep views of adjacent areas of the sky with WFPC2 and STIS. The mirror shift, however, will render WFPC2 useless, and astronomers are unsure how useful the out-of-focus information from STIS will be.

    Nonetheless, STScI director Robert Williams emphasizes that the intense set of observations over the next year should produce a spectacular scientific harvest: “NICMOS works. It can do everything we had hoped.” But he admits that the problems have been “a big disappointment.” The triage “has required a tremendous amount of work,” he says. “We're going to end up recouping most of the science, but it's taken so much more effort to do it.”


    Academics Fear Research Cuts to Pay Overhead Costs

    1. Nigel Williams

    LONDON—It is now 4 months since university-based researchers got their first look at a long-awaited special report from a committee chaired by the government's education troubleshooter Sir Ron Dearing. While its advice was initially welcomed as the first comprehensive review of the higher education system in 30 years, a closer look at the fine print has prompted fears that changes it proposes could lead to massive cuts in academic research funding.

    The new Labour government will publish its plans for higher education in a white paper by the end of the year. Dubbed “Lifelong Learning,” it will be Labour's most important policy so far for higher education and will aim to provide many more people with access to higher education. There is concern, however, that it may adopt one option suggested by the Dearing report—that research councils pay the full cost of overhead support for the research they fund in universities. If that happens, the councils' spending power could be reduced by 20%, and hundreds of research jobs could be lost. “This would affect our funding very badly,” says George Radda, chief executive of the Medical Research Council, adding that a 20% loss would be “huge.”

    Researchers' initial response to the Dearing report was one of relief (Science, 1 August, p. 628). After years of funding problems and restructuring, accompanied by a huge increase in the number of students in higher education, the government was finally going to address the problems in a reasoned way. But despite positive comments about the needs of the university research sector, the section of the report devoted to research was a disappointment. “Research is the least well worked through section,” says Radda. “The report comes apart in your hands when you begin to look at the implications,” says one researcher.

    One recommendation in particular has set alarm bells ringing: a radical shift in the way the overhead costs of research are funded. At present, grant providers such as the six research councils fix overhead costs at 45% of the staff costs on project grants. Most of the remaining overhead costs for university research are covered by block grants from the higher education funding councils. But Dearing recommends that those who fund the research should pay as much as 100% of the overhead costs for the projects they support, so that more funds from the higher education councils can be devoted entirely to basic facilities, such as buildings and equipment. “In principle this would be fine, but in practice, if the research councils had to find the extra money, it would lead to a totally unjustified reduction in the amount of research we could fund,” says Geoffrey Findlay, secretary at the Particle Physics and Astronomy Research Council.

    The extra bill for the research councils, estimated at $165 million, could jeopardize hundreds of jobs for postdoctoral researchers and lead to a long-term reduction of about 20% in direct support for research from the councils. “It's vital to press for new funds” to cover overhead costs, says Mark Ferguson, head of biology at the University of Manchester.

    So far, the government has given no indication of whether it intends to carry through on Dearing's recommendations or, if it does, whether it will provide extra funds for the research councils. And contradictory statements by government ministers have added to the air of uncertainty. Margaret Beckett, secretary of state for the Department of Trade and Industry, which oversees science spending, has told researchers that science policy is still being formulated and will adopt a long-term approach to funding and be “people-centered.” However, higher education minister Tessa Blackstone said in a recent article that “a world-class science base is vital to our national prosperity. But that is different to arguing that we must increase the already significant proportions of young people studying science and engineering in higher education.” Secretary of State for Education David Blunkett has also criticized as “excessive” the amount of time some university staff spend on research.

    Such comments are sending a chill through the universities and, given the government's commitment to keeping a tight hold on the fiscal purse strings, researchers are bracing for bad news. Some have suggested, however, that there is one obvious solution: Transfer funds from the higher education councils to the research councils. But most researchers believe that this solution would not be adequate. The higher education councils' support for universities has failed to keep up with the need to improve facilities, leaving universities with an estimated minimum backlog of $205 million in renovations. Ferguson thinks it would be “absolutely appalling” to rob one set of hard-pressed budgets to help another. And Alistair MacFarlane, who drafted the Royal Society's response to the Dearing report, agrees that it would be “ineffective” to try to solve the problem with such a transfer of funds.

    Some researchers are now beginning to contemplate the worst-case scenario—that the research councils may simply have to pay more for their research. “If new money is not forthcoming, the [Royal Society] would accept with reluctance that the least bad alternative is to reduce the number of grants awarded by the research councils,” says MacFarlane. “It's a bullet we're willing to bite to maintain quality.” Such a move would also force changes in the relationship between the research councils and the universities. Says Radda: “If we are fully funding overhead costs, then we would want a better idea of whether those funds are actually supporting research.…Some universities are much better at this than others, but I hope that doesn't lead to a competition.”

    The prospect that the Dearing report will create a more competitive research environment has worried some private funders as well. The Wellcome Trust, Britain's largest source of private biomedical research funds, said in its response to the government on the Dearing report: “Whatever figure is finally agreed, the trust believes that it should be applied across the U.K., to prevent competition between universities on overhead rates.”

    But the real victims of a shift to full overhead funding by the research councils would be young researchers. Research councils would have to cut the number of grants, substantially reducing research opportunities. Such a decision would give credence to “the dangerous assumption that the present science base is too large,” says Ferguson. “We have a lot of talented people,” but science is “an international playing field: If we can't support them, they will go elsewhere.”


    The Dial-Up Sky

    1. Ann Finkbeiner
    1. Ann Finkbeiner is a science writer in Baltimore.


    In the 1780s, William Herschel used a home-built telescope to make what he called “sweeps of the sky.” He discovered Uranus and found thousands of what he called nebulae, diffuse blobs that we now know as galaxies. The heavens “are now seen to resemble a luxuriant garden,” he wrote in 1789, “which contains the greatest variety of productions.” Since then, sky surveys like Herschel's have been finding such a variety of productions that they have become what Alan Sandage of the Carnegie Observatories calls “the backbone, the roadmaps, the census reports, the bread and butter of astronomy.”

    Now, a whole new crop of surveys—planned or under way—is promising a garden so luxuriant that strict management will be needed to harvest it. The largest of these surveys map anywhere from hundreds of thousands to hundreds of millions of objects, and together they will cover 14 wavelengths, from ultraviolet and optical through infrared and radio. The resulting archives of digital data will be measured in terabytes, or trillions of bytes—“a phenomenal amount of information,” says George Djorgovski of the California Institute of Technology (Caltech) in Pasadena. Djorgovski then reels off a litany of problems: “Now, how to store these data so they're easily accessible? And how the heck do you find what you want?” How indeed.

    For the archives to be most useful, they will not only have to be accessible and searchable; they will also need to be interconnected. The goal, say survey leaders, is something like a single survey of the sky in 14 wavelengths, all accessible with links to one Web site. Alex Szalay of Johns Hopkins University calls it “dialing up the sky”: Click on an astronomical object, get its image and spectrum in, say, both optical and radio wavelengths, explore its neighborhood, and identify others like it elsewhere in the sky. “Done right,” says Szalay, “astronomers and the public both could have a virtual telescope.”

    A dial-up sky is just starting to take shape. One radio survey is already linked to a catalog of galaxy images made at optical wavelengths; the so-called Digital Sky Project is laying plans for a broader confederation of surveys; and researchers throughout these projects are discussing common database structures and search schemes. “Everybody is talking in various permutations about creating an entity that will connect the various wavelengths,” says Carol Lonsdale of Caltech, a member of an infrared survey called 2MASS. “We're sharing code and ideas.”

    These astronomers hope that such links will multiply the historical powers of surveys. “Every time there's a new survey,” says Lonsdale, “we change the way we think of the universe.” Surveys in recent decades at optical wavelengths showed, for instance, that galaxies are distributed not randomly but in clusters and filaments. A 1960s radio survey, by finding that distant galaxies had shapes and spectra different from those nearby, provided some of the first evidence that the universe has changed with time. In the 1970s, an x-ray survey showed stars—which in optical surveys seem to lead quiet lives—exploding or being ripped apart in gravitational fields.

    Partly because of new technologies—including charge-coupled device (CCD) detectors and multifiber spectrographs—that allow light from many objects to be analyzed at once, large surveys have proliferated (see sidebar on p. 1011). The databases they are amassing are among the largest in science, from about 120 gigabytes (billion bytes) at the small end—already 10 times the size of the Human Genome Project's database—to the 1.2 terabytes of processed data and 40 terabytes of raw data from the Sloan Digital Sky Survey, which will start capturing 100 million galaxies, a million quasars, and 100 million stars in 1999. These databases will be too large to be downloaded directly. “At typical university baud rates, using the World Wide Web,” says Szalay, who is on the Sloan team, “a 500-gigabyte data set would take a year to download.” The data archives will have to be stored in a central computer that can be reached and searched over the Internet. “It can't be done any other way,” says Szalay.
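Szalay's estimate is easy to sanity-check. The sketch below assumes a sustained link speed of about 128 kilobits per second; that speed is our illustrative assumption, not a figure from the article:

```python
# Rough check of the quoted download time. The 128-kilobit/s sustained
# link speed is an assumed, illustrative figure for a 1997 campus network.
ARCHIVE_BYTES = 500e9            # a 500-gigabyte data set
LINK_BITS_PER_SEC = 128e3        # assumed sustained throughput

seconds = ARCHIVE_BYTES * 8 / LINK_BITS_PER_SEC
years = seconds / (365.25 * 24 * 3600)
print(f"about {years:.1f} year(s) to download")  # → about 1.0 year(s) to download
```

At that rate the Sloan's 1.2 terabytes of processed data would take well over two years, which is why the archives must be searched in place over the Internet rather than copied home.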

    For the moment, each survey is maintaining its own data archive, structured and searched its own way. DPOSS, an optical survey of tens of millions of galaxies and billions of stars, has a scheme called SKICAT that assigns each object a set of numbers based on whether it is a star or a galaxy and on its color, shape, position, and brightness—“10 to 20 useful numbers per object,” says Djorgovski. The Sloan is working on a more complex scheme that will make it possible to search for objects that have certain features in their spectra, say, or colors within a chosen range. “We can carve out a complex shape in color space,” says Szalay, “and find these objects over the whole sky.” 2MASS will eventually use some combination of both schemes. “Right now, each survey is concentrating on its own problems,” says Szalay, “but the next thing the community will want is a merger.”
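The two indexing schemes can be pictured with a toy catalog. Everything below, including the field names, colors, and cuts, is an invented stand-in rather than SKICAT's or the Sloan's actual schema; it only illustrates what “carving out a shape in color space” means in practice:

```python
# Hypothetical SKICAT-style catalog: each object boiled down to a short
# vector of numbers (type, colors, brightness). All values are invented.
catalog = [
    {"id": 1, "type": "star",   "g_r": 0.4, "r_i": 0.1,  "mag": 17.2},
    {"id": 2, "type": "galaxy", "g_r": 0.9, "r_i": 0.5,  "mag": 19.8},
    {"id": 3, "type": "quasar", "g_r": 0.1, "r_i": -0.2, "mag": 18.9},
]

def color_box(objects, g_r_range, r_i_range):
    """Select objects inside a rectangular region of color space."""
    return [o for o in objects
            if g_r_range[0] <= o["g_r"] <= g_r_range[1]
            and r_i_range[0] <= o["r_i"] <= r_i_range[1]]

# Quasar candidates tend to occupy an unusual corner of color space:
candidates = color_box(catalog, g_r_range=(-0.5, 0.3), r_i_range=(-0.5, 0.0))
print([o["id"] for o in candidates])  # → [3]
```

A real survey would replace the rectangle with arbitrary regions, the point of Szalay's “complex shape in color space.”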

    They want a merger because astronomical objects shine in all wavelengths and, says Jim Gunn of Princeton and the Sloan team, “information in any one wavelength just doesn't tell you enough.” In the optical alone, for instance, quasars just look like stars. Only by combining observations in radio through gamma wavelengths have astronomers converged on a picture of quasars as active galaxies dominated by central black holes. Seeing the sky in any one wavelength is a little like listening to a symphony hearing only the horns.

    Just how astronomers can hear the whole orchestra, however, is still uncertain. So far the idea seems to be that each survey team will maintain its own archives at home but link to a common search mechanism. “We are building the data warehouse at the site of the survey,” Lonsdale says, “then layering on the top the software needed to query it.” FIRST, a radio survey, is already linked to an old optical survey done at Mount Palomar by software that queries by position: A user can get on FIRST's Web site, type in the coordinates of a galaxy to get its radio image, then click on the Palomar survey to get its optical image. “By matching bright [Palomar] objects with radio,” says Richard White of the Space Telescope Science Institute (STScI) in Baltimore and FIRST, “we found 400 new quasars. We can dial up the sky right now.”

    But astronomers want to navigate the multiwavelength sky not just by coordinates but also by neighborhood. For the Sloan database, Szalay's team has a scheme called sky tessellation, which covers the sky with triangles. To learn a galaxy's neighborhood, locate its position in a particular triangle, then call up the occupants of the whole triangle. A similar tessellation might group galaxies not by location but by color: Specify a color, and get all objects in the whole sky with similar colors. Or combine color with position and ask for everything with similar colors in a specific neighborhood. Szalay argues that tessellation could be the basis of the superstructure needed to query all archives: Calling up, say, triangle S2 in the optical will allow you to call up the same neighborhood in the radio and infrared. “We propose identical subdivisions of the sky,” says Szalay: “One shoe fits all.”

    Tessellation has the community's attention. STScI's second Guide Star Catalog, made up mostly of stars extracted from DPOSS data and an Anglo-Australian Observatory survey, may use tessellation: “We listened to Alex,” says STScI's Barry Lasker, “and said, ‘Why not just do what they're doing?’” GALEX, an ultraviolet survey done from a satellite, will use tessellation; 2MASS and FIRST have agreed to try it; and DPOSS, says Djorgovski, may join in. “Six institutions with the same design concept is a landslide,” says Lasker—although nothing is yet in writing.

    Meanwhile, a consortium of astronomers based at the University of California, San Diego, has begun the Digital Sky Project, which will integrate 2MASS, DPOSS, FIRST, a second all-sky radio survey, and eventually, perhaps, the Sloan. The project will develop software to link the archives, probably via tessellation, and also provide a mechanism for storing each new link its users discover. An astronomer who identifies the same object in, say, the radio, infrared, and optical archives will store the new multiwavelength object in a sort of superarchive. When the Digital Sky will be up and running, says project leader Thomas Prince at Caltech, is “a little hard to say but within a couple of years definitely.”

    However they're designed, public databases with a common searching scheme will mean that astronomers and anyone else with access to the Internet can, in Szalay's words, dial up the sky in 14 wavelengths and ask it questions. Lonsdale, for example, would like to look for quasars that are hidden by dust, which would make them bright in the infrared but faint in the ultraviolet where quasars are usually prominent. The question, she says, is, “Have we missed a large population of quasars because dust hides the nucleus?” Gunn wants to study star birth in nearby spiral galaxies by combining radio observations—sensitive to the compressed gas that spawns stars—with far-infrared, near-infrared, and optical observations that trace the stars' early lives. “You can get the whole history of star birth,” he says.

    A multiwavelength, publicly accessible universe is a couple of years off, says Szalay, “but it's gonna come, it's gonna come. All this information at our fingertips will change the way we do astronomy. It's going to be more democratic. It's a totally different world, and it's exciting to be there shaping it.”


    Many Ways to Survey the Sky

    1. Ann Finkbeiner

    Astronomical surveys are nothing new, but the late 1990s are a heyday of large surveys. Here is a handful of the largest, which may be linked into a single virtual universe (see main text).

    Digitized Palomar Sky Survey (DPOSS). At the Palomar Observatory, a Caltech team has nearly finished photographing the entire northern sky in three optical wavelength regions: green, red, and near-infrared. The Space Telescope Science Institute (STScI) is scanning the plates, converting pictures to digital information. It is identifying up to 2 billion stars and other objects with positions exact enough to be used for aiming the Hubble Space Telescope and for research on, for example, the structure of our galaxy; Caltech will use the same digitized database to catalog 50 million galaxies and 100,000 quasars.

    Two-Degree Field survey (2dF). The survey is a collaboration among astronomers from three observatories and a university in Australia and two observatories and four universities in Great Britain. They are using the Anglo-Australian Telescope in Australia to survey part of the southern sky, collecting optical spectra of 250,000 galaxies and 250,000 quasars.

    Sloan Digital Sky Survey. The Sloan is a collaboration of the University of Chicago, The Institute for Advanced Study, The Johns Hopkins University, Princeton University, the University of Washington, Fermi National Accelerator Laboratory, the U.S. Naval Observatory, and the National Optical Observatory of Japan. Starting in 1999, a dedicated 2.5-meter telescope in New Mexico will take digital pictures and spectra of 100 million galaxies and a million quasars over half of the northern sky in five colors, ranging from ultraviolet through optical to infrared.

    Galaxy Evolution Explorer. GALEX, a small satellite carrying detectors for two ultraviolet wavelengths blocked by the atmosphere, is a collaboration among Caltech, The Johns Hopkins University, the Laboratoire d'Astronomie Spatiale in Marseille, the University of Puerto Rico, the Jet Propulsion Laboratory (JPL), and Orbital Sciences Corp. Starting in 2000, GALEX will image 10 million sources of ultraviolet light, taking spectra on 100,000 of them, over the whole sky.

    Two Micron All Sky Survey. 2MASS, a collaboration between the University of Massachusetts and the Infrared Processing and Analysis Center at Caltech and JPL, is using two telescopes, one in Arizona and one in Chile, to survey a million galaxies and 400 million stars over the entire sky at 2-micrometer near-infrared wavelengths.

    Faint Images of the Radio Sky at Twenty-centimeters (FIRST). Astronomers at the University of California at Davis, Columbia University, and STScI have imaged hundreds of thousands of galaxies with the Very Large Array of the National Radio Astronomy Observatory (NRAO). As of mid-June, FIRST had surveyed about a sixth of the northern sky, “half the area we intended to cover,” says Richard White of STScI, “but all the time NRAO gave us.” NRAO recently declined to grant time for the other half. “We've learned a great deal from the 5000 square degrees of FIRST,” says NRAO's director, Paul Vanden Bout, but citing a tight Very Large Array schedule, he adds, “the question was, is the increment worth the pain?”


    Amateur Sky Survey Keeps It Simple

    1. Ann Finkbeiner

    At the other end of the spectrum from the $54 million, 100-astronomer Sloan Digital Sky Survey is The Amateur Sky Survey (TASS): $50,000 so far, and 10 to 12 people from varying professions. About all TASS shares with the new large surveys is a fondness for acronyms and astronomy.

    TASS is a product of the imagination and bank account of Tom Droege, a semiretired engineer at Fermi National Accelerator Laboratory in Batavia, Illinois, who builds cameras out of 135-millimeter lenses and reject charge-coupled device (CCD) chips, then gives them away to anybody willing to operate them and, more important, collaborate on the software that links them into a survey. Droege advertised his offer over the Internet—“where the programmers are,” he says. So far, he has sent out 23 cameras to amateur astronomer-programmers in California, Maryland, Canada, and points between. The amateurs set the cameras out in their backyards, uncover the lenses at night, let the sky rotate over the lenses, then cover them up again in the morning. The sky's movement and the readout from the CCDs are synchronized, so the resulting image is sharp—a method also used by the Sloan.
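The synchronization Droege relies on is usually called drift scanning: the CCD's rows are clocked at exactly the rate the stars drift across the chip, so starlight accumulates without trailing. A back-of-envelope sketch, with the pixel scale assumed for illustration rather than taken from TASS's specifications:

```python
# Drift-scan timing sketch. The sidereal drift rate is standard; the
# 14-arcsecond pixel scale is an assumed figure, not TASS's actual value.
SIDEREAL_RATE = 15.041   # arcseconds of sky drift per second at the celestial equator
PIXEL_SCALE = 14.0       # assumed arcseconds per pixel

rows_per_second = SIDEREAL_RATE / PIXEL_SCALE
print(f"clock the CCD rows at {rows_per_second:.2f} per second")  # → clock the CCD rows at 1.07 per second
```

Away from the celestial equator the drift rate shrinks by the cosine of the declination, so a fixed-rate camera works best on a strip of sky near one declination, consistent with TASS's 3-degree-wide survey strip.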

    TASS's amateurs, says Droege, come from the cadre of good scientists who can't find jobs doing research and are, he says, “just itching for an excuse to do science.” The science is a survey of a 3-degree-wide strip of the sky in which, with any luck, TASS will find 1000 unmapped variable stars, which astronomers will study to fill out their theories of stellar life history. The observations will go into a digital database at the Rochester Institute of Technology in New York and will eventually be searchable over the Internet. When TASS gets all its talent trained, it will go on to watch for asteroids that might be aiming at Earth.

    When Droege goes to meetings and hears astronomers lamenting the lack of funding for instruments and salaries, he says, “I just smirk inside.” With cheap instruments and no salaries, he says, “we amateurs are setting out to take over the world.” Such amateurs do “strong and competitive science,” agrees Alex Szalay of Johns Hopkins University and the Sloan Survey. “They give us a run for our money.”


    X-rays Hint at Space Pirouette

    1. James Glanz

    If you spin in your office chair while holding a full cup of coffee, centrifugal force is likely to leave you wet—even if the entire office building and the surrounding city should somehow spin with you and hide any hint that you are moving. That's because, according to one of the more baffling concepts from Einstein's theory of relativity, the spin is measured against an abstract “frame of reference” determined by the average positions of all the stars in the universe—not a nearby speck of matter like a city. But if a chunk of matter is dense and massive enough—say, a planet or, better yet, a superdense star—all bets are off: Relativity predicts that such objects can “drag” reference frames right along with them.

    What a drag.

    The jump in x-ray intensity at 67 CPS may reveal a spinning black hole dragging the space surrounding it.


    Measurements announced this week may back up that claim. At a meeting of the American Astronomical Society's High Energy Astrophysics Division in Estes Park, Colorado, two teams announced their analyses of data from the 2-year-old Rossi X-ray Timing Explorer, a satellite carrying sensitive detectors that allow it to observe faint, rapidly varying x-ray signals. They found that tiny wobbles in the x-rays from matter being sucked into stellar cinders called neutron stars, and perhaps into even denser black holes, appear to be signs of “frame dragging”—as if space, like black coffee, were a substance that could be stirred and swirled.

    “It's an intriguing interpretation,” says Mitchell Begelman, an astrophysicist at the University of Colorado, Boulder. “It would be very important if it turns out to be the correct one.” If so, he says, it could mean that reference-frame dragging might serve as a tool for studying black holes, objects now observed only indirectly.

    In one presentation, Luigi Stella of the Astronomical Observatory of Rome gave an analysis of x-rays from matter spinning around several neutron stars. As this matter—probably torn from a companion star by each neutron star's powerful gravity—spirals into a spinning accretion disk and sporadically plunges toward the neutron star, fluctuations in its x-ray glow tip off astronomers to how fast the matter in the accretion disk is moving and even how fast the neutron star is spinning.

    But the Rossi's sensitive detectors have also picked up much slower jitters, explains Stella. That got him and his colleague Mario Vietri of the University of Rome thinking about frame dragging. The effect had been seen indirectly in the 1970s, through its influence on the way a pair of pulsars—neutron stars that emit beams of radio waves—orbit each other. And in a paper soon to appear in the journal Classical and Quantum Gravity, a team led by Ignazio Ciufolini of the University of Rome reports detecting the minute effects of frame dragging by Earth's own gravity in the orbits of the so-called LAGEOS satellites. While that detection remains controversial, the Gravity Probe B satellite, to be launched by 2000, will attempt to measure Earth's frame dragging precisely using gyroscopes.

    Stella and Vietri thought that a neutron star's powerful magnetic field might create the conditions needed to see frame dragging in the x-ray signal. Like the whirring blades of an egg beater, the magnetic field punches a hole in the middle of the accretion disk. Because the field's north and south poles are generally out of line with the neutron star's spin axis, its whirling lines of force fling matter from the accretion disk out of the disk's plane. There, like a tilted toy top, the material should wobble, or precess, at the same frequency that the reference frame—and therefore the very space in which it exists—is being dragged around the star. Being off-kilter is crucial to detecting frame dragging: Within the plane, dragging would produce only undetectable changes in the disk's speed. The wobble frequencies measured by the Rossi satellite are broadly consistent with the amount of dragging expected from the neutron stars' spin rates and the distance at which the material is whirling around them.
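The wobble rate being compared here is the standard weak-field (Lense-Thirring) precession frequency, which the article does not spell out. For material orbiting at radius $r$ around a body with angular momentum $J$:

```latex
\Omega_{\mathrm{LT}} = \frac{2\,G J}{c^{2} r^{3}}
```

Because the rate falls off as the cube of the distance, the effect is measurable only for matter whirling very close to the star, and its size ties directly to the star's spin through $J$.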

    Other experts are cautious. “All we can say is that the order of magnitude is right,” says Sharon Morsink, a relativity expert at the University of Wisconsin, Milwaukee. The uncertainty stems from a lack of knowledge about the internal structure of neutron stars, which are about 20 kilometers across and roughly as dense as atomic nuclei. Astronomers don't know how that density varies with depth in the star, says Morsink, which affects how far most of the star's matter lies from the accretion disk and hence the magnitude of the frame-dragging effect.

    Still, a second team, led by Wei Cui of the Massachusetts Institute of Technology, reported that Rossi's measurements of low-frequency x-ray oscillations might have also detected frame dragging around far denser objects: black holes (see graphic above). In this case, the researchers don't have a direct measure of the black holes' spins. But if the result is correct, notes Colorado's Begelman, then the whirling of space could be used to measure how fast other black holes are spinning. And that, in turn, could provide clues as to whether spinning black holes are the engines behind such spectacular displays as quasars, plasma jets, and gamma-ray bursts.


    Fast-Forward Aging in a Mutant Mouse?

    1. Wade Roush

    When the television or the toaster oven breaks down a week after its warranty expires, it's tempting to believe the manufacturer designed it to do so. Similarly, the infirmities and breakdowns that herald old age are so predictable that some scientists believe our genes evolved with their own kind of warranty, one that runs out shortly after reproductive age. These researchers speculate that if they could figure out which genes underlie this planned obsolescence, they might be able to delay it—in effect, extending the warranty. Now, by studying mutant mice that resemble senior citizens by the time they're only 60 days old, a team in Japan may have discovered what such a gene would look like.

    Defects in the previously unknown gene, called klotho, cause mice to die prematurely with a skein of disorders commonly found in elderly humans, such as arteriosclerosis, osteoporosis, skin atrophy, and emphysema. The finding, reported by a team led by Makoto Kuro-o, a physician and molecular geneticist at the National Institute of Neuroscience in Tokyo, in this week's issue of Nature, suggests that a specific set of genes suppresses all these age-related conditions. The klotho gene, which appears to produce a protein that circulates in the blood, may be the signal that keeps this genetic program turned on while an organism is young, says Kuro-o. And because klotho—named after the Greek goddess who spins the thread of life—is also found in people, Kuro-o says he has high hopes “that this notion will be effective for understanding aging mechanisms in humans.”

    “A wonderful surprise” is how George Martin, a medical geneticist at the University of Washington in Seattle, describes the finding. Indeed, it's such a surprise that skepticism is called for, says Michal Jazwinski, a geneticist studying aging at Louisiana State University Medical Center in New Orleans. “They're fooling around with a gene that's normally expressed in the mouse and getting a syndrome that looks like aging in humans,” Jazwinski notes. “That's a bizarre twist”—and a possible sign, he says, that the mutation doesn't accelerate the mice's own aging, but merely kills them prematurely.

    Kuro-o started out looking for genetic changes that contribute to hypertension, not hoariness. To test whether one cause of high blood pressure in mice and humans might be overproduction of a protein that plays a role in transporting sodium across cell membranes, Kuro-o injected one-celled mouse embryos with multiple copies of the corresponding gene. These new genes took up random positions in the embryos' DNA, sometimes landing in locations where they disrupted native genes.

    Kuro-o noticed that mice belonging to one such strain stopped growing 3 to 4 weeks after birth and died after only 8 to 9 weeks instead of the usual 2 to 3 years. And when Kuro-o examined tissues from the mice under the microscope, he found further changes that, in humans, would be signs of aging: Their arterial walls and other tissues had calcified, their bone density had decreased, the alveoli of their lungs had deteriorated, and they had lost hair follicles and skin thickness. Although these symptoms aren't normally part of the aging process in mice, they might at least make the klotho mouse a good laboratory model for the study of human aging, Kuro-o realized.

    By homing in on the inserted transgene, Kuro-o's team cloned klotho itself and determined from its DNA sequence that it encodes a protein similar to β-glucosidase, an enzyme found in bacteria, plants, and mammals that can break apart fat-soluble molecules such as glycolipids. From its amino acid makeup, Kuro-o believes that the enzyme has an active portion that circulates in the blood, where it may break down glycolipids to generate ceramide, a compound known to help regulate programmed cell death. He's now testing mouse and human blood plasma for signs of the enzyme, and is also looking for variations in klotho carried by people with common age-related disorders, such as osteoporosis.

    Other scientists will want to know those results, not to mention the enzyme's true function, before they'll fully believe Kuro-o's theory. The klotho mutants might be suffering from a metabolic disease rather than aging, says Jazwinski. “Are they seeing premature aging, or simply premature death? This is the essential question.” But Kuro-o says he's found no metabolic abnormalities in the mutants. He sums up: “I think these mice died of old age.”


    NMR Maps Giant Molecules As They Fold and Flutter

    1. Michael Balter

    OXFORD, UNITED KINGDOM—Thirty years ago, the late Cyrus Levinthal, a protein chemist at the Massachusetts Institute of Technology, posed one of the most daunting riddles in structural biology: How do polypeptides—linear chains of up to 300 or more amino acids—arrive at the intricately folded, three-dimensional conformations characteristic of biologically active proteins? Levinthal ruled out the possibility that proteins randomly fold until they stumble upon their native state. A typical protein, he calculated, would take many orders of magnitude longer than the age of the universe to reach that state if it had to randomly search out all possible conformations. Most proteins, however, fold up on time scales ranging from milliseconds to seconds.

    Magnetic machine. An NMR spectrometer at Oxford University's Centre for Molecular Sciences.


    At the time he posed what has become known as the Levinthal paradox, protein chemists could do little more than scratch their heads over the problem. But over the decades since then, powerful analytical techniques such as nuclear magnetic resonance (NMR) spectroscopy have begun to unravel the paradox by giving structural biologists a glimpse into the dynamics of proteins. At a meeting* here late this summer on the biological uses of NMR, researchers presented some of the fruits of new NMR techniques that are providing much higher levels of resolution (see next story). Using these new methods, researchers are opening a window on the intricate steps in the folding process and learning that proteins may be much more flexible and dynamic than previously thought—a finding that could have profound implications for how these macromolecules carry out their functions.

    NMR exploits the fact that many atomic nuclei behave like magnets. When a magnetic nucleus encounters a strong external magnetic field, its orientation is restricted by quantum mechanics to a small number of directions, each with a different energy level. These energy levels depend on the type of nucleus and its environment—what other atoms are nearby in the molecule. If the sample is then exposed to radio waves of varying frequencies, some of the radio photons will have just the right amount of energy to cause a nucleus to jump from one energy level to the next and so will be absorbed. As the nucleus drops back to a lower energy level, it emits a photon. Researchers can record these photons as an NMR spectrum, which carries a wealth of detail about the structure of the molecule.
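    The frequency at which a given nucleus absorbs is fixed by its gyromagnetic ratio and the strength of the external field (the Larmor relation). A minimal sketch of that relation, using the well-known value for the proton, ¹H (about 42.58 MHz per tesla); the function name is ours, not from the article:

```python
# Larmor relation: the radio frequency at which a nucleus absorbs
# is proportional to the external magnetic field strength.
GAMMA_1H_MHZ_PER_T = 42.577  # gyromagnetic ratio of 1H over 2*pi, in MHz/T

def larmor_frequency_mhz(field_tesla, gamma_mhz_per_t=GAMMA_1H_MHZ_PER_T):
    """Resonance frequency (MHz) of a nucleus in a field of given strength."""
    return gamma_mhz_per_t * field_tesla

# A "600-megahertz" NMR spectrometer uses a roughly 14.1-tesla magnet:
print(round(larmor_frequency_mhz(14.1)))  # -> 600
```

    This is why spectrometers are conventionally named by their proton frequency rather than their field strength.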

    In recent years, NMR spectroscopists have found ways to extract even more detail. These new techniques rely on heavy-isotope tags and rapid pulses of radio waves to generate highly resolved, multidimensional spectra. “There have been huge advances recently,” said Christopher Dobson of Oxford University's Centre for Molecular Sciences at the meeting.

    Protein paradox. The key strength of NMR for studying protein dynamics is its ability to map proteins in solution—their native environment. “The behavior of proteins in solution is largely inaccessible by other structural techniques, such as crystallography,” says Mark Williams of Britain's National Institute for Medical Research in London.

    At the meeting, Jane Dyson at The Scripps Research Institute in La Jolla, California, presented NMR evidence supporting a possible solution to Levinthal's paradox. Levinthal's own proposal was that polypeptides follow a specific sequence of folding steps, determined by their amino acid sequence. But in recent years, this view has fallen from favor, in part because theoretical models showed that the fastest route would be for the protein to collapse rapidly into a conformation that roughly approximates the native state, then edge more slowly toward its precise final conformation. “Yesterday, it seemed that it was impossible for proteins to fold to their native states,” says Martin Karplus of Louis Pasteur University in Strasbourg, France, a leading theoretician in this field. “Today, we think protein folding should be easy.”

    Dyson's studies provide what may be glimpses of these partly folded intermediate states. She and her co-workers studied the behavior of apomyoglobin—a modified version of the protein myoglobin, which carries oxygen to muscle tissues—in solutions of various acidities, or pH. At a pH close to neutral, apomyoglobin is a highly compact globule consisting of eight tightly wound helices. But at very low pH (high acidity), the protein unfolds into a relatively unstructured “random coil.” By gradually increasing the pH, Dyson hoped to simulate the stages of folding in slow motion.

    NMR spectra of the solutions showed that as the pH was increased, the polypeptide began to develop structures found in the native state. For example, at pH 2.3, about 12% of the molecule was in the form of helices, while at pH 4.1 about 35% was tightly wound. Moreover, the first helix formed even at very low pH, and the pH 4.1 form, which corresponded to an intermediate state called a “molten globule,” consistently contained three of the final eight helices.

    Dyson and others caution that these pH-dependent forms of apomyoglobin might not exactly represent the protein's folding mechanism, because they are stable equilibrium states, while a protein reaches its native state through a series of extremely rapid folds. Nevertheless, Dyson says, the results are “snapshots of how the protein might become more compact as it folds.”

    Dyson says her results are consistent with the idea that protein folding follows neither a random process nor a strictly defined pathway: “Apomyoglobin doesn't fold by sampling every single conformation, but by starting off with certain regions that are more inclined to make helices. As things move along, these helices interact to make a more compact form.” Thus, the solution to the Levinthal paradox, Dobson says, may be that each polypeptide in a solution is free to find its own way through the various stages in the folding process: “Each molecule does something differently.”

    Molecular motion. While a protein in its native conformation usually has a regular and predictable structure, the molecule still has a considerable amount of give. “No protein is totally rigid,” says Peter Wright of Scripps. “The emerging view of proteins is that there is significant internal motion.” Working with a bacterial enzyme called dihydrofolate reductase (DHFR), Wright has used NMR to investigate how these internal motions might affect the protein's biological function. DHFR catalyzes the conversion of a compound called dihydrofolate to tetrahydrofolate, a coenzyme that enables cells to make numerous organic molecules, including some amino acids. The reaction takes place in a large cleft running along the center of the DHFR enzyme, and also requires a reducing compound called NADPH.

    Wright and his co-workers exposed the enzyme to nonreactive analogs of dihydrofolate and NADPH to “freeze” the reaction at various steps in the catalytic cycle, and then looked at the enzyme's internal motions at each step. The team found that several regions in the molecule became much less mobile once the inhibitor molecules were bound to the active site. For example, in the unbound state a region called loop 1—which acts as a “cap” over the active site—fluctuates between two slightly different conformations about 30 times per second. But when the inhibitors were bound to the enzyme, these fluctuations stopped. Another highly mobile region near the active site, centered on a glycine residue in the polypeptide chain, also became much more rigid in the bound state.

    “Wright was able to map the changes in protein motion at various steps of the reaction,” says Desiree Tsao of the Genetics Institute in Cambridge, Massachusetts. To take things a step further, Wright's collaborator Stephen Benkovic and his colleagues at Pennsylvania State University in University Park made mutant versions of DHFR in which the highly mobile glycine was either replaced with another amino acid or deleted entirely. These changes, Benkovic found, greatly decreased the enzyme's catalytic power—implying that these internal motions play an important role in its biological function.

    Exactly what this role might be remains to be determined. But NMR aficionados are confident that their technique will help point the way to solving this mystery, as it has done with the Levinthal paradox. Says Dobson: “NMR is allowing these questions not only to be posed, but to be answered.”

    • * NMR in Molecular Biology, Oxford, United Kingdom, 23 to 28 August.


    Lining Up Proteins for NMR

    1. Robert F. Service

    The tried-and-tested method for mapping the three-dimensional (3D) structure of proteins, x-ray crystallography, has a troublesome shortcoming. The technique can pinpoint the location of atoms with extreme accuracy by bouncing x-rays off innumerable copies of the protein stacked in a crystal. But many proteins don't readily form such regular assemblies. A rival technique, known as nuclear magnetic resonance (NMR) spectroscopy, can map proteins in their native solution environment and therefore doesn't suffer this problem. But it has never been able to match the peak precision of its rival.

    Getting oriented. By aligning proteins (red), liquid crystals (green) sharpen NMR's atomic mapping.

    That may soon change. On page 1111, NMR experts Nico Tjandra and Ad Bax of the National Institutes of Health in Bethesda, Maryland, report modifying a long-known NMR technique to dramatically improve NMR's protein-mapping skills. Conventional NMR maps protein structure by determining the identity of atoms in a molecule as well as the distance between given pairs of atoms. With the new technique, which gently aligns the protein molecules in a bath of liquid crystals, researchers can also determine how each bond between neighboring atoms is oriented with respect to the rest of the molecule. By methodically building up a list of such orientations for all neighboring pairs of atoms, researchers should be able to complete a far more precise map of a protein.

    So far, Tjandra and Bax have not reported mapping the complete structure of a protein using their technique. But they and others think it is likely to stack up well next to x-ray crystallography. “It's quite impressive,” says Stephen Fesik, an NMR spectroscopist at Abbott Laboratories in Chicago, Illinois. “I think this is going to have a big effect on the field.” One obvious area of impact, he says, could be in drug design, because designers need detailed structures of proteins so that they can tailor their drugs to interact with them.

    All NMR techniques rely on the fact that some atomic nuclei act like tiny bar magnets. When these nuclear magnets are placed in an external magnetic field, they align along the magnetic field lines. If excited by a burst of radio-frequency photons, their magnetic axes precess like wobbling tops about these lines. As they relax back, they give off a signal at a frequency that betrays their elemental identity. Researchers solve the structure of proteins using a variation of the technique, in which they send radio waves into a sample at frequencies designed to excite particular nuclei. This excess energy can be transferred to a neighboring atom. The rate of this energy exchange is related to the distance between the two atoms.
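    What makes this energy transfer (the nuclear Overhauser effect) such a sensitive molecular ruler is how steeply its rate falls with distance—roughly as 1/r⁶. A rough illustration of that scaling, with a function name of our own choosing:

```python
def relative_noe_rate(r_angstrom, r_ref=2.0):
    """Energy-transfer (NOE) rate relative to a reference distance.

    The rate scales as 1/r^6, so doubling the separation between two
    nuclei cuts the transfer rate by a factor of 64.
    """
    return (r_ref / r_angstrom) ** 6

print(relative_noe_rate(2.0))  # -> 1.0
print(relative_noe_rate(4.0))  # -> 0.015625 (1/64 of the rate at 2 angstroms)
```

    That steep falloff is why the effect is only measurable between close neighbors—and why it yields such tight distance constraints when it is seen.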

    After painstakingly building up distance information between many hundreds of pairs of nuclei, researchers can calculate a 3D structure of the molecule that fits the data. Extracting that information is not easy, however. In practice, the energy transfer between two atoms is also affected by their constant motion with respect to one another. And there has been no direct way to learn the orientation of the pairs of atoms relative to the rest of the molecule. The resulting structures tend to be fuzzy and imprecise.

    Knowing how the bonds are oriented would sharpen the structures considerably. To tease out this extra information, Tjandra and Bax rely on another effect that NMR researchers have known about for decades but have never been able to use to solve protein structures. When radio waves probe a nuclear magnet, the presence of another close by can cause a signal to emerge that is split between two frequencies because of a magnetic interaction called dipolar coupling. Just how different these two frequencies are from each other, a measure known as “splitting,” is very sensitive to the orientation of the bond between the atoms with respect to the external magnetic field. The splitting reaches a peak when the bond is parallel to the external magnetic field. By comparing the splitting seen in many different pairs of atoms, researchers can map the bond orientations and greatly sharpen the structure.
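    The angular dependence behind this effect has the classic dipolar form: the splitting varies as (3 cos²θ − 1)/2, where θ is the angle between the bond and the external field. A sketch under that standard relation (the function name is ours):

```python
import math

def dipolar_splitting(theta_deg, d_max=1.0):
    """Relative dipolar splitting for a bond at angle theta to the field.

    Follows the (3*cos^2(theta) - 1)/2 dependence: maximal when the bond
    is parallel to the field, zero at the magic angle (~54.7 degrees),
    and half-strength with opposite sign when perpendicular.
    """
    theta = math.radians(theta_deg)
    return d_max * (3 * math.cos(theta) ** 2 - 1) / 2

print(dipolar_splitting(0.0))    # -> 1.0 (bond parallel to field: maximal)
print(dipolar_splitting(54.74))  # ~0 (magic angle: splitting vanishes)
print(dipolar_splitting(90.0))   # -> -0.5
```

    Measuring the splitting for many bonds thus pins down each bond's orientation relative to the field—the extra information Tjandra and Bax needed.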

    “The problem is that in solution we normally can't see dipolar coupling,” says Tjandra. Proteins in solution normally tumble in all directions, thereby washing out the signal. Researchers have tried different techniques to get their proteins to line up in solution, including orienting them with magnetic fields and wedging them among self-aligning molecules known as liquid crystals. But the magnetic fields only produced a small effect, while the liquid crystals had too big an effect. They confined the proteins so firmly that they lined them up perfectly. When the proteins were held in such a regular pattern, dipolar coupling could be seen not just between near neighbor atoms, but between hundreds of atoms in a molecule, swamping researchers with signals. “For anything but small molecules, the data are uninterpretable,” says Bax.

    That's where Tjandra and Bax made their advance. They diluted a type of fat-based liquid crystal, so that its molecules align themselves in solution with plenty of space between them. That gives the proteins room to move between the liquid crystalline walls, only occasionally bumping into a wall. The proteins themselves are slightly oblong, so repeated bumps cause them to align more or less in the direction of the liquid crystals. The liquid crystals “align their molecules enough so that they can measure something but not so much that the data are uninterpretable,” says Lewis Kay, an NMR expert at the University of Toronto. “That's the beauty of this method.”

    Tjandra and Bax tested the technique's ability to chart the orientation of atom pairs in a protein called ubiquitin. They found that their measurements agreed precisely with the picture provided from a high-resolution x-ray crystal structure. Since then, says Tjandra, they have gone on to sharpen the focus of other NMR structures, although they are not yet ready to reveal those results. If the new structures manage to match the sharpness of x-ray crystal structures, NMR could be in for a whole new focus.


    Rain Forest Fragments Fare Poorly

    1. Nigel Williams

    The massive clearing of tropical rain forests over recent decades is having a profound effect on Earth's atmosphere—adding carbon dioxide and exacerbating other human causes of global warming. Now it seems that the fragments of forest left when tracts of rain forest are cut are also making their own, unsuspected contribution to the carbon dioxide equation. On page 1117, William Laurance of Brazil's National Institute for Research in the Amazon in Manaus reports on a 17-year study suggesting that, once separated from the bulk of the forest, fragments below a certain size are unable to maintain the structure of the original forest. They lose considerable amounts of biomass as large trees, exposed to wind and weather extremes, are killed or damaged—reducing the amount of biological material in the fragment able to absorb carbon dioxide during growth.

    “There are so few long-term studies, and this tells us what is actually happening today,” says tropical rain forest expert Ghillean Prance, director of Kew Gardens in London. “And the findings are crucial if we are to plan for the future.” The results suggest that forestry plans that require patches of forest to be preserved should set a minimum size. They also suggest that climate modelers will need to consider the effects of biomass loss not just in isolated forest patches but also near the edges of intact forest, where the same processes should be at work.

    Between 10 and 17 years ago, Laurance's team selected a series of forest patches of 1, 10, and 100 hectares that had recently been isolated when the forest around them was cleared for cattle pastures. The researchers also marked out a number of identically sized control patches in native forest. The team then estimated the amount of biomass in the different patches by measuring the diameter of all trees along sections within the patches. The original measurements, of more than 50,000 trees, were repeated several times, with the latest taken earlier this year. “The long-term nature of this study is the great thing about it,” says ecologist Roger Leakey at the Institute for Terrestrial Ecology in Edinburgh, U.K.

    Using a theoretical model, the team converted the diameter measurements into an estimate of the total changes in biomass since the start of the study. The team found that within the patches there was a substantial loss of biomass among the trees up to 100 meters from a forest edge. More than a third of biomass was lost in these regions over the study period compared with control patches, and there was no evidence of recovery. The biomass loss occurred rapidly in the 4 years following clearance of the surrounding forest as trees were killed or damaged by exposure to wind and other changes in microclimate, then stabilized at the lower level.

    The team does not yet know whether, over longer time scales, the patches will recover to levels found in the original forest, but they think it unlikely because wind damage will be an ever-present danger to trees near the forest edge. “Original complex forest will more likely be replaced by shorter, scrubby forest with less volume and biomass,” says team member Thomas Lovejoy, director of the Smithsonian Institute for Conservation Biology in Washington, D.C.

    For climate modelers, the results put another gloomy figure into their calculations. Not only must the carbon-absorbing capacity lost with felled trees be considered, but also the massive loss of biomass along the edges of all remaining forest and forest fragments. “If you're thinking about forests in the carbon cycle, then these results are important,” says Prance.


    Growth, Death, and Climate Featured in Salt Lake City

    1. Richard A. Kerr

    SALT LAKE CITY—More than 5600 geologists and paleontologists gathered here from 20 to 23 October for the annual meeting of the Geological Society of America (GSA). Change over geologic time figured in several highlights from the meeting: evolutionary change 250 million years ago in the world's greatest mass extinction, climate change in the warm intervals between glacial epochs, and size change among mammals of the past 80 million years.

    Ancient Climate Shivers Strike Close to Home

    You might not expect to find similar climate histories at a Club Med in the Bahamas and at chilly Lake Baikal in Siberia. But both sites have yielded unsettling hints that global climate may be unstable—prone to sudden cold snaps—during warm interludes between ice ages, like the one we now enjoy.

    Four years ago, researchers studying deep layers of the Greenland ice sheet thought they had found evidence for brief cold spells during the warm interglacial period 120,000 to 130,000 years ago. But the ice record of that time later proved unreliable. Now, two groups reported at the meeting that signs of a brief interglacial chill have turned up again, in an ancient coral reef blasted open for a new marina by a Club Med and in bottom muds from the Siberian lake.

    Because some corals thrive only in the brightly lighted waters just below the sea surface, ancient reefs are a good gauge of past sea level, which in turn reflects the amount of water locked up in polar ice. Geologists Brian White and Allen Curran of Smith College in Northampton, Massachusetts, had studied reefs in the Bahamas that grew during the warm period from 120,000 to 130,000 years ago, after the melting of glaciers from the previous ice age had raised sea level by about 100 meters. Any cold spell in the warm interval would have caused sea level to fall, exposing reefs to erosion by waves and weather and producing a distinctive “erosional surface.”

    But White and Curran saw no clear signs of a chill during that period until they brought paleontologist Mark Wilson of The College of Wooster in Ohio—a specialist in erosional surfaces—to the island of San Salvador in the Bahamas. When they went to the marina construction site, they discovered that the dynamiting had exposed a dramatic example of a reef exposed and eroded by a sea-level fall.

    “We missed [the erosional surface] before,” says Wilson. “But once you saw that fresh exposure, you could trace it through the whole reef,” and even on another island. Sea level apparently dropped about 4 meters as an abrupt global cooling froze water into glaciers. That was about 125,000 years ago, as gauged by radiometric uranium-thorium dating of the coral. Sea level stayed low for perhaps 1000 years and then rose quickly to near its former level as the cold spell eased.

    Work that Eugene Karabanov of the University of South Carolina, Columbia, and colleagues in the United States and Russia reported at the meeting reveals a mid-interglacial cooling as well. During the winter of 1996, Karabanov and his colleagues let their drilling barge freeze into Lake Baikal for stability and drilled out sediment cores that reached as far back as 5 million years ago. (See the Report on p. 1114 for more results from the Baikal drilling.) The researchers traced climate by counting remains of lake diatoms—tiny, silica-shelled algae that flourish in warmer conditions. In a section of core laid down during the previous interglacial, the diatoms suddenly became much more scarce, indicating a sudden chilling in south-central Siberia. The diatoms just as quickly recovered, leaving a record of a brief cold spell roughly 121,000 years ago.

    Whether the global chill recorded by Bahamian coral and the one in central Asia are the same isn't clear, given the uncertainties of dating. Either way, they have unsettling implications for current climate (Science, 17 December 1993, p. 1818). Greenland ice cores and deep-sea sediments have shown that during the last ice age, abrupt climate swings—warmings, in this case—were common. But researchers believe that the buildup and sudden collapse of huge ice sheets triggered those swings. Between glacial periods, that wouldn't occur, so some kind of instability in ocean circulation seems to be the best candidate for shaking up the climate. And unstable ocean circulation could produce some unpleasant surprises as greenhouse gases build up in decades to come.

    The Biggest, Baddest Extinction Gets Worse

    The extinction that did in the dinosaurs at the end of the Cretaceous period may be the world's most famous extinction, but it wasn't the worst. That honor goes to an event 250 million years ago that exterminated 90% of the genera in the oceans and ushered in the age of the dinosaurs on land. This ecological disaster at the end of the Permian period has long been viewed as “the most profound in the history of the planet,” as Samuel Bowring of the Massachusetts Institute of Technology put it at the GSA meeting. Now the catastrophe looks even more devastating, thanks to Bowring and paleontologist colleagues, who presented evidence suggesting that the marine extinctions were concentrated during a geologic moment—just a few hundred thousand years.

    That makes for an “awfully fast” event, says paleontologist Douglas Erwin of the National Museum of Natural History in Washington, D.C., one of Bowring's collaborators. What's more, work by another group shows that the extinctions on land took place at the same time. Better timing of the extinctions may offer clues to their cause, which may have involved a combination of lethal forces, perhaps including a huge volcanic eruption.

    Paleontologists had already mapped out the order of extinctions of such organisms as corals and trilobites near the boundary between the Permian and Triassic periods, but determining how fast it all happened was more difficult. Presuming that the distinctive fossils used to mark time in the rock record had evolved at a steady pace during the late Permian, paleontologists had estimated that the dying extended for millions of years. But without absolute dates, that was only a guess. “No dates, no rates,” as geochronologist Bowring puts it.

    Now, Bowring and Erwin, in cooperation with Jin Yugan of the Nanjing Institute of Geology and Paleontology, have applied an established dating technique—based on the clocklike radioactive decay of uranium to lead—to marine rocks around the Permo-Triassic boundary at Meishan, China. By dating zircon minerals from volcanic ash beds just above and just below the extinction layer, the team has narrowed the interval of intense extinction. “It looks like appreciably less than 500,000 years,” says Erwin.
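    The clock behind such dates is straightforward: a parent isotope decays exponentially, so the accumulated daughter/parent ratio fixes the age via t = ln(1 + D/P)/λ. A minimal back-of-the-envelope sketch using the well-established half-life of uranium-238 (about 4.468 billion years); the function names are ours, not the team's method in detail:

```python
import math

U238_HALF_LIFE_YR = 4.468e9                    # half-life of uranium-238
LAMBDA_238 = math.log(2) / U238_HALF_LIFE_YR   # decay constant, per year

def age_from_ratio(daughter_over_parent):
    """Age (years) implied by a radiogenic daughter/parent isotope ratio."""
    return math.log(1 + daughter_over_parent) / LAMBDA_238

def ratio_at_age(t_years):
    """Daughter/parent ratio accumulated after t years of decay."""
    return math.exp(LAMBDA_238 * t_years) - 1

# A zircon ~250 million years old (the Permo-Triassic boundary) has
# accumulated only a few percent radiogenic lead relative to uranium:
r = ratio_at_age(250e6)
print(round(age_from_ratio(r) / 1e6))  # -> 250 (million years, round trip)
```

    The precision of the method rests on measuring that small ratio accurately—which is why zircons, which exclude lead when they crystallize, make such good clocks.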

    At this point the team can't say whether the extinction really did span that period or actually took place in a geological instant, like the end-Cretaceous event. To decide, paleontologists will need to intensify their sampling of the extinction interval to see whether all the Permian species vanished simultaneously.

    If terrestrial plants and animals—which were also hit hard between the Permian and Triassic—died out earlier or later than the marine species, the extinction interval would have to be lengthened. But because land and sea share no fossils that could be used to mark time in both, researchers couldn't say whether disaster befell both realms at the same time.

    At the meeting, paleontologist and geochemist Kenneth MacLeod and his colleagues reported a geochemical marker that ties extinctions on land to those in the sea. Numerous researchers had found that the abundance of the lighter isotope of carbon, 12C, suddenly increased in marine sediments right at the time of the Permo-Triassic extinctions. No one is sure of the cause, but because carbon flows freely between land and sea as atmospheric carbon dioxide, that sudden spike of light carbon should turn up on land as well. Indeed, MacLeod found it preserved in ancient soil minerals and in fossil tusks of mammal-like reptiles called therapsids from southern Africa. And the Permo-Triassic boundary on land, as marked by the extinction of the therapsid Dicynodon, also fell at the time of the carbon-isotope spike.

    The coincidence of Permo-Triassic extinctions on land and in the sea means “you really need to invoke a global forcing mechanism,” says MacLeod. A leading candidate has been the largest volcanic eruption ever, the Siberian Traps. They poured out 2 million cubic kilometers of lava in a million years or so and would have created a global, cooling haze. Bowring's dating at Meishan confirms an earlier suggestion that the bulk of the Siberian Traps eruptions coincided with the extinctions within dating uncertainties of a few hundred thousand years (Science, 6 October 1995, p. 27).

    Still, says Erwin, the eruption isn't likely to be the sole cause, because its emissions couldn't have contributed enough isotopically light carbon to create the carbon-isotope spike. And another suspect, a lethal shot of carbon dioxide from the deep sea into shallow waters (Science, 1 December 1995, p. 1441), seems an unlikely killer of plants and animals on land. But then, asks Erwin, “who says there has to be only one cause?” Perhaps only a convergence of stresses can explain this most disastrous interval in the history of life.

    For Mammals, Bigger Is (Usually) Better

    Rules are made to be bent, or so the history of life seems to imply. Nineteenth-century paleontologist Edward Drinker Cope studied North American mammals and came up with a simple rule: Average body size in mammals gets bigger over time. Although Cope's rule, as it later came to be known, hasn't always proved true when applied to other animals, such as mollusks (Science, 17 January, p. 313), a comprehensive analysis confirms its broad outlines in North America, where Cope worked. But a study by paleobiologist John Alroy of the National Museum of Natural History in Washington, D.C., shows subtleties Cope never suspected: Mammals have indeed grown larger over time, but they have also tended to avoid intermediate sizes. Part of the reason, Alroy suspects, may be found in North America's climatic patterns.

    At the GSA meeting, Alroy presented his analysis of an 80-million-year record of North American mammal diversity amassed from the literature (Science, 27 June, p. 1968), combined with published estimates of body size based on fossil teeth. Like other researchers before him, Alroy found that mammals in the late Cretaceous period, 80 million to 65 million years ago, were all small, in the range of 5 grams to 1 kilogram—typical of today's small mammals such as rodents. After the dinosaurs were wiped out 65 million years ago, mammal size jumped. By 55 million years ago, the typical large mammal weighed in at 5 kilograms, and by 6 million years ago, it was a hefty 300 kilograms. On average, Alroy found that new species were 10% bigger than their ancestors.
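    That 10%-per-speciation average compounds quickly. Back-of-the-envelope arithmetic—not Alroy's own calculation—shows that the jump from a 5-kilogram typical large mammal to a 300-kilogram one takes on the order of forty such speciation events:

```python
import math

def speciation_steps(start_kg, end_kg, growth_per_step=0.10):
    """Number of speciation events, each 10% larger than the last,
    needed to grow from start_kg to end_kg (compound growth)."""
    return math.log(end_kg / start_kg) / math.log(1 + growth_per_step)

# From the 5-kg typical large mammal of 55 million years ago to the
# 300-kg one of 6 million years ago:
print(round(speciation_steps(5, 300)))  # -> 43
```

    Spread over roughly 50 million years, that works out to less than one size-increasing speciation event per million years—an unhurried trend, fully consistent with Cope's rule.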

    That pattern—exactly what Cope's rule predicts—held until about 45 million years ago. At that point, a gap opened in the distribution of body mass between small and large mammals. Intermediate-size mammals—those weighing 1 kilogram to about 10 kilograms, or about the size of a raccoon—became scarce.

    Researchers have explained Cope's rule by theorizing that for mammals, bigger is better in many different environments. The larger the animal, the better it can run from or fend off predators, for example; increasing size also brings greater physiological efficiencies, such as staying warm using a minimum of energy. But Alroy suspects that the lack of intermediate sizes may be due at least in part to a particular external influence: climate.

    North American climate began cooling and drying just about the time the size gap began to open up, he notes, transforming the densely wooded landscape into a broken woodland with more open space. In such dry, open environments today, intermediate-size animals are scarce, says Alroy, suggesting that the ancient drying of North America helped open the gap. For example, some intermediate-size mammals tend to move from tree to tree eating fruit—and so perhaps faced extinction when the trees became too far apart for hopping from one to the next.

    Other paleontologists are interested by Alroy's data, although like Alroy they are not ready with a full explanation. “The overall pattern is intriguing,” says paleontologist Catherine Badgley of the University of Michigan, Ann Arbor. “I and a lot of other people think that [patterns like the gap] are driven by climate conditions.” But to nail down the link, someone should “go to other regions that have similar—or different—patterns in climate change and test whether the trend in body size is borne out,” says paleontologist David Jablonski of the University of Chicago. Don't hold your breath. For his North American study, Alroy consulted more than 4000 lists of mammal species that detailed where and when they lived. Such a study in another region may be a while in the making.

    Frontiers in Cancer Research

    1. Paula Kiberstis,
    2. Jean Marx

    In the past 2 decades, researchers have made remarkable progress in assembling a detailed profile of the genetic changes that lead to cancer. Although the task of explaining how cancer develops is not yet complete, the information at hand is already being applied toward improving diagnosis, treatment, and prevention of the disease. Science takes a look at some of these efforts in this special issue on cancer research.

    An overview Article by E. R. Fearon describes the wealth and complexity of knowledge that has emerged from studies of the rare inherited cancer syndromes and the more than 25 genes that have been causally linked to them. Those gene discoveries have raised thorny issues of genetic testing, which B.A.J. Ponder explores. In addition, D. Sidransky discusses several ways in which even nonhereditary tumor-specific genetic alterations might be exploited for earlier diagnosis of the more common cancers in the general population.

    A. T. Look's discussion of oncogenic transcription factors in leukemia emphasizes how fly and worm genetics have helped dissect the cellular growth control pathways that are disrupted in human cancer. And even yeast may provide valuable lessons about cancer, if L. H. Hartwell and colleagues are successful in the efforts they describe, aimed at using yeast genetics to streamline anticancer drug development.

    In an Editorial on the causes of cancer, J. M. Bishop reminds us that defective genes are not the complete story. F. P. Perera expands on that theme as she looks at the role in cancer of environmental factors and of individual variations in response to those factors. Finally, W. K. Hong and M. B. Sporn discuss reinvigorated efforts to identify effective cancer chemopreventive agents and bring them to clinical trials promptly.

    The News component of the special issue includes three stories centering on cancer drug development. One deals with current efforts to find more potent and specific drugs by targeting the precise gene changes leading to cancer. A second focuses on the crop of new biotech firms that have sprung up to engage in the search for such drugs and the difficulties these firms confront. The final story describes a challenge facing drug developers in both academia and industry: the shortcomings of current drug-screening assays.


    From Bench Top to Bedside

    1. Marcia Barinaga


    In the 25 years since then-President Richard Nixon declared the “War on Cancer,” researchers have learned a great deal about the enemy. In particular, they have uncovered a host of genetic blunders that can drive cells to become cancerous and grow out of control. They have learned that the balance of power shifts within many cancer cells, as genes called oncogenes, whose protein products foster cell growth, become overactive, while so-called tumor-suppressor genes, whose products normally act to keep cell growth in check, are disabled. So far, these intelligence efforts have yet to accomplish the war's objectives: better treatments that can vanquish this dread disease. But that could be changing.


    Researchers in both academe and industry, including a host of new biotech companies (see p. 1039), are developing an arsenal of drugs aimed at counteracting the genetic changes leading to cancer. Some of them have already yielded promising results in cell culture studies and in animals, and are moving into tests in humans—and more are on the way. Indeed, Ivan Horak, director for oncology at Janssen Research Foundation, a subsidiary of Johnson & Johnson in Titusville, New Jersey, predicts that in the next few years there will be an “explosion” of new therapeutic strategies, with “attacks on every gene that people feel plays a significant role in carcinogenesis.”

    Fatty anchor.

    Ras must be linked to a fat group such as farnesyl in order to stick to the cell membrane where it can be activated by growth signals from outside the cell. Active Ras then turns on kinase enzymes that take the growth signal to the genes in the cell's nucleus.


    The research efforts of the last 20 years have laid out a host of targets for these new drugs. For example, several oncogenes make cell surface receptors through which growth factors exert their effects, and researchers are working on antibodies and on small-molecule drugs that block the activity of those receptors. Other drugs seek to block oncogene products that transmit growth stimulatory signals inside cells, like the protein made by the ras oncogene. Still others aim to make up for oncogene or tumor-suppressor gene mutations that impair cells' ability to initiate a form of suicide, known as programmed cell death or apoptosis. Researchers hope that these new therapies will be more specific than current chemotherapeutic drugs and thus kill cancer cells more effectively, with less harm to normal cells.

    There is no evidence yet that these treatments are going to fare any better than other, more general treatments, like interferon, that failed to meet high expectations set by animal experiments and early clinical trials. But cancer researchers are upbeat, in part because the treatments are so logical—rooted as they are in decades of basic research on the genetic basis of cancer. “We are at last beginning to move into blocking some of the signaling pathways that we now know are overactive in many cancers,” says Alex Bridges, a chemist with Parke-Davis in Ann Arbor, Michigan. “I am very optimistic that we will get very useful agents out of this.” But, Horak counters with a note of caution, “we don't have any proof for that optimism in our hands yet.”

    Block that receptor

    Drug designers' first line of attack—dating back 15 years—is directed at the oncogenes that make growth-factor receptors, molecules embedded in the membranes of cells that receive growth signals from outside the cells. In some cancers, these receptors are either produced in greater than normal amounts or have mutations that cause them to be overactive. Either way, the cell receives a revved-up growth signal. In the early 1980s, researchers began exploring whether they could shut off the receptors with antibodies that bind to them.

    Antibodies against one such receptor, the product of the HER2 oncogene, are being tested in large-scale clinical trials by Genentech Inc. of South San Francisco as a treatment for breast cancer. Interest in HER2 began in 1986 when Dennis Slamon's group at the University of California, Los Angeles, reported that 25% to 30% of breast tumors overproduce the HER2 protein, suggesting that it helps drive the growth of the cancers. And these were the nastiest tumors: “Patients who had this genetic alteration had a very poor clinical outcome,” says Genentech senior scientist Mark Sliwkowski. An encouraging sign came, though, when experiments in several laboratories showed that antibodies to the receptor could block the growth of breast cancer cells that overexpress HER2.

    Buoyed by those findings, Genentech began a series of small trials in 1992. In one, tumors shrank by more than 50% in five of 44 women treated with the antibodies alone. In another study, 36 women with advanced breast cancer that had not responded to chemotherapy were given the antibodies in combination with a standard anticancer drug, cisplatin. The tumors of nine of the women shrank by more than 50%. That 25% response rate, Sliwkowski says, is much higher than expected for cisplatin alone.

    The antibody had minimal side effects, and so Genentech moved on to a large-scale trial, including 650 women, in which it is testing the antibodies alone or with cisplatin. The results are expected by the middle of next year, and if the antibodies again prove effective, they might be tried against other cancers that also overproduce HER2, such as ovarian cancer.

    The epidermal growth factor (EGF) receptor also seems to make mischief in cancer; its gene—one of the first oncogenes to be identified in the early 1980s—is overactive in one-third of all cancers of epithelial origin, including breast, lung, and bladder cancer. John Mendelsohn, president of the University of Texas's M. D. Anderson Cancer Center in Houston, has developed monoclonal antibodies against this receptor, which block its activity and stymie tumor cell growth in lab and animal tests. The New York biotech company ImClone Systems has begun early-stage human trials to test the antibodies, either alone or in combination with traditional chemotherapy, on kidney, prostate, breast, and head and neck cancers (which include cancers of the tongue, soft palate, and upper airway, but not of the brain).

    Small is beautiful

    Antibodies are expensive, because they have to be made in animals and purified. And because they are proteins and would be digested if given by mouth, they must be injected. Still, Mendelsohn says, the advantages of antibodies—most notably their specificity for their targets, which should minimize toxicity—may outweigh their disadvantages. But, he says, “if everything else were equal…cheaper, smaller molecules certainly would be desirable.”

    Drug companies agree. And many are looking for small molecules that can block growth-factor receptors. In contrast to the antibodies currently being tested, which react with the portion of the receptor that projects out of the cell, the small-molecule inhibitors now being considered act on the other end of the receptor molecule, the inside part that transmits the growth signals to the molecules of the internal signaling pathways. For most cancer-causing growth-factor receptors, these inside segments are tyrosine kinases, enzymes that activate the signaling proteins by adding phosphate groups to them. And a small compound stuck in the right place can often shut an enzyme down.

    Companies have already found a variety of tyrosine-kinase inhibitors that work by wedging into the enzyme's binding site for the phosphate-donating molecule, ATP. Because hundreds of kinases bind ATP, though, researchers worried that these compounds would interfere with many normal enzymes, leading to crippling side effects. “The conventional wisdom held that you could never fashion a specific [inhibitor] going after the ATP binding pocket,” says Alex Matter, director of oncology research at the Swiss drug company Novartis.

    Defying the conventional wisdom, however, drug developers have tried to take advantage of small differences in the structure of the ATP pockets of the kinases to design inhibitors with the desired specificities. And in animal trials of these compounds, they got better selectivity than they expected, as the tyrosine-kinase inhibitors shrank tumors but produced few side effects. “These compounds appear to be very active in doses [that don't produce] obvious toxicity,” says Dick Leopold, senior director of cancer biology at Parke-Davis Pharmaceutical Research, a division of Warner-Lambert in Ann Arbor, Michigan.

    On the basis of the promising animal results, several companies have taken the inhibitors to human trials. Sugen of Redwood City, California, was first, with Su-101, a compound that selectively inhibits the receptor for platelet-derived growth factor (PDGF). The drug passed initial safety tests on 150 patients, showing low toxicity at moderate doses, says Peter Langecker, vice president of clinical affairs at Sugen. What's more, those doses showed some effects against glioblastoma, a pernicious brain cancer whose growth depends on PDGF. “There are patients…who were expected to be dying soon, and who had failed all other treatments…who have now been stable for over a year,” says Langecker. Sugen is about to go to the U.S. Food and Drug Administration (FDA) with a proposal for a large-scale trial of Su-101 for glioblastoma.

    Su-101—which must be administered intravenously because it breaks down in the stomach—is only the first of a long line of tyrosine-kinase inhibitors headed for the clinic. Orally active ones, as well as ones that have gone through more rounds of engineering for specificity, are on the way. The British drug company Zeneca and Oncogene Science of Cambridge, Massachusetts, both have small-molecule inhibitors of the EGF receptor in early human trials for a variety of cancers, and Novartis plans to begin trials within the year of an inhibitor of the Abl tyrosine kinase, which is activated in several types of leukemia.

    Ras-ional drug design

    Tyrosine kinases are just the first step in an internal signaling pathway that triggers cell growth. Another key protein in that pathway is produced by one of the most commonly mutated oncogenes in human cancers—ras. Ras is activated by tyrosine kinases, and in turn draws other kinases to the cell membrane where they are activated and can then transmit the growth signal in the cell. There are three ras genes, and 25% to 30% of all cancers have mutations in one of them, making the Ras proteins obvious targets for drug design. Efforts to design Ras inhibitors have resulted in an unexpected bonus: the discovery of a class of drugs whose effects are not limited to tumors with ras mutations.

    To block Ras activity, researchers took advantage of work done in the late 1980s showing that an enzyme called farnesyl transferase (FT) has to hook a 15-carbon fatty chain to Ras before the protein can function. Researchers found drugs that inhibit FT, but then a possible obstacle to using them for cancer treatment emerged: K-Ras, the form of Ras that is by far the most often mutated in human tumors, can duck the farnesylation block and receive a fatty chain from an alternate enzyme. That discovery suggested that blocking FT shouldn't stop K-Ras-driven tumors.

    But it does. Working with various cancer cell lines, researchers found that FT inhibitors block the growth of some tumors with a mutant K-ras gene, some with other ras mutations, and even some that have no ras mutations at all. “There is no correlation of the sensitivity of the [tumor] cell line with its mutant ras status,” says Parke-Davis's Leopold. “That implies that there is another target besides Ras that is farnesylated and is very important for the growth of these tumors,” adds Saïd Sebti, of the University of South Florida in Tampa, who worked with Andrew Hamilton at Yale University to develop FT inhibitors. Because FT modifies more than 20 proteins, researchers will have to sort through a lot of possibilities to find that key target.

    Surprisingly, given their potential to affect many proteins, animal studies have shown that FT inhibitors are not very toxic. They also have “a great deal of activity” against tumors in animal models, says cancer biologist Neal Rosen of the Memorial Sloan-Kettering Cancer Center in New York City, who tested FT inhibitors on 50 different cultured tumor cell lines. “I'm very anxious to test [them] in patients.”

    Janssen has the first FT inhibitor in clinical trials, although no results are yet available. Other companies, including Schering-Plough, Merck & Co., and Parke-Davis, plan to have FT inhibitors in patients soon. Says Leopold: “It is an example of using a very rationally selected fishhook and catching a type of fish you didn't expect to catch.”

    Putting on the brakes

    Perhaps the biggest fish in the pond are the tumor-suppressor genes. That's because most cancers have inactivating mutations in one or more of these genes, which normally act to control cell growth, and some tumor-suppressor genes are inactivated by mutation in as many as 50% of all cancers. That means that ways to restore the genes' function would likely be widely applicable as cancer treatments. But compensating for an inactivated tumor suppressor is much harder than reining in an overactive oncogene protein. “It is a very difficult concept, to think about replacing the function of a large protein with a small-molecule drug,” says molecular oncologist Kenneth Kinzler, of Johns Hopkins University Oncology Center.

    But by looking downstream of the tumor suppressors at the proteins they influence, drug developers hope to find good targets. For example, the tumor suppressor p16, which is mutated in the skin cancer melanoma and other cancers, normally holds up cell division by blocking the activity of cyclin-dependent kinase 4 (CDK4), one of several related enzymes called CDKs that together propel cells through the cell cycle. A small-molecule inhibitor of CDK4 might therefore replace the function of p16, and many companies are working to develop such specific blockers.

    Meanwhile, one nonspecific CDK blocker has already arrived in the clinic. The drug, flavopiridol, is a general kinase inhibitor, but researchers at the drug company Hoechst Marion Roussel found that the CDKs are most sensitive to its effects. Even though its broad effects on CDKs might seem to threaten normal cells as well as cancer cells, animal studies showed relatively few side effects, says Dagmar Oette, head of clinical research in cancer therapeutics at Hoechst. The drug has now been given to more than 100 cancer patients in clinical trials, again with few signs of unwanted effects. What's more, while the trials to date focused on safety testing, the tumors in a number of patients stopped growing or shrank, and there was one complete remission. Encouraged by those results, the company plans to begin effectiveness trials soon.

    Restoring suicidal drive

    While some mutations of oncogenes and tumor suppressors make their mischief by boosting cell growth, others, including those in the tumor suppressor p53 and the oncogene bcl-2, impair another key process of normal cells: apoptosis, or programmed cell suicide, which serves among other things to remove cells whose DNA has been damaged. These mutations are particularly nasty because they pack a double punch. Not only do they allow damaged cells to avoid suicide and possibly turn cancerous, but they also make cancer cells resistant to the many chemotherapeutic drugs that work by triggering apoptosis. Researchers are now looking for ways to repair this self-destruct switch in cancer cells.

    One of the most hotly pursued involves using gene therapy to replace p53, a tumor-suppressor gene that plays a role in apoptosis and is defective in about half of human cancers. First off the mark was Jack Roth, a thoracic surgeon at the M. D. Anderson Cancer Center, who decided 8 years ago to try using retroviruses and, later, the respiratory virus adenovirus to deliver normal copies of p53 to lung cancer cells. Because the immune system would clear the viruses from the bloodstream if they were given systemically, they must be injected directly into the tumor or its surroundings, and that limits their potential for treating metastatic disease. But animal tests suggest that the injection strategy can shrink primary tumors, which can be the major causes of death in some types of cancer, including lung and head and neck cancers. And when the researchers injected the p53-bearing viruses into human lung tumors growing in mice, says Roth, “we saw rather substantial tumors regress.”

    Austin biotech company Introgen Therapeutics took Roth's viruses into early-stage clinical trials for lung and head and neck cancers, and Canji Inc., a San Diego subsidiary of Schering-Plough, has a similar p53-carrying adenovirus in human trials. So far, neither Schering-Plough's virus nor Introgen's has caused any troublesome side effects, and while Schering declined to comment on the effectiveness of its virus, Introgen reported that tumors have regressed or at least stopped growing in some patients treated with the virus alone or together with the drug cisplatin. Based on those results, Introgen, in collaboration with RPR Gencell, a division of the French drug company Rhône-Poulenc Rorer, has just begun a larger trial aimed at evaluating the treatment's effectiveness for head and neck cancers.

    Other efforts focus on knocking out the function of the oncogene bcl-2, an inhibitor of apoptosis, which is overactive in about half of all human cancer types. “Cells with up-regulated bcl-2…are very difficult to kill with anything you throw at them,” says cell-death researcher John Reed of the Burnham Institute in San Diego. Several companies are trying to develop drugs to inhibit the Bcl-2 protein, says Reed, while others are already testing a more direct assault on Bcl-2: antisense nucleotides designed to prevent the protein from being made in the first place.

    In April, Andrew Webb of the Royal Marsden Hospital in Sutton, Surrey, in the United Kingdom, and his colleagues reported the results of the first human trial of anti-bcl-2, in nine patients with advanced non-Hodgkin's lymphoma. One patient had a complete remission and another had partial reduction of his tumors. Based on the promise of those results, medical oncologist Howard Scher of Sloan-Kettering in New York, along with the San Diego antisense biotech company Genta, which developed the drug, are waiting for FDA approval to begin testing it in 10 to 20 patients with a variety of solid tumors that overexpress bcl-2.

    Once the roadblocks along the suicide pathway have been removed, says Reed, it may be necessary to trigger apoptosis by damaging the cells. As a result, efforts to restrain bcl-2 or replace p53 may be most effective, he says, when used “as a sensitizer in combination with traditional cytotoxic therapies.”

    Although traditional therapies are likely to remain a part of the anticancer armamentarium, the new approaches could transform treatment strategies. If even a few of them pay off, they may mean a future in which doctors will screen tumors for the mutations they carry, then target the defects in the cancer cells directly to prime the tumor cells for killing. “It won't matter whether it is a breast, ovarian, or prostate tumor,” says Peter Hirth, executive vice president for research and development at Sugen. “[The mutation] will be the target for therapeutic intervention.”


    Treatment Marks Cancer Cells for Death

    1. Marcia Barinaga

    Tumor-suppressor genes are an obvious target for cancer treatment, because they are lost or inactivated in many cancers. Most efforts to exploit these genes have taken a straightforward approach: trying to replace them or mimic their function with some other molecule (see main text). But one candidate cancer treatment tries instead to turn the absence of a tumor suppressor—the p53 gene—into an advantage.

    In order to infect and kill cells, the human respiratory virus, adenovirus, has to disable p53, because the gene's activities include preventing viral DNA replication. Reasoning that an adenovirus unable to disarm the gene would only be able to infect cells in which p53 was nonfunctional, researchers at Onyx Pharmaceuticals of Richmond, California, removed the viral gene that disables p53. Subsequent tests with cultured cancer cells and tumors growing in mice confirmed that the modified virus specifically kills tumor cells lacking p53 (Science, 18 October 1996, p. 342).

    Since then, the company has completed early human trials. The modified virus has to be injected directly into tumors, as the immune system would eliminate it if it were infused into the bloodstream. In 32 patients with head and neck cancers, these injections produced no problematic side effects, says Frank McCormick of the University of California, San Francisco, Cancer Center, and former vice president of research at Onyx. Even more encouraging, the virus caused the tumors to shrink in 12 of the patients, in some cases by as much as 90%. McCormick says Onyx has begun a second effectiveness trial for head and neck cancer and safety trials for additional cancers.

    It “is a very clever approach,” says Allen Oliff, executive director for cancer research at Merck Research Laboratories in West Point, Pennsylvania. But he notes that the need to inject the virus into tumors will limit its use for metastatic cancer. “It is not going to be penicillin for cancer,” he says, but for certain tumors it “still could be very successful.”


    On the Biotech Pharm, a Race to Harvest New Cancer Cures

    1. Wade Roush

    More than 1400 biotechnology companies make their homes in North America, according to the investment newsletter Biotech Navigator, yet fewer than 50 biotechnology products have been successfully commercialized to date. With statistics like that, it's easy to see why biotech has gained a reputation as “one of the worst investments on the street,” in the words of David Tomei, founder and chief executive officer (CEO) of the Richmond, California, firm LXR Biotechnology Inc. Yet the former Ohio State University pharmacologist is among a cadre of scientist-entrepreneurs who believe they can reverse that reputation—by developing drugs that remedy the genetic and molecular defects behind most cases of cancer.


    The goal of this new effort in “molecular oncology” is to devise drugs that correct the specific defects that cause cancer in the first place—the abnormal activation of growth-promoting oncogenes, for example, or loss of tumor-suppressor genes. The hope is that treatments will turn out to be more effective and have fewer side effects than conventional cancer chemotherapeutic drugs (see p. 1036). More than two dozen firms, with a combined capitalization in the hundreds of millions of dollars, are competing for leading positions in this emerging market.

    Sell division.

    Scientists at Mitotix seek drugs to block inappropriate cell proliferation, cancer's hallmark.


    The potentially huge profits available to the inventors of better cancer drugs explain their eagerness. The American Cancer Society estimates that 1.4 million new cases of cancer will be diagnosed in the United States in 1997, with the overall medical costs from cancer amounting to $35 billion. Of the world's eight top-selling anticancer drugs, four—the prostate cancer drugs Casodex, Eulexin, Lupron, and Zoladex—are merely palliative, yet have combined annual sales of $1.7 billion, while sales of the breast cancer drugs tamoxifen and taxol are approaching $500 million and $800 million, respectively.

    But will tapping into this revenue stream be any easier for these new firms than for many of their predecessors in biotech? Tomei and others say yes, because companies such as LXR aren't merely applying ideas developed by academic scientists, but are generating many of their own advances in basic cancer biology. That kind of innovation “will succeed in reversing the feeling that biotech was wishful thinking,” says Tomei. Still, any new molecular oncology company faces other hurdles that have tripped up many biotech companies before, leaving them cashless and without a product to sell.

    To avoid this fate, a new company must have a scientific advance that promises a practical treatment. It must beat out competitors for precious start-up capital and find revenues to supplement that capital during the protracted process of preclinical and clinical testing. It must protect its intellectual property, and if it raises money by agreeing to share scientific advances with larger pharmaceuticals firms—a standard practice among young biotech firms—it must actually deliver the goods.

    For the many cancer researchers who, like Tomei, have left academic posts to seek their fortunes in biotech, there's a clear message: Bucking the biotech trend won't be simple. “If you have a great idea, solid science, and earthshaking discoveries, you are still only 10% of the way there,” Tomei admits.

    Because the best available strategies against tumors—poison them, burn them, or cut them out—are such blunt instruments, therapies that counter cancer where it starts, with the alteration of genes whose job is to promote or limit cell growth, have an enormous attraction to cancer victims and entrepreneurs alike. “Cancer is going to be treated as a genetic disease,” says Thomas Needham, director of business development at Mitotix Inc., a Cambridge, Massachusetts, biotech firm that is applying its knowledge of cell division in yeast to develop novel drugs aimed at halting the inappropriate cell proliferation that is the hallmark of cancer. “That plays to our favor, because the current modes of therapy are not great.”

    New targets. Many investors apparently agree. Ten venture capital firms, for example, have invested more than $23 million in Mitotix, founded by cell-cycle researchers David Beach of Cold Spring Harbor Laboratory in New York state and Giulio Draetta of the European Institute of Oncology in Milan. “The original focus was on cell division in yeast, and Mitotix was taking a high-risk bet on the applicability of this to human cancer,” says oncologist Jason Fisherman, a partner at one of the venture capital firms, Boston's Advent International. “[But] the company has the molecular, cellular, and genomic tools to continue to develop new targets [for anticancer drugs]. These companies are fundamentally discovery companies—engines for churning out molecular targets.”

    Because targets, not profits, are all a new biotech company is likely to churn out for several years, the central challenge for molecular oncology companies is simply to stay in business until revenues exceed losses. And while each firm has its own unique business strategy, it's possible to spot a few patterns in the noise.

    One approach most firms deliberately avoid today is to go it alone—to try to become a “fully integrated” biopharmaceuticals company that develops drugs all the way from discovery to testing, manufacturing, approval by the U.S. Food and Drug Administration (FDA), and marketing. A few big biotech firms such as Genentech, Amgen, and Genzyme have successfully shot these rapids, although not with compounds that treat cancer. And Cell Pathways Inc., a privately owned molecular oncology firm with laboratories in Aurora, Colorado, is attempting the same feat. Its lead product, a compound called FGN-1 that induces programmed cell death selectively in precancerous cells, is already in phase III clinical trials, and the firm is now on the verge of its initial public offering of stock. But few firms can afford the infrastructure full integration requires, and most are therefore aiming to be something less than the next Amgen or Genentech.

    A far more common—indeed, almost obligatory—path for young molecular oncology firms is to form partnerships with big pharmaceutical firms. For example, under a collaborative agreement, the German firm BASF Pharma will pay up to $48 million for Mitotix's help in developing drugs that inhibit cdc25 phosphatases, enzymes discovered by Beach that are essential for cell division and may be hyperactive in cancer cells proliferating out of control.

    In such arrangements, the biotech partners get the money they need to stay afloat and are freed from the expense of testing and marketing candidate drugs, while the pharmaceutical partners are freed from the high risk of failure inherent in the early drug-discovery process. But such collaborations do have a downside. The pharmaceutical companies usually pay only part of their commitment up front, with the rest coming in chunks later, on the condition that the collaboration reaches certain milestones by certain dates—for example, entering clinical trials or filing for FDA drug approval. That creates deadline pressure and limits the number of creative ideas a small biotech firm can afford to explore. It's for precisely these reasons that Advanced Cellular Diagnostics, an Elmhurst, Illinois-based biotech firm developing drugs that would arrest the growth of cancer cells by turning up expression of the tumor-suppressor genes p53 and p21, has avoided partnerships, says molecular biologist Sarah Bacus, who founded the company with proceeds from the sale of a previous venture. “We try to meet our own internal milestones, but we don't have to answer to anyone, and therefore we can work with a lot of different therapeutics,” Bacus says.

    Another way for a small biotech firm to reduce financial worries while remaining relatively independent is to sell out. The San Diego-based biotech firm Canji Inc., for example, became a wholly owned subsidiary of one of its former collaborators, pharmaceutical giant Schering-Plough, in 1995 at a cost to Schering of $55 million. And although the deal has meant a loss of autonomy for Canji—“The decision-making process is a little bit slower, since you can't just run upstairs and talk to the boss,” says Dan Maneval, Canji's director of pharmacology—the arrangement has been mutually beneficial, both he and Schering-Plough officials say.

    Data in the bank. An entirely different way to make money in molecular oncology, Advent's Fisherman points out, is to offer tools instead of targets. “Small companies don't have to discover drugs themselves to be successful,” he says. He points to Myriad Genetics, based in Salt Lake City, Utah, and Incyte Pharmaceuticals, of Palo Alto, California, which sell information about the genes that cause cancer and their patterns of inheritance in the population, rather than potential therapies. Myriad is already marketing BRACAnalysis, a test for mutations in the BRCA1 and BRCA2 genes recently linked to hereditary breast cancer, and has plans to develop other potentially lucrative diagnostic products.

    And Incyte, unlike most small biotechs, is already turning a profit with its LifeSeq database of gene sequences and gene expression patterns. LifeSeq brought in $12.2 million in pharmaceutical company subscriptions in 1995, $42 million in 1996, and $39 million in the first half of 1997 alone, according to CEO Roy Whitfield. “These databases cover all medical research applications, but we put in them what the drug companies want to see, and I can tell you that oncology is what they are most interested in,” Whitfield says.

    But survival in the crowded world of molecular oncology may depend as much on luck as on scientific and financial savvy. For example, Cell Pathways, one of the firms closest to actually marketing a cancer-prevention therapy, has benefited from the serendipitous fact that its cell-death activator FGN-1, which has been shown in preliminary clinical trials to prevent precancerous colon polyps from becoming fully malignant, is a metabolic byproduct of the anti-inflammatory drug sulindac. That may significantly speed trials of the drug's safety, because “it's been floating around in human bloodstreams for years and already has a bit of a track record,” according to gastroenterologist Rifat Pamukcu, the company's scientific founder.

    With such advantages, a few molecular oncology firms will sooner or later pull ahead of the pack, concludes Canji's Maneval. “It's going to be difficult to have all two dozen companies move along together forever. There's going to be a weeding out, and [those with] the good technologies”—plus good fortune—“will prevail.”


    Systems for Identifying New Drugs Are Often Faulty

    1. Trisha Gura
    1. Trisha Gura is a writer in Cleveland, Ohio.

    Screening potential anticancer drugs sounds easy. Just take a candidate drug, add it to a tumor type of choice, and then monitor whether the agent kills the cells or inhibits cancer growth. Too bad it hasn't been that simple. Even as investigators try to develop a new generation of more effective and less toxic anticancer drugs that directly target the gene changes propelling cells toward uncontrollable division (see p. 1036), they face a long-standing problem: sifting through potential anticancer agents to find ones promising enough to make human clinical trials worthwhile.

    Indeed, since formal screening began in 1955, many thousands of drugs have shown activity in either cell or animal models, but only 39 that are used exclusively for chemotherapy, as opposed to supportive care, have won approval from the U.S. Food and Drug Administration. “The fundamental problem in drug discovery for cancer is that the model systems are not predictive at all,” says Alan Oliff, executive director for cancer research at Merck Research Laboratories in West Point, Pennsylvania.

    Pharmaceutical companies often test drug candidates in animals carrying transplanted human tumors, a model called a xenograft. But not only have very few of the drugs that showed anticancer activity in xenografts made it into the clinic, a recent study conducted at the National Cancer Institute (NCI) also suggests that the xenograft models miss effective drugs. The animals apparently do not handle the drugs exactly the way the human body does. And attempts to use human cells in culture don't seem to be faring any better, partly because cell culture provides no information about whether a drug will make it to the tumor sites.


    The pressure is on to do better. So researchers are now trying to exploit recent discoveries about the subtle genetic and cellular changes that lead a cell toward cancer to create cultured cells or animal models that accurately reproduce these changes. “The real challenge for the 1990s is how to maximize our screening systems so that we are using the biological information that has accumulated,” says Edward Sausville, associate director of the division of cancer treatment and diagnosis for the developmental therapeutics program at the NCI. “In short, we need to find faithful representations of carcinogenesis.”

    The first efforts to do so date back to the end of World War II, when hints began emerging that some chemicals might have cancer-fighting effects. That evidence encouraged many chemists to explore the anticancer potential of similar agents shelved in their laboratories. And after commercial interests decided against helping the academics set up an efficient way to screen their chemicals, the NCI stepped in.

    The institute started by pulling together mouse models of three tumors: a leukemia, which affects blood cells; a sarcoma, which arises in bone, muscle, or connective tissue; and a carcinoma, the most common type of cancer, which arises in epithelial cells and includes such major killers as breast, colon, and lung cancers. Initially, many of the agents tested in these models appeared to do well. However, most worked against blood cancers such as leukemia and lymphoma, as opposed to the more common solid tumors. And when tested in human cancer patients, most of these compounds failed to live up to their early promise.

    Researchers blamed the failures on the fact that the drugs were being tested against mouse, not human, tumors, and beginning in 1975, NCI researchers came up with the xenograft models, in which investigators implant human tumors underneath the skin of mice with faulty immune systems. Because the animals can't reject the foreign tissue, the tumors usually grow unchecked, unless stopped by an effective drug. But the results of xenograft screening turned out to be not much better than those obtained with the original models, mainly because the xenograft tumors don't behave like naturally occurring tumors in humans—they don't spread to other tissues, for example. Thus, drugs tested in the xenografts appeared effective but worked poorly in humans. “We had basically discovered compounds that were good mouse drugs rather than good human drugs,” says Sausville.

    The xenograft models may also have missed effective drugs. When Jacqueline Plowman's team at NCI tested 12 anticancer agents currently used in patients against 48 human cancer cell lines transplanted individually into mice, they found that 30 of the tumors did not show a significant response—defined as shrinking by at least 50%—to any of the drugs.
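    The screening criterion behind that finding can be illustrated with a short sketch. Only the ≥50%-shrinkage threshold comes from the study described above; the drug names, tumor-line names, and response matrix below are hypothetical:

    ```python
    # Hypothetical illustration of the screening criterion described above:
    # a xenograft line "responds" to a drug if its tumor shrinks by at least 50%.
    RESPONSE_THRESHOLD = 0.50  # fractional shrinkage defining a significant response

    def is_insensitive(shrinkage_by_drug):
        """True if the tumor line failed to respond to every drug tested."""
        return all(s < RESPONSE_THRESHOLD for s in shrinkage_by_drug.values())

    # Toy data: fractional shrinkage of three tumor lines under two drugs.
    panel = {
        "line_A": {"drug_1": 0.62, "drug_2": 0.10},  # responds to drug_1
        "line_B": {"drug_1": 0.05, "drug_2": 0.30},  # responds to nothing
        "line_C": {"drug_1": 0.55, "drug_2": 0.70},  # responds to both
    }

    insensitive = [line for line, results in panel.items() if is_insensitive(results)]
    print(insensitive)  # ['line_B']
    ```

    In the NCI study, 30 of the 48 lines would have fallen into the `insensitive` category for all 12 drugs tested.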

    Researchers have not yet figured out why so many of the xenografts were insensitive to the drugs. But the NCI team says that the result means that drugs would have to be screened against six to 12 different xenografts to make sure that no active anticancer drugs were missed. That's an expensive proposition, as the average assay costs about $1,630 when performed by the government and $2,900 when done commercially. “I cannot get on my pulpit and say that the way we are doing this is the best way, because I don't think there is a good way to do it,” says Sausville.
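    The expense is straightforward multiplication using the per-assay figures quoted above; the totals in this sketch are arithmetic, not data from the study:

    ```python
    # Estimated cost of screening one drug candidate against a xenograft panel,
    # using the NCI's cited per-assay costs.
    COST_GOVERNMENT = 1630  # dollars per assay, performed by the government
    COST_COMMERCIAL = 2900  # dollars per assay, performed commercially

    def panel_cost(n_xenografts, cost_per_assay):
        """Total screening cost for one candidate across a xenograft panel."""
        return n_xenografts * cost_per_assay

    for n in (6, 12):
        print(f"{n} xenografts: ${panel_cost(n, COST_GOVERNMENT):,} in-house vs. "
              f"${panel_cost(n, COST_COMMERCIAL):,} commercial")
    # 6 xenografts: $9,780 in-house vs. $17,400 commercial
    # 12 xenografts: $19,560 in-house vs. $34,800 commercial
    ```

    Multiplied across thousands of candidate compounds, even the low end of that range makes exhaustive xenograft screening prohibitive.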

    To create better models of cancer development in humans, investigators are now drawing on the growing knowledge of human cancer-related gene mutations. They are genetically altering mice so that they carry the same kinds of changes—either abnormal activation of cancer-promoting oncogenes or loss of tumor-suppressor genes—that lead to cancer in humans. The hope is that the mice will develop tumors that behave the same way the human tumors do.

    So far, the results from these mouse models have been mixed, however. One mutant mouse strain, for example, lacks a working APC gene, a tumor suppressor that leads to colon cancer when lost or inactivated. This mouse seems to do well at re-creating the early signs of colon cancer. But in the later stages of the disease, the type of mutations in the tumors begin to diverge from those in human colon cancer, and the disease manifests itself differently as well. It spares the liver, for example, unlike the human cancer.

    Other new mouse models have fared even worse. Take the one in which the retinoblastoma (RB) tumor-suppressor gene was knocked out. In humans, loss of RB leads to a cancer in the retina of the eye. But when the gene is inactivated in mice, the rodents get pituitary gland tumors. And BRCA1 knockouts—which are supposed to simulate human breast and ovarian cancer—don't get any tumors at all. “One might expect that these animals would also mimic human symptoms, not just the genetic mutations,” says molecular biologist Tyler Jacks of the Massachusetts Institute of Technology. “In fact, that is usually the exception, not the rule.”

    Why gene knockouts in mice have effects so different from those of the corresponding mutations in humans is unclear. One possibility is that in mice, other genes can compensate for a missing gene, such as BRCA1. Another, says Jacks, is that “the genetic wiring for growth control in mice and humans is subtly different.”

    The limitations of animal models have spurred the NCI, among others, to test drug candidates in cultures of human cells. The institute now relies on a panel of 60 human tumor cell lines, including samples of all the major human malignancies. Drugs to be tested are fed to subsets of the panel, based on tumor cell type, and their cell-killing activity is monitored.

    Over the last 7 years, the panel has been used to screen almost 63,000 compounds, and 5000 have exhibited tumor cell-killing activity. But that has created another dilemma: so many compounds show antitumor activity in culture that the cost of bringing them all to clinical trials—where most fail anyway—would be daunting. As Sausville asks: “How do you prioritize so many compounds for clinical trials?” To do so, the NCI uses a computer database of past antitumor agents to single out only those compounds with novel mechanisms of action. That computer screening has whittled the number of promising agents down to about 1200, according to Sausville.
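    The two-stage prioritization the NCI describes—keep only the compounds that are active in the cell-line panel, then keep only those whose mechanism of action is not already represented among known agents—can be sketched as a simple filter. The compound records and mechanism labels below are hypothetical; the NCI's actual database comparison is far more sophisticated:

    ```python
    # Hypothetical sketch of two-stage prioritization: (1) keep compounds active
    # against the cell-line panel, (2) keep only those whose putative mechanism
    # of action is not already covered by known antitumor agents.
    known_mechanisms = {"DNA alkylation", "topoisomerase inhibition"}

    compounds = [
        {"name": "cmpd_001", "active": True,  "mechanism": "DNA alkylation"},
        {"name": "cmpd_002", "active": True,  "mechanism": "kinase inhibition"},
        {"name": "cmpd_003", "active": False, "mechanism": "kinase inhibition"},
    ]

    def prioritize(compounds, known_mechanisms):
        """Return active compounds with mechanisms not seen among known agents."""
        active = [c for c in compounds if c["active"]]
        return [c for c in active if c["mechanism"] not in known_mechanisms]

    print([c["name"] for c in prioritize(compounds, known_mechanisms)])
    # ['cmpd_002']
    ```

    Applied at scale, a filter of this general shape is how 5000 active compounds could be cut to roughly 1200 candidates worth closer study.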

    Those compounds are then tested in what is known as a hollow fiber model, in which tiny tubes filled with tumor cells are implanted into mice in a variety of sites. By monitoring the tumor cell-killing effects of drugs on the implants, researchers can test which drugs actually make it to the tumor sites when the drugs are administered in different ways: intravenously versus orally, for example. Sausville cautions, however, that it's still too early to tell how predictive these screens are, because only a few of the drugs tested have gone far enough to show efficacy in humans.

    Both drug screeners and doctors also use another cell culture method, the so-called clonogenic assay, to sift through potential anticancer drugs. They grow cell lines or a patient's tumor cells in petri dishes or culture flasks and monitor the cells' responses to various anticancer treatments. But clonogenic assays have their problems, too. Sometimes they don't work because the cells simply fail to divide in culture. And the results cannot tell a researcher how anticancer drugs will act in the body.

    What's more, new results from Bert Vogelstein's group at Johns Hopkins University School of Medicine add another question mark about the assay's predictive ability. Todd Waldman, a postdoc in the Vogelstein laboratory, found that xenografts and clonogenic assays deliver very different messages about how cancer cells lacking a particular gene, p21, respond to DNA-crippling agents. Radiation, like many of the drugs used to treat cancer, works by damaging the cells' DNA. This either brings cell replication to a halt or triggers a process known as apoptosis in which the cells essentially commit suicide. Waldman wanted to see how p21, one of the genes involved in sensing the DNA damage and halting cell replication, influences that response to radiation.

    In the mouse xenograft assay, Waldman and his colleagues found that the radiation cured 40% of the tumors composed of cells lacking p21, while tumors made of cells carrying the gene were never cured. But this difference was not apparent in the clonogenic assay, where the radiation appeared to thwart the growth of both dispersed tumor cell types. “We showed this gross difference in sensitivity in real tumors in mice and in the clonogenic assay,” Waldman says.

    He suggests that the different responses in the two systems have to do with the fact that a subset of p21 mutants die in response to radiation, while cells with the normal gene merely arrest cell division. Either way, the dispersed tumor cells in the clonogenic assay will fail to grow. However, in the xenograft tumors, which consist of many cells in a solid mass, the arrested, but nonetheless living, p21+ tumor cells may release substances that encourage the growth of any nearby tumor cells that escaped the effects of the radiation. But tumor cells lacking the p21 gene die, and because dead cells cannot “feed” neighboring tumor cells, the entire tumor may shrink.

    The finding indicates that the clonogenic assay can't always predict how a tumor will respond to a drug in an animal. Still, by linking the different responses in two models to the presence or absence of a specific gene system, the Waldman team's results help clarify why tumor cells might respond differently in culture and in animals. Indeed, the general idea that a tumor's drug sensitivity may be linked to the genetic mutations it carries has led others to try to use cells with comparable mutations to identify better chemotherapeutic agents.

    Leland Hartwell, Stephen Friend, and their colleagues at the Fred Hutchinson Cancer Research Center in Seattle are pioneering one such effort. They are building on previous work in which Hartwell's team discovered a series of yeast genes, called checkpoint genes, that normally stop cells from progressing through the cell cycle and dividing if they have abnormalities such as unrepaired DNA damage. Because mutations in checkpoint and other cell cycle-related genes have been linked to human cancers, looking for drugs that restore normal growth control in mutated yeast might be one way to find new cancer therapies (see Article on p. 1064).

    The NCI is taking a similar tack, looking to see whether the cells in its panel, which was set up based on tissue type (breast cancer versus colon cancer, for example), can be reclassified according to the types of genetic defects they carry. To enable drugs that counteract specific defects to be prescribed most effectively, researchers are also developing technologies for analyzing the gene defects in each patient's tumors. That way, if drugs that correct specific defects can be identified, they could then be matched to each individual's tumor cell makeup. “This would be so valuable,” says Homer Pearce, vice president of cancer research and clinical investigation at Eli Lilly and Co. in Indianapolis. “It would help to identify patients that have the greatest chance of benefiting from therapy, while minimizing the number that would be exposed to a treatment that would not work.”

    Indeed, Merck's Oliff says, “the future of cancer drug screening is turning almost exclusively toward defining molecular targets.” If the approach works, drug developers would finally have an easy way to identify promising cancer drugs, and cancer patients might have an array of new treatments.
