News this Week

Science  12 Dec 2003:
Vol. 302, Issue 5652, pp. 1138


    E.U. Stem Cell Debate Ends in a Draw

    Gretchen Vogel

    An 18-month tug of war ended in a deadlock last week when European science ministers failed to agree on regulations governing embryo research, including the derivation of new human embryonic stem (ES) cell lines. The lack of consensus means that the European Union's $20 billion, 5-year Framework 6 science funding program will have no comprehensive policy on embryo research. As a result, when a moratorium on funding for the controversial work expires at the end of the year, the European Commission will evaluate research proposals involving destruction of human embryos on a case-by-case basis.

    E.U. member countries have been fighting over the issue since summer 2002, when several countries threatened to block final approval of the Framework 6 program if it did not rule out funding for work banned in some member countries (Science, 13 September 2002, p. 1784). Germany and Austria, for instance, have laws against research that involves destruction of a human embryo. So if a German university professor were to participate in an E.U.-funded project that derived new ES cell lines, he or she could face prosecution (Science, 1 August, p. 577).

    But other countries, including the United Kingdom and Sweden, argued that the stringent regulations of some member countries should not limit scientists E.U.-wide. A delay was averted when science ministers agreed to a 1-year moratorium on E.U. funding for all research involving embryos, including human ES cells, while the commission and the countries attempted to forge a compromise.

    In July, the commission proposed allowing research using embryos that had been created before 27 June 2002, the day Framework 6 was adopted. That compromise satisfied Ireland. But Germany, Spain, Austria, Italy, and Portugal said that any destructive embryo research was unacceptable.


    The commission's position got a boost last month when the European Parliament approved an even more permissive set of regulations, which endorsed embryo research and eliminated any cutoff date for embryo creation or donation. But that support was largely symbolic; the final decision lay with the E.U.'s Council of Ministers.

    At the meeting on 3 December, the Italian delegation, which holds the 6-month rotating E.U. presidency, proposed a policy similar to that in effect for federal funding in the United States: that the Framework fund research only on existing human ES cell lines. That failed to win majority support, however.

    A compromise proposal from the commission would have allowed Framework 6 to fund only the characterization and testing of newly derived cell lines—not the actual derivation process itself, during which an embryo is destroyed. But the Italian delegation rejected that idea, saying it did not have enough time to study the proposal carefully, and the meeting adjourned without a decision.

    The deadlock is likely to remain. Deputy Prime Minister Mary Harney of Ireland, which will assume the E.U. presidency in January, told reporters after the meeting that she “deeply regretted” the failure to agree on guidelines. However, she said, because prospects for consensus look grim, she sees no reason to raise the issue again.

    The failure to reach a decision creates uncertainty for scientists hoping to use E.U. funds to derive new stem cell lines. Although such research is not ruled out, any controversial application could be blocked by a review commission that includes representatives from all member states. Nevertheless, Stephen Minger of King's College London, who derived some of Britain's first human ES cell lines, says he is optimistic about the prospect of obtaining Framework funds. The program “is a hell of a lot of money,” he says. With assurances that the proposal would be considered on equal footing with others, “we would definitely put something in,” he says.

    He might not have much competition. In the commission's first round of proposals for Framework 6, 126 of more than 12,000 focused on stem cells, says a commission spokesperson. However, he says, only one of those dealt with human ES cells as opposed to adult or animal-derived cells. That project planned to use already existing cell lines, which would have been allowed under even the most stringent guidelines.


    Defunded VA Grants Restored; Wray Returns to Texas

    Jennifer Couzin

    The controversial head of the Office of Research and Development in the Department of Veterans Affairs (VA), Nelda Wray, is returning to her home base of Houston, Texas, according to a memorandum released last week by VA Under Secretary for Health Robert Roswell.

    The memo to senior staff members attributed Wray's move to “a family health concern” and reported the immediate appointment of Roswell's deputy, Jonathan Perlin, as acting chief of research. One of Perlin's first actions was to reverse Wray's April “defunding” of 15 basic research grants. The defunding “just wasn't done in a fair way,” says Mindy Aisen, deputy chief of the VA research office. “So we fixed it.”

    Wray's departure comes after 3 weeks of confusion within VA headquarters. Last month, following a meeting between Wray and Roswell, the research chief was said to be leaving her post (Science, 21 November, p. 1306). No official pronouncement was made, however, and Wray spent much of the time that followed in Houston.

    Wray's 11-month tenure was marked by mounting tension as she sought to implement her vision of VA research. She launched plans to move funds from basic science to the division of health outcomes studies. Wray also began instituting new rules for peer review that awarded a separate score for productivity, measured, for example, by the number of papers published. After months of protest from scientists, which helped prompt a congressional VA subcommittee to call for hearings, Wray reversed course in October. By then, however, the VA inspector general's office had initiated an investigation into the research office. The inspector general's report has not been released.

    The defunding reversal has delighted the affected researchers. “I didn't expect them to do it that quickly” after Wray's departure, says Ronald Bach, a biochemist at the Minneapolis VA Medical Center in Minnesota. Aisen notes that VA's peer-review system isn't perfect, and VA research could adopt a more translational bent. Still, “we hope to be much more inclusive in the way we make decisions,” she says, adding that she and Perlin are planning a 2-day meeting for the VA's associate chiefs of staff.

    Perlin will have to make some immediate personnel decisions: Three of the office's four divisions, including laboratory research, are without permanent directors. Aisen is acting head of all three. New leaders, however, are “ready to come on board momentarily,” she says.


    Bush Plan for NASA: Watch This Space

    Andrew Lawler

    U.S. space enthusiasts are blessed and cursed by President John F. Kennedy's 1961 call to go to the moon by the end of the decade. Although his bold vision was realized, other presidents have had less luck winning support for human space flight. President Ronald Reagan's 1984 proposal to build a space station has taken vastly more time, money, and political capital than anticipated, and President George H. W. Bush's 1989 plan to return to the moon and land on Mars never left the ground.

    Now Bush's son is pondering whether to propose a dramatic new direction for NASA in the wake of February's Columbia accident. Sources say that a small team of senior White House and federal agency officials has finished work on a tightly held set of options for the president. But in a Washington focused on the war against terrorism, rising federal deficits, and the 2004 election campaign, any human space flight proposal faces skepticism—or worse.

    NASA Administrator Sean O'Keefe has already battled skeptics in the Administration—and lost. White House officials say that O'Keefe pushed this fall for a dramatic increase in the agency's $15 billion budget to begin work that would ultimately lead to human Mars missions. But that high price tag—$5 billion annually for at least the next 4 years, according to those sources—was rejected as unrealistic.

    Wild Card.

    Chief of Staff Andrew Card says the president will make a “bold decision” on space.


    O'Keefe's ambitious plan surprised those who knew him as Bush's deputy director of the White House Office of Management and Budget and previously as a say-no Department of Defense comptroller. “He's gone native,” grouses one senior Pentagon official. Among the less costly options under consideration, he added, are a return to the moon and a base between Earth and the moon that could be used for assembly of planetary spacecraft or large telescopes. O'Keefe declined an interview with Science, but on 3 December he told the NASA Advisory Council that 2004 “is a seminal time for the agency.” He added that “options are being framed” by a team that has been working since the summer.

    That team, led by the National Security Council, includes O'Keefe as well as Pentagon and Commerce and State department officials. Its product is being vetted by Vice President Dick Cheney, Presidential Chief of Staff Andrew Card, and the president's top political adviser, Karl Rove. “I guarantee [Bush] will have a bold decision for this country,” Card said 7 December on a network news show. Later that day Card added that “the president said that we would not give up on space exploration.”

    The timing of any announcement is the subject of rampant speculation. The 100th anniversary of the Wright brothers' first flight on 17 December at Kitty Hawk, North Carolina, would be a logical time for an announcement; so would the landing of NASA's two Mars-bound spacecraft in January. The annual State of the Union speech, set for 20 January, is a favorite launching pad for presidential initiatives. The White House, however, is trying to lower expectations. “There are no plans to make any policy announcements on our space program at any immediate upcoming speeches,” the president's press secretary, Scott McClellan, declared on 5 December.

    Congressional aides and other observers warn that any new mission will have to fit into an agency budget already strained by repairs to the space shuttle, the completion of the space station, and a new series of technologically advanced planetary probes, not to mention development of a shuttle alternative. NASA should first resolve the problems facing the shuttle and station systems, they say. “We've gotten so far into a hole in the past 30 years, it will take a lot of money and political capital to get us out before you go searching for something new,” says George Washington University political scientist John Logsdon, a member of the panel that reviewed the Columbia accident.


    Scientists Make Sperm in a Dish

    Gretchen Vogel

    This spring, scientists and bioethicists were rocked by a report that mouse embryonic stem (ES) cells could become egg cells in a dish. The news spawned speculation that babies might someday be born whose genetic “mother” was a stem cell line. Now, two independent teams have shown that sperm, too, can develop in the lab from mouse ES cells. But bioethicists have plenty of time to puzzle out the implications before any babies-from-a-dish become reality. No one has yet repeated the feat with human cells or shown that the dish-derived mouse gametes can produce live offspring.

    The work is generating excitement nonetheless. In the near term, the technique may make it easier to create genetically altered mice. It will also enable scientists to study the mysterious processes that produce sperm and egg cells in the developing fetus—particularly imprinting, in which certain genes are turned on and off depending on whether they are inherited from the father or mother.

    In a paper published online this week by Nature, George Daley of Children's Hospital and the Dana-Farber Cancer Institute in Boston and Niels Geijsen of nearby Massachusetts General Hospital and their colleagues describe how they isolated male germ cells from differentiating mouse ES cells. They allowed mouse ES cells to form so-called embryoid bodies, clumps of differentiating cells that produce a variety of cell types. In those clumps, the researchers were able to identify and isolate cells expressing genes typical of immature sperm.

    Because undifferentiated ES cells express many of the same genes as germ cells, the researchers put their putative germ cells to a test: They exposed them to retinoic acid. Developing germ cells typically reproduce in the presence of retinoic acid, whereas ES cells stop dividing and start to differentiate. The cells continued to divide, suggesting they were indeed germ cells.

    Artificial insemination.

    Sperm derived from mouse ES cells (carrying a green marker gene) can fertilize eggs and form blastocyst-stage embryos.


    To test whether the cells were undergoing some of the normal stages of sperm development, the team looked at whether they had undergone imprinting. After 4 days of development, most of the germ cells still carried their original imprinting pattern, but after 10 days in culture, all such signals had been erased, suggesting that the first step of normal imprinting had occurred.

    The production of immature sperm was not very efficient, but the cells that were produced did seem to be nearly as potent as their natural counterparts. When the team injected the immature sperm into oocytes, one in five developed into blastocysts, in which the embryo forms a hollow ball of cells. The scientists have tried to establish a pregnancy with their dish-derived sperm, but so far without success.

    The work extends experiments published in September by Toshiaki Noce of the Mitsubishi Kagaku Institute of Life Sciences in Tokyo and his colleagues in the Proceedings of the National Academy of Sciences. Noce's team showed that germ cells that had been derived using techniques similar to Daley's and transplanted into the testes of adult mice could apparently develop into mature sperm. The group did not show that the sperm could fertilize oocytes.

    Although the new work raises the specter of children derived from artificial sex cells, such a development “is really science fiction for now,” says Geijsen. “What we have is a great system where we can study imprinting” and other details of sex cell development, he says. Daley's team is trying to reproduce the feat using human ES cells, but “we're certainly not taking it toward assisted reproduction,” he says. Developmental biologist Hans Schöler of the University of Pennsylvania School of Veterinary Medicine in Kennett Square, who showed in May that oocytes could develop from mouse ES cells, cautions against attempting to apply the findings to human infertility. The risk of introduced mutations is too high, he says, because there would be no way to test whether the ES cells had acquired mutations in the lab.


    Genome Comparisons Hold Clues to Human Evolution

    Elizabeth Pennisi

    Despite decades of study, geneticists don't know what makes humans human. Language, long arms, and tree-climbing prowess aside, humans and our kissing cousins, chimpanzees, share practically all of our DNA. Genomic studies have suggested that the regulation of genes, rather than the genes themselves, set the two primate species apart.

    But genes are still an important part of the story, says Michele Cargill, a geneticist at Celera Diagnostics in Alameda, California. She and her colleagues found key differences between chimp and human genome coding sequences, differences that propelled human evolution and can sometimes lead to genetic diseases. Genes for olfaction and hearing, among others, have undergone distinct, rapid changes over the course of human evolution, while chimps' skeletal system genes have been similarly labile. The work, reported on page 1960, “is a valuable start to the genomic approach to identify ‘humanness’ and the genetic bases for human and ape disease differences,” says Ajit Varki, a glycobiologist at the University of California, San Diego.

    Cargill and her colleagues narrowed the number of genes to consider by culling all but those found in each of three species: chimp, human, and mouse. They gathered sequence data from Celera's complete human and mouse genomes and used the human data to identify chimp genes. (A public consortium has completed a draft chimp genome sequence; see below.) Of 7600 shared genes, the researchers identified those that had more than the expected number of base changes, given the normal mutation rates in each species. They concluded that 1547 human genes and 1534 chimp genes had experienced relatively rapid changes that likely endowed a survival advantage. The mouse data helped the team determine what genes to use and revealed the degree and direction of change in the primate genomes.

    The study confirmed a report from earlier this year that a gene called the forkhead-box P2 transcription factor experienced unusually rapid evolutionary change in humans. The gene is necessary for proper speech development, and it's thought to have played an important role in human evolution. By analyzing the function and biochemical makeup of the proteins encoded by the genes they studied, Cargill and colleagues discovered other genes that might have had a similar effect.

    In both chimps and humans, about 107 of the 1000 genes involved in cell signaling and 11 of 78 involved in amino acid metabolism have undergone major changes since the time of the species' last common ancestor 5 million years ago. But the genes didn't follow the same track in the two species, suggesting that they faced different pressures from natural selection. In humans, 27 of 48 olfactory proteins and three of 21 hearing proteins showed significant accelerated change, whereas that was not true in the chimp. In contrast, the chimp's genes for mesoderm development and skeletal structure proved unusually active.

    Hear no evil.

    Changes in genes for hearing, olfaction, and speech helped prompt human evolution.


    At first, some of the patterns didn't seem to make sense, says co-author Andrew Clark, a population geneticist at Cornell University in Ithaca, New York. For example, one wouldn't expect rapid evolution of enzymes that modify proteins, because these enzymes are an essential part of basic metabolism and are not easily changed. But humans had 11 genes that ran contrary to this assumption. Upon reflection, the researchers realized that the alterations likely were adaptive when early humans began to consume more meat and needed the enzymes to digest the extra protein.

    Rapid evolution of olfactory genes is more of a puzzle. Other research has shown that many of the human genes for proteins involved in recognizing odors are accumulating mutations. Mutations can have positive, negative, or no effect on the protein's utility. Because humans no longer rely strongly on their sense of smell to survive, any changes were assumed to have had no effect or a negative effect. Surprisingly, according to the new study, some olfaction genes have been evolving in a positive way: Their changes appear to have been selected for during human evolution. The genes may have promoted certain dietary changes or informed sexual selection, says Clark. Alternatively, olfactory receptors could be involved in more than just smell, says James Sikela, a genome scientist at the University of Colorado Health Sciences Center in Denver, who was not involved in the study.

    The researchers were surprised to find out how many of the genes they studied are known to cause disease if damaged, including seven involved in amino acid metabolism. The discoveries bode well for geneticists: Looking for genes that have diverged since the time of humans' and chimps' common ancestor “could be another way to shorten the time to find these disease genes,” says Cargill.

    The mouse sequence served as a ready and useful reference, but Varki and others say that comparing human and chimp genes with the genome of a closer relative, such as another great ape, would reveal even more genes that defined the branches of the primate family tree.


    Chimp Genome Draft Online

    Elizabeth Pennisi

    The relationship between humans and chimps just got a little easier to understand. This week, the consortium that has been unraveling the DNA sequence of our closest cousin for the past year put its results into the public domain. Robert Waterston of the University of Washington, Seattle, and his colleagues at the Broad Institute in Cambridge, Massachusetts (including the former genome center of the Whitehead Institute for Biomedical Research), and at the Washington University Genome Sequencing Center in St. Louis, Missouri, have determined the order of many of the 3.1 billion bases of a single male chimp's genome. Until now, only pieces of deciphered DNA were available, and their placement along the two dozen chromosomes was uncertain.

    On average, each base was sequenced only four times. That's far short of the current 10-fold coverage of the human genome, but it's enough to put together a rough draft with many bases in the right order, which the researchers deposited online in GenBank. The consortium matched up the human and chimp genomes base by base as much as possible, an alignment that will make it easier for researchers to find elusive genes and regulatory regions. The matchup will also highlight specific differences between the two genomes, perhaps further hinting at what set us apart. The sequence is more than researchers could have hoped for a few years ago, says Ajit Varki of the University of California, San Diego, and it “will be most useful” for geneticists trying to find genes responsible for inherited diseases.

    Work on the chimp sequence continues, but in the meantime, the consortium is taking the next few months to analyze the data it has. It expects to publish results early next year.


    Faults May Gang Up on Los Angeles

    Robert Irion

    SAN FRANCISCO—One of southern California's active earthquake faults could trigger an adjoining fault to break, even though the two faults rupture in totally different ways, seismologists have learned. Such an event—although extremely rare—would unleash a magnitude 7.5 to 7.8 earthquake around the heavily populated edges of the Los Angeles basin, according to a report here this week at a meeting of the American Geophysical Union and on page 1946 of this issue.

    Seismologists already knew that surface faults in the region have cascaded like dominos in the past—most recently in the magnitude 7.3 Landers earthquake in 1992. However, the new research is “extremely valuable” for revealing possible linkages with the network of deeper, hidden faults that lace underneath cities, says geophysicist Jian Lin of the Woods Hole Oceanographic Institution in Massachusetts. “It's unsettling that one type of quake has the potential to trigger another type,” he says.

    Researchers took stock of potential fault interactions near Los Angeles in the wake of the magnitude 7.9 Denali earthquake in Alaska in November 2002 (Science, 16 May, p. 1113). That quake started innocuously as a moderate shock on a previously unknown thrust fault, so named because one side of the deep fault thrusts upward over the other side. But within seconds, the shaking leapt onto an adjacent strike-slip fault, where motion occurs horizontally at the surface. “That's more or less the opposite of what you would expect, because the strike-slip fault was much bigger,” says seismologist Greg Anderson of UNAVCO Inc. in Boulder, Colorado.

    The combined rupture gashed across 340 kilometers of remote Alaskan terrain. Because deeply buried thrust faults crisscross the Los Angeles basin, seismologists wondered whether a similar trigger might set off the mighty San Andreas or another strike-slip fault in southern California.

    The answer came as a surprise. Anderson and his colleagues at the U.S. Geological Survey in Pasadena examined the Sierra Madre-Cucamonga thrust fault system, site of the magnitude 6.7 San Fernando earthquake in 1971, and the San Andreas and San Jacinto strike-slip faults to the north and east. Using detailed three-dimensional models of the faults, the team found that breakage of the thrust faults would not stress the strike-slip faults enough to make them fail. The reverse case, however, was not as benign: Large ruptures along the northern San Jacinto fault could cascade onto the thrust fault system—rifling through cities with a devastating earthquake as large as magnitude 7.8. A San Andreas quake would not have a similar ripple effect, the model predicts.

    Only rare circumstances would trigger a multiple break, Anderson emphasizes. “We assumed the faults were equally loaded [with stress] and very close to failure, which is clearly not the case at most times,” he notes. Indeed, he says, many thousands of years might pass before one of the frequent San Jacinto earthquakes jumps onto the thrust fault.

    Continued mapping of thrust faults—largely from oil company seismic surveys—is essential to quantify the risks, Lin says. Others agree. “This is the best guess about what would happen if the whole system is stressed up,” says seismologist John Vidale of the University of California, Los Angeles. “But it really is a guess. We don't know which faults are loaded up and ready to go.”


    Arctic Is First Call for New Global Program

    Dennis Normile

    TOKYO—The new Integrated Ocean Drilling Program (IODP), which debuted in October, unveiled its first schedule of cruises last week. The big news for 2004 is a $12 million European expedition next summer to a unique Arctic Ocean ridge, using a rented drill ship. That's also when the JOIDES Resolution, the workhorse of international drilling efforts for nearly 2 decades, will resume drilling along the Juan de Fuca Ridge in the northeastern Pacific to study the region's hydrogeology. It will be followed by a series of 6- to 8-week expeditions to probe geodynamics and climate change in the North Atlantic.

    The European expedition will punch holes in the Lomonosov Ridge, which runs 1800 kilometers from a point near Canada's Ellesmere Island, nearly crossing the North Pole and then heading south toward Siberia. The Lomonosov Ridge separated from the northern European plate about 53 million years ago and has been barely touched by previous drilling efforts. “It's an unprecedented opportunity to get a complete record of marine deposits from 50 million years ago to the present,” says Millard Coffin, a marine seismologist at the University of Tokyo's Ocean Research Institute, who represents Japan on the IODP Science Planning Committee and chairs the panel. The 6-week expedition, he adds, is expected to produce a mother lode of data on climate change, Arctic paleoceanography, and the ridge's tectonic history.

    Cold cuts.

    IODP's first year includes several Northern Hemisphere cruises.


    The Europeans were still working out the details of their participation when Japan and the United States signed off on IODP earlier this year (Science, 18 April, p. 410). Next week 13 European countries will formally join the European Consortium for Ocean Research Drilling (ECORD). And in January ECORD will finalize the details of its participation in IODP with the U.S. National Science Foundation and Japan's Ministry of Education, Culture, Sports, Science, and Technology. Canada and Ireland are likely to join ECORD in 2004.

    The Arctic cruise represents ECORD's entire first-year contribution to IODP, which will collect $13 million from Japan and $15 million from the United States. The program's overall annual operating budget is expected to exceed $150 million once Japan's $475 million Chikyu drill ship joins the U.S.-provided Resolution in 2006. ECORD officials say they don't know whether Europe will be able to shoulder a third of the financial burden, although they expect some support from the 7th European Framework Programme, which begins in 2007.


    Patent Sprawl: From Genes to Gene Interpretation

    Jocelyn Kaiser

    A patent on the use of gene arrays issued last month to genomics researcher Eric Lander's group has sent a tremor through the genomic medicine community. The patent privatizes a method of using gene expression arrays to analyze clinical samples. “It will have a significant impact on the field,” predicts proteomics researcher Emanuel Petricoin of the U.S. Food and Drug Administration. Some experts, however, say it's too early to judge.

    The patent (6,647,341), issued by the U.S. Patent and Trademark Office on 11 November, is based on work by Todd Golub, Lander, and others at the Whitehead Institute in Cambridge, Massachusetts. (Golub is also a faculty member at the Dana-Farber Cancer Institute in Boston, which is also a patent assignee; both Golub and Lander are now at the Broad Institute in Cambridge.) They demonstrated how gene expression array data can be used to distinguish two types of leukemia (Science, 15 October 1999, p. 531). The patent claims any use of their “method”—an algorithm—for evaluating gene-expression patterns to distinguish between samples.

    A Whitehead official downplays the patent's reach. The technique it involves is “just another method of analyzing expression data” among “25 others,” says Thomas Ittelson, the institute's director of intellectual property. “It's a run-of-the-mill patent,” Ittelson says. He adds that scientists doing academic research will not need a license. As for those interested in commercializing a discovery, “we haven't made any decisions” about terms, he says.

    Sorry, taken.

    Method used by Golub and Lander to study gene expression has been patented.

    CREDIT: T. R. GOLUB ET AL., SCIENCE 286, 531 (1999)

    Some researchers say it's not so trivial, however. “Many, many people are using the methods they used,” says stem cell researcher Mahendra Rao of the National Institute on Aging branch in Baltimore, Maryland. And one scientist at a major pharmaceutical company says the claims cover “pretty much any application of gene-expression arrays for clinical data.” If so, the Golub patent could be as important as patents on other basic research tools, such as transgenic mice.

    It will take time to sort out the legal details, says Stanford University emeritus law professor John Barton, who thinks “the real question” is whether it could be invalidated if other researchers did similar work prior to the April 2000 filing date.

    Regardless of how important the Golub patent turns out to be, some say it adds to a troubling trend toward patenting biomedical research tools. “I just find it to be destructive for science,” says Michael Eisen of Lawrence Berkeley National Laboratory in California, who with Patrick Brown at Stanford helped develop the gene expression array technique.

    Petricoin and the National Cancer Institute's Lance Liotta, on the other hand, have applied for an equally broad patent for proteomics techniques that, because it would be held by the government, would be free for academic uses. Such patents “could actually drive the field forward faster and be a good thing,” Liotta claims—illustrating that a patent's virtue lies in the eye of the beholder.


    Critics Say New Law Is a Bit Thin on Science

    Robert F. Service

    Forest scientists say that a new federal law may be playing with fire. Last week President George W. Bush signed legislation aimed at reducing the risk of catastrophic wildfires that have become a regular summertime occurrence throughout the American West. But tight limits on environmental and judicial reviews of timber sales and inadequate monitoring of forest thinning practices could hamper its effectiveness, say researchers and environmental groups.

    “The legislation does some very good things in bringing a focus to parts of the Western forest that are in serious condition. But it's weak on the science,” says Hal Salwasser, dean of the Oregon State University College of Forestry in Corvallis. The measure (Public Law 108–148) authorizes spending of $760 million a year—an increase of $340 million over this year's level—to thin forests on federal lands that have grown dense due to a century of fire suppression. The fuel buildup turns what would normally remain small spot fires into conflagrations that wipe out hundreds of thousands of hectares at a time.


    Environmentalists say controversial new rules to thin U.S. forests may not really curb forest fires.


    Bush proposed the initiative, which he dubbed “Healthy Forests,” during an August 2002 visit to a fire-ravaged forest in Oregon. The House passed the bill in May, but senators worried that too much thinning would take place far from urban areas, where the threat to homes and property is greatest. But opposition melted away after October's wildfires in California, which killed 22 people and burned 300,000 hectares. Salwasser and others had hoped that 5% of the annual forest-thinning budget would be reserved for regional centers to study the effectiveness of different thinning practices. But their suggestion didn't make it into the final bill.

    Of greater concern to many environmentalists are the limits on the public review process that has been their most successful tool in halting potentially damaging logging. Among other provisions, the law allows timber sale opponents only 60 days to file for an injunction to stop a proposed sale. That's often not enough time to do the fieldwork needed to evaluate a sale, says Kieran Suckling, who directs the Center for Biological Diversity, a research and advocacy group in Tucson, Arizona. “It's a very cynical targeted strategy to ensure that timber sales go forward,” he says. The law also “stacks the deck” of judicial reviews, he says, by directing judges to weigh whether halting a timber sale might increase the chances of a devastating wildfire.

    Despite these concerns, some forest ecologists worry that the new legislation doesn't go far enough. “It treats only 20 million acres [8 million hectares] out of about a couple of hundred million acres that are in trouble,” says Wally Covington, a forest ecologist at Northern Arizona University in Flagstaff, who has long pushed for widespread efforts to reduce the buildup of forest biomass (Science, 27 September 2002, p. 2194). “So, it's a step in the right direction. But only a first step.”


    Vest Steps Down as MIT President

    1. Andrew Lawler

    Last week Charles Vest announced that he will be leaving one of the top jobs in academia. After a dozen years as president of the Massachusetts Institute of Technology (MIT) in Cambridge, Vest says he wants more time to think and write. However, the 62-year-old mechanical engineer doesn't plan to leave the public policy arena, where he has played a significant role in national debates ranging from federal support for research to the tensions between science and security. “I'd like to stay engaged,” he says.

    During his tenure, MIT's endowment tripled, its physical plant increased by one-third, and a number of new institutes encompassing engineering, biology, and neuroscience were created. Vest says he delayed his announcement for a year to cope with a “bump in the road” brought on by rising costs and a $1.2 billion drop in the school's $5 billion endowment. The university recently froze salaries above $55,000 for 1 year and cut back its institutional support for graduate students. “Our financial fundamentals are sound,” he says.

    Bowing out.

    Charles Vest is leaving MIT but not the policy arena.


    Vest came to MIT in 1990 from the University of Michigan, Ann Arbor, and he has spent a good deal of time since then shuttling between Cambridge, Massachusetts, and Washington, D.C. Vest headed a panel that redesigned the international space station in 1993, fought cuts to research budgets sought by House Republicans in 1995, and is currently a member of the President's Council of Advisors on Science and Technology.

    “He's always gone after the tough issues,” says William Wulf, president of the U.S. National Academy of Engineering, to which Vest was elected in 1993. “And he has certainly increased the visibility of MIT.” Nils Hasselmo, president of the Association of American Universities in Washington, D.C., adds that Vest's “quiet and unassuming demeanor” concealed forceful views and an ability “to be extremely succinct in his analyses.”

    Vest made a lasting contribution to the status of women in academia by approving the public release of a 1999 report that was highly critical of MIT's dealings with its women faculty members (Science, 12 November 1999, p. 1272). His realization that the playing field for women scientists was not level was a “monumental contribution … that led to greater equity for women scientists everywhere,” says Nancy Hopkins, the MIT biologist who chaired the panel that wrote the report.


    Animal Models: Live and in Color

    1. Jean Marx

    Fusion of imaging technology and molecular techniques is revealing biological changes in living animals. In cancer research, this is leading to new ways of studying tumors, as well as novel diagnostics and potential therapies

    Not long ago—a mere decade or so—researchers studying the intricacies of cell and molecular biology and those working on new techniques for imaging the body led essentially separate lives. Over the past few years, however, the two camps have found that they have a surprising amount in common. Their convergence is producing a host of new technologies that are providing researchers with an unprecedented ability to see into the living animal—a development that promises a wealth of information about basic biological processes as well as, eventually, better ways of diagnosing and treating disease.

    Two trends have contributed to these advances. On the imaging side, researchers learned how to image small animals, chiefly lab mainstays such as mice and rats. In some cases, they adapted technologies previously limited to humans or large animals, such as magnetic resonance imaging (MRI) and positron emission tomography (PET); in others, they devised optical imaging techniques to detect fluorescent or bioluminescent molecules introduced into animals.

    At the same time, molecular biologists were producing molecular probes that seek out specific proteins—say, an oncogene product in a cancer cell—and label the cells carrying them so that they can be detected by MRI, PET, and other imaging techniques. Although the resulting technologies are being widely applied (Science, 17 October, p. 382), nowhere are they proving more valuable than in the cancer field.

    In recent years, researchers have become adept at creating small-animal models that more accurately resemble various human cancers (Science, 28 March, p. 1972). Now, the new imaging technologies allow them to see disease in action, by following the molecular changes underlying the development and spread of animal tumors in their natural environment.

    In addition to providing a better understanding of cancer biology, the imaging technologies should also be a boon to drug development, because they allow researchers to determine whether a therapy is effective in the living animal. “This is a necessary lab tool,” says Daniel Sullivan, who heads up the imaging program at the National Cancer Institute (NCI) in Bethesda, Maryland. “You can monitor molecular changes in a realistic setting. … Cancer cells don't behave in the laboratory dish the way they behave in the animal.”

    The ultimate goal is to use the imaging technologies to detect cancers and monitor their responses to therapy in human patients. And although that's not yet possible for most of the new probes—and for some may never be—a few imaging techniques are wending their way toward clinical trials. These include MRI variations that detect brain tumors' responses to therapy after days rather than weeks and pick up cancers that have spread to the lymph nodes, which may allow detection without invasive biopsies or surgery.

    To bolster these efforts, over the past 5 years NCI has been providing support for a series of In vivo Cellular and Molecular Imaging Centers (ICMICs, pronounced “ick-micks”). They've been established so far at seven locations: the University of California, Los Angeles (UCLA); Harvard's Massachusetts General Hospital in Boston; Johns Hopkins University School of Medicine in Baltimore, Maryland; Memorial Sloan-Kettering Cancer Center in New York City; the University of Michigan School of Medicine in Ann Arbor; the University of Missouri, Columbia; and Washington University School of Medicine in St. Louis, Missouri. In addition, NCI is funding several “pre-ICMICs” and small-animal imaging centers. So far, Sullivan says, the institute has put $150 million into all the centers, which are aimed at generating animal-imaging methods and resources and also at developing better means of diagnosing and treating cancers.

    A tiny patient.

    The mouse is about to undergo an exam by MRI, one of the imaging technologies now being adapted for studying molecular changes in living animals.


    Seeing genes in action

    Standard imaging technologies depict primarily anatomical information. MRI, for example, gleans its structural information from differences in the way water is concentrated in various tissues. Even PET, which can detect tissues that are burning glucose at a high rate, can't identify the biochemical events that give rise to that high activity. Researchers who wanted to get at those underlying events—looking, say, at gene expression patterns in a tissue—had to take a different tack.

    One technique allows researchers to look directly at gene activity in animal tissues; it was introduced about 10 years ago and quickly adopted by labs worldwide. It uses so-called reporter genes that are genetically engineered into an animal's cells to light up or activate a dye when a gene of interest is active. For example, researchers link the gene for an enzyme called β-galactosidase to the regulatory sequences that switch on a target gene. The two genes then become active together, and that activity can be detected with a stain that turns blue in the presence of β-galactosidase. One problem, though: The animals have to be killed so that their tissues can be examined. This requires the use of many animals if an investigator wants to observe the activity of a gene at several time points.

    The current work solves that problem by marrying reporter genes and live-imaging technology, making it possible to watch as the genes turn on inside the animal. “A lot of this [technology] has been in place,” says imaging expert Ronald Blasberg of Memorial Sloan-Kettering Cancer Center. “What's new is doing reporter gene imaging in vivo.” Indeed, another pioneer of the new imaging work, Sanjiv Gambhir, who recently moved from UCLA to Stanford University, predicts that the work will make killing animals just to observe gene expression “obsolete.”

    Assessing treatment.

    The mouse received a therapeutic drug on day 0 (leftmost panel; time increases at 4-day intervals from left to right). The luciferase-labeled tumor continued to grow, then shrank, but then relapsed and grew again.


    Investigators now have numerous ways of peering at gene activity inside live animals. One common strategy for producing PET images of reporter gene expression involves a viral enzyme called thymidine kinase that adds phosphate groups to various compounds, including the antivirals ganciclovir and FIAU. PET detects positrons emitted by certain radioisotopes, so the antiviral compounds are labeled with a positron emitter. In their unphosphorylated state, the labeled compounds are taken up by cells. But if the cells contain active thymidine kinase, the compounds acquire a phosphate and get trapped inside, allowing the cells to be imaged by PET.

    A different type of approach, optical imaging of living animals, has also been proving its mettle. Here the reporter genes either code for fluorescent molecules, such as green fluorescent protein (GFP), or make enzymes such as luciferase—the agent responsible for fireflies' glow—that can produce bioluminescent products. Animals carrying the reporters are imaged with extremely sensitive cameras based on charge-coupled devices that pick up the small number of photons transmitted through tissues.

    At first there was some skepticism that optical imaging would work, says Christopher Contag of Stanford, whose team pioneered the technique beginning in the mid- to late 1990s. The big question was a simple one, he says: “Can you get enough light through the tissue to get an image?” At least for small animals, such as mice and even rats, the answer is turning out to be yes.

    Most recently, researchers have been combining reporter genes for two imaging modalities—say, thymidine kinase for PET and a luciferase gene for optical imaging—in the same genetic construct for transfer into animal cells. With such double-duty reporters, “you can use the imaging technology that best suits your experimental questions,” says David Piwnica-Worms of Washington University School of Medicine.

    PET and other methods that detect radioisotopes are very good at imaging deep structures but require expensive equipment—for example, a nearby cyclotron to produce short-lived positron-emitting isotopes. Optical imaging is simpler and less expensive but has some drawbacks of its own. Its resolution is lower, and it can't currently provide three-dimensional images as the other techniques can. But technology under development may soon solve that problem. One possibility is to photograph the animal from numerous angles and then use computers to generate three-dimensional images, much as is currently done with x-ray computed tomography scanning.

    Although their potential is still being explored, the imaging methods are already proving their versatility in numerous phases of cancer research—from tracking gene expression to following the development of metastases to assessing the effectiveness of therapies. In particular, researchers want to design drugs that work by specifically targeting the gene defects that lead to cancer development. The new imaging systems should have “a major impact” on those efforts, Piwnica-Worms predicts: “You can build reporters and see the action of a drug at the target site in vivo.”

    For example, several researchers, including Blasberg, have imaged expression of an important tumor-suppressor gene, p53, which is mutated in some 50% of all human cancers. Blasberg, working with Juri Tjuvajev, now at M. D. Anderson Cancer Center in Houston, Texas, and colleagues created a dual reporter containing the GFP and thymidine kinase genes linked to regulatory sequences that turn on the p53 gene. To see if the reporter could track p53 expression in animals, they introduced it into tumor cells containing a normal p53 gene and then inoculated mice with the altered cells, which grew into tumors.

    The researchers then treated the animals with a DNA-damaging chemical known to lead to p53 activation. This should—and did—activate the reporter. The cells in which p53 was active were visible both by PET imaging of animals treated with FIAU and by fluorescence imaging of the tumor cells. This may aid, for example, in the development of therapies aimed at restoring p53 activity.

    A shade of difference.

    As depicted in this reconstruction, lymph node-seeking nanoparticles enabled MRI to distinguish metastatic nodes (red) from normal nodes (green).


    In this case, the imaged tumors grew from introduced cells, but cancer researchers are also developing mouse models in which cancers arise spontaneously—a situation much more akin to what happens in humans. These cancers, too, can be imaged with the new techniques.

    An example comes from Anton Berns's team at the Netherlands Cancer Institute in Amsterdam. In a fairly complicated feat of genetic engineering, the researchers engineered mice that spontaneously develop pituitary tumors due to pituitary-specific inactivation of the retinoblastoma tumor-suppressor gene and that also carry a luciferase gene that is active only in pituitary cells. The growth of the pituitary tumors could be followed using bioluminescence imaging, as could the tumors' shrinkage in response to therapy.

    Researchers have been using systems such as these to study another kind of cancer therapy. In so-called adoptive immunotherapy, immune cells that have been “trained” to recognize cancer cells are infused into the body so that they can seek out the cells and destroy them. So far, results in the clinic have been generally modest, but researchers hope that tracking the immune cells' fates in the body will provide information that will aid in improving the therapies.

    In work reported this year in Blood, Contag's team, along with Rob Negrin, also at Stanford, described a mouse model for doing just that. They tagged the trained immune cells with a GFP-luciferase reporter before infusing them into animals bearing transplanted lymphomas. Using optical imaging, “you can watch [the cells'] trafficking to the tumor sites,” Contag says. The researchers found that only a subset of the cells actually made it to the tumor, however. Aided by the GFP labeling, which can be used to sort or count cells, Contag and his colleagues are now trying to beef up the number of immune cells that reach each tumor.

    Visible interactions

    Gene expression is by no means the only cellular activity that can now be imaged; some of the others also provide alluring targets for therapy. For example, interactions between proteins help control a host of cellular activities, including gene transcription and responses to hormones, growth factors, and other signaling molecules. Both the Gambhir and Piwnica-Worms teams have come up with ways of imaging such interactions, which may go awry in cancer.

    The Gambhir team split the firefly luciferase gene in two, attaching one half to the gene for one of two interacting proteins and the other half to its partner, and then put these constructs into cells. The proteins produced by the luciferase gene halves were inactive until brought together by binding of the proteins to which they were attached. The resulting bioluminescence could then be detected in tumors growing in mice. The Piwnica-Worms team produced a dual reporter, detectable both by PET and optical imaging, designed to be expressed only in the presence of two interacting proteins.

    Another aim is to image the activity of enzymes, particularly the proteases that cleave various proteins, including some involved in cancer development or spread. Here the trick is to design a reporter protein that becomes active and detectable only when it is cut by the target protease. One such effort comes from Ralph Weissleder and his colleagues at Mass General, who formulated a reporter for a protease called cathepsin B. Colon polyps, precancerous lesions that develop into full-fledged cancers if not removed, express high levels of the enzyme.

    In studies of a mouse model genetically programmed to produce numerous intestinal polyps, the reporter could detect polyps as small as 50 micrometers in diameter. The probe is slated to go into clinical trials in about a year, and if the work pans out, Weissleder says, it may lead to a new diagnostic test for colon cancer that doesn't require the rigorous intestinal cleansing that people facing colonoscopy now dread. He estimates that in humans the procedure would detect lesions with a diameter of a millimeter or less—about one-tenth the size of those detectable by current methods.

    Proteases are also involved in apoptosis, or programmed cell death. Many cancer drugs kill cells by triggering apoptosis, so it would be a boon to have a good way of visualizing the process in living animals. Brian Ross, Alnawaz Rehemtulla, and their colleagues at the University of Michigan School of Medicine devised a luciferase probe that detects the activity of an apoptotic protease called caspase-3 and used it to image apoptosis in tumors growing in mice.

    In those animal studies, the Michigan workers also imaged the tumors using a technique new to cancer research called diffusion-weighted MRI, which detects the movement of water, and noted that this movement increases in tumors that respond to chemotherapy. This apparently happens because the tumor cells become less dense.

    Ross, Rehemtulla, and their colleagues have since gone on to use diffusion-weighted MRI to assess the effects of therapy in human patients undergoing treatment for brain tumors. With ordinary imaging techniques, Rehemtulla says, “it takes several weeks to see if a therapy is working, but we could see cells dying in 5 to 7 days.” Such early assessment could be very helpful, Rehemtulla adds: If a therapy doesn't work, the patient can be switched more quickly to an alternative treatment that might be more effective.

    MRI may also help with another difficult problem in assessing cancer therapies. Drugs that target the vessels that supply nutrients necessary for tumor growth, rather than the tumor cells themselves, may freeze tumors in their present state but not cause them to shrink. So researchers want to visualize the effects of these antiangiogenic drugs on the tumor vasculature.

    Recent work on a mouse model of prostate cancer performed by Zaver Bhujwalla and her colleagues at Johns Hopkins University School of Medicine suggests that this can be done by injecting a contrast agent, a material containing the protein albumin and the metal gadolinium, into the animals' bloodstream before applying MRI. This makes the vasculature and also blood penetration into the tumor easier to see. The team could detect decreases in the vascular volume of tumors treated with an antiangiogenic drug called TNP-470. But although blood penetration into some areas of the tumor went down, it went up in others, perhaps because the tumor cells mounted a compensatory response.

    Although most of these imaging techniques are still in the preclinical stage, a few have made it into clinical trials. The Michigan researchers have so far tested their diffusion-weighted MRI method on 30 patients. And Weissleder and his colleagues at Mass General have obtained what he describes as “fairly spectacular results” with another MRI technique for detecting spread of cancer to the lymph nodes.

    For this work, the researchers used nanoparticles consisting of a core of iron oxide crystals coated with a polysaccharide called dextran. When injected into the body, the nanoparticles make their way into the lymph nodes, where they concentrate in immune cells called macrophages. Metastatic cancer cells growing in the nodes disrupt their architecture, and this disruption can be detected by MRI, Weissleder and his colleagues reported in the 19 June issue of The New England Journal of Medicine.

    The study included 80 men undergoing surgery or biopsy for prostate cancer who had MRI exams both with and without the nanoparticles before the surgical procedures. The patients' physicians found that 33 of the men had metastatic lymph nodes. MRI with the particles identified all 33, Weissleder says, whereas MRI without the particles missed more than half of them.

    Variations on these new imaging methods are already too numerous to list, and researchers are furiously inventing more. But even a partial description reveals their great potential. As the imaging and molecular fields continue to merge, it's becoming clear that the images they create will cast light on cancer's hiding places and perhaps illuminate how to cure it.


    Who's No. 1? Academy Hopes New Rankings Will Say More

    1. Jeffrey Mervis

    The National Research Council once again hopes to rate the quality of U.S. doctoral research programs. But this time it wants to address the needs of students, too

    The United States may be a consumer-oriented society. But when it comes to graduate education, the consumers—students—are consistently left in the dark. Instead of having access to hard data such as attrition rates, levels of financial aid, and the fates of their predecessors, most prospective graduate students in the sciences are forced to ferret out meager bits of information before making a decision that could shape the rest of their lives. But help may be on the way from an unlikely source.

    Next summer the National Research Council (NRC), the working arm of the august National Academies, hopes to launch a more user-friendly version of its assessment of U.S. research doctorate programs. The controversial but popular exercise, first done in 1982 and repeated in 1995, is best known for its rankings of departments based on their reputations. The rankings are used by university administrators to see how their programs stack up against the competition and what they must do to move up. But NRC officials hope that the third edition will also help give prospective students the information they need to select a program that is right for them. The lack of such a guide has left the door open for commercial ventures, notably the weekly magazine U.S. News and World Report, to publish annual rankings of the “best” graduate schools in the country.

    “Graduate education is something that the U.S. seems to do very well, and we think [an assessment] can be of tremendous value,” says Jeremiah Ostriker, an astrophysicist at Princeton University and chair of an NRC panel that last month recommended proceeding with the assessment.* “It's not going to be easy, but we think the NRC can do it better than anyone else.”

    NRC officials say they will need $5 million to do a good job of collecting and analyzing the data and disseminating the results, and they have already begun asking federal agencies and private foundations to ante up. If all goes well, says NRC's Charlotte Kuh, the surveys would go out next spring and the report would be issued in mid-2006.

    Swamped with data

    The new assessment will be a massive undertaking. The Web-based exercise will cover more than 4000 programs across 57 fields at some 350 institutions. The number of fields, always a contentious issue, has grown from 41 a decade ago, and that doesn't count another eight “emerging” areas—from systems biology to science and technology studies—that will be tracked but that are too immature for a full-blown comparison. “The taxonomy was the biggest headache,” says Ostriker, noting that the current lineup is at best a snapshot of academe's ever-changing definition of itself. “If I had said yes to everybody, we could have ended up with 100 fields.”


    The next thorniest issue facing the NRC panel was whether to continue to rank programs based on their reputations. In the previous survey, for example, thousands of faculty members were each asked to rate 50 to 60 programs in their field of expertise. That yielded 200 to 300 ratings per program, which were boiled down to a single number to represent scholarly reputation (Science, 23 June 1995, p. 1693). Ostriker and other panelists don't hide their distaste for this aspect of the assessment, which they label “a horserace.” But they concede that “reputation is one part of the reality of higher education, … and it is the most widely quoted and used statistic from previous studies.”

    In the new assessment, NRC will try to modify the worst aspects of the horserace, Ostriker says, by producing a range rather than a single position; for example, it would declare that Hometown University's chemistry program ranks from 48th to 53rd. It also hopes to update the rankings, perhaps even annually, by using a cluster of quantifiable measures—productivity and citation rates, levels of funding, and so on—that would be surrogates for reputation. And it plans to ask each faculty member to rate fewer programs, in hopes of reducing the number of raters who confessed that they weren't familiar with a particular program but nevertheless rated it.

    Graduate school deans say that the rankings are invaluable for self-assessments. “It helps tell us what the top-quartile programs look like,” says Dianne Harrison of Florida State University, Tallahassee, which was one of eight schools that pilot-tested the questionnaires last spring. “Then it's up to us to decide what we need to do to get into that group.”

    Those decisions, in turn, affect a program's ability to attract top-notch students. But past assessments have mostly ignored the students themselves, an oversight that the NRC panel is determined to correct. “The students have a lot to say, and we have a lot to learn from them,” says Joan Lorden, provost and vice chancellor for academic affairs at the University of North Carolina, Charlotte.

    Hearing from students, however, promises to be more difficult than collecting other types of data. A pilot questionnaire sent to students at five institutions yielded a disappointing 25% response rate, the report notes, far below an acceptable minimum. As a result, NRC has decided to concentrate on only five fields—not yet chosen—in hopes of boosting participation. The students will be asked about their backgrounds, career goals, and research and teaching experiences on and off campus.

    Data on what happens to students after graduation—what types of jobs they take, their salaries and productivity, and the value of their training, for instance—are even harder to come by. In fact, NRC has already thrown in the towel after discovering that programs typically collect such information only anecdotally. “It's really a cultural barrier,” says Joseph Hellige, dean of the graduate school at the University of Southern California in Los Angeles, another pilot institution. “It's just not something that they've ever done.” Kuh says NRC instead will ask programs whether they collect student outcomes data and whether they publicize them. “It will serve as a warning to them: Be prepared for us to ask for the data the next time,” she says.

    Hellige and Harrison say they will heed such a warning. But Bob Morris, who oversees the annual rankings for U.S. News and World Report, says that he'll be very surprised if NRC can collect meaningful data from either the current cohort of students or recent graduates. “They've admitted that they're doing something that has never been done before,” he says. “There are all sorts of definitional questions to solve, along with the logistical problems of tracking down these students. I think that they're on the right track. But they may be overly optimistic about their chances of succeeding.”

    Ostriker denies wearing rose-colored glasses. “I came in as a skeptic,” he says. “But now I'm enthusiastic about what we can accomplish.” Still, he's also a realist. “It's essentially impossible to get everything right,” he avers. “But I think this will be the best one we've ever done.”


    Age Bar Forces Europe's Senior Researchers to Head West

    1. Giselle Weiss*
    1. Giselle Weiss is a freelance writer in Allschwil, Switzerland.

    Mandatory retirement ages are common at European universities. With the prospect of their careers ending so abruptly, many researchers are opting to jump on a plane

    BASEL, SWITZERLAND—Klaus Rajewsky remembers his father's obligatory retirement as so painful that he resolved to escape his own at all costs. So when the renowned German immunologist hit 64, he jumped at a chance to move to Harvard University. The shift meant braving the famously Darwinian U.S. system of research funding and reducing his 40-member group to eight people. Now, however, his group has almost doubled. Even better, he says, “I have a contract which says ‘without limit of time.’”

    For European researchers over age 60, staying put is not an option. Many European countries impose a mandatory retirement age for public employees, ranging from a low of 60 in France to a high of 67 in Norway and Sweden. Because the majority of universities in Europe are state-run, that cutoff generally applies to professors too. Although many senior researchers relish the opportunity to hang up their lab coats for good, others see it as a needless waste of their skill and experience. Some people dream of retiring, says physicist Claudine Williams, a recently appointed research director emeritus at the Collège de France in Paris. But “we [scientists] all seem to have problems with it.”

    Those who are determined to stay at the bench past retirement often live a precarious life, relying on the generosity of younger colleagues for lab space and access to infrastructure. Some of the biggest names in European science have managed to cut special deals to stay on, but failing that, the only alternative is to seek employment where there are no such age restrictions, most often in the United States. “Europe, and Switzerland in particular, need to give serious consideration to a potential brain drain to the United States due to the different laws governing age-related retirement,” says biophysicist Kurt Wüthrich of the Swiss Federal Institute of Technology (ETH) in Zürich. But this exodus may not go on for too much longer. In response to pension problems and calls for an end to age discrimination, moves are afoot in Europe to either raise mandatory retirement ages or scrap them altogether.

    Mandatory retirement for tenured university employees in the United States ended in 1994 under an amendment to the Age Discrimination in Employment Act of 1967. Australia, New Zealand, and some provinces of Canada have similar provisions. But Europe has not yet relented. In order to continue his work on prion diseases after his retirement from the University of Zürich, 72-year-old molecular biologist and biotechnology pioneer Charles Weissmann left Switzerland in 1999 for the Medical Research Council Prion Unit at University College London. There he has a 5-year contract that is extendable year by year. He is also currently considering an offer from the United States. Weissmann's younger collaborator in Zürich, Adriano Aguzzi, head of neuropathology at the University Hospital, says his departure left a gap. Indeed, says Howard Schachman, a molecular biologist who fought his own retirement at the University of California, Berkeley, in 1990 and still teaches there at 85, “it's preposterous to drive [Weissmann] out of the country.”

    Moving out.

    Kurt Wüthrich plans to spend most of his time in California when he retires from ETH Zürich.


    One forced retirement triggered a national crisis. ETH's Wüthrich had already shifted part of his operation to the Scripps Research Institute in La Jolla, California, in anticipation of retirement when he shared a Nobel Prize in chemistry in 2002 for his work on the structure of biological molecules. The prospect of the new Nobelist leaving Switzerland for the United States so inflamed public opinion that the Swiss Parliament modified the law governing retirement at the federal polytechnics, including ETH Zürich, to consider special cases. Had this change (popularly known as the Lex Wüthrich) come 3 years earlier, Wüthrich wrote in the Swiss weekly Die Weltwoche, he would have had no reason to go to America. “Switzerland's loss is very much Scripps's gain,” says Peter Wright, chair of the molecular biology department at Scripps. Beginning with his retirement in 2004, Wüthrich will spend 8 months a year at Scripps and the rest at ETH.

    Of course, to go abroad you have to be asked. And even when asked, many would rather avoid the personal upheaval. As a “first-class” research director funded by the CNRS, France's basic research agency, Williams applied for and was granted a 5-year extension beyond age 65, but it offers little reward for her experience and years of service. The emeritus position comes with office space, a small amount of money for travel, consumables, and meals in the cafeteria. Her salary is paid from her pension. She can no longer be a principal investigator on grants, but she can apply as a co-investigator. “It's a very good system,” she says, stressing that she is privileged.

    Even international organizations such as the European Molecular Biology Laboratory (EMBL) in Heidelberg and CERN, the European particle physics lab near Geneva, subscribe to mandatory retirement. “The director-general is going when he's 65,” says Keith Williamson, EMBL's deputy administrative director. CERN researchers likewise must retire by 65, but pensioners still have access to the labs and the computers. Jack Steinberger, who shared a Nobel Prize in physics in 1988 for the discovery of the muon neutrino, told Science that he still shows up every day at CERN, where he has an office and a computer, and “tries to study cosmology.”

    Staying on.

    Claudine Williams got a 5-year contract from Collège de France when she hit 65, but little support.

    Critics of mandatory retirement lament the squandering of experience acquired over decades. “Everybody who is doing good science and thinking critically is part of the public wealth,” says Rajewsky. Others support Europe's approach, however. Austrian-born Gottfried Schatz, who retired relatively early (at 63) from the University of Basel's Biozentrum and now heads the Swiss government's Science and Technology Council, argues that the U.S. system violates the contract between generations, particularly in Europe, where young professors find few slots and must often wait for an older colleague to retire. “Tenure track would break down, because universities cannot plan promotions if professorships are open-ended,” says Schatz. Even people who themselves wish to carry on don't necessarily think it's right for everyone. Says Weissmann, “[Older scientists] may have fun doing research, but they do not always contribute to the edifice of science.”

    A graying scientific workforce may be inevitable, however. According to the Organisation for Economic Co-operation and Development in Paris, fewer than half of European workers aged 55 to 64 are currently employed. With people living longer, pension providers across Europe are having trouble meeting their commitments, and so the European Union wants to raise by 5 years the age at which Europeans actually leave the labor market. The E.U. has also approved a directive abolishing age discrimination. Although the directive does not prohibit mandatory retirement, the United Kingdom plans to loosen its mandatory retirement rules, or abolish them altogether, by 2006, when the directive comes into effect. “I expect there will be a loosening of the regulations regarding retirement,” says Weissmann.

    In the meantime, the age of 65 will loom large for many. “There are some people who like to be out in their garden,” says André Dreiding, professor emeritus of organic chemistry at the University of Zürich, “and others who suffer a great deal.”


    The Vitamin D Deficit

    1. Erik Stokstad

    An apparent resurgence of rickets is just one manifestation of a deficiency of vitamin D, which could be linked to a range of other diseases

    BETHESDA, MARYLAND—Thinking his 10-month-old daughter Kiana had a bad case of flu, Ian Barrow took her to the emergency room earlier this year. Doctors immediately noticed something more serious: soft bones, an enlarged heart, and organs close to shutting down. The diagnosis was a shock: rickets. Barrow, a technician at the National Cancer Institute, says, “Rickets is something that had supposedly disappeared.”

    A low-calcium condition that can deform limbs, rickets is most commonly caused by a deficiency of vitamin D, which the body acquires primarily through exposure to sunlight. In the United States, vitamin D has been added to milk for decades. Part of Kiana's trouble was that she had been breastfed, and bad weather meant she and her mother had spent little time in the sun. As African Americans, their risks were increased: Dark skin blocks some of the ultraviolet rays needed to synthesize vitamin D. And if a mother is deficient, breastfeeding without supplements makes it likely that her infant will be, too. “I didn't know that breastfeeding and not being out in the sunlight could be disastrous,” says Ian Barrow.

    The seeming elimination of rickets through fortified milk has long been heralded as a public health success. Yet in 2000, doctors in North Carolina sounded an alarm, reporting what seemed to be an unusual number of cases. The Centers for Disease Control and Prevention (CDC) held an expert meeting the next year and is monitoring what may be a resurgence of the disease—possibly abetted by public health messages urging that infants be breastfed and kept out of the sun. In October the National Institutes of Health (NIH) organized a conference here to discuss the “alarming prevalence of low circulating levels of vitamin D.”

    A deficiency of vitamin D has implications far beyond rickets, argues endocrinologist Michael Holick, a longtime vitamin D researcher at Boston University. Not only does deficiency lead to bone fractures in the elderly, but preliminary research suggests that it could be involved in cancer and autoimmune disorders. It's not known exactly how much people need. Most researchers are convinced that the recommended intake levels set by the Food and Nutrition Board (FNB) of the National Academy of Sciences—and used to set the menus of all federally subsidized meals—are too low. “It's been a rude awakening,” says Connie Weaver, a nutrition scientist at Purdue University in West Lafayette, Indiana, who as an FNB member helped set the levels. But the question of what is optimal remains tricky, so there's a lot more work to be done before scientists know what to recommend.

    Sun medicine

    Vitamin D is a steroid hormone that helps regulate calcium, an element vital not just for strong bones and teeth, but for nerves and the heart as well. It's found in very few foods except for salmon and other fatty fish. Most vitamin D is made when ultraviolet B (UVB) light hits a precursor molecule in the skin. Converted in the liver to 25-hydroxyvitamin D, or 25(OH)D, it then travels through the blood to the kidneys, which turn it into an active metabolite that regulates calcium levels. Other tissues make this metabolite locally to maintain cell health.

    Vitamin D can fall to critically low levels in the blood for genetic or nutritional reasons, with serious consequences. For an infant, the cartilage in developing bones fails to mineralize, a condition known as rickets. As calcium levels in blood go down, “the body steals what little calcium is left in the bones,” says Holick. Once the toddler begins to walk, the soft bones will warp and bow. In adults, the same failure to mineralize is called osteomalacia. Vitamin D deficiency can also send blood levels of calcium plummeting, which in rare cases can cause seizures or heart failure.

    Rickets became widespread in Europe and the United States in the 19th century, when air pollution and indoor labor reduced childhood exposure to UVB radiation. Once milk was fortified with vitamin D, beginning in the 1930s, the disease was all but eradicated. It remains a problem in some developing countries.

    Although the U.S. government does not track the incidence of nutritional rickets, the condition may be making a comeback. Between 1990 and 1995, nine children per million were discharged from hospitals with rickets; during the next 5 years, there were 13 per million. A smaller study of pediatric clinics in 1999–2000 suggests that the rate is about 23 cases per million children.

    The fact that nutritional rickets exists at all is ironic, say vitamin D researchers. They blame public health experts who have urged women to breastfeed without emphasizing the need for supplements. And they're even more angry at those who recommend that infants under 6 months avoid all sunlight to reduce cancer risks. “The people who are most savvy at listening to what we recommend … are the ones who are getting the short end,” Joan McGowan of the National Institute of Arthritis and Musculoskeletal and Skin Diseases told the NIH workshop.

    The vast majority of cases are in African-American children who are breastfed without supplements. Darker skin raises the chance of vitamin D deficiency, but the risk goes up for anyone living at latitudes higher than 38 degrees, where few UVB photons make it through the atmosphere during the winter.

    Maintaining adequate levels of vitamin D is also important for the elderly, whose skin is less efficient at making the hormone. Blood levels are often low, especially for those confined indoors. A study published in The New England Journal of Medicine in 1992 showed that supplementing 1634 elderly women with 800 international units (IUs) of vitamin D a day (33% more than the current U.S. recommendations for that age) for 18 months lowered the number of hip fractures by 43%. Those fractures “are as much manifestations of nutrient deficiency as are the bleeding gums of scurvy,” comments endocrinologist Robert Heaney of Creighton University in Omaha, Nebraska.

    Complete meal.

    Supplements help ensure that breastfed babies get enough vitamin D.


    Most of these problems could be solved by simply testing for deficiency and giving vitamin D supplements. But it can be “a major undertaking to get physicians to do this,” says nutritional biochemist Bruce Hollis of the Medical University of South Carolina in Charleston. Although the American Academy of Pediatrics does recommend that exclusively breastfed infants get supplements, there is no guidance for testing of the elderly. In an attempt to get more doctors on board, John Cannell, a psychiatrist and advocate of vitamin D supplementation, says he has informed medical boards and testing labs of potential liability, with copies sent to the association of trial lawyers. “They're the agents of change in this country,” explains Cannell, who works at Atascadero State Hospital in California.

    How much is enough?

    The FNB recommendations also need to change, according to researchers, including some of those who helped draft them. Last issued in 1997, the daily intake levels are now “totally inadequate,” says Holick, who was a member of the panel. Although experts agree that the recommended 200 IUs per day prevents rickets in infants, no one knows if it's really enough for optimal health. The 600 IUs recommended for the elderly is clearly short of what's needed to prevent fractures, Holick and others say. More disputed is whether current standards for ages in between—which recommend 200 IUs for adults up to age 50 and 400 IUs for those between the ages of 51 and 70—are ideal.

    Evidence is coming in that higher levels may be desirable. Heaney and others conducted a study of 34 postmenopausal women and found that when they received vitamin D supplements they absorbed 65% more calcium than they usually did. That's the most compelling evidence so far that the FNB recommendations may be too low for adults, says Weaver.

    But it's tricky to generalize about how much vitamin D should be consumed, Weaver notes, because so many variables come into play, such as individual vitamin storage. One study suggests that adult intake could be many times higher than the FNB's recommended level. In this trial, published in the January issue of the American Journal of Clinical Nutrition, Heaney, Holick, and others gave vitamin D supplements to 67 men in Nebraska for 20 weeks, over autumn and winter. They calculated that 500 IUs per day would keep the men at their baseline value of 28 nanograms/milliliter. People who can't or don't go outdoors may need supplements of as much as 3000 to 5000 IUs per day, this research suggests. Toxicity is not a concern at those levels, researchers say.
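The arithmetic behind such dose estimates can be sketched under one simplifying assumption: that steady-state serum 25(OH)D rises roughly linearly with daily intake. The slope used below is an illustrative placeholder, not a figure from the Nebraska study.

```python
# Rough sketch of vitamin D dose estimation, assuming serum 25(OH)D
# responds linearly to daily intake at steady state. The slope here
# (0.4 ng/mL per 100 IU/day) is an illustrative assumption only.

SLOPE_NG_PER_100IU = 0.4  # assumed steady-state response (ng/mL per 100 IU/day)

def extra_intake_needed(baseline_ng_ml: float, target_ng_ml: float) -> float:
    """Estimate the additional daily intake (IU) needed to move serum
    25(OH)D from baseline to target, under the assumed linear slope."""
    deficit = max(0.0, target_ng_ml - baseline_ng_ml)
    return 100.0 * deficit / SLOPE_NG_PER_100IU

# Someone wintering indoors at 12 ng/mL, aiming for the 28 ng/mL baseline:
print(round(extra_intake_needed(12, 28)))  # -> 4000
```

Under this assumed slope, the answer lands in the 3000-to-5000-IU range the researchers cite for people who get no sun; with a different slope the number shifts proportionally, which is one reason generalizing is so tricky.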

    Some studies suggest that higher levels of vitamin D may provide other health benefits. Children born in Finland in 1966 were 80% less likely to have developed type I diabetes if they had been given vitamin D at Finland's then-recommended dose of 2000 IUs a day, according to a November 2001 study in The Lancet. Other epidemiological studies, published in the early 1990s, suggest that vitamin D deficiency might be connected to a higher risk of some cancers. There are also tentative links to autoimmune disorders, such as inflammatory bowel disease. “I have to be cautious,” comments James Fleet of Purdue University. These early studies are suggestive, he says, “but not truly ready for prime time.”

    Until larger trials are completed, the prudent course may be to stick with the current recommendations, says Nick Bishop, a pediatrician at the University of Sheffield, U.K. Although most people think the levels should be raised, Bishop says, the data are still rather sketchy—probably too slim to justify a radical increase in recommended intake levels.

    While researchers work out the details, most healthy people can probably get enough vitamin D by simply soaking up midday direct sunlight on their face and hands for 10 to 15 minutes three times a week, or longer if they're darker skinned. Ian Barrow, for one, says that he's making sure that Kiana and her brothers spend more time outside.


    Pacific Migration Arrested by Meltdown's High Waters

    1. Richard A. Kerr

    SEATTLE, WASHINGTON—Last month, 6600 members of the Geological Society of America gathered here for their annual meeting. Presentations ranged from early peregrinations of humans in the Pacific to the wanderings of scraps of tectonic plate.

    The human urge to go where no one has gone before seems to have been frustrated only rarely. Many of our ancestors moved out of Africa into Asia and Europe. They went on to Australia and made their way into the Americas, perhaps after a pause until the melting of the great ice sheets 15,000 years ago opened a passage. But people rushing eastward thousands of kilometers across Pacific Oceania were brought to an abrupt halt 3000 years ago. They were stopped not by a geographic obstacle but by a lack of livable land ahead, according to evidence presented at the meeting. The idyllic atolls of palm-dotted islets along reefs fringing quiet central lagoons had yet to form, says a geologist; their ancient underpinnings were still submerged beneath seas swollen by meltwater from the glacial ice. Only a slow, deep flexing of Earth's interior, which came long after the melting, let the migration across Oceania proceed after a 1500-year delay.

    Archaeologists have offered a variety of possible explanations for the “long pause” of migration east of New Guinea and northeast of Australia. Perhaps the greater distances across open waters daunted Polynesian and Micronesian sailors until they developed additional skills and sailing technology. Or perhaps they had to wait for a change of climate that brought weather more favorable for sailing eastward. Geologist William Dickinson of the University of Arizona in Tucson now has evidence of a more geological explanation. The key to human migration in Oceania, he says, is the inevitable movement of sea level up and down over the millennia.

    Charles Darwin got it basically right when he concluded that atolls are coral-crowned volcanoes, says Dickinson. But Darwin assumed that sea level is fixed. In reality, sea level plunged about 125 meters when ocean water went into building the glacial ice sheets of the last ice age. Old coral rims and even central lagoons were left high and dry. When the ice melted at the end of the ice age, the rising sea flooded over the tops of the once high-standing islands. Between 4000 and 2000 years ago, they were still beneath the surface in eastern Oceania, where atolls are now more typical than in the west. Even if some sediment piles had managed to breach the water's surface, says Dickinson, such low, shifting sands would hardly have been a reliably dry, attractive place to live.

    Paradise now.

    The idyllic islets and protective fringing reefs of the southwest Pacific may have formed only 1000 to 2000 years ago as sea level fell, creating newly habitable atolls for restless migrants.


    While restless sailors were cooling their heels in the west of Oceania, geophysics was coming to their rescue, says Dickinson. In a process worked out by geophysicists Jerry X. Mitrovica and Richard Peltier of the University of Toronto in 1991, the great weight of glacial ice near the poles during the ice age would have pressed down the land beneath it, which in turn would have pushed up the sea floor in a surrounding ring at mid-latitudes. It would work somewhat like sitting on a waterbed. When polar ice melted away and the land beneath it began rebounding, the elevated sea floor would eventually begin to subside, and the mid-latitude ocean basins would deepen, making room for seawater that until then had been in the tropics. As a result, sea level in the tropical Pacific would begin subsiding toward its present level.

    From his gauging of shoreline movement, analysis of cores into atolls, and other measures of sea level change, Dickinson has now confirmed this scenario of flooding and reemergence in Oceania. What Mitrovica and Peltier “expect to see, one does see,” he says. And the latest fluctuations, he found, came at just the right time to explain the long pause in migration. Sea level peaked about 4000 years ago and then slowly fell about 2 meters to its present level. Between 2000 and 1000 years ago, the drop in sea level exposed particular coral-topped volcanoes enough to form solid, attractively habitable islets around a protected lagoon. And in each case, that moment of emergence just preceded the recorded time of first occupation.

    Dickinson's work “is an elegant illustration of the connection between geological history and, in this case, one of the grandest migrations in human history,” says geologist Robert Ginsburg of the University of Miami. “It seems he's got a good shot at being right.” Dickinson's timing of the drop in sea level “fits entirely with archaeology and linguistics,” says archaeologist Roger Green of the University of Auckland, New Zealand. “It all makes wonderful sense to me. I can't say all my colleagues follow me yet.” They will have to learn to appreciate geology more, he says, in order to understand what could have frustrated humans' urge to roam for 1500 years.


    Wanderlust in the Western Margin

    1. Richard A. Kerr

    SEATTLE, WASHINGTON—Last month, 6600 members of the Geological Society of America gathered here for their annual meeting. Presentations ranged from early peregrinations of humans in the Pacific to the wanderings of scraps of tectonic plate.

    More and better data are supposed to help scientists resolve even the most confounding problems. But the latest evidence on the travels of scraps of tectonic plate along the western margin of North America is doing little to clear up a 30-year debate over whether chunks of plate now in British Columbia started out 3000 kilometers to the south, in the vicinity of Mexico's Baja California, or formed in the neighborhood.

    “Ten years ago, you could say, ‘We need more data,’” says paleomagnetician Randolph Enkin of the Geological Survey of Canada in Sidney, British Columbia. “But a lot of data has come in, and I still see very strong arguments on both sides.” Some researchers are beginning to look for a fundamental flaw in the science.

    At the meeting, geologist Brian Mahoney of the University of Wisconsin, Eau Claire, and his colleagues presented the latest evidence that the plate scraps, or “terranes,” were homebodies. On the presumption that rivers would dump sediment onto any terrane moving by on its way to British Columbia, they looked in the Nanaimo Basin of British Columbia for distinctive sediment grains that eroded westward from the rocks of western Montana during the late Cretaceous period, more than 65 million years ago.

    One grain can look pretty much like another, so Mahoney and his colleagues radiometrically dated individual grains of the volcanic mineral zircon from the Nanaimo. Zircons from the so-called Belt supergroup of rocks in western Montana have distinct ages, dating to a billion years or more, and the combination of ages gives them a unique fingerprint. But Mahoney found no sign of Belt sediments in British Columbia's Nanaimo, suggesting that the terrane never even reached the latitude of California's Sierra Nevada, which is well north of Baja.
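The fingerprint logic amounts to a simple population comparison: a sample that passed the Belt source should contain a clear fraction of billion-year-old grains. A toy sketch of that check follows; the age window and grain ages are invented for illustration, not Mahoney's data.

```python
# Toy detrital-zircon provenance check: count what fraction of dated
# grains fall in the Precambrian age window characteristic of the Belt
# supergroup (a billion years or more). All numbers are illustrative.

BELT_WINDOW_MA = (1000, 1800)  # assumed diagnostic age range, millions of years

def belt_fraction(grain_ages_ma: list[float]) -> float:
    """Fraction of grains whose radiometric age falls in the Belt window."""
    lo, hi = BELT_WINDOW_MA
    hits = sum(1 for age in grain_ages_ma if lo <= age <= hi)
    return hits / len(grain_ages_ma)

# A hypothetical Nanaimo sample dominated by young, arc-derived zircons:
nanaimo = [72, 85, 90, 101, 155, 160, 170, 210, 1450, 95]
print(f"Belt-window fraction: {belt_fraction(nanaimo):.0%}")  # -> 10%
```

A sample that had traveled past the Belt outwash would be expected to show a much larger fraction in that window; a near-zero fraction is the “homebody” signal.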

    Time tag.

    Dating a zircon grain such as this one gives its sedimentary source, a sign of where the plate carrying the zircon has been.


    Whereas sediment tracking suggests that the terranes of British Columbia didn't travel that far, the latest paleomagnetic data offer a counterargument. Enkin, Mahoney, and colleagues have added new measurements from a site called Churn Creek in British Columbia that support the reliability of earlier paleomagnetic data requiring long-range terrane transport. The inclination of Earth's magnetic field that is locked into a rock as it forms depends on the latitude of formation; the higher the latitude, the more inclined the locked-in field. Earlier results have been criticized for being liable to various subtle errors, but the new data pass all the required tests, says Enkin: “The paleomag has been confirmed.”
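The latitude dependence Enkin describes follows the standard geocentric-dipole relation, tan(I) = 2 tan(latitude), which can be inverted to turn a measured inclination into a paleolatitude and hence a transport distance. A minimal sketch, with a hypothetical inclination standing in for real Churn Creek data:

```python
import math

# Geocentric axial dipole relation: tan(inclination) = 2 * tan(latitude).
# Inverting it gives the paleolatitude at which a rock locked in its
# magnetization. The example inclination below is hypothetical.

def paleolatitude_deg(inclination_deg: float) -> float:
    """Paleolatitude (degrees) implied by a locked-in inclination."""
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2.0))

KM_PER_DEGREE = 111.0  # rough meridional distance per degree of latitude

measured_incl = 48.0   # hypothetical shallow Cretaceous inclination
present_lat = 51.0     # roughly the latitude of southern British Columbia

transport_km = (present_lat - paleolatitude_deg(measured_incl)) * KM_PER_DEGREE
print(f"implied northward transport: {transport_km:.0f} km")
```

With these invented numbers the implied transport is on the order of a few thousand kilometers, which is the scale of the Baja-British Columbia dispute; the steep sensitivity of latitude to small inclination errors is exactly why the earlier data drew such scrutiny.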

    Yet geologic connections that Enkin, Mahoney, and colleagues have recently found between Churn Creek and a seemingly reliable paleomagnetic site to the east mean that the “Baja-British Columbia [terrane] was moving as a very large body and very quickly,” says Enkin. In fact, their interpretation would require a 2500-kilometer-long terrane to range southward 2000 kilometers and then 3000 kilometers back northward, passing the Belt sediment source twice without picking any up. That's highly unlikely, they note. In addition, it would have needed to move more than twice as fast as today's fastest tectonic plate. “We contend that our results, while robust and reproducible, are tectonically unreasonable,” they write.

    Something's wrong somewhere, everyone agrees. “It makes you think, ‘What is the utility of paleomagnetism to other problems?’” says Enkin. Paleomagnetism worked well for showing that continental drift opened the Atlantic Ocean, he notes, but it has failed to solve several tectonic jigsaw puzzles, such as the construction of the supercontinent Pangea. “It's time to take a much closer look at what's going on with Earth's magnetic field in late Cretaceous time,” says geologist Mahoney, who would look for a shift in its structure.

    Enkin, the paleomagnetician, isn't so sure the fault lies with the paleomagnetics. “The paleomag is leading us to make some pretty outlandish paleogeographic statements,” he says. “But it's done that before” and turned out to be right, he notes, as with the Atlantic's opening.

    The contradictory data are raising levels of frustration and pessimism. “We don't have the right rocks of the right age to prove this once and for all,” says paleomagnetician John Geissman of the University of New Mexico in Albuquerque. “It's just really unfortunate.”