News this Week

Science  25 Sep 1998:
Vol. 281, Issue 5385, pp. 239



    Raising the Stakes in the Race for New Malaria Drugs

    1. Jocelyn Kaiser

    A group of scientists and funders last week gave an initial thumbs-up to a new strategy for bankrolling what could amount to a $30-million-a-year program to develop drugs against malaria, one of the world's biggest scourges. Although details are still being worked out, drug company representatives and potential donors—who gathered at a closed-door meeting on 17 September at the Rockefeller Foundation in New York City—believe they have overcome key hurdles that undermined a similar effort last November. “Real offers of genuine cash are now on the table,” says initiative proponent Trevor Jones, director-general of the Association of the British Pharmaceutical Industry.

    If the plan stays on track, it would amount to a welcome reversal of fortune for public health officials. They have been clamoring for years for new drugs against malaria, a disease that kills up to 2.7 million people a year, mostly in developing countries. Because the disease strikes relatively few people in rich countries, it has failed to attract much interest from Western drug companies. To tackle this problem, the World Health Organization (WHO) and other groups last fall proposed that drug companies pool resources and invest the lion's share of funds needed to launch a nonprofit that would develop new treatments. But the effort began to unravel last November, when industry leaders balked, saying the $180 million project was too costly and that some companies were already developing malaria drugs (Science, 5 December 1997, p. 1704).

    Taking a new tack, officials at the WHO and other organizations are soliciting support from foundations and other public sources. The idea is to create “the public-sector equivalent of a venture capital fund for one product,” says Tim Evans, head of the health sciences division at the Rockefeller Foundation. Acknowledging that industry isn't likely to offer substantial cash, organizers of the project—dubbed Medicines for Malaria Venture (MMV)—plan to hit up companies and government agencies for in-kind support. Such contributions could include access to chemical libraries and other “technologies that don't exist in the public sector,” says Robert Ridley, a malaria researcher at Hoffmann-La Roche in Basel, Switzerland, on leave to help WHO develop the project.

    Like other venture capital funds, the MMV would look to bet heavily on labs that are poised to move a tested idea closer to the marketplace. It would disburse research funds competitively, most likely to academic groups teamed up with drug companies. Evans says the grantees would develop potential drugs to the point where they are ready for phase I clinical trials or an investigational new drug application. “That's the pump that we're trying to prime,” he says. After that, the drug companies would run the show. The goal will be to develop on average one new drug every 5 years. Intellectual property rights would “be worked out on a case-by-case basis,” Evans says, although he anticipates that some royalties would get plowed back into the fund to help sustain it.

    The organizers hope to raise $15 million a year for starters and eventually ramp up to $30 million a year within 3 to 5 years. “That's probably the kind of commitment that would be required in the private sector to develop a drug,” says John La Montagne, deputy director of the National Institute of Allergy and Infectious Diseases. Last week's meeting, held to drum up support from foundations, drew an enthusiastic response, participants say. “It was a pretty positive meeting,” La Montagne says. Among the possible donors are the World Bank, the Rockefeller Foundation, and the United Kingdom's Department for International Development. Although organizers decline to comment on how much money has been committed so far, Ridley says enough funding is available “to get the show on the road in the coming year.”

    Lending impetus to the MMV is “Roll Back Malaria,” a global campaign launched in May by new WHO director Gro Harlem Brundtland to cut malaria deaths by 75% by the year 2015 (Science, 26 June, p. 2067). The MMV is also expected to build on the Multilateral Initiative on Malaria, an international effort to coordinate malaria research funding. The stepped-up commitments from public health agencies will only help in building a groundswell of support for MMV among foundations and other potential players in the fight against malaria, says Evans: “There's a strong sense of optimism that there is really a window of opportunity” to make headway against this disease.


    Semiconductor Beacons Light Up Cell Structures

    1. Robert F. Service

    Quantum dots are all the rage among physicists and chemists. Now these multitalented flecks of semiconductor, which can serve as components in tiny transistors and emit light in rainbow hues, look set to catch biologists' eyes as well. In this issue of Science, two separate teams of researchers report using quantum dots as fluorescent tags capable of tracing specific proteins within cells.

    Because dots that glow in different colors should be easier to use in tandem than combinations of conventional fluorescent dyes, “there's a real application here,” says Louis Brus, a chemist and quantum-dot expert at Columbia University in New York City. “It's quite likely these particles will replace conventional organic dyes” for many applications. D. Lansing Taylor, a biologist who specializes in fluorescence imaging at Carnegie Mellon University in Pittsburgh, agrees. The new particles, he says, appear to have “important advantages.”

    The current generation of fluorescent tags, made from small organic dye molecules, has several drawbacks: the dyes can be toxic, they burn out quickly, and they are difficult to use in tandem, because each dye must typically be excited with photons at a different wavelength. Quantum dots are a tempting alternative. They can match dyes color for color because their electrons, like those of all semiconductors, exist at discrete energy levels, known as bands. Adding energy—say, from a photon of light—kicks an electron up from a lower “valence” band to a higher “conduction” band. When the excited electron drops back into the valence band, it can give up its excess energy as a photon with an energy equaling the gap between the bands. In quantum dots, this bandgap increases as the dots get smaller, which confines the electrons to tighter spaces. Thus, smaller dots with larger bandgaps give off more energetic, or bluer, photons.
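    The size-to-color trend described above can be sketched numerically. The calculation below is illustrative only, not from the article: it adds a simplified particle-in-a-box confinement term to an assumed CdSe-like bulk bandgap, with an assumed effective mass, so the absolute wavelengths are rough—but the trend it shows (smaller dot, bluer photon) is the one the text describes.

    ```python
    # Illustrative sketch of quantum confinement in a dot:
    # bandgap = bulk gap + a particle-in-a-box term that grows as the
    # dot shrinks, so smaller dots emit shorter (bluer) wavelengths.
    # Material parameters (CdSe-like bulk gap, effective mass) are assumptions.

    H = 6.626e-34       # Planck constant, J*s
    C = 2.998e8         # speed of light, m/s
    EV = 1.602e-19      # joules per electron volt

    def emission_wavelength_nm(radius_nm,
                               bulk_gap_ev=1.74,        # assumed bulk CdSe gap, eV
                               effective_mass=9.1e-32): # assumed e-h mass, kg (~0.1 m_e)
        """Approximate emission wavelength for a dot of the given radius."""
        r = radius_nm * 1e-9
        confinement_ev = H**2 / (8 * effective_mass * r**2) / EV
        gap_ev = bulk_gap_ev + confinement_ev
        return H * C / (gap_ev * EV) * 1e9  # photon wavelength, nm

    for r in (2.0, 2.5, 3.0):
        print(f"radius {r} nm -> ~{emission_wavelength_nm(r):.0f} nm emission")
    ```

    Under these assumed parameters, a 2-nanometer-radius dot lands in the blue and a 3-nanometer dot in the yellow-orange, which is why a single size-graded batch of dots can span the visible spectrum.
    
    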

    These 1- to 5-nanometer-sized particles, chemically synthesized at high temperatures, also lack many of the drawbacks of organic dyes. They are nontoxic and fluoresce up to 100 times longer. “That means that you can get better signal to noise and thus better detection,” notes Taylor. And laser photons energetic enough to excite small dots can also excite fluorescence from larger dots at the same time. “They can all be excited with one laser,” says A. Paul Alivisatos, a chemist at the University of California, Berkeley, who led one of the two teams. “That's important in biology,” he adds, “because it allows you to do multiplexing”—watch many different colors, and therefore different biomolecules, at once.

    To test this promise, Alivisatos, together with Lawrence Berkeley National Laboratory's Shimon Weiss and their colleagues, and another team led by Shuming Nie at Indiana University, Bloomington, started with what are known as core-shell quantum dots, which have an inner core made from one semiconductor surrounded by an ultrathin shell of a semiconductor with a higher bandgap. The shell, Alivisatos explains, helps confine all of the excitation energy in the dots to the core, resulting in a purer color.

    After selecting their dots, both teams chemically altered their surfaces so the dots would dissolve in water, enabling them to diffuse throughout cells. The researchers then linked the light emitters to molecules that would guide them to specific cellular targets. Alivisatos and his colleagues, for example, turned to a molecule called avidin, which binds to another molecule, biotin, like a key in a lock. They linked avidin to red-light-emitting dots and, in mouse fibroblast cells, used biotin to label a filament-forming protein called actin. When they added the dots to the cells, the avidin keys found their biotin locks and lit up the filaments in red. In the same experiment, described on page 2013, the group decorated green-emitting dots with negatively charged urea and acetate groups, which helped direct the dots into the cells' nuclei, turning them green.

    To get the quantum dots into the cells, the Berkeley group had to pretreat the cells with acetone, which eats holes in the cell membrane, killing the cells in the process. But on page 2016, Nie and his graduate student Warren Chan describe a different approach that works on live cells. They linked their dots to transferrin proteins, which help ferry compounds through a living cell's membrane.

    Neither group is ready to stop there. Alivisatos says his group is developing quantum-dot probes that can light up DNA and might replace organic fluorophores in gene-sequencing machines. Nie plans to take advantage of the dots' bright fluorescence to improve the sensitivity of diagnostic tests, such as those that detect minute quantities of the AIDS virus. If either effort succeeds, biologists can expect a bright future from quantum dots.


    Kenya Parks Chief Ousted--Again

    1. David Malakoff

    Kenya's fickle political winds have again blown conservation leader David Western out of office—this time permanently. Just 4 months after losing and then regaining his post as head of the Kenya Wildlife Service (KWS), which manages some of the world's best known natural areas, Western was abruptly sacked again last week by Kenyan President Daniel arap Moi.

    The unexpected ouster, which came just weeks after Western had secured a $5 million grant from the Kenyan government that will allow the embattled KWS to survive a financial crisis, prompted dismay among observers in Kenya and international conservation circles. “What an end to a sad, sordid story,” says David Woodruff, a University of California, San Diego, biologist who supported Western's sometimes controversial efforts to reorient the KWS (Science, 5 June, p. 1518). Western, however, is taking his dismissal philosophically. “Conservation is an extremely tough business—one has to accept reversals and go on,” he told Science.

    Western was appointed head of Kenya's premier conservation agency in 1994, after the resignation of Richard Leakey, a noted anthropologist who is now a leading opposition politician. Almost immediately, Western faced financial problems brought on by a decline in tourism and the end of several large grants provided by foreign donors. He also faced withering criticism from Leakey and others over his management style, his moves to cut staff, and his efforts to enlist people living on wildlife-rich lands outside the parks in conservation. The simmering controversy boiled over in May, when Moi fired Western, only to rehire him 6 days later following an international outcry from conservationists—and threats from donor agencies to withhold millions of dollars in grants. At the time, some of Western's supporters charged that Leakey was behind the ouster, but Western himself said that mining interests hoping to gain access to park lands were responsible.

    This time, however, Western says he is “very puzzled” about why Moi prevented him from serving until his contract was to expire in February 1999, adding that the decision appeared almost “whimsical.” Editors at The Nation, Kenya's leading newspaper, appear equally confused. In a 20 September editorial, they demanded that government officials explain “in fuller detail why Dr. Western was fired.” Whatever the explanation, Western says he will continue to “do everything possible to support the KWS.” One lesson his own tenure teaches, he says, is that the agency's governing board—rather than Kenya's president—should be given the power to hire and fire directors. “The crucial point is to keep politics out of the KWS,” he says. He adds that he is “unaware of anyone waiting in the wings” to take his old job, which is being filled on an acting basis by KWS Deputy Director David Kioko.

    Western plans to spend the next few years writing about his conservation experiences. He is disappointed that he won't be able to finish several tasks he started at the KWS, such as developing a long-term funding strategy and a process for identifying key areas in need of conservation. Western is proud, however, of gains he made in involving Kenyans in conservation efforts. “Conservation has filtered right down to the grassroots,” he claims. “We began a process of engaging people in conservation and the role it plays in their lives.”


    Canada to Draw Up Strategic Plans

    1. Wayne Kondro*
    1. Wayne Kondro writes from Ottawa.

    Ottawa—With $520 million to spend on refitting the nation's academic laboratories, the Canada Foundation for Innovation (CFI) has generated a lot of interest from university researchers. Too much, as it turns out.

    This month, after sifting through more than 300 proposals for its first round of grants, CFI officials decided that they couldn't choose among virtually identical projects without first seeking a community consensus on priorities in a dozen or so fields for which applicants were seeking funding. That exercise will force a delay in the bulk of awards and could lead to collaborations and significant revisions among what are now competing projects. University administrators warn that it also could pose quite a challenge for a community accustomed to going its own way.

    CFI was created last year with government funds, and it instantly became the country's largest foundation. The upcoming awards are seen as a badly needed shot in the arm to the nation's sagging academic research infrastructure, and university officials had no problem generating $785 million worth of requests for an initial pot of $260 million, despite a requirement for matching funds. An initial review earlier this month eliminated about one-third of the applications, but the original goal of issuing all awards by the end of the year has been pushed back indefinitely.

    The new approach involves drawing up what David Strangway, president of CFI, calls “a coherent regional or national strategy” for several fields. Without such a strategy, he says, CFI can't be sure that its money is being put to the best use. In the area of genomics, for example, CFI received 18 applications for genetics centers, all dealing with human genomics. Strangway says the national interest might be better served if some of these proposed centers focused on animal or plant genomics.

    Strangway says CFI's governing board will select the specific fields to be examined at a meeting on 13 October. He anticipates forming 10 to 12 task forces, composed of experts drawn from around the country and the world, to cover such areas as genomics, high-performance computing, and digital libraries. The panels would make their recommendations regarding national scientific priorities and needs. The universities, meanwhile, will be encouraged to work together to revise their proposals to address those national strategies. Both the recommendations and the revised proposals will then be fed back to CFI peer-review committees, whose advice will be incorporated into the board's funding decisions.

    Such directives may encounter some resistance, however, say university administrators. “Universities spend a lot of time developing their expertise in certain areas,” notes Sally Brown, executive vice president of the Association of Universities & Colleges of Canada. “If somebody puts in a human genome project as opposed to a plant one and is then told that we've got enough of those, there will be some sensitivities.” Others are skeptical about Canada's capacity to develop discipline-specific strategies. “We don't even have a national science strategy, so who are we trying to kid?” asks Paul Hough, executive director of the Canadian Consortium for Research, an association of scientific lobbies.

    But universities realize that some collaboration is inevitable, if not also desirable, says Brown. “Budget cuts have forced it,” she says. “It's not often national in scope, but there's certainly a lot more of this stuff going on.”

    Strangway acknowledges that there are a host of potential political land mines. But he says the CFI must exercise “due diligence” in ensuring that taxpayers “get the best return on intellectual activity.” And Chad Gaffield, president of the Humanities & Social Sciences Federation of Canada, agrees that half a billion dollars provides a strong impetus for collaboration: “They have a lot of money as a carrot, so, presumably, there is a very good incentive to get this worked out.”


    Lasker Awards Go to Cancer Researchers

    1. Jennifer Couzin

    Seven biologists received coveted Albert Lasker Medical Research Awards this week. The award for basic research went to three scientists in recognition of their contributions toward understanding cell division mechanisms, while three others shared the clinical prize for their studies on the genetic basis of cancer. In addition, former Science Editor-in-Chief Daniel E. Koshland Jr. received a separate Lasker award for lifetime achievement in medical research. Although not the most lucrative awards—this year's basic and clinical winners get $10,000 each—the Laskers are considered highly prestigious because they frequently foreshadow the Nobel Prize. Indeed, 59 Lasker winners have gone on to win Nobels.

    The chair of the jury that selected the winners, Joseph Goldstein of the University of Texas Southwestern Medical Center in Dallas, who is himself both a Lasker and a Nobel Prize winner, says that the current awardees “really provided the foundation” for understanding both normal cell division and the genetic errors that cause it to go awry, as happens in cancer. The winners for basic research—Yoshio Masui, a professor emeritus of zoology at the University of Toronto; Lee Hartwell, director of the Fred Hutchinson Cancer Research Center in Seattle; and Paul Nurse, director-general of the Imperial Cancer Research Fund in London—helped tease out the many components of the biochemical machinery that drives cell division.

    Masui provided the first clue with his 1971 discovery of the then-uncharacterized maturation promoting factor (MPF), which stimulates cell division in frog eggs. Then, Hartwell and Nurse, working with two different yeast species, identified a series of genes involved in regulating cell division in those organisms and, as they and others showed, in other species as well. In fact, one of the genes turned out to encode a component of Masui's MPF.

    The winners of the clinical award—Alfred Knudson Jr., former president of the Fox Chase Cancer Center in Philadelphia; Peter Nowell of the University of Pennsylvania School of Medicine in Philadelphia; and Janet Rowley of the University of Chicago Medical Center—examined how genetic abnormalities may trigger cancer. Nowell and Rowley proved that leukemia could be caused by faulty genes, while Knudson showed that development of certain childhood cancers requires mutations in both copies of the genes at fault, a finding that led to the idea of tumor suppressor genes, currently one of the hottest topics in cancer research.

    And finally, Koshland, currently a biochemist at the University of California, Berkeley, was honored for his work on enzyme regulation and cell signaling systems, as well as his efforts to reshape biology studies at Berkeley and his success at improving the quality of Science.


    300-Year-Old RGO Finally to Close

    1. Nigel Williams

    London—Like Lewis Carroll's Cheshire cat, which disappeared leaving only its grin, one of Britain's oldest scientific institutions will vanish next month leaving only its name. The 300-year-old Royal Greenwich Observatory (RGO) in Cambridge, which provides technical and scientific support for Britain's astronomers, will close in October as part of cost-cutting measures by the Particle Physics and Astronomy Research Council (PPARC). Far from leaving a grin, however, the loss has left many astronomers grimacing. “The closure sends a very unfortunate signal to our foreign colleagues, students, and the public about the status of British astronomy,” says Britain's Astronomer Royal, Martin Rees.

    After reviews of Britain's home-based astronomy facilities over 15 years, RGO finally lost out last year in a contest with the Royal Observatory Edinburgh to become Britain's single Astronomy Technology Centre (ATC), serving telescopes in the Canary Islands and Hawaii (Science, 13 June 1997, p. 1641). The ATC opens officially next month. The former science minister, John Battle, backed the decision but asked the council to try to find a way of saving the name of the RGO in some form. Meanwhile, to stay afloat as a semi-independent scientific institution, RGO staff developed a business plan for telescope design and construction and discussed the possibility of closer links with Cambridge University.

    But at the end of last year, PPARC finally decided to close the observatory, in part because of worries that a reconfigured RGO might end up in competition with the new ATC (Science, 19 December 1997, p. 2049). PPARC says the closure will release an extra $3.2 million for astronomy research over the next 4 years and $6.5 million each year after that.

    PPARC and the government are now discussing plans to transfer the RGO name back to its original site in Greenwich, southeast London. The old observatory at Greenwich, straddling the Greenwich Meridian at zero degrees longitude, is now a museum and will house new public exhibitions on astronomy under a plan agreed this month between the National Maritime Museum—its owner—and PPARC. Many old instruments held in Cambridge and the RGO's public astronomy information service will also be moved to Greenwich.

    PPARC has been trying hard to minimize the number of job losses among RGO's staff of 110 and says that all senior researchers have been found alternative university positions. “We expect very few to be unemployed by the end of the year,” says PPARC administrator Jim Sadlier. A few staff members will move to a telescope construction company set up by researchers from John Moores University in Liverpool, called Telescope Technologies Limited; five are expected to transfer to the new ATC; and another six will set up temporary home at Cambridge University's Cavendish Laboratory to complete ongoing projects.

    Despite PPARC's efforts, staff at the RGO are still bitter about the closure. “A close-knit, high-tech family has been blown apart, and we feel it very personally,” says RGO director Jasper Wall. And many astronomers are still concerned about the effects of dispersing the RGO team. “Crucial technical expertise for future projects is being lost,” says astronomer Phil Charles at the University of Oxford. “In the coming years, there are going to be many occasions when we realize we just don't have the support we need.”


    NSF Eyes Biodiversity Monitoring Network

    1. Jeffrey Mervis

    To most people, an observatory is a place for astronomers to probe the far reaches of the universe. But some life scientists think the concept might also help unlock secrets in their own backyards. In what could turn into the most ambitious effort yet to systematically study Earth's ecosystems, the National Science Foundation (NSF) has begun planning what may become a global system of biodiversity observatories. The idea appears to be on a fast track at NSF as one of several environmental initiatives promoted by new director Rita Colwell (see p. 1944).

    The observatories program would build on a spate of NSF-funded activity in recent years to study biodiversity and ecological processes. NSF already funds 21 Long-Term Ecological Research (LTER) sites that monitor ecosystems ranging from Antarctic dry valleys to New England forests (Science, 15 October 1993, p. 334). Three years ago it created a National Center for Ecological Analysis and Synthesis in Santa Barbara, California, to support projects that attempt to glean insights from existing data collected across LTER sites and any number of field and marine research stations (Science, 17 January 1997, p. 310). More recently, Arctic researchers funded by NSF proposed pooling data from a network of circumpolar studies. And this fall the agency is preparing a competition to support microbial research at a half-dozen or so existing outposts.

    The observatories idea is likely to incorporate elements of all those programs—although planners have not yet hammered out any details, including the definition, number, and locations of the observatories. The program's budget is also unknown, although researchers and NSF officials hope that some work can begin within 2 years. Despite such gaps, organizers have at least outlined the project's philosophical underpinnings: to take the broadest possible look at how organisms interact and evolve in a range of ecosystems. “We're trying to get away from the stamp-album approach, in which scientists go to one site and take a snapshot of conditions at that time for a particular organism,” explains Doug Siegel-Causey, NSF's program manager for biotic surveys and inventories, who will manage the initiative. “But it's hard to take a picture of a dynamic process.”

    NSF took the first step in that direction earlier this month when it convened 15 experts. The group endorsed the idea of such observatories, agreeing that it is long overdue, says meeting chair Leonard Krishtalka, director of the University of Kansas Biodiversity Research Center. “Historically, the systematists and the ecologists have gone their separate ways, and biology has been the worse for it,” he says. “These two approaches need to be brought together if we hope to understand biodiversity over time.”

    One idea likely to receive scrutiny is for a center to support any number of sites in what NSF officials describe as a hub-and-spokes arrangement. Whether it's a physical entity or a virtual presence, the center could serve as both online database and administrative support for field researchers. Participants also envision establishing the observatories at some combination of existing field and marine stations and new sites. A second workshop this fall will prepare recommendations for NSF, says Siegel-Causey.

    Meanwhile, a smaller NSF initiative is nearing the starting gate. That's a plan to spend $2.5 million in 1999 to set up microbial observatories at half a dozen existing field stations, with the intention of doubling or tripling that number in 2000. The money would fund research that extends existing studies ranging from identifying new species and sequencing DNA to measuring nitrogen fixation and other biogeochemical processes. “For far too long, microorganisms have been a black box,” says Colwell. “But it turns out that they play a fundamental role in everything.”

    The two initiatives would dovetail nicely, says Siegel-Causey: “I could imagine one station having adjacent plots of land labeled microbial and biodiversity observatories.” But he says the biodiversity observatories initiative, once unveiled, could well be a far more ambitious project than the microbial stations: “We're thinking an order of magnitude larger.” Not quite astronomical proportions, maybe, but a big step for environmental researchers and taxonomists.


    Harvard Tops in Scientific Impact

    1. Amy Adams*
    1. Amy Adams is a science writer in Santa Cruz, California.

    Harvard University wins bragging rights in the latest ranking of U.S. research universities, according to the September/October ScienceWatch. It not only churned out more papers than any other university between 1993 and 1997, but its work was also rated as having higher scientific impact across the board.

    The Philadelphia-based Institute for Scientific Information, which publishes ScienceWatch, tracks citations from hundreds of scientific journals. To rank the top 100 federally funded universities in 21 separate fields, ScienceWatch worked out the average number of times that papers from researchers at each institution were cited by other papers. Each score was then expressed as a percentage above or below the world average for papers in the same field, yielding an estimate of “relative impact.” In clinical medicine, for example, papers from Johns Hopkins University were cited, on average, 9.19 times—129% above the world average for the field. Chris King, who edits ScienceWatch, says the calculation “represents what scientists think is important in their field when they write papers.”
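    The relative-impact arithmetic the article describes can be written out in a few lines. This is a sketch of the calculation as described, not ScienceWatch's actual method; the world-average figure below is back-calculated from the Johns Hopkins example and is an assumption.

    ```python
    # "Relative impact": an institution's average citations per paper,
    # expressed as a percentage above (+) or below (-) the world average
    # for papers in the same field.

    def relative_impact(cites_per_paper: float, world_average: float) -> float:
        """Percentage above or below the field's world average."""
        return (cites_per_paper / world_average - 1.0) * 100.0

    # Implied world average for clinical medicine, back-calculated from
    # the Johns Hopkins example (9.19 cites = 129% above average): ~4.01
    world_avg_clinical = 9.19 / 2.29

    print(f"{relative_impact(9.19, world_avg_clinical):.0f}% above world average")
    ```

    With that implied world average of about four citations per paper, a clinical-medicine paper base of 9.19 citations scores 129% above average, matching the article's figure.
    
    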


    Harvard placed in the top 10 in 17 of the 21 categories, ScienceWatch reports. It was followed by Stanford University (13 top-10 placings), California Institute of Technology (Caltech) with 11, Yale University (9), the University of Michigan (9), Massachusetts Institute of Technology (MIT) with 8, University of California (UC) Berkeley (7), University of Washington (6), UC Santa Barbara (6), Cornell University (6), and UC San Diego (6).

    Although the overall rankings were based on performance in all fields of science, ScienceWatch published rankings in only nine biological science fields in the current issue; it plans to publish the rankings in the physical sciences and some social science fields in its next issue. The biology rankings indicate that quality does not always go hand in hand with quantity. In neuroscience, for example, Caltech came out on top for relative impact, publishing 395 papers compared to Harvard's 2419. Washington University in St. Louis ranked first in immunology with only a third as many papers as number two Harvard, and MIT had the highest relative impact in molecular biology and genetics with a fraction of Harvard's publication rate. The same held true for the rankings of biology and biochemistry, which Duke University topped.


    A Biomolecule Building Block From Vents

    1. Robert F. Service

    In 1952, University of Chicago chemists Stanley Miller and Harold Urey staged a simple demonstration that transfixed other scientists pondering the origin of life. They showed that a mixture of ammonia, methane, hydrogen, and water yielded amino acids—the building blocks of proteins—when zapped with the lab equivalent of a lightning bolt. The demonstration was hailed as a re-creation of a likely first step toward life. But critics later dubbed the experiment a creation rather than a re-creation, pointing out that whereas inert nitrogen gas (N2) would have been abundant on the early Earth, the reactive forms needed to make amino acids, such as ammonia (NH3), would have been scarce. “The formation of ammonia has always been a big problem for origin-of-life scenarios,” says Jim Ferris, a chemist at Rensselaer Polytechnic Institute in Troy, New York.

    Now, a team of researchers at the Carnegie Institution of Washington, D.C., reports in this week's issue of Nature that it may have found a major source of early ammonia: the hot springs on the deep sea floor. In a series of laboratory tests, the researchers found that minerals deposited there make efficient catalysts for converting nitrogen into ammonia at the high temperatures and pressures of the vents. And because the vents continuously heat up and spew out huge volumes of water, says study leader Robert Hazen of the Carnegie Institution of Washington's Geophysical Laboratory, they could have churned out enough ammonia to set the stage for life's beginnings—either at the surface, perhaps sparked by lightning, or at the vents themselves. “If these guys have come up with an abundant source of ammonia, that's an important step forward,” says Ferris.

    The new study isn't the first to propose a source of ammonia in the prebiotic Earth. Other teams previously suggested that ferrous iron dissolved in water or titanium dioxide particles in desert sands could have converted N2 to ammonia. But these reactions probably would have been too slow to keep pace with the destruction of ammonia by the sun's ultraviolet rays as it wafted into the atmosphere.

    Hazen wondered whether the high pressures and temperatures found at deep-sea vents could have sped things along. Such vents line the midocean ridges, where magma wells up to form new ocean crust. Cool water seeps through fissures in the crust, hits the superheated rock near the magma, and roars back upward at temperatures of up to 350 degrees Celsius. Iron and sulfur dissolved in the hot water then rain out as it emerges from the vent and cools, depositing minerals such as pyrite (FeS2), pyrrhotite (Fe1−xS), and magnetite (Fe3O4).

    These minerals, Hazen thought, might act as catalysts for ammonia production. Testing the idea at present-day vents was impractical, because ammonia from microorganisms would swamp any ammonia made by the minerals. So Hazen, postdoc Jay Brandes, and their Carnegie Institution colleagues devised a laboratory test by combining a vent mineral, a nitrogen source such as N2, nitrite (NO2−), or nitrate (NO3−), and water, then cooking the mixture at varying temperatures and pressures. The results were unambiguous. In most ventlike conditions, the minerals turned into little ammonia factories. At 500 degrees Celsius and 500 atmospheres of pressure, for example, pyrrhotite converted up to 90% of the nitrate to ammonia in just 15 minutes. At lower temperatures of about 300° to 350°C, Hazen says, the ammonia conversion was still as high as 70%. Even powdered basalt, the stuff of the sea floor itself, seemed to do the catalytic trick.

    “It could be that this is the dominant mechanism” for forming ammonia on the early Earth, says Chris Chyba, an early Earth expert at the Search for Extraterrestrial Intelligence Institute in Mountain View, California, and Stanford University. Still, Chyba says it's hard to say exactly how much would have been produced, as so little is known about conditions in the planet's early days. But Chyba notes that if vents did churn out ammonia, this could help explain another mystery: the faint young sun paradox.

    Researchers have long known that early in Earth's history, the sun put out only about 70% of the light and heat it does today. The oceans and all other surface water should have frozen, yet life's early appearance on the planet suggests liquid water must have been present. Abundant ammonia resolves this dilemma, says Chyba, because as a powerful greenhouse gas it could have helped trap the sun's warmth. If so, Chyba says, “it suggests that there may have been an important synergy between subsurface and surface environments that helped life get its start.”


    Panel Calls for Science-Savvy Diplomats

    1. David Malakoff

    Diplomacy is often noted for its slow pace and bland language. But last week an unusually fast-moving National Academy of Sciences (NAS) panel offered the U.S. State Department some plain-spoken suggestions for improving the quality of the scientific advice available to makers of foreign policy. Although government officials say they welcome the input, many observers are skeptical that it will lead to significant changes.

    The interim report* is the latest in a long line of well-meaning but often ignored reports aimed at helping the department cope with a growing array of technology-based issues, ranging from bioterrorism to biotechnology (Science, 15 May, p. 998). It comes less than 4 months after Secretary of State Madeleine Albright asked for outside guidance on shoring up the diplomatic corps' sagging expertise in science and technology. In recent years, scientists have criticized the department for undermining its already slim scientific capabilities by abolishing embassy and headquarters positions once filled by science-savvy foreign service officers.

    The NAS panel outlines nine “immediate and practical” steps Albright could take to increase the State Department's sensitivity to science and technology issues. The suggestions range from appointing one of her five undersecretaries as a science czar who would integrate science, technology, and health issues into top-level decisions to creating a new external advisory board. A State Department official says senior administrators “are grateful that the committee responded in such a quick and highly focused manner” and are organizing a task force to “digest the report and examine its financial implications” as they assemble their request for the fiscal year 2000 budget.

    But panel members say money shouldn't be an obstacle. “There are ways to do some of this on the cheap,” says panel leader Robert Frosch of Harvard University's Kennedy School of Government in Cambridge, Massachusetts. For instance, Frosch says adopting a new policy on integrating scientific concerns into day-to-day diplomacy doesn't require new spending, nor would building closer ties to knowledgeable staff at other government agencies.

    At the same time, the panel concedes that some solutions will cost money. One is to create about a dozen new positions at headquarters for science and health experts, with half assigned to the 130-person Bureau of Oceans and International Environmental and Scientific Affairs. The committee also proposed strengthening scientific posts at a handful of key embassies, as well as spending up to $500,000 per year on a scientific advisory board to help staff with cutting-edge issues, such as the impact of the Internet on foreign relations. The costs of making such moves, the committee concludes, “seem modest given the stakes involved.”

    Other recommendations are aimed at changing a State Department culture that has discouraged career staff from taking a professional interest in science. Becoming an embassy science attaché is often viewed as a “kiss of death” among foreign service officers hoping for promotion, says Frosch. To combat that trend, the panel wants the department to encourage young diplomats to learn more about the growing role of science in foreign affairs and to provide a career ladder for scientifically literate employees. Details will be included in the committee's final report, due out sometime next year.

    In offering its suggestions, the NAS panel acknowledged that it's not the first committee to offer suggestions for injecting more science into foreign policy. “Over the years, the department has gotten a lot of advice on this subject,” Frosch says, citing about three dozen reports in the last 50 years. But he thinks the department is more receptive to technical advice now than in the past. “It's not difficult to get good scientific advice in Washington—you have to bob and weave to avoid it,” Frosch says. “The real trick is knowing when to ask for it.” And State Department officials say that this time they are listening closely: “The Secretary asked for it, so we are taking [the report] very seriously,” says one diplomat.

    • *“Improving the Use of Science, Technology, and Health Expertise in U.S. Foreign Policy,” an interim report of the National Research Council Committee on Science, Technology, and Health Aspects of the Foreign Policy Agenda of the United States (


    Size of Indian Blasts Still Disputed

    1. Pallava Bagla*

    Four months after India and Pakistan surprised the world with twin sets of nuclear bomb tests, Indian and U.S. scientists remain sharply divided over the actual size of India's explosions. The debate—which flared up this week in two new papers—could affect the international test ban agreement, as its enforcement depends on the ability to detect even small nuclear tests with confidence.

    In a Policy Forum published this week in Science (p. 1967), a group of 19 academic and U.S. government seismologists calculate that the yield from the 11 May Indian event—the larger of India's two sets of tests—was 9 to 16 kilotons with 50% uncertainty. (The Indian government reported that three devices were exploded simultaneously that day, the largest a fusion device.) However, a group of physicists at India's Bhabha Atomic Research Centre (BARC) in Mumbai claims this estimate is too low by a factor of 4. In a paper appearing in the 10 September issue of the Indian journal Current Science, Satinder Kumar Sikka and his colleagues in BARC's high pressure physics division report that the international monitoring system grossly understated the blast sizes by failing to account for the seismic patterns created by the overlapping explosions. Based on a computer analysis of the seismic recordings, they say the actual yield was 58 kilotons, even larger than the initial report of 55 kilotons.

    Neither the Indian nor the U.S. paper casts any new light, however, on the most controversial test in last spring's series: the two low-yield explosions India says it detonated 2 days later, on 13 May. Indian officials said at the time that these small tests released nuclear energy equivalent to about 800 tons of TNT. But they produced no signals on remote seismic sensors, and some U.S. researchers concluded that no nuclear blast had occurred (Science, 26 June, p. 2038). The authors of this week's Science Policy Forum estimate that a well-coupled blast larger than 30 tons would have been detected, but that one 10 times that size could have escaped detection if, as reported, it was detonated in sand, which muffles seismic waves. The BARC scientists do not mention this test in their analysis.

    The U.S. seismologists base their estimate of India's 11 May test on earthquake data, an analysis of local geology, and a compilation of seismic recordings from dozens of stations around the globe. The BARC researchers argue, however, that seismic waves from the blasts may have interacted to produce misleading, attenuated signals at remote sites. Sikka, Falguni Roy, and G. J. Nair note that the major explosions on 11 May took place in two shafts separated in an east-west direction by 1 km. (A much smaller device was exploded in a third shaft 2.2 km away.) Delays between surface waves from these sites, Sikka told Science, could create “destructive interference of the waves in the east-west direction” as well as “constructive interference in the north-south direction.” This could explain, he says, why some seismic stations—particularly those on an east-west line from the test site—actually recorded smaller signals. The BARC scientists say this phenomenon also explains a 30-fold variation, roughly three times larger than expected, in the size of the compression waves from the blasts.

    In an effort to calculate the “true magnitude” of the signal created by the 11 May test, the BARC research team analyzed data from 51 stations of the International Data Center in Arlington, Virginia, and concluded that seismic stations east and west of the Indian test site at Pokharan (which recorded smaller signals) were not as reliable as those to the north or south. The BARC researchers combined information from the Indian seismic array at Gauribidanur with data from a select group of 11 other stations, excluding many stations that their paper says “could have underestimated the true [seismic wave],” to peg the magnitude at 5.4 on the Richter scale, not 5.0, as claimed by the U.S. group. Sikka says averaging the data is misleading but that it serves the interests of some seismologists: “They want to belittle our tests; at the same time they want to defend [the credibility of the seismic monitoring system].”
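Why a 0.4-unit magnitude gap translates into roughly a fourfold disagreement over yield can be sketched with a generic body-wave magnitude-yield relation of the form mb = a + b·log10(Y). The constants below (a = 4.05, b = 0.75) are illustrative values of the kind used for hard-rock test sites, not numbers taken from either the BARC or the U.S. paper:

```python
# Generic magnitude-yield relation: mb = A + B * log10(Y_kt).
# A and B are illustrative hard-rock constants, NOT values from either paper.
A, B = 4.05, 0.75

def yield_kt(mb):
    """Invert mb = A + B*log10(Y) to get yield in kilotons."""
    return 10 ** ((mb - A) / B)

us_estimate = yield_kt(5.0)    # magnitude favored by the U.S. seismologists
barc_estimate = yield_kt(5.4)  # magnitude favored by the BARC team

print(round(us_estimate))
print(round(barc_estimate))
# Regardless of the constants chosen, a 0.4-unit magnitude difference
# scales the inferred yield by 10**(0.4 / 0.75), about 3.4x -- which is
# why the two camps end up roughly a factor of 4 apart.
```

The exact kiloton figures depend entirely on the assumed constants; the ratio between the two estimates does not.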

    Terry Wallace of the University of Arizona, a senior author of the Science paper, says the BARC scientists “are choosing arguments clearly designed to make the yield as large as possible.” He added that “half a dozen” teams of seismologists participating in a Defense Department conference this week had reached roughly the same conclusion as his group: The upper bound on India's 11 May tests is 25 kilotons. A colleague, seismologist Jeffrey Park of Yale University, adds that most of the arguments presented in the BARC paper have been considered “very carefully” in the past. “There are some novel elements in the BARC paper,” he notes, but “I don't find them persuasive.” Sikka says his team is still analyzing cores from the test site for a more accurate measure of the yield.

    • Pallava Bagla is a correspondent in New Delhi.

    • * With reporting by Eliot Marshall.


    Two More Scientists Died in Swissair Crash

    The crash of Swissair flight 111 on 2 September claimed the lives of two prominent scientists who were not included in our initial coverage of the tragedy (Science, 11 September, p. 1587). Also aboard the flight were Eugenia Spanopoulou, an immunologist at Mount Sinai Medical School in New York City, and Thomas Kreis, chair of the Department of Cell Biology at the University of Geneva. Spanopoulou's research focused on the role of the immune system's recombination activating genes, RAG-1 and RAG-2, in generating antibody and T cell receptor diversity. Spanopoulou, who was traveling with her husband and 16-month-old son, was selected as a Howard Hughes Medical Institute investigator and joined the institute last year. Kreis was an internationally known authority on proteins that regulate membrane traffic in cells.


    Seeking a Snapshot of an Alien World

    1. Andrew Watson*
    1. Andrew Watson is a writer in Norwich, U.K.

    Now that we know extrasolar planets are out there, astronomers are gearing up to photograph one. But the job will require a sizable fleet of space observatories

    Ever since the subtle wobble of a nearby star gave astronomers their first indirect hint of a planet outside our solar system, they have longed for closeup views of distant worlds. Since that first detection, in 1995, such wobbles have revealed a total of about 10 candidate extrasolar planets tugging on their parent stars (Science, 30 May 1997, p. 1336). And although all are inhospitable “gas giants” similar to Jupiter, few astronomers doubt that small, rocky planets like our own—possible nurseries for life—are waiting to be discovered. Visiting them is out of the question. But by launching armadas of telescopes into space, astronomers hope to get a closeup look at other Earths and scan them for signs of life.

    The undertaking has fired the imagination of astronomers and administrators alike, from Alain Leger of the Institute of Space Astrophysics in Orsay, near Paris, who calls it “a great adventure for humanity,” to Dan Goldin, the NASA administrator, who has made planet searching a cornerstone of NASA's Origins program. And in a burst of studies and proposals over the past 3 years, astronomers in Europe and the United States have proposed a planet-spotting strategy and planned a series of missions that, within 25 years, may return portraits of an alien Earth and even reveal signs of life and large features such as an otherworldly Amazon jungle.

    A useful single telescope able to spot tiny dim planets just a whisker away from a bright star would need a mirror roughly 100 meters across, 10 times as wide as the largest available today, and even such a monster telescope could not reveal any detail on an alien world. So astronomers are pinning their hopes on a relatively new technique called optical interferometry. Interferometry combines the light gathered by two or more standard-sized telescopes placed some distance apart in such a way that the resulting image has the resolution of a telescope as wide as the baseline of the interferometer (see sidebar). Interferometry “is just another way of building larger and larger telescopes,” says Michael Shao of NASA's Jet Propulsion Laboratory (JPL) in Pasadena.
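The trade of baseline for aperture follows from the diffraction limit: angular resolution goes roughly as wavelength divided by aperture (or baseline). A minimal sketch, assuming visible light of 500 nm and the simple λ/B form (the wavelength and the omission of the usual 1.22 prefactor are illustrative choices, not figures from the article):

```python
import math

MAS_PER_RADIAN = 180 / math.pi * 3600 * 1000  # milli-arc seconds per radian

def resolution_mas(wavelength_m, baseline_m):
    """Diffraction-limited angular resolution, theta ~ lambda / B, in mas."""
    return wavelength_m / baseline_m * MAS_PER_RADIAN

LAM = 500e-9  # assumed visible wavelength, 500 nm

# A single 10-meter mirror (Keck-class):
print(round(resolution_mas(LAM, 10), 1))    # ~10.3 mas

# Two small telescopes with a 1-kilometer baseline:
print(round(resolution_mas(LAM, 1000), 2))  # ~0.1 mas
```

Those two numbers line up with the 10-mas limit of today's biggest telescopes and the 0.1-mas figure quoted later for DS-3's kilometer-scale baseline: the baseline, not the mirror size, sets the resolution.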

    Which is not to say that building a planet-spotting interferometer will be easy. Optical interferometry stretches the limits of technology even on the ground, and planet imaging will have to be done from space. A space-based interferometer can be arbitrarily large, and the infrared wavelengths that carry information about substances such as oxygen and water—clues to possible life—are blocked by Earth's atmosphere and can only be detected in space. NASA is laying plans to fly, perhaps as early as 2002, a technology-demonstration mission to see whether space-based interferometry is even possible. The ultimate goal, penciled in for 2020, is an instrument with a baseline as wide as the United States, which would provide the first image of an alien world and probe its atmosphere for signs of life.

    Precision flying

    Interferometry is nothing new for radio astronomers, who build arrays of telescopes spanning thousands of kilometers on the ground and have even launched one antenna into space to create an interferometer larger than Earth itself (Science, 18 September, p. 1825). But the challenge of interferometry is in precisely combining the signals from the telescopes, which entails holding the path length from star to image through each scope steady to wavelength accuracy. That's a far easier task in radio astronomy, where wavelengths are measured in meters, than in optical astronomy, where wavelengths are less than a millionth of a meter.


    Several experimental optical interferometers are now in use around the world (Science, 6 March, p. 1449). But for planet searchers, says JPL's Charles Beichman, “the atmosphere is a major problem, so we need to go to space to fully realize the advantages of interferometry.” Controlling the light paths is challenging enough on the ground; in space it is still more daunting. A space-based interferometer is likely to take the form of a flock of spacecraft, each carrying its own mirror, which would have to combine their light beams to the nearest tenth of a micrometer or better over long periods of time. But in just 4 years, if all goes well, planet searchers will test their ability to perform such precision flying.

    The test will be a NASA mission called Deep Space 3. Despite the imminent launch, the final form that DS-3 will take has not yet been decided. In the latest version, two spacecraft would fly in tandem up to 1 kilometer apart, their relative positions controlled to centimeter precision. With the help of onboard correcting optics, the maximum resolution of its images should be 0.1 milli-arc seconds (mas), or 1/10,000 of a second of arc. Such precision is stunning compared to the 10-mas theoretical best resolution of today's biggest telescopes, the 10-meter Keck telescopes in Hawaii, and should be at least enough to get a clear “family portrait” of a nearby planet system, showing planets as indistinct bright regions, like flashlights in a fog, around a central star.

    In practice, however, this is not DS-3's primary goal. “Basically, DS-3 is a technology-demonstration project. … Its main goal is not to do science,” says Shao, who heads JPL's interferometry center, which is masterminding the DS-3 project. Instead, the aim is to test interferometry combining light from separate spacecraft.

    Formation flying “isn't quite as impossible as it sounds,” says Shao. However, it does demand that the spacecraft fly well out of range of Earth's atmosphere, which would cause drag problems, and away from steep gravity gradients that might pull on one spacecraft more than another. This means that DS-3, like all formation-flying space interferometers, is destined to fly in deep space, circling the sun along with the planets.

    To control drift, the craft would fire ion thrusters—devices already found on communication satellites. These thrusters require a far smaller mass of propellant than rockets, because they use solar electricity to ionize a substance such as cesium, then accelerate and eject the ions to provide thrust, explains Malcolm Fridlund of the European Space Agency's (ESA's) research center at Noordwijk, the Netherlands. A few kilograms of material can provide a year's thrusting. To make the final submicrometer-scale adjustments necessary to form interferometer images, the spacecraft will need to measure their separation with lasers and make finer scale corrections for drift with active optical elements such as moving mirrors. This kind of active optics control is already used in ground-based interferometers, according to Shao.
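The propellant advantage Fridlund describes falls out of the rocket equation: for a given velocity budget, propellant mass shrinks exponentially as exhaust velocity grows, and ion engines eject their ions roughly ten times faster than chemical rockets eject combustion gases. A rough sketch, with all masses, budgets, and exhaust speeds chosen for illustration rather than taken from any mission plan:

```python
import math

def propellant_mass(dry_mass_kg, delta_v_ms, exhaust_v_ms):
    """Tsiolkovsky rocket equation: propellant needed for a given delta-v."""
    return dry_mass_kg * (math.exp(delta_v_ms / exhaust_v_ms) - 1)

DRY_MASS = 500.0  # kg, assumed spacecraft mass
DELTA_V = 100.0   # m/s, assumed yearly drift-correction budget

ion = propellant_mass(DRY_MASS, DELTA_V, 30_000)  # ion engine, ~30 km/s exhaust
chem = propellant_mass(DRY_MASS, DELTA_V, 3_000)  # chemical, ~3 km/s exhaust

print(round(ion, 1))   # a couple of kilograms
print(round(chem, 1))  # roughly ten times more
```

Under these assumed numbers the ion engine's year of thrusting costs about 2 kg of propellant, consistent with the "few kilograms" quoted in the text, while a chemical thruster would need an order of magnitude more.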

    Although DS-3 will provide a test-bed for space interferometry, plans for the mission remain in flux, with Shao and his JPL team still working out the details. “The short story is that what we had originally wanted to do is a little bit more expensive than we can afford,” says Shao. A recent decision to scale back from three to two spacecraft should allow the mission to meet its 2002 departure time.

    Because the technology for formation flying is still in its infancy, the first space interferometer to do actual science will take a safer approach. This project, NASA's Space Interferometry Mission (SIM), will comprise seven or eight optical telescopes, each a modest 35 centimeters in diameter, placed on a fixed arm with a baseline of between 10 and 15 meters. SIM is not primarily designed for imaging; with its short baseline it would only be able to generate images with a resolution of 10 mas, enough for a fuzzy family snapshot of a planetary system. “SIM's major purpose is to do astrometry [measuring stellar positions], as opposed to imaging,” says Shao, who is SIM's project scientist.

    In this mode, which relies on comparing the position of the target star with a reference star, SIM would achieve a peak resolution of 0.001 mas, as much as 250 times better than anything currently available. It will look for planets using the same method now used from Earth: searching for the telltale wobbles in star positions. With its high resolution, SIM could look for Jupiter-sized planets around a billion or so close stars.
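The arithmetic behind the wobble method: the star circles the system's center of mass, so its angular excursion is the planet's orbital radius scaled down by the planet-to-star mass ratio and divided by the distance. The worked case below, a Jupiter-like planet at 5.2 AU around a solar-mass star 10 parsecs away, is an illustrative assumption, not a figure from the SIM plans:

```python
# Astrometric wobble: the star orbits the barycenter at
#   a_star = a_planet * (m_planet / m_star),
# which at distance d subtends an angle of a_star / d.

MAS_PER_ARCSEC = 1000.0

def wobble_mas(a_planet_au, mass_ratio, distance_pc):
    """Angular wobble in milli-arc seconds.

    By the definition of the parsec, 1 AU at 1 pc subtends 1 arcsecond,
    so (AU / pc) is already an angle in arcseconds.
    """
    return a_planet_au * mass_ratio / distance_pc * MAS_PER_ARCSEC

# Jupiter analog: a = 5.2 AU, mass ratio ~1/1047, star 10 pc away
print(round(wobble_mas(5.2, 1 / 1047, 10), 2))  # ~0.5 mas
```

A half-milliarcsecond wobble sits comfortably above the 0.001-mas astrometric floor quoted for SIM, which is why Jupiter-sized companions are the natural first targets.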

    SIM's precision should also aid a wide range of other studies, including measuring the expansion rate of the universe, probing the spiral structure of our galaxy, and studying the spread of matter around supermassive black holes. By tracing how light from celestial objects is deflected by the gravitational pull of Earth, the moon, the other planets, and the sun, “SIM will be able to verify Einstein's general theory of relativity to a few parts per million, 300 to 500 times better than today,” says Shao.

    Although SIM will avoid the technical challenges of formation flying, it still has many hurdles to overcome. One of the toughest will be vibration. The kind of vibration that the Hubble Space Telescope has had to endure from the wheels of its tape recorders would spell disaster for a space interferometer. SIM will dispense with tape recorders, but it will have to rely on spinning wheels, known as reaction wheels, to control its spin and rotate it toward its targets. Even the best possible bearings transmit vibration to the optics. Vibration caused by thermal “snaps,” when solar panels move between light and shade, also poses a threat to the interferometric signal. The JPL team hopes that a combination of vibration decoupling—“basically just a very soft spring,” says Shao—and yet more active optics should overcome this problem.

    World view

    The effort to actually image extrasolar planetary systems will begin in earnest with ESA's Infrared Space Interferometer (IRSI) and NASA's Terrestrial Planet Finder (TPF). These two projects, still at a much earlier stage of planning than DS-3 and SIM, are both designed to snap more detailed family portraits of other systems and probe the atmospheres of the planets for elements and compounds that are hallmarks of life. Both will operate at infrared wavelengths, where the signatures of these substances are strongest. The infrared has other advantages, too: Planets are brighter in the infrared relative to their suns, and at these slightly longer wavelengths the demands for optical accuracy in the interferometer are loosened.

    Top of the list of telltale substances is ozone, which can be formed when ultraviolet light strikes oxygen produced by plant life. “The presence of ozone would tell us that some form of life already exists on the planet, which would be fascinating indeed,” says Fridlund. The other two key signatures of a life-bearing planet are water and carbon dioxide.

    With a tentative launch date of 2009, IRSI is still very much on the drawing board. “Currently we are studying concepts, feasibility, eventual cost,” says Fridlund. The current vision is for six 1.5-meter telescopes flying in a formation up to 50 meters across. The array will orbit the sun at L2, a point on the Earth-sun axis where the gravitational gradient is flat. There, says Fridlund, “the biggest force acting on the array is solar photon pressure.”

    Fridlund sees a mountain of technical challenges before IRSI takes its first pictures. “What is going to be extra challenging is the optical arrangement,” he says. It would take the array about 10 hours to detect an Earth-like planet and perhaps 14 days to obtain a reasonable spectroscopic signal; holding the array steady over such long periods is a major issue, he notes.

    Like IRSI, the TPF is still at a formative stage. Current plans envisage four to six mirrors, each up to 5 meters in diameter, spanning a total distance of between 75 and 100 meters, with a tentative launch date of 2010. The mirrors might be mounted on a single structure, but “formation flight is a very serious option,” says JPL's Beichman, the TPF project manager. The big challenge facing TPF is the need for large, lightweight telescopes. “This relies on developments for the Next Generation Space Telescope project [the successor to Hubble],” says Beichman. “We also need interferometry techniques being developed for SIM. With these projects under our belt, TPF can be done with acceptable risk.”

    But of all the planned missions, the grandest, most speculative, and furthest over the horizon is NASA's Planet Imager (PI). The PI is a “dream mission,” says Fridlund, and “a gleam in Mr. Dan Goldin's eye,” according to Alan Penny of Britain's Rutherford Appleton Laboratory, a member of the IRSI project team. The PI, with a tentative launch date of 2020, is likely to comprise a squadron of TPF-type spacecraft, each one carrying four 8-meter telescopes. They would be dispersed over distances comparable to the width of the United States and would produce images of alien Earths which, although fuzzy, would have discernible details. In NASA's words, the PI will offer “humanity's first image of another world.”

    Whether or not it is NASA's PI that will give us our first glimpse of distant life, astronomers are convinced that some kind of space interferometer capable of seeing life-bearing planets is just a matter of time. The urge to learn about habitable new worlds is too basic to ignore for long, says Antoine Labeyrie, director of the Observatory of Haute-Provence near Marseilles, France. “It is perhaps the same curiosity which may have stimulated the prehistoric dwellers of the Greek coastline into observing and exploring the islands they could see in the distance,” he says. Now that we have spotted clues to other worlds, he adds, “we are in a similar situation.”


    Interferometry: Getting More for Less

    1. Andrew Watson*
    1. Andrew Watson is a writer in Norwich, U.K.

    “Interferometry is really based on economics,” says Michael Shao of the Jet Propulsion Lab in Pasadena, California, and planet searchers hope to get the best deal of all. By merging light from several small telescopes aboard spacecraft flying anywhere from a few tens of meters to thousands of kilometers apart, they hope to mimic single telescopes far too large and costly ever to be built. The payoff: a chance to see Earth-like planets around stars tens of light-years away and examine them for signs of life (see main text).

    Bargain-hunting can be hard work, and optical interferometry, especially when it's done in space, comes with some punishing technological demands. One is controlling the optical paths—the distances from the target to the mirrors and on to the detector—to an accuracy of 1/100 of a micrometer, much less than the wavelength of light and roughly the size of a large protein molecule. That precise control has to be maintained while the array of free-flying mirrors rotates, expands, and contracts, forming new configurations in order to gather the data needed to form an image.

    But interferometry is more than a cut-rate way to simulate a giant telescope; it also offers a major advantage over normal scopes. It can obliterate the image of a bright star so that faint, nearby planets can be seen more clearly. This trick, known as starlight nulling, is vital, as the glare of a star could be up to 10 billion times as bright as surrounding planets. It works because when light from an interferometer's mirrors is combined, the light waves reinforce each other—peak combines with peak and trough with trough. But if a half-wavelength shift is introduced into the light from just one of the mirrors, then when the waves combine, peaks will be superimposed on troughs and will cancel each other out, wiping out the brightness of the star. Any object off center in the image, such as a planet, is not completely canceled and still gives a relatively bright signal.
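The cancel-on-axis, survive-off-axis behavior can be sketched for a two-element nuller: with a half-wave (π) shift in one arm, the combined intensity for a source at angle θ goes as sin²(π·B·θ/λ), which is exactly zero on axis and nonzero off axis. The baseline, wavelength, and planet offset below are assumed for illustration only:

```python
import math

def nulled_intensity(theta_rad, baseline_m, wavelength_m):
    """Two-element nulling interferometer response (unit peak intensity).

    The pi phase shift in one arm turns the usual cos^2 fringe pattern
    into sin^2(pi * B * theta / lambda): dark on axis, bright off axis.
    """
    return math.sin(math.pi * baseline_m * theta_rad / wavelength_m) ** 2

B = 50.0     # m, assumed baseline for a space nuller
LAM = 10e-6  # m, assumed mid-infrared wavelength

star = nulled_intensity(0.0, B, LAM)     # the on-axis star
planet = nulled_intensity(5e-7, B, LAM)  # source ~0.1 arcsec off axis

print(star)           # 0.0 -- the star's light cancels completely
print(planet > 0.01)  # True -- the planet's light survives the null
```

In practice the null is only as deep as the path-length control allows, which is why the 1/100-micrometer tolerance mentioned above matters so much.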

    The first practical demonstration of starlight nulling was recently carried out by a team led by Roger Angel of the Steward Observatory in Tucson, Arizona, and was reported in the 17 September issue of Nature. Using a pair of mirrors of the Multiple Mirror Telescope on Mount Hopkins, Arizona, Angel and his team were able to cancel out the image of the star of their choice. One image, of the star α Orionis, shows a dust nebula surrounding the star. The nebula—like a planet—would normally be lost in the glare. Starlight nulling “is the only way we can directly detect planets in the foreseeable future,” says Malcolm Fridlund of the European Space Agency.


    Fly Development Genes Lead to Immune Find

    1. Gretchen Vogel

    Guided by fruit fly genetics, scientists are finding that the human innate immune system may be more specialized than they had thought

    Like a lowly foot soldier toiling in the shadow of better equipped and better trained cavalry units, the innate immune system, the body's first line of defense against invading pathogens, has long been eclipsed by its partner, the adaptive immune system. In part, this relative lack of interest can be traced to immunologists' view of innate immunity as a sort of brute-force system that unleashes blunt, nonspecific weapons at any and every invader, keeping the foe at bay until the adaptive system with its highly specific weapons—antibodies and T cells—can take over. But now, aided by results from an unlikely source—the developmental control genes of the fruit fly—researchers are developing a new and more intriguing picture of the innate immune system.

    Over the past few years, researchers have found that a family of proteins related to the Toll protein of fruit flies, which was first identified as a developmental protein, plays a key role in triggering innate defenses against bacterial and fungal invaders—not only in flies, but in organisms as divergent as tobacco plants and humans. Scientists are still sorting out the roles of the newly discovered proteins, but a few trends are emerging. There is strong evidence that the innate system not only provides a first line of defense, but also alerts the more specialized adaptive immune system to the presence of a dangerous microbe. And there are tantalizing clues that the innate system itself, instead of mounting a single generalized response as long thought, may have specific pathways to target particular pathogens.

    Drug companies are especially interested in these findings, as they could eventually help scientists design more effective and safer vaccines and provide better treatments for chronic inflammatory and autoimmune diseases and severe microbial infections. Indeed, more than half a dozen pharmaceutical and biotechnology firms, plus dozens of academic labs, are doing research on the Toll pathway, and results are coming thick and fast. Five human Toll-like proteins have been published to date, molecular biologist Fernando Bazan of DNAX in Palo Alto has described at least five more at meetings, and there are likely more waiting in the wings. The field “is going to be dynamite” for the next few years, says molecular biologist Paul Godowski at Genentech Inc. in South San Francisco.

    Thirteen years ago, when the first toll gene was identified, no one would have anticipated that the proteins would be playing such a “dynamite” role in immunology. The gene was discovered in a screen for mutations that interrupt the early stages of embryonic development in the fruit fly Drosophila melanogaster; the toll mutation disrupts proper formation of the insect's back and belly. Developmental biologist Kathryn Anderson of the Sloan Kettering Institute in New York and her colleagues eventually showed that the toll gene makes a receptor protein that picks up developmental signals at the cell membrane and sends them to the nucleus. But as Anderson and other developmental biologists began identifying the various intracellular molecules that relay those signals, they found their work taking an unexpected direction—merging with work on the innate immune system.

    In the late 1980s, researchers studying inflammation—one of the main weapons of innate immunity—began uncovering signaling pathways that converge on a protein called NF-κB, which turns on genes that make proteins that trigger inflammatory responses in other cells. They found that one of the cell surface receptors that passes its signal to NF-κB is the receptor for the protein interleukin-1 (IL-1), which among other things helps to induce fever.

    As it happened, the developmental biologists found that one of the proteins they had identified in their signaling path, which goes by the name Dorsal, is structurally similar to NF-κB. Soon they also realized that the upstream proteins Toll and the IL-1 receptor had similarities, too. A few years later, researchers linked the Toll pathway to the immune system of insects when they found that Dorsal and a related protein, Dif, travel to the nucleus in response to infection. Even more remarkably, plant scientists found that proteins resembling Toll help plants fend off attacks from bacteria and fungi. And last summer, immunologist Charles Janeway of Yale University and his colleagues made the first link to human immunity. They identified the first human Toll protein, now called Toll-like receptor-4 (TLR4), and showed that it activates NF-κB, an indication that it may play a role in the innate immune system.

    Indeed, just last week researchers reported the first direct evidence for such a role. Godowski, Austin Gurney, and their colleagues at Genentech reported in Nature that one of the human Toll-like receptors helps to alert immune cells to the presence of lipopolysaccharide, a component of the cell walls of certain bacteria, including Escherichia coli and Salmonella. Although scientists knew that lipopolysaccharide triggers an innate immune response that involves NF-κB, exactly how the cell detects its presence was a mystery. Toll had already become a leading suspect in the lipopolysaccharide reaction, however, because previous work in flies had shown that Toll activates Dorsal—the NF-κB relative—in response to fungal invasion.

    To test if any of the human Toll-related receptors really do respond to lipopolysaccharide, the Genentech scientists exposed immune cells to purified lipopolysaccharide. In response, the RNA instructions for making Toll-like receptor-2 (TLR2) increased, indicating that more of the receptor was being made. What's more, the researchers found that the receptor could respond to lipopolysaccharide by increasing NF-κB activity. Molecular biologists Mike Rothe and Carsten Kirschning of Tularik, a biotech firm in South San Francisco, have reported very similar results at several meetings.

    Other experiments, chiefly in the fruit fly, suggest that there are specialized Toll receptors that respond to different pathogens. For example, Toll is known to trigger the production of an antifungal peptide, drosomycin, in the fly. But molecular biologist Jules Hoffmann, of the Institute for Cellular and Molecular Biology in Strasbourg, France, has reported that Drosophila larvae lacking Toll could still turn on an antibacterial peptide called diptericin. Hoffmann and his colleagues suspect that a parallel path—perhaps through one of the four Toll-related proteins reported so far in flies—responds to bacteria, while the original Toll responds to fungal infections. Their experiments, along with Anderson's recent unpublished work, show that different proteins in the Toll pathway respond to fungal and bacterial infections.

    The human innate immune system may have similar specificity. Godowski found that TLR2 has varying sensitivity to lipopolysaccharide molecules from different kinds of bacteria—two strains of E. coli and one of Salmonella. Although that may be due to different preparation techniques, Godowski says it “raises an intriguing possibility that different Toll receptors may have the ability to recognize different pathogens.” What's more, in unpublished work Rothe and his colleagues have tested several other members of the human TLR family, and only TLR2 responded to lipopolysaccharide, Rothe says. Says Godowski: “There's going to be an incredible amount of interesting work that will come out of looking at these Toll receptors individually and perhaps in combination.”

    Because each of the handful of Toll proteins with known functions seems so far to respond to a single kind of pathogen, Anderson speculates that both humans and flies may have specific protein pathways for different invaders, although this isn't proven yet. Scientists hope that with a better understanding of how specific pathogens trigger the immune system, they might be able to selectively shut down certain proteins to treat inflammatory diseases.

    Although researchers are still teasing out all the action of the various Toll proteins, they do know that Toll provides a link between the adaptive and innate immune systems. Naïve T cells—members of the adaptive immune system that have not yet been exposed to antigens—need two signals to become active. The first comes from the binding of an unfamiliar protein, or antigen, and the second comes from a protein called B7.1 and its relatives. And Janeway's work now links B7.1 to the innate immune system via the Toll pathway.

    Janeway reported last summer that the active form of TLR4 increases production of B7.1. Immunologist Douglas Fearon of the Wellcome Trust in Cambridge, U.K., says that B7.1 may be a sort of red alert, released if the innate immune system, through its Toll-like receptors, has recognized an infectious invader.

    While researchers pursue Toll proteins in hopes of medical applications, they are also thinking about what these new findings are telling us about evolution. Because the Toll immune proteins are similar across plants, flies, and mammals, most scientists think that the defense system arose before the divergence of plants and animals—perhaps at the dawn of multicellular life. Only later were the immune proteins co-opted by developmental systems. “You can't do anything as luxurious as making all sorts of fancy body parts without an immune response,” says molecular biologist Michael Levine of the University of California, Berkeley.

    It may be that only flies have used Toll in developmental roles, however—to date, there is very little evidence that Toll relatives are important for mammal or plant development. “In all of our experiments doing knockouts in mice, we've never seen a developmental phenotype,” says David Baltimore of the California Institute of Technology in Pasadena, who helped characterize the NF-κB pathway.

    Meanwhile, even as researchers continue to probe the functions of the Toll proteins, more of them continue to be uncovered. “We're not sure where this family ends,” says molecular biologist Michael Karin of the University of California, San Diego. “It's a very exciting field.”


    The Biocomplex World of Rita Colwell

    1. Jeffrey Mervis

    In a discussion with Science, NSF's new director, microbiologist Rita Colwell, outlines her views on topics ranging from environmental research to computer science and educating students as well as the public

    Biocomplexity. The word evokes images of the incredibly rich variety of the living planet, and it hints at the formidable scientific challenge of trying to understand such an intricate system. That twin message appeals to Rita Colwell, the new director of the National Science Foundation (NSF), who plans to make the study of biocomplexity a major new thrust for the $3.5 billion agency.

    Last month Colwell, a microbiologist from the University of Maryland, was sworn in as the first woman to head the 48-year-old foundation, the government's flagship agency for the support of nonbiomedical academic research. She's no stranger to NSF, having served on its presidentially appointed oversight board under Reagan appointee Erich Bloch and as a longtime reviewer and grantee (Science, 13 March, p. 1622). “It's sort of like becoming president of your alma mater,” she says. Her “campus” consists of a 5-year-old, 12-story building in Arlington, Virginia, that houses NSF's administrative staff of 1250, and the job comes with a 6-year term, sub-Cabinet rank, and an annual salary of $136,700. Earlier this month, Colwell, 63, met for 90 minutes with a group of reporters and editors at Science to discuss her plans for the agency.

    Colwell takes the reins from physicist Neal Lane, now the president's science adviser and head of the Office of Science and Technology Policy (OSTP). Lane managed to win steady annual budget increases for both core programs and new facilities during nearly 5 years as director by emphasizing the agency's role as the government's major funder of basic academic research. With a deft wordsmith's touch, Lane successfully deflected congressional pressure to fund more applied research by redefining NSF's mission as “conducting basic research for strategic purposes” (Science, 20 December 1996, p. 2000). Similarly, by defining biocomplexity as “the interaction of biological, chemical, social, and economic systems,” Colwell hopes to avoid the “baggage” attached to such related terms as biodiversity, which some conservative politicians see as a refuge for tree-hugging environmentalists, and sustainability, which some industrial leaders say smacks of bureaucratic meddling in the free market. She sees biocomplexity as a unifying theme for several NSF initiatives, some ongoing and others still in the planning stage (see the related News of the Week story, p. 1935).

    Lane also championed the importance of undergraduate teaching and the need to explain to the public how long-term investment in research translates into a stronger economy, two issues that Colwell says are high on her agenda, too. “Seventy-five percent of the public is in favor of basic research,” she says, citing a recent study. But “they don't understand it.” And compared to what many companies spend on R&D to improve their products, she says the federal investment in science “is marginal.”

    Colwell's interest in science education also extends to the nation's elementary and secondary schools. And she believes that scientists must play a major role. “No group bears more responsibility for improving K-12 math and science education than the scientific community itself,” she told a recent gathering of science writers. Using a variation on Clinton's 1992 campaign slogan to play on the current preoccupation with the millennium computer bug, Colwell punned, “It's not Y2K, stupid, it's K-12.”

    Information technology is another priority area, says Colwell, and her vision of the field extends far beyond computer science. “IT pervades the country's business,” she says. “For instance, I think that the behavioral and social sciences are one area that will benefit tremendously from this [increased speed and capacity]. … You don't just put machines together. You need to utilize them and make possible the sort of innovative thinking that takes you to the next stage.”

    In her meeting with Science, Colwell spoke passionately on topics ranging from an overemphasis on the Ph.D. as the gateway to a scientific career, through the need to help research-poor states, to greater collaboration with the National Institutes of Health (NIH), whose budget is four times larger than NSF's. Although she has said that the president's current predicament is not likely to affect support for R&D, she served notice on her political bosses that she won't sit by idly if a lame-duck Administration starts to ignore science. “I haven't any quarrel with the overall direction [of current R&D initiatives],” she noted. “There are ways that I want to shape it, however, and things I'd like to do to move more rapidly in certain directions.”

    What follows is an edited transcript of her remarks.

    On a boost for environmental research:

    I think that the timing is right to build on the LEE [Life and Earth Environments] initiative with what I term biocomplexity. It's an attempt at understanding all the interrelationships between cells and organisms and between an organism and its environment. … We're taking all we know and utilizing it to build the type of models that we thought about 25 years ago that turned out to be so riddled with black boxes that we couldn't get the simulation we needed. But now, with the vastly increased power of computing and data mining, we can infuse a very strong science underpinning into environmental studies and make some dramatic gains in knowledge. …

    What I like about a biocomplexity initiative is that it doesn't carry a lot of baggage. It's focused on science—bringing science into environmental studies. Right now, you get very narrow definitions of some terms. Biocomplexity is more than biodiversity. Biodiversity is part of it, especially with the genomic sequencing, which gives us a chance to learn a lot more about diversity without having to culture cells. But it's more than that, and more than just conservation or sustainability, although they are also important. …

    On informal science education:

    If you provide interesting exhibits, people will respond with their feet. That's been proven with aquaria, some of which are so popular that the museums have to replace the hand railings two or three times during the year. … [But measuring their impact] is an area that requires more behavioral research. It comes back to complexity, which is a hot subject these days. It's not so easy to assess the effectiveness of an educational program, and that's one reason why I would like to look more at research on learning. We've spent a lot of time focused on teaching, and yet we don't really know how people learn, how effectively people can have their learning enhanced, and the differences in how people learn.

    On a recent report of a Ph.D. glut in the life sciences:

    In the 1980s, NSF asked investigators to put graduate students on their research budgets, saying that it preferred to fund graduate students rather than technicians. But as often happens, the pendulum swung too far. There needs to be a balance. Well-trained technicians are needed to run equipment and labs, and what better way to provide opportunities for them than to build it into research grants. …

    There's great respectability for those who want to be technicians, but we don't give them the opportunity. We've made it a sign of failure if you don't get a Ph.D. Other countries don't have this [negative] attitude. … With all the emphasis on graduate students, I'm not so sure that the question we're facing is one of overproduction. I think we have to look more closely at how we're using our resources. We've created a situation in the career pipeline where there is a bulge at the end of a postdoc and no place to go. …

    On congressional earmarks to meet the research needs of “have-not” states:

    The key is to link the local, state, and federal resources to build a collaboration. This kind of partnership is very important. We did a survey at [Maryland] and found there was a need for billions of dollars of research infrastructure. As a result, the governors of Maryland have made a serious commitment to solve this problem, and now the university is getting $90 million a year in state construction funds. But some states don't have a big enough tax base to draw upon, and they still need those facilities and research infrastructure.

    There needs to be some way to provide a partnership—not a giveaway program, but a partnership—between city, state, and the federal government to make possible facilities that strengthen the science and engineering base that leads to economic growth. I didn't say an earmark. But I would like to see a program. Right now there isn't anywhere to go to get that done. It's very frustrating.

    On the role of information technology:

    I feel very strongly that computer science and research and the next-stage Internet and computation and the capacity to go 1000-fold greater in speed and volume of data handling are critical for the country's future. … We need to seize the opportunities open to NSF. There's strong interest at the Department of Energy for the weapons labs, and there's a clear need at NASA if they are going to go to Mars and continue to monitor climate change. All that takes research, and that's what NSF does best. I've met with [DOE Undersecretary] Ernie Moniz and [NASA Administrator] Dan Goldin and had long conversations with OSTP officials. The other agencies recognize that research is needed, and that's where NSF comes in. What they want to do will require the type of advanced research that NSF funds. I see it as a really good working partnership. …

    On why NIH receives larger funding increases than other science agencies:

    What it tells me is that we're not making the case [for the value of research] as strongly as we need to. It's relatively easy to make the connection between the involvement of NIH and a cure for a disease, or for sequencing a gene that explains Alzheimer's. What's not so easy to make is the case that it's the fundamental research that allows that gene to be sequenced and that will provide a 30% return on investment for the economy, and that there's a relationship between our standard of living and our investment in science and technology. We don't make that case very well, and I'm open to ideas on how to do it better.


    Fractals Reemerge in the New Math of the Internet

    1. Gary Taubes

    Traffic on the Internet has unpredictable bursts of activity over many time scales. This fractal behavior has implications for network engineering

    Even a casual user of the Internet knows it is nothing like the phone system. Punch a number on your telephone, and the call will nearly always go through promptly, but click on a Web link or call up your e-mail for that urgent message, and you could be in for a long wait. This phenomenon reflects what may be the most fundamental difference between telephone service and the Internet: “It is no longer people doing the talking,” says mathematician Rob Calderbank of AT&T Labs Research in Florham Park, New Jersey. Instead, it is computers talking to computers. As a result, he says, “the statistics of calling have completely changed.”

    Since the turn of the century, the telephone system has been built on the assumption that calls arrive at any link in the network in what's known as Poisson fashion: The likelihood of a call arriving at any given moment is independent of earlier calls, and call lengths vary only modestly. As a result, call volume fluctuates minute by minute, but over longer time scales the fluctuations smooth out. In contrast, AT&T mathematician Walter Willinger and his collaborators have shown that the machine chatter over the Internet is fractal. It has a wild, “bursty” quality that is similar at all time scales and can play havoc, Willinger says, with conventional traffic engineering.
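
    The smoothing that the Poisson assumption predicts is easy to see in a small simulation; the one-call-per-second rate and the window sizes below are arbitrary illustrative choices, not values from the studies discussed:

```python
import random

random.seed(0)

# Simulate a Poisson arrival process: exponential interarrival times,
# with an (arbitrary) average rate of one call per second.
rate = 1.0
t, arrivals = 0.0, []
for _ in range(200_000):
    t += random.expovariate(rate)
    arrivals.append(t)

def coeff_of_variation(window):
    """Relative fluctuation (std / mean) of arrival counts per window."""
    n_windows = int(arrivals[-1] // window)
    counts = [0] * n_windows
    for a in arrivals:
        i = int(a // window)
        if i < n_windows:
            counts[i] += 1
    mean = sum(counts) / n_windows
    var = sum((c - mean) ** 2 for c in counts) / n_windows
    return var ** 0.5 / mean

# For Poisson traffic the relative fluctuation shrinks roughly as
# 1 / sqrt(window): bursts average out over longer time scales.
cvs = [coeff_of_variation(w) for w in (1, 10, 100)]
print(cvs)
```

    Self-similar traffic, by contrast, would show roughly the same relative fluctuation at every window size, which is exactly what Willinger's group saw in the packet data.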

    When he first published that claim in 1993, other researchers regarded the idea as either wrong or meaningless. Willinger wasn't surprised by the reaction. Many mathematicians view the application of fractals to physical and social phenomena with some skepticism—after all, fractals have come and gone as fads in everything from hydrology and economics to biophysics. But as Willinger and network researcher Vern Paxson of the Lawrence Berkeley National Laboratory in California write in the September issue of the Notices of the American Mathematical Society, the fractal nature of local network traffic is now well established, and new studies have extended his analysis to the Internet as a whole. Indeed, the fractal mathematics of networks has become a fact of life for engineers designing everything from routers to switches. “Willinger's work has certainly changed the way we think about network traffic,” says network researcher Scott Shenker at Xerox Palo Alto Research Center in California.

    The existing paradigm for what is known in the business as POTS, or Plain Old Telephone Service, dates back to the turn of the century and the work of a Danish mathematician named Agner Erlang, who derived a formula expressing the fraction of calls that have to wait because all lines are in use. “What he found empirically by going up to this little village telephone exchange and taking measurements with a stopwatch,” says Calderbank, were the hallmarks of Poisson behavior. Call arrival times were random and independent of each other, and call durations clustered around an average value. Call frequency fell off rapidly at much longer durations.
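
    Erlang's result survives in teletraffic textbooks as the Erlang B formula; a minimal sketch using its standard recurrence follows (the traffic loads are made-up examples, not Erlang's measurements):

```python
def erlang_b(erlangs, lines):
    """Erlang B formula: the probability that a call arriving at an
    exchange with `lines` circuits and an offered load of `erlangs`
    (arrival rate x mean call duration) finds every line busy.
    Computed with the standard numerically stable recurrence."""
    b = 1.0  # blocking probability with zero lines
    for k in range(1, lines + 1):
        b = erlangs * b / (k + erlangs * b)
    return b

# A made-up village exchange: 2 erlangs of offered load on 2 lines
# blocks 40% of calls; a few extra lines drive blocking down sharply.
print(erlang_b(2.0, 2))   # 0.4
print(erlang_b(2.0, 6))   # ~0.012
```

    The formula's key assumptions, Poisson arrivals and modestly varying call lengths, are precisely the ones the packet-network measurements described below would overturn.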

    The nature of communication—and its statistics—changed dramatically in the 1970s with the coming of what are called packet networks, beginning with Arpanet and ethernets and progressing to today's Internet. Not only did computers start doing most of the communicating, but the method of sending a message through the networks also changed. Whereas telephone networks hold open a continuous line for each call, packet networks break up a message into distinct information packets—maybe several hundred packets for a relatively small message—and send each one separately to its destination.

    Despite these changes, says Paxson, researchers had such confidence in the Poisson paradigm that it continued to dominate their thinking. “People were writing papers,” says Paxson, “and they would bend over backward to try to fit what they were seeing into the Poisson modeling framework, because it was so compelling.”

    In the late 1980s, however, Will Leland and Dan Wilson at Bell Communications Research (Bellcore) in Morristown, New Jersey, put together a hardware system that could, for the first time, accurately monitor and record the traffic flow on packet networks, much as Erlang had done for his local telephone exchange. Willinger, who was at Bellcore at the time and whose background was in probability theory, analyzed the resulting data in collaboration with Murad Taqqu, a Boston University mathematician. What they saw looked nothing like Poisson behavior.

    “With some relatively very simple ways of visualizing and plotting this stuff,” he says, “we could immediately see the very nice nature of the traffic: When you looked at the [variation in the] number of packets per millisecond or per second or per minute, it always looked the same.” Such self-similarity is a fundamental characteristic of a fractal process, as is the bursty behavior Willinger and Taqqu also observed. “You can see areas where the traffic behaves quite nicely,” Willinger says, “and then periods where it's extremely variable and goes up and down like crazy.”

    Willinger and his collaborators published these findings 5 years ago to mixed reviews. Paxson, for instance, says he was “deeply skeptical” when he first read the paper, then tried to disprove it and couldn't. Now he describes himself as a “missionary zealot.” To convince the rest of the community, says Willinger, he had to explain the fractal behavior, not just describe it. He did so with Taqqu, Wilson, and Robert Sherman, another Bellcore researcher, in a paper published in 1995.

    To get at the root of the fractal behavior, the researchers looked at traffic between source-destination pairs in a network. They found that characteristics of the traffic—the duration of busy periods, for instance, or the size of the transmitted files—had what's known as a heavy-tail distribution. Whereas telephone call durations, in a Poisson distribution, are tightly clustered around a mean value, heavy-tail distributions include large numbers of values that are arbitrarily far from the mean.

    Telephone calls, for example, might have a mean duration of a few minutes, never lasting less than a few seconds and rarely extending beyond 15 minutes—the classic three standard deviations that encompass 99% of the distribution. But machines communicating on a network don't have the same habits as humans on a telephone. The researchers found, says Willinger, that “the busy or idle periods could last from milliseconds to seconds to minutes and even longer.” The actual size of the documents being sent also varied by as much as six or seven orders of magnitude. When traffic sources have this heavy-tail behavior, says Paxson, “there are theorems that say that you're going to get fractal correlations in your traffic.”
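
    The difference is easy to quantify with two textbook distributions of equal mean: an exponential (the light-tailed, Poisson-style model of call durations) and a Pareto (a standard heavy-tailed model). The 3-minute mean and the shape parameter below are illustrative choices, not figures from the measurements:

```python
from math import exp

# Compare P(duration > t) for two distributions with the same mean of
# 3 minutes: an exponential (light tail) and a Pareto with shape
# alpha = 1.5 (heavy tail; Pareto mean = alpha * x_min / (alpha - 1)).
MEAN = 3.0           # minutes
ALPHA = 1.5          # Pareto shape (illustrative choice)
X_MIN = MEAN * (ALPHA - 1) / ALPHA   # = 1.0, chosen to match the mean

def exp_tail(t):
    return exp(-t / MEAN)

def pareto_tail(t):
    return (X_MIN / t) ** ALPHA if t > X_MIN else 1.0

for t in (15, 100, 1000):
    print(t, exp_tail(t), pareto_tail(t))
# At t = 100 minutes the exponential tail is about 3e-15 (effectively
# never), while the Pareto tail is 1e-3: one duration in a thousand.
```

    Values tens or hundreds of times the mean, vanishingly rare in the exponential model, occur routinely in the heavy-tailed one, which is why aggregating such sources produces fractal correlations.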

    In their latest work, presented this month in Vancouver at a meeting of ACM SIGCOMM, a networking association, Willinger and his AT&T collaborators Anja Feldmann and Anna Gilbert found that the structure of packet networks themselves also contributes to the bursty nature of the traffic, at time scales of less than a few hundred milliseconds. At least one reason for the behavior, says Willinger, is the way the dominant network protocols break up each electronic message into hundreds or thousands of packets before sending them over the network.

    As Willinger and Paxson describe in their September article, they and others have now documented fractal behavior for traffic on the Internet as well as on smaller networks. “Everyone buys its existence now,” says Shenker. “The only debate is over how much it affects design issues.” Willinger and others cite one instance in which it already has: in the design of buffers for Internet routers, which store packets during busy periods until they can be sent onward to their destination. “If you look at first-generation Internet switches,” Willinger says, “buffer sizes were very small, maybe big enough for a couple hundred packets. Now, they're two or three orders of magnitude larger, because engineers realized very quickly that with the fractal nature of traffic, buffers have to accommodate much more variable traffic than was assumed in a Poisson world.”
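
    The buffer-sizing consequence can be sketched with a back-of-the-envelope calculation: for a given tolerable overflow probability, compare the burst size a buffer must absorb under a light-tailed (exponential) model against a heavy-tailed (Pareto) model with the same mean. The 100-packet mean and the shape parameter are illustrative assumptions, not figures from actual router designs:

```python
from math import log

MEAN = 100.0    # mean burst size in packets (illustrative)
ALPHA = 1.2     # Pareto shape; values near 1 give a very heavy tail

def exp_quantile(p):
    """Burst size exceeded with probability 1 - p, exponential model."""
    return -MEAN * log(1.0 - p)

def pareto_quantile(p):
    """The same quantile for a Pareto with the same mean."""
    x_min = MEAN * (ALPHA - 1) / ALPHA
    return x_min * (1.0 - p) ** (-1.0 / ALPHA)

# Buffer sized to absorb all but 0.01% of bursts:
p = 0.9999
print(exp_quantile(p))     # ~921 packets
print(pareto_quantile(p))  # ~35,900 packets, a factor of ~40 more
```

    Under these toy assumptions the heavy-tailed model already demands a buffer more than an order of magnitude deeper, consistent in spirit with the two-to-three-orders-of-magnitude growth Willinger describes.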

    Other AT&T researchers are monitoring network traffic to extend Willinger's work, says Calderbank. “Today we're really where Erlang was around the turn of the century,” he says. “Like Erlang, we are trying to understand the fundamental nature of data traffic by taking measurements. If we could understand the mechanisms at work, then we could do the engineering so that applications would run better.”


    Among Global Thermometers, Warming Still Wins Out

    1. Richard A. Kerr

    Recent analyses show that the gap between the satellite temperature record and that of thermometers at the surface is more apparent than real

    Summer heat waves, together with forecasts of greenhouse warming, have convinced much of the public that the world as a whole has warmed in recent years. And temperatures recorded by thermometers at the surface show warming of about a half-degree during this century and a couple tenths of a degree during the past 2 decades. But there has been a nagging doubt: The 20-year-long temperature record compiled by satellites looking down into the atmosphere—by far the most complete, global temperature record ever made—has given the opposite answer, showing a slight cooling. Although most climate researchers rely on the longer surface temperature record, a few contrarians have seized upon the satellite data as evidence that the threat of greenhouse warming has been overblown.

    The slight surface warming would not prove that greenhouse gases from human activity, rather than natural climate variations, are responsible. Nor would a slight cooling rule out a future greenhouse warming. But the apparent cooling has offered greenhouse skeptics a powerful public relations tool that has been applied from congressional hearings to Reader's Digest. If there is no warming in the satellite data—“our only truly global record of lower atmosphere temperature,” as greenhouse skeptic Patrick Michaels of the University of Virginia, Charlottesville, puts it—then the surface data must be flawed and the threat of greenhouse warming much exaggerated.

    Now, however, in the wake of new analyses of the satellite data, most researchers are more convinced than ever that the satellite cooling trend is not the show-stopper contrarians make it out to be. After considering the effects of El Niños and volcanoes and correcting for decay of the satellites' orbit, researchers are seeing not a cooling but a small warming. The error bars in these new analyses are larger than before, but the trend is close to that in surface records. Although the contrarians still aren't budging, leading satellite analyst John Christy of the University of Alabama, Huntsville—who has been reporting a cooling trend for a decade—agrees that the satellite data are compatible with a slight warming trend. The discrepancy between satellite temperatures and model predictions of moderate greenhouse warming “isn't that large,” says Christy.

    The satellites in question weren't designed to monitor global warming. Launched starting in 1979 for the National Oceanic and Atmospheric Administration (NOAA), they fly from pole to pole at altitudes of about 850 kilometers carrying instruments called Microwave Sounding Units (MSUs). These pick up the microwave glow of the atmosphere at a frequency of about 60 gigahertz, produced by oxygen molecules at an intensity that is proportional to their temperature. Analysts could fill in data-sparse areas in weather forecast models by inferring daily atmospheric temperatures from the MSU data.

    In the 1980s, Christy and Roy Spencer of NASA's Marshall Space Flight Center in Huntsville realized that the global coverage of these data throughout the atmosphere would make them a gold mine for global change studies. They took the daily readings from the series of MSU-bearing satellites—now numbering nine—and spliced them into one long record of atmospheric temperature. By their reckoning of a few years ago, the lower part of the troposphere, centered at an altitude of 3.5 kilometers, had actually cooled at a rate of 0.05°C per decade between 1979 and 1995. That was a far cry from the warming of 0.13°C per decade recorded on the surface during the same period. And greenhouse computer models of that time called for an even larger warming of 0.25°C per decade. So the contrarian complaints began. Michaels began printing a monthly comparison of the cooling satellite data and the warming computer model predictions in his newsletter, World Climate Report.

    Other meteorologists countered that the surface and satellite measurements wouldn't be expected to give identical values. The two observing systems “are not measuring the same quantity,” says meteorologist James Hurrell of the National Center for Atmospheric Research in Boulder, Colorado. “Even if you assume both records are perfect, you're going to get different trends over 20 years.” Climatic events such as El Niño's periodic changes in atmospheric circulation can have different effects on surface temperature than on the temperature several kilometers above the surface, he says.

    In any case, the satellite numbers are now looking more like those measured at the surface. New analyses of the errors incurred in splicing together the separate satellite records are driving some of the convergence. For example, in June's Geophysical Research Letters, remote-sensing specialist C. Prabhakara of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and his colleagues published their own analysis of the satellite data. Prabhakara notes that when his group took full account of the error involved in splicing the record together, they found a distinct warming of 0.12°C per decade. They also estimated an error of ±0.06°C per decade, twice as large as previously assumed.

    Then in August, remote-sensing specialists Frank Wentz and Matthias Schabel of Remote Sensing Systems in Santa Rosa, California, published another revised estimate in Nature. As atmospheric drag pulls a satellite into a slow descent of about 1 kilometer per year, they reported, some MSU readings that are measured at an angle from the satellite are taken at higher—and therefore colder—altitudes, thus reducing the measured temperature (Science, 14 August, p. 930). This orbital decay requires a correction of +0.12°C per decade, say Wentz and Schabel, bringing the 1979 to 1995 trend to +0.07°C per decade.
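The bookkeeping behind that revised trend is simple addition; a quick sketch using the figures quoted above:

```python
# Trend figures quoted in the text (deg C per decade, 1979-1995).
uncorrected_trend = -0.05         # Spencer and Christy's earlier cooling estimate
orbital_decay_correction = +0.12  # Wentz and Schabel's correction for satellite descent

corrected_trend = uncorrected_trend + orbital_decay_correction
print(f"Corrected MSU trend: {corrected_trend:+.2f} deg C per decade")  # +0.07
```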

    Once this problem was pointed out, Christy and Spencer immediately accepted the need for a correction. Christy says that when they applied the orbital-decay correction and added other corrections to account for such things as changes in the spacecraft's orientation, which affects how much the sun heats the MSUs, they still got a negligible cooling trend of 0.01°C per decade for 1979 through 1997. However, the team also widened their error bars from 0.03°C to 0.06°C per decade, in line with Prabhakara's estimate.

    Christy now also has made an additional set of corrections to try to compensate for a basic problem of the satellite record: its shortness. “Twenty years is a very brief climate period,” he says. And as climatologist James Angell of NOAA in Silver Spring, Maryland, points out, trends are “very sensitive to the length of record.”

    Angell provided a case in point recently when he reanalyzed another temperature record, this one derived from sensors on weather balloons. He found that the trend from 1979 to 1996 is −0.02°C per decade, much like the satellite trend. But when he extended the analysis back to the beginning of the reliable balloon record in 1958, the trend jumped to a warming of 0.16°C per decade. The main reason for the difference, says Angell, is that the satellite record begins too late to include a sharp jump in temperature during the 1970s.
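Angell's point is easy to reproduce with a toy series. The sketch below uses invented numbers, not real data: a flat temperature record with a single 0.3°C upward step in 1977, mimicking the 1970s jump that the satellite record begins too late to capture. An ordinary least-squares trend fitted to the full 1958–1996 record shows warming, while the same fit restricted to 1979 onward shows none.

```python
# Toy illustration (invented numbers): a flat record with one upward
# step of 0.3 deg C in 1977, mimicking the 1970s temperature jump.
years = list(range(1958, 1997))
temps = [0.3 if y >= 1977 else 0.0 for y in years]

def trend_per_decade(xs, ys):
    """Ordinary least-squares slope, converted to deg C per decade."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * 10

# Full record vs. the satellite-era subset starting in 1979.
print(f"1958-1996 trend: {trend_per_decade(years, temps):+.2f} deg C/decade")
print(f"1979-1996 trend: {trend_per_decade(years[21:], temps[21:]):+.2f} deg C/decade")
```

The step falls entirely before the shorter window, so the 1979-onward fit sees a constant series and reports essentially zero trend.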

    Even climate shifts that fall within a short record can skew the apparent trend. For example, if a temporary global warming, such as the one induced by the warm tropical Pacific during the El Niño of 1982–83, happens to fall near the beginning of a short record, any long-term warming trend will be muted or even reversed. The same would happen if a brief cooling, such as that produced by the 1991 eruption of Mount Pinatubo, falls near the end of the record. To help compensate, Christy attempts to remove the effects of El Niños and major volcanic eruptions in the MSU record. After this adjustment, the underlying trend through July 1998 shows a slight warming—between 0.03°C and 0.10°C ± 0.06°C per decade, according to Christy's latest calculation. That overlaps with the observed surface warming and is compatible with a real, albeit modest, global warming.

    Despite these results, the contrarians aren't yet giving up. Michaels, for one, has answers for all the new corrections. He points out that after Spencer and Christy's orbital-decay correction, “you still don't see any warming.” Nor is he bothered by the satellite record's shortness. The 1970s temperature jump it missed, he says, “is the only warming in the last half-century; if that's global warming, we don't understand it at all.”

    Most researchers, however, now see no major mismatch between satellite and surface temperatures. None of the temperature records—satellite, balloon, or surface—is ideal, notes climatologist Dian Gaffen of NOAA in Silver Spring. Given the available records, Gaffen and many other climatologists choose the longer ones. The 130-year surface record's 0.5°C warming of the past century (±0.2°C) is “virtually certain,” says climate researcher Jerry D. Mahlman of the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. In recent decades, the surface record has tracked the modest 0.1°C per decade greenhouse warming now predicted by climate models that include the cooling shade of pollutant hazes. Such results don't prove that the strengthening greenhouse is behind the warming, of course—but neither can they be used to support the notion that the greenhouse threat is a fraud.


    Putting Theory to the Test

    1. Bernice Wuethrich*
    1. Bernice Wuethrich is an exhibit writer at the National Museum of Natural History in Washington, D.C.

    After decades of theorizing about the evolutionary advantages of sex, biologists are at last beginning to test their ideas in the real world

    “Let copulation thrive,” exhorted Shakespeare's King Lear, and it has. Today, across the tree of life, sex reigns—many unicellular and just about all multicellular organisms do it. Yet how sex began and why it thrived remain a mystery. After all, asexual organisms were here first, and new asexual species continue to arise, if only to go extinct in fairly short order. Why did sex overtake asexual reproduction some billion or more years ago, and why does it continue to upstage asexuals? What gives sex its edge?

    Biologists have come up with a profusion of theories since first posing these questions a century ago. Most ideas explore some version of the notion that sex is maintained because it enhances the rate of evolution by natural selection, says evolutionary biologist Graham Bell at McGill University in Montreal, but there are dozens of variations on that general idea (see Review on p. 1986). Most of them fall into two camps: that sex brings beneficial mutations together into a single winning combination that can spread through the population, or that sex purges the genome of harmful mutations. But the devil is in the data—or lack thereof. “I emphasize experimental problems, because we have tons of theories, and some are completely crazy,” says Alexey Kondrashov, an evolutionary geneticist at Cornell University.

    Sex is a paradox in part because if nature puts a premium on genetic fidelity, asexual reproduction should come out ahead. It transmits, intact, a single parental genome that is by definition successful. Sexual reproduction, on the other hand, involves extensive makeovers of the genome. The production of gametes requires recombination, in which the two copies of each chromosome pair up and exchange DNA. Fertilization, in which genes from different parents fuse, creates yet more genetic combinations. All this shuffling is more likely to break up combinations of good genes than to create them—yet nature keeps reshuffling the deck.

    This paradox is compounded by the cost of sex—which is primarily the cost of producing a male. Imagine 1 million sexually reproducing snails and a single asexual female mutant. Say that she has two daughters, who (on average) have another two daughters, and so on. Meanwhile, the sexually reproducing females would be diligently producing a female and a male—who would not directly produce any progeny. Soon, the few sexual organisms would be lost in a sea of asexuals and find it all but impossible to locate a mate. All else being equal, the asexual clone would entirely replace its sexual counterparts in only about 52 generations, says evolutionary biologist Curtis Lively of Indiana University in Bloomington. Yet this happens rarely, if ever. Despite the cost, sexual species persist, while most asexuals quickly go extinct.
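The numbers behind Lively's point can be sketched in a few lines. This is an illustrative toy model under stated assumptions, not his actual calculation: the all-female clonal line doubles each generation, while the sexual population holds steady at 1 million because half of each brood are males. The 52-generation figure in the text presumably reflects a fuller model with finite resources; the unconstrained sketch below simply shows how completely a doubling clone swamps a steady sexual population on that timescale.

```python
# Toy model of the cost of producing males (illustrative assumptions):
# the asexual mutant's line doubles every generation, while the sexual
# population stays constant because each pair yields one daughter and
# one son.
sexual = 1_000_000
asexual = 1
for generation in range(52):
    asexual *= 2

fraction_asexual = asexual / (asexual + sexual)
print(f"After 52 generations the clone is {fraction_asexual:.10f} of the population")
```

By generation 20 the clone alone already outnumbers the entire sexual population; thirty generations later the sexual snails are a vanishing fraction, too sparse to find mates.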

    In recent years, evolutionary biologists have begun to find ingenious new ways to test their explanations for the strange success of sex. They're observing populations in the wild and in the lab for evidence that rare packages of good genes really do offer an advantage. They're also counting mutation rates in organisms from water fleas to humans to see whether sex might play a role in eliminating harmful mutations. Although they haven't solved the mystery of sex yet, they are tackling what Kondrashov calls the limiting factor of “extremely lousy experimental data.”

    On the trail of the Red Queen

    One theory put to the test in recent years is the Red Queen hypothesis, a variation of the idea that sex serves to assemble beneficial mutations and so creates a well-adapted lineage in the face of a rapidly changing environment. In the case of the Red Queen, the good mutations are those that allow hosts to resist parasites. Because parasites adapt to the most common host genotype, evolution will favor hosts with rare combinations of resistant genes. Thus the Red Queen predicts that selection will favor the ability to generate diversity and rare genotypes—exactly the abilities conferred by sex and recombination.

    Developed in the 1960s and '70s, the theory has been difficult to test, in part because it's hard to identify and track specific resistance alleles in sexually reproducing organisms, says Lively. Therefore, most of the evidence has been quite indirect, such as a 1987 study by Bell and Austin Burt of Imperial College in Silwood Park, U.K., showing that animals with longer generation times have more recombination. This corroborates the Red Queen, says Bell, because short-lived parasites easily adapt to long-lived animals—which therefore need extra recombination to counter the parasites.

    Since 1985, Lively has been seeking more satisfying proof by scrutinizing teeming populations of a freshwater snail, Potamopyrgus antipodarum, which is found in both sexual and asexual variants in a cluster of 65 mountain lakes in New Zealand. P. antipodarum is plagued by a vicious parasite, Microphallus, a trematode that renders the snail sterile by eating its gonads. Lively found that lakes with few parasites tended to have mostly asexual snail populations, or clones, while lakes with more parasites tended to have mostly sexual snails. “This pattern suggests that parasites prevent the clones from eliminating sexual populations,” he says.

    In the August issue of Evolution, Lively and Indiana colleague Mark Dybdahl report more direct evidence for the Red Queen hypothesis from a 5-year study of the snails. They focused on clonal lineages, using identifiable genetic markers to finger each lineage; because each clone had a particular level of parasite resistance, they could track resistance without identifying specific genes. They couldn't use sexual organisms, because recombination would separate markers and resistance genes. But they reasoned that according to the Red Queen, selection pressure should act on these diverse clones just as it does on individuals in a sexual population, favoring unique genotypes and sparking an evolutionary race between snail and parasite. “We looked for rare advantage and a signature of coevolution,” says Lively.

    Specifically, they sought an oscillation in the frequency of host and parasite genotypes. As the rare host thrives and becomes more common, the parasite evolves to attack it and drive its numbers down. Then a new, resistant genotype surges ahead before the parasite evolves to hold it in check. Both species evolve as fast as they can, but neither gets far ahead, hence the theory's name, after the Red Queen's remark to Alice in Through the Looking-Glass: “It takes all the running you can do, to keep in the same place.”

    Using markers for specific enzymes, the team identified rare clones (less than 5% of a lake's population) and common ones (more than 20%) and exposed both sets to parasite eggs. Three of four common clones were 100% infected, whereas rare clones were 50% infected. They also saw the telltale oscillation. Over 5 years, an overinfected common clone was driven down and replaced by what was initially a rare clone, which itself was driven down and replaced.

    The two findings strongly indicate that parasites can grant an advantage to the rare genotypes efficiently produced by sex, and can introduce oscillatory dynamics, two key predictions of the Red Queen theory, says Lively. “Whether they maintain sex we can't say from this experiment,” he acknowledges. “However, [the results] are consistent with one hypothesis and—more importantly—inconsistent with others.” For example, the theory that sex purges bad mutations can't easily explain the pattern of asexual and sexual snails in the lakes, Lively says.

    Other researchers also find Lively's correlations intriguing but not conclusive. “You can say [Lively's results] are consistent with the parasite hypothesis,” says evolutionary geneticist Brian Charlesworth at the University of Edinburgh in Scotland, “but not that they prove it.”

    Purging bad genes

    Kondrashov is among those skeptical of the Red Queen; he instead favors the notion that sex removes harmful mutations. Removing bad mutations is a far more common challenge for living things than is coping with the fast-changing pressure of an evolving parasite, he says. “I do not believe that everything in the world lives under rapidly changing selection—while everything suffers from bad mutations.”

    Underpinning this theory are two facts of life: Mutations happen, and most are bad. Meanwhile, sex produces loser and winner offspring by re-sorting the mutations. When two parents with different harmful mutations mate, sex will produce some genetic scapegoats with plenty of bad mutations, and some winners with only a few. Selection will stop the losers dead in their tracks, getting rid of several harmful mutations in one fell swoop.

    When a clone reproduces, however, its offspring inherit all of its bad genes and may pick up another through a new mutation. Without sex, mutations continue to accumulate in individuals and in the population. If the mutations interact synergistically, with each new mutation triggering an ever-bigger reduction in fitness (another assumption now under experimental investigation), at some point one more mutation will spell the death of all the clonal individuals. So sexual organisms outdo their asexual competitors because sex brings together, then purges, bad mutations, while the population as a whole is maintained with organisms carrying fewer mutations. And the higher the mutation rate, the greater the advantage of sexual reproduction.

    That prediction opens the way to testing this theory. To overcome the efficiency advantage of asexuals, the rate must be on the order of one harmful mutation per individual genome per generation, says Kondrashov. “Fifty percent of modern evolutionary genetic theory may depend on the deleterious mutation rate,” he says. “If it's high we can explain sex, recombination, diploidy, aging, and sexual selection.”

    Other researchers have embraced the mutational theory of sex, in part because unlike so many others it is testable. “This theory sticks its neck out,” says Laurence Hurst, an evolutionary geneticist at the University of Bath in England. “You measure the parameters and if you don't find them, the theory is wrong.”

    To measure those parameters, researchers raise populations of organisms ranging from water fleas to worms and typically allow only one individual per generation to breed, so that if that individual has picked up a mutation, it won't be eliminated by natural selection. After every 10 generations or so, researchers test the lineages' fitness and translate any fitness decline into the deleterious mutation rate.

    But the experiments are tricky, and problems can crop up if selection isn't adequately limited. Additionally, mutations of very small effect may be undetectable in the lab but important in nature, where the numbers are larger and the time is longer. Counting mutations “isn't counting beans,” says geneticist James Crow of the University of Wisconsin, Madison.

    So far, the results are disconcertingly mixed (see table). “The state of the whole field is very much in doubt right now,” says evolutionary geneticist David Houle at the University of Toronto. Benchmark studies by Terumi Mukai and Crow in the 1970s established a deleterious mutation rate of close to one per generation in the fruit fly Drosophila, just enough to explain sex in Kondrashov's theory. But later reanalyses of that work put the rate considerably lower. Recent worm experiments have yielded rates as low as 0.005, and recent rates in flies have ranged from just about nil to one.

    Now a few scientists are bypassing the difficulties of population genetics experiments and instead simply counting mutations in sequenced stretches of DNA. They compare DNA sequences in noncoding regions in closely related species to derive a genomewide mutation rate. Then they estimate how much of the genome is functional, or subject to selection, and apply the mutation rate to the functional DNA. Beneficial mutations are thought to be so rare that they aren't considered.

    One such analysis, by Michael Nachman at the University of Arizona, Tucson, assumes that 5% of the human genome is subject to selection and concludes that each human infant is born with about six mildly deleterious mutations. If a higher proportion of the genome is functional—as some scientists suspect—then the rate would be even higher. Either way, it supports the mutational hypothesis for the maintenance of sex. But researchers agree that more work, in more organisms, is needed. Only the molecular method will vindicate or doom the theory, says geneticist Peter Keightley at the University of Edinburgh, who is now counting mutations in the worm Caenorhabditis elegans.
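The molecular method reduces to a back-of-envelope multiplication. The sketch below uses an assumed round number for the genomewide mutation count (the 120 figure is illustrative, not Nachman's published value); only the 5% functional fraction comes from the text.

```python
# Back-of-envelope sketch of the molecular method. The genomewide
# mutation count is an assumed illustrative value, chosen to match
# the "about six" deleterious mutations quoted in the text.
new_mutations_per_genome = 120  # assumed, for illustration
functional_fraction = 0.05      # the 5% of the genome under selection

deleterious_per_infant = new_mutations_per_genome * functional_fraction
print(f"~{deleterious_per_infant:.0f} new deleterious mutations per infant")  # ~6
```

If, say, 10% of the genome turned out to be functional, the same arithmetic would double the estimate — which is why the functional fraction is the contested parameter.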

    While scientists scrounge for data to support one or the other of the warring theories of sex, other researchers are considering merging the two schools of thought—that sex both collects beneficial mutations and purges bad ones. “My view is they're both going on,” says McGill's Bell. “Something as complex, onerous, and laborious as sexuality is probably only going to be maintained if it's doing something very important.”

    For example, in the “ruby in the rubbish” model, the ruby—a good mutation in an asexual organism—is buried in rubbish—a glut of bad mutations that are constantly being eliminated by selection. Thus the harmful mutations drag the good ones down with them, slowing the rate of evolution relative to sexual populations that can unhitch good genes from bad ones during recombination.

    But the evidence for such theories is also very indirect, and testing them is even more of a headache than testing the old theories. “We're in a world where it's easy to say such synergism is likely and harder to say how to go about falsifying it,” says Bath's Hurst. For now, biologists can offer plenty of reasons why sex is good for you, but they have a ways to go before they can prove their point.


    The Asexual Life

    1. Bernice Wuethrich*
    1. Bernice Wuethrich is an exhibit writer at the National Museum of Natural History in Washington, D.C.

    Forty million years is a long time to go without sex, but that's apparently what the bdelloid rotifers, tiny creatures with a long fossil record, have done. Females clone themselves, and no males have ever been seen.

    About 1000 asexual species of various organisms are known, but most are expected to die out soon, victims of the long-term disadvantages of asexuality. But the diversity of the freshwater bdelloids, which are their own class comprising four families and about 350 species, suggests a long and vibrant asexual history. That makes them something special: the only organism strongly suspected to have made a go of celibacy for the long haul.

    If so, then the bdelloids are the perfect organism on which to test theories of sexual advantage (see main text). “They must have the answer to why asexuals go extinct, because they have figured out how not to,” says molecular geneticist Matthew Meselson of Harvard University, whose team is trying to tease out their secret.

    The first step is to ascertain that the animals are truly asexual—that no males sneak in occasionally. Proof may be found in the DNA, says Meselson, because in asexual species, neutral mutations will pile up independently in each member of a chromosome pair, while in sexual species, over time those mutations will be lost by genetic drift, keeping chromosomes more similar. By now the pairs of bdelloid chromosomes should be highly divergent, and so far Meselson's team has found very different copies of four genes in each of four species studied. They're now using fluorescent probes to see how similar the chromosomes are. Others say that Meselson's team is on the right track. “He has amazingly great data,” says evolutionary biologist Benjamin Normark of Harvard University.


    A New Look at Monogamy

    1. Virginia Morell

    Social monogamy, in which parents cooperate to raise their brood, is relatively common among animals—but true sexual fidelity is hard to find

    Researchers studying the evolution of monogamy once had a straightforward task: Find those members of the animal kingdom that form lasting pair bonds—and then figure out why fidelity is in each mate's interest. But in recent years that task has grown complex. Genetic studies of organisms from birds to gibbons to rodents have revealed that some of the offspring raised by those seemingly attached parents are in fact fathered by different males. Even among those paragons of pair loyalty, the bluebirds, it turns out that the female slips away for brief liaisons with other males. Yet the two parents continue to work together to raise the young. “The first thing you have to understand is that social monogamy, where you've got a pair bond, is not the same as genetic monogamy,” says Stephen Emlen, an evolutionary behavioral ecologist at Cornell University in Ithaca, New York. Indeed, genetic, or sexual, monogamy appears to be the exception rather than the rule among pairs in the animal kingdom.

    Why would organisms live and work in exclusive pairs—but sometimes have sex with outsiders? Biologists have a number of theories to explain this complex behavior, as well as its extremely rare counterpart, true sexual monogamy. To test their ideas, they are examining everything from environmental factors to neural chemistry in various species that are socially—if not always genetically—monogamous. Even as they uncover the biochemical underpinnings of fidelity, they suspect that in certain circumstances, some hanky-panky has evolutionary advantages for both males and females.

    For most animals, mate partnerships are thought to be somehow related to parental care. Birds, for example, were long assumed to be monogamous because two parents are needed for the prodigious labor of incubating eggs and feeding nestlings—and it was thought that males would only do this if they were certain the young were their own. But that's not the whole story. For example, although a pair of eastern bluebirds may mate, build a nest, and rear a brood together, an average of 15% to 20% of the chicks are not sired by the male in this partnership, according to ongoing research by Patricia Adair Gowaty, a behavioral ecologist at the University of Georgia, Athens. Indeed, DNA studies over the last 10 years of chicks from some 180 socially monogamous songbird species indicate that only about 10% of those species are sexually monogamous, says Gowaty.

    Males on the prowl are simple to explain in evolutionary terms—they're just trying to get their genes into as many future offspring as possible. Inseminating—and then leaving—a female is an efficient way to do this, explains Emlen, and is by far the most common strategy in mammals. But why would females cast a wandering eye? New work on individual variation in sexual fidelity in birds has helped spur some new theories. “It used to be thought that was all due to forced copulations, that these were male-driven events,” says Gowaty. “But increasingly we're finding that the females have a lot to do with it.” Female songbirds must actively receive sperm and can probably dump it if they choose, so they “probably can't be forced,” she says. Indeed, some females take an active part in these liaisons: Female hooded warblers have a special song soliciting extrapair matings.

    Gowaty and others theorize that the females have good genetic reasons for choosing their extrapair partners, perhaps seeking to maximize the variability of their offspring in case the environment changes. Females may choose one male for a social mate (perhaps because he has a crucial resource, such as a good territory with a nesting tree) and another for a sexual—that is, genetic—mate.

    For example, in recent studies of the Scandinavian great reed warbler, females seemed to prefer mating with certain types of males, particularly those with a large repertory of songs, says Emlen. A study in 1996 by Dennis Hasselquist of Lund University in Sweden showed that males with a broad range of songs father healthier, more viable offspring. In the warbler and other species, says Emlen, females mated to “lesser males” seek extrapair matings, but females mated to “high-quality” males don't wander, apparently because they already have the best genes on the block. A handful of other recent studies have shown that the traits preferred by females are tied to these so-called “good genes” (Science, 19 June, p. 1928). “I think that's what she's looking for—a fitness benefit for the kids,” says Gowaty, who plans to test this hypothesis next summer on the bluebirds.

    Presumably, the cuckolded male in such a pair stays with his mate because that's the best way to ensure that his offspring survive. And in the densely packed bluebird colonies, his position in the nest also offers him opportunities to mate with the female next door, notes Gowaty.

    One way to understand social monogamy is by comparing it to those rare creatures who are truly sexually faithful, such as the California mouse. This peach-sized golden-brown rodent is the rarest of the rare, for only 3% to 10% of mammals are even socially monogamous. But the mice never stray. “They form pair bonds well before they mate,” says evolutionary biologist David Gubernick from the University of California, Davis. In his lab, such bonded males shun other females even if the females are in estrus, and bonded females ignore other males. Genetic tests of paternity confirm the species' till-death-do-us-part fidelity: In a 1991 study done by Gubernick's colleague, David Ribble, now at Trinity University in San Antonio, Texas, all the offspring from 28 families tested in the wild over a 2-year period were the young of their social fathers.

    Gubernick attributes the rock-solid partnerships in part to a critical requirement for biparental care. “In birds, that need can vary,” he says, “but here it is essential.” In experiments in the wild and in the lab, Gubernick discovered that a female cannot rear the litter of one to three pups by herself. The pups, born at the coldest time of the year, are “absolutely dependent on their parents' body heat for survival,” he says. The father takes turns with the mother to huddle over the young in “the nursing position” to keep them warm. If he leaves or is taken away, she will abandon or kill the pups. “It's the first demonstration of the need for male care in a mammal in the wild,” Gubernick says. Because it takes both to care for the young, this helps make their evolutionary interests so congruent that sexual fidelity is favored, he says.

    But this neat explanation has some untidy loose ends. Harsh conditions don't always lead to faithful hearts: Other species of mice that live in the same environment are promiscuous. It's not clear why monogamy evolved in this one species but not in the others, says Gubernick.

    Although the evolutionary forces are not fully understood, researchers are beginning to explore the hormones underlying both sexual and social monogamy. For example, Sue Carter, a behavioral endocrinologist at the University of Maryland, College Park, has come up with a hormonal explanation for the unusual sexual behavior of prairie voles. These mouse-sized herbivores copulate numerous times over a 24-hour period, generally with one mate. The extensive mating bout apparently releases powerful neuropeptide hormones in the voles' brains, which cause them to form strong pair bonds. Carter's work suggests that in the females, oxytocin—a hormone associated with maternal behavior and lactation—is triggered, while in the males, vasopressin, a hormone associated with male aggression and paternal behavior, is released. When these hormones were experimentally blocked during mating, the pairs did not bond. “They need those chemicals to form their pair bonds,” says Carter.

    These hormones are found in all mammals, including humans and montane voles, close cousins of the prairie voles in which both males and females are promiscuous. However, the receptors for the hormones are found in “totally different parts of the brain” of the two species of voles and so have “totally different effects,” says Thomas Insel, a neuroscientist at the Yerkes Regional Primate Research Center in Atlanta. Over the last 4 years, his team has sequenced the genes for these hormone receptors in more than 10 species, ranging from mice to humans, and found that although the coding sequences are similar, the promoter regions are strongly divergent.

    Does this body of research on animal promiscuity offer insight into human behavior? As anyone who has listened to country music knows, humans are more like bluebirds than the faithful California mouse. Reliable data on human paternity are essentially nonexistent and are expected to vary by culture, but molecular geneticist Bradley Popovich of Oregon Health Sciences University in Portland says that U.S. labs screening for inherited diseases typically expect to find that about 10% of children tested are not sired by their social fathers.

    Still, most researchers agree that, as Sarah Hrdy, an anthropologist at the University of California, Davis, puts it, human “mothers evolved needing help with rearing the kids.” Thus social monogamy, at least, was evolutionarily favored. “There's no question that children are better off with two committed parents,” says Hrdy. As in birds, tending the nest is easier with two on the job.


    A Genomic Battle of the Sexes

    1. Elizabeth Pennisi

    Genomic imprinting, which can permit one parent to stifle the genetic contributions of the other, is surprisingly widespread. Why did such a bizarre system evolve?

    Shirley Tilghman is no romantic about the relations between the sexes, at least when it comes to genes. “It's a war,” the Princeton University developmental biologist announced recently at a public lecture at the National Academy of Sciences in Washington, D.C. No matter how loving a couple may seem, she said, their genes are anything but amicable, and their battleground is the developing embryo. There, in an ongoing molecular battle, “his” genes do what they can to promote their own propagation, and “her” genes fight back to make sure they are not overrun (see Review on p. 2003).

    This picture strikes a blow against a basic dogma of biology—that a gene plays the same role in an offspring no matter which parent contributes it, just as Gregor Mendel saw in his pea plants for traits like seed color and plant height. Lately, though, biologists have learned that the sexes have ways to bias genetic inheritance: They can mark particular genes in the set each one contributes so that later—in the germ cells or the new embryo—these genes get special treatment. Still-mysterious biochemical processes can selectively silence the paternal or maternal copies of genes in ways that advance that parent's genetic interests. “It amounts to an outrageous violation of Mendel's rules,” says Jon Seger, an evolutionary biologist at the University of Utah, Salt Lake City.

    At first glance, imprinting doesn't make much sense, as it seems to undermine some of the hard-won evolutionary advantages of having two sexes in the first place (see p. 1980). For example, silencing a gene deprives the organism of the safety net of a backup copy. Imprinting seems even more counterproductive in some insects, where males silence and then discard entire sets of chromosomes. With just one copy of every gene, their offspring have less genetic variety than organisms with two copies of each.

    All the same, since the 1930s geneticists have documented imprinting in dozens of insect species. Moreover, mammalian biologists are catching up, identifying dozens of imprinted genes in mammals since independent teams discovered two in 1991. In mammals, imprinted genes have turned out to play key roles in development and, when their expression goes awry, in cancer and genetic disease. And researchers have come up with more than a dozen possible explanations for why this puzzling genetic twist evolved. Many theories circle around the idea of conflict—that, as Tilghman puts it, imprinting is “a battle of the sexes that is fought between the mother and father.”

    At issue, say many mammalian researchers, is the growth rate of the fetus. About half of the 25 or so mammalian imprinted genes of known function support the notion that fathers contrive to silence genes that rein in growth, boosting the embryo's growth rate and ensuring vigorous offspring. This may run counter to the interests of the mother, who marks and silences growth-promoting genes to keep growth in check. The same seems to be true of a half-dozen genes newly discovered in plants. But not everyone is satisfied with the growth-rate theory. It doesn't seem to apply to all imprinted mammalian genes, and it doesn't explain imprinting in insects. There, imprinting seems to be a struggle between the paternal and maternal genes themselves, battling over which set of chromosomes gets passed on, says Seger.

    Mother vs. father. Geneticists got their first clues to imprinting about 60 years ago, when Charles Metz, then at the Carnegie Institution of Washington, observed that the sperm in dung gnats somehow deleted chromosomes inherited from the father and passed on only genes contributed by the mother. He concluded that a signature of paternal origin was somehow “impressed,” as he called it, on the chromosomes. In plants and mammals, the effects of imprinting are more subtle, as often individual genes rather than whole chromosomes are marked. To date, no one knows exactly how imprinting occurs, although methylation of genes is thought to help sustain the imprint in mammals. Even without a mechanism in hand, says Gilean McVean, an evolutionary geneticist at the University of Edinburgh in Scotland, the field has been energized in recent years by new ways to explain imprinting in evolutionary terms.

    The growth-rate theory is the brainchild of evolutionary biologist David Haig of Harvard University, who developed it for plants with Mark Westoby in 1989 while at Macquarie University in Sydney, Australia; Tom Moore of the Babraham Institute in Cambridge, U.K., independently came to similar conclusions. The researchers later broadened the ideas to include mammals, and now Tilghman and other mammalian biologists are in their camp.

    The researchers realized that when it comes to the growth of offspring, each parent has different interests, particularly in species where the male mates with multiple females and each female invests a great deal of energy in her progeny. The male's interest is in getting the female to invest as much as possible in his offspring—to make each offspring large. She, on the other hand, would be better off rationing her resources to ensure that she can produce additional offspring—likely with different fathers. Thus paternally derived genes would foster large offspring, while maternally derived genes would moderate growth to safeguard the mother. “The selective forces are different,” Haig explains.

    In animals, Haig predicted that this conflict would be most striking in mammals, where the developing fetus is a virtual parasite on the mother, making some means of control over fetal growth a necessity for her. A few years later, experiments documented imprinting in mammals, and the imprinted genes—growth promoters and inhibitors—“fit right into [Haig's] theory,” says McVean.

    For example, in 1991, Columbia University researchers found that although the paternal copy of insulin-like growth factor 2 (Igf2) was active, the maternal copy of this growth-promoting gene was not. At the same time, a team led by Denise Barlow at the Netherlands Cancer Institute in Amsterdam found that the paternal gene for the Igf2 receptor (Igf2r)—a molecule that binds Igf2 and targets it for destruction—is imprinted and turned off, while the maternal copy is active. Moreover, breeding experiments showed that embryos with no maternal Igf2r genes grew excessively large, while newborns with no paternal Igf2 were undersized. The same pattern was seen for an imprinted gene called Gtl2, another growth promoter.

    With Haig's idea winning support, Tilghman sought to test it further. The theory implies that monogamous mammals, in which mates' interests are more congruent, should lack imprinting. Tilghman and Paul Vrana, an evolutionary biologist in her lab, found in the literature a set of 1960s experiments in which a monogamous field mouse called Peromyscus polionotus was crossed with a closely related polygamous species. If Haig's prediction is correct, a monogamous female crossed with a polygamous male would yield large offspring, because the female would lack the imprinting mechanisms to alter her genes to counteract the male's growth-promoting genes. The opposite cross should yield small mice, as the male wouldn't be able to compensate for the female's growth-inhibiting genes.

    The results of those earlier experiments and similar ones performed by Vrana and Tilghman, still unpublished, bore out both predictions, Tilghman says. Whereas newborns from the two species typically weigh 16 to 18 grams, those from the first cross were only about 10 grams and those from the second more than 20 grams. “The [newborn] is so large that it can't be born [properly],” says Tilghman. “And the size differences persisted after birth.”

    But although these results resoundingly confirmed Haig's growth-rate predictions, his expectation that imprinting would be absent in monogamous species turned out to be wrong, Tilghman reported in her presentation at the academy. When the team looked at the fate of maternal and paternal genes in the offspring, “to our great surprise, imprinting was working fine in [the monogamous] mouse,” she said in her talk. The male genome still turned off known growth-inhibiting genes, such as the Igf2r gene, and the female silenced growth-promoting ones, such as Igf2.

    This seems to undercut Haig's ideas on why imprinting exists. But Tilghman reported that the monogamous and polygamous species split apart just 100,000 years ago—perhaps not enough time for imprinting to disappear in the monogamous species.

    Indeed, although Haig suggests that imprinting will evolve when the interests of the sexes differ sharply, as in polygamous species, other theorists think it could evolve more easily. For example, evolutionary genetics models by Hamish Spencer from the University of Otago in Dunedin, New Zealand, and his colleagues suggest that multiple paternity isn't a prerequisite for imprinting to be advantageous. “The mathematics suggests it doesn't take much to tip the scales [in favor of imprinting],” says Tilghman. The models indicate that a conflict over growth rate can also pit the mother against her offspring, driving the evolution of imprinted genes.

    All for growth? But other evidence indicates that the conflict over fetal growth can't explain all cases of imprinting. Take the male dung gnats whose sperm cells discard paternal chromosomes. In these insects, only one set of chromosomes—from either the mother or the father—winds up in the gamete. So the conflict is not over allocation of the female's resources, but over which genes will be passed on. “If the maternally inherited genes can cause the paternally inherited genes not to be transmitted, they get a twofold boost in fitness,” explains Glenn Herrick, a geneticist at the University of Utah, Salt Lake City.

    Even in mammals, certain imprinted genes, such as snrpn, expressed in the brain, seem to have nothing to do with growth, says evolutionary biologist Laurence Hurst of the University of Bath in the United Kingdom. Furthermore, in 1997, he and McVean surveyed literature on genetic disorders in which a child inherits both copies of one or more chromosomes from just one parent and thus gets double doses of paternally or maternally imprinted genes. Multiple copies from the father, rather than leading to bigger babies as Haig would predict, led to smaller babies, as did multiple copies from the mother.

    Haig explains these results by arguing that normally imprinted male and female genes balance each other out, so when one sex fails to contribute, growth can go awry. “It's like a tug-of-war,” he says. “If one side drops the rope, you're going to get all sorts of abnormal effects.” Hurst and McVean's “evidence doesn't support the model,” agrees Tilghman. “But it isn't enough to kill it.”

    New data from plants are further bolstering the growth-rate hypothesis. In April, Ueli Grossniklaus of Cold Spring Harbor Laboratory in New York and colleagues described what appeared to be an imprinted gene, called MEDEA, in the experimental plant Arabidopsis (Science, 17 April, p. 446). They found that only the maternal copy of MEDEA was active and that its normal function was to curb the growth of the embryo.

    Likewise, Rod Scott, a plant developmental biologist at the University of Bath, and his colleagues have shown that in general, maternal genomes slow the growth of seeds, while paternal genomes accelerate it. Breeding an Arabidopsis plant with four copies of each chromosome with plants carrying the usual two, they created viable offspring with three copies of the genome. If two copies of the genome came from the male, the endosperm (the embryo's nutrient store) and consequently the seed were large; if two copies came from the female, the seed was small, they will report in an upcoming issue of Development. “The results support Haig and Westoby,” says Scott.

    As far as developmental geneticist Wolf Reik of the Babraham Institute is concerned, Haig's “is the best theory we have right now.” And many in the imprinting field agree. Still, given the unexplained cases of this perverse twist of genetics, Haig's explanation isn't likely to be the full story. “These things we call imprinting may be a diverse collection of phenomena with different evolutionary origins,” Herrick notes. “But that's not a problem, that's fun.”
