News this Week

Science  27 Jul 2001:
Vol. 293, Issue 5530, pp. 582

    Plans for Next Big Collider Reach Critical Mass at Snowmass

    Charles Seife*
    *Snowmass Summer Workshop, “The Future of Particle Physics,” 30 June to 21 July.

    SNOWMASS VILLAGE, COLORADO—High-energy physicists from around the world would like to build a multibillion-dollar linear collider as the next big accelerator project. The unexpected consensus appeared in an unofficial statement cobbled together at the end of a 3-week summit.* But how such a machine would be funded and where it would be built are yet to be determined.

    “There are fundamental questions … that cannot be answered without a physics program at a Linear Collider overlapping that of the Large Hadron Collider,” reads the statement, which refers to a machine being built at CERN, the European particle physics laboratory near Geneva. “We therefore strongly recommend the expeditious construction of a Linear Collider as the next major international High Energy Physics project.” Conference attendees were surprised by the broad agreement represented by the document. “I thought we'd never get to a consensus,” says Mike Barnett, a physicist at Lawrence Berkeley National Laboratory in Berkeley, California, who helped craft the statement. “It took off suddenly.”

    Such a linear collider would take electrons and their antimatter twins, positrons, and smash them together. Because electrons and positrons are fundamental particles—indivisible “leptons”—their collisions are much less complicated than collisions between composite bodies like protons, which are made up of three quarks. This simplicity, along with scientists' ability to manipulate the electrons' polarization, gives electron-positron colliders the ability to make much more precise measurements than composite, or hadron, colliders like the Tevatron at Fermi National Accelerator Laboratory in Batavia, Illinois, and the Large Hadron Collider, which is scheduled to go online in 2006. The sacrifice is that they operate at lower energies—a handicap in searching for massive particles.

    Linear thinking.

    Planned collider would smash electrons into positrons.


    But in the past few years, particle physicists have become convinced that the Higgs boson, a long-sought particle responsible for objects' mass, should have a mass-energy less than 200 giga electron volts (GeV), within reach of a new electron-positron collider. The leading designs are the German Tera Electron Volt Superconducting Linear Accelerator (TESLA) and the American Next Linear Collider, either of which would accelerate the particles for tens of kilometers before smashing them together with as much as 500 GeV of energy. This should enable physicists to make precise measurements of the Higgs boson, as well as possible “supersymmetric” particles that some scientists believe will be found in the next few years.

    Faced with this prospect, the community decided to back the collider rather than scrabble for funding of other potential facilities such as a neutrino factory, which would produce the nearly massless particle in great numbers, or a muon collider, which would slam heavier versions of the electron and positron into each other. “We really wanted to avoid where this becomes a food fight and everyone says, ‘My area should be the one that's funded,’” says Steven Ritz, a physicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, who works in high-energy astrophysics.

    Of course, agreement among physicists won't pay for the new machine. An estimated price tag of at least $6 billion makes it extremely unlikely that one country will be willing to finance the facility. But the opportunity to host it is believed to be a major attraction.

    Physicists from Europe, Japan, and the United States each want their own country to win the honor. Talk in the hallways pegged Hamburg, Germany, and Batavia, Illinois, as the leading contenders for the site of the accelerator, to the chagrin of Japanese scientists. But these differences are unlikely to be serious stumbling blocks. “It's more important to get a machine built than to get it built in the U.S.,” says Ken Bloom, a physicist at the University of Michigan, Ann Arbor.

    A bigger issue for U.S. scientists is whether their government will be a reliable partner. The U.S. withdrawal from the International Thermonuclear Experimental Reactor is still an open wound (Science, 9 October 1998, p. 209), and a White House budget official who addressed the conference gave little indication that money would be forthcoming. “Given that the overall funding profile of physical sciences is slipping down, large expenditures are going to be difficult,” says Mike Holland of the Office of Management and Budget. Given the current budget situation, he said, some politicians are more likely to ask the opposite question, namely: “What would be the impact on society if the funding for high-energy physics were zeroed out?”

    Luciano Maiani, director-general of CERN, said that such an attitude would be “very unfriendly to science.” But some participants suggested that they should learn from their colleagues in the life sciences, who have managed to win big increases for the U.S. National Institutes of Health while funding stagnates for most of the physical sciences, by explaining the benefits to society from their work. “We're no longer the favorite son of science,” says Princeton physicist Kirk McDonald. “We're just used to acting that way.”


    World Starts Taming the Greenhouse

    Richard A. Kerr

    To the surprise of many, representatives of 178 countries agreed early Monday morning on how to begin fighting global warming. Seventeen hundred diplomats established a complex method of accounting for greenhouse gas emissions and uptakes, which will allow countries to receive due credit for their efforts. The agreement allows countries considerable flexibility in meeting their goals for reducing greenhouse emissions, an aspect that derailed negotiations in The Hague last fall. Enough countries are expected to ratify the agreement to put the Kyoto Protocol into effect next year, without the United States.

    The new agreement is seen by protocol supporters as the best start that could be expected under the circumstances. “Most people in the world do believe this was the only game in town,” says Eileen Claussen, president of the Pew Center on Global Climate Change in Arlington, Virginia, an organization dedicated to reducing greenhouse emissions. After President George W. Bush rejected the Kyoto Protocol this spring as “fatally flawed,” “countries had to rethink whether they wanted to do this,” Claussen says. “They decided they did, and they made the needed compromises.” The compromises are unlikely to entice the United States to ratify the protocol, but they do seem to promise broad enough support among other industrialized countries that the protocol will come into force.

    The compromises encompass a range of policy issues. Flexibility will come from mechanisms such as emissions trading, the exchange of emission credits between countries able to cut emissions beyond their required amount and countries willing to purchase those credits; the Clean Development Mechanism, in which industrialized nations can receive credit for emission reductions achieved through projects such as hydroelectric dams in developing countries; and land use, the managing of soils and forests that can soak up carbon dioxide. The delegates failed to reach a compromise in one key area: compliance. Japan in particular was leery of harsh penalties for countries that don't meet their emission-reduction targets, so negotiators decided that enforcement will be determined once the protocol is in effect. That will be when the requisite 55 countries accounting for 55% of the industrialized countries' 1990 emissions have signed on.
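    The entry-into-force rule is a double threshold, which a short sketch makes concrete (the function name and the figures passed in are hypothetical illustrations, not actual ratification data):

    ```python
    # Sketch of the Kyoto entry-into-force test described above: the protocol
    # takes effect once at least 55 parties have ratified AND the ratifying
    # industrialized (Annex I) countries account for at least 55% of Annex I
    # 1990 emissions. The inputs below are hypothetical placeholders.

    def kyoto_in_force(num_ratifiers, annex1_share_of_1990_emissions):
        """True when both 55/55 thresholds are met."""
        return num_ratifiers >= 55 and annex1_share_of_1990_emissions >= 0.55

    print(kyoto_in_force(84, 0.61))  # both thresholds cleared -> True
    print(kyoto_in_force(84, 0.44))  # enough countries, too small a share of 1990 emissions -> False
    ```

    Because the second threshold is a share of 1990 emissions, a handful of large industrialized emitters can make or break ratification even after 55 countries have signed on.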

    Given the flexibility and the possibility of a light hand with enforcement, the environmental group Greenpeace has dubbed the current version of the protocol “Kyoto Lite,” but a lightening up is not all bad, say some observers. “They've left the thing sufficiently loose that everyone's willing to join hands,” says journalist-in-residence John Anderson of Resources for the Future, a Washington, D.C., economics think tank. “That's probably useful. Nobody knows how those mechanisms are going to work. Everyone needs to get real-world experience with what the costs are going to be before you can press hard. It's not a howling success, but it's not a disaster either.” The protocol could be in effect, without the United States, by the 10th anniversary of the Rio Earth Summit next July.


    Map of the Human Genome 3.0

    Laura Helmuth

    Like a traveler picking up a new language, geneticists are starting to recognize phrases and sentences as they read the babble of bases in the human genome. These grammatical constructs are discrete blocks of DNA that differ from one person to the next. And they may be common enough and account for enough of the genome, researchers concluded at a conference* held here last week, that it's time to create a new map of the genome, one that describes its blocky structure.

    The new project is called a haplotype map—for desperate want of a better name. (In the hopes of finding a more palatable one, National Human Genome Research Institute director Francis Collins has informally launched a name-the-map contest.) Haplotypes are simply long stretches of DNA—including perhaps as many as 100,000 bases—at a given location on a chromosome. To their surprise, genome researchers have found that many such blocks come in just a few different versions, a discovery that should simplify the search for associations between DNA variations and complex diseases such as cancer, diabetes, and mental illness.

    A haplotype map will thus, its creators hope, be a tool for pinning down the genes that contribute to the development of those conditions. “The reason for [the map's] existence,” said Collins, “shall be to try to understand the reasons for disease and find therapeutics” to treat them.

    Mix and match.

    Long stretches of DNA with a distinctive pattern of SNPs are called haplotypes. Successive haplotypes can combine in many different patterns.

    The mapmakers emphasized this goal because a haplotype map could raise ethical concerns. More so than previous maps of the human genome, it might include markers that indicate someone's race and ethnicity. As a result, about half the meeting was devoted to figuring out the best scientific approach to building a haplotype map, whereas the rest was spent exploring social issues. Pilar Ossorio of the University of Wisconsin Law School in Madison pointed out one of the dangers. “As cognitive psychologists have shown,” she said, “people take in information in a way that reaffirms their existing stereotypes,” and creating a genome map that contains race-specific elements might imply a scientific seal of approval on race-based social perceptions.

    Only in the past year or so have genome researchers realized that a haplotype map might be feasible. Until now, they have focused mainly on identifying DNA variations called single-nucleotide polymorphisms (SNPs)—sites along the genome at which individuals differ by just one base—to use in tracking disease genes. And although they've had some success recently in identifying genes associated with diabetes and the gastrointestinal ailment Crohn's disease, it's been an arduous, expensive process (see p. 593).

    Because there's roughly one SNP for every 1000 bases of DNA, there might be a huge number of SNP patterns over a given stretch of sequence. If, for example, a 50,000-base sequence contains 50 SNPs, those SNPs could, in theory, come in 2^50 different variations. Computer simulations suggested that haplotype blocks—DNA stretches containing the same SNP pattern—would stretch only 3000 to 8000 bases—too short to make a haplotype map worth the trouble for tracking disease genes. But the reality turned out to be much more promising.
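    The arithmetic behind that concern, and behind the relief that followed, can be sketched in a few lines (a back-of-envelope illustration using the article's round numbers; the variable names and the five-haplotype figure are assumptions for the example):

    ```python
    import math

    # Back-of-envelope sketch of the haplotype arithmetic above.
    # Numbers are the round figures quoted in the article.

    snps_per_block = 50  # SNPs in a 50,000-base stretch (~1 per 1000 bases)

    # Each SNP site varies between two bases, so in principle every
    # combination could occur in the population:
    theoretical_patterns = 2 ** snps_per_block
    print(f"theoretical SNP patterns: {theoretical_patterns:,}")

    # In practice only a handful of haplotypes cover most chromosomes,
    # so only a few "tag" SNPs are needed to tell them apart:
    observed_haplotypes = 5  # assumed, per the 4-5 patterns cited later
    tag_snps_needed = math.ceil(math.log2(observed_haplotypes))
    print(f"tag SNPs needed for {observed_haplotypes} haplotypes: {tag_snps_needed}")
    ```

    The gap between 2^50 possible patterns and a few observed ones is exactly why a blocky genome makes disease-gene hunting tractable: researchers need only genotype the handful of SNPs that distinguish the common haplotypes.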

    Identifying blocks.

    Hypothetical SNPs A through D are correlated (red) and probably constitute a haplotype. SNPs E through G are also fellow travelers.

    As several teams reported at the meeting, haplotype blocks are at least 10 times longer than predicted, and there are a relatively small number at each chromosomal position. For some sequences of 50,000 bases, for example, just four or five patterns of SNPs—that is, four or five different haplotypes—might account for 80% or 90% of the population. “It didn't have to be this way,” said Eric Lander of the Whitehead Institute for Biomedical Research/MIT Center for Genome Research in Cambridge, Massachusetts. But because it is, instead of trying to correlate each of the 50 SNPs with disease, researchers can restrict their studies to SNPs that differentiate the few common patterns. This should ease their work by cutting down on the amount of DNA they have to scan to identify disease genes.

    No one is sure yet why the genome is so blocky, but geneticists mention two candidate explanations. During the meiotic divisions that give rise to sperm and eggs, the two copies of each chromosome sometimes swap stretches of DNA, or recombine. If, for some unknown molecular reason, some parts of the chromosome are less likely to recombine than others, some stretches of DNA will be conserved as blocks while others change rapidly across generations.

    Population bottlenecks apparently contribute as well. As Kenneth Kidd of Yale University and others have found, there is a greater diversity of haplotypes in people in Africa, where humans first arose, than in other populations. In addition, descendants of people who settled in Asia carry a somewhat different set of haplotypes from those who settled Europe.

    And that's what raises the ethical issues discussed at the meeting. Researchers are still figuring out how to construct the haplotype map—what pilot studies to run, for instance, and how to standardize the definition of a haplotype. (“This notion of a block is a little hazy,” said Leonid Kruglyak of the Fred Hutchinson Cancer Research Center in Seattle to much laughter.) But their most pressing problem may be whether to include ethnic or geographic identifiers on DNA samples. Such identifiers were stripped from the samples used to create the human genome sequence and SNP maps. But as Collins points out, if certain haplotypes are more common in some ethnic groups than others, haplotype mappers run the risk of missing distinctive patterns of DNA that might predict disease susceptibility in some populations but not others.

    Summing up the meeting, Collins said that there was a “consensus” that the project “would have considerable medical value” and is worth pursuing. He solicited volunteers for two working groups that in the upcoming weeks will do the heavy lifting—a scientific steering committee to nail down working definitions of haplotypes and set priorities for pilot studies, and a second group to keep an eye on social and ethical issues.

    • *Developing a Haplotype Map of the Human Genome for Finding Genes Related to Health and Disease, 18–19 July, sponsored by the National Institutes of Health.


    DNA Sequencers to Go Bananas?

    Josh Gewolb

    Among scientists, the banana gets little respect. It's one of the most popular fruits on Earth and the developing world's fourth most important food crop, yet only a handful of labs are working on it. Now, a group of researchers is hoping to put the banana (Musa) on the scientific map. On 19 July, an international consortium announced that it hopes to sequence the entire banana genome, perhaps as early as 2006. The announcement of the multimillion-dollar effort has already raised the banana's public profile, but some plant scientists say the project won't bear much scientific fruit.

    The Global Musa (Banana) Genomics Consortium says it expects the project to take anywhere between 5 and 10 years, depending on the degree of accuracy desired, and cost up to $7 million a year. Members of the consortium have promised $2 million a year for the duration of the project, says geneticist Emile Frison, chair of the nonprofit International Network for the Improvement of Banana and Plantain and coordinator of the sequencing effort. The chief sponsor to date is the French agricultural institute CIRAD, which has pledged $1.5 million over the next 5 years.

    The consortium has already begun work but is still looking for funding. To help with fund raising, it engaged a public relations firm and announced the project's launch last week, noting that better bananas would help reduce hunger and poverty in the developing world. Frison explains that it would have been “a handicap in raising resources” not to have sought publicity.

    Two dozen research centers expect to participate in the project, including The Institute for Genomic Research, a large-scale sequencing facility in Rockville, Maryland. Sequencing pros and banana experts met in Alexandria, Virginia, from 17 to 19 July to draw up a protocol and select a banana type for sequencing. Only specialists are likely to be familiar with their choice—Musa acuminata calcutta 4, a wild, nonedible variety native to southern India. According to Frison, calcutta 4 was chosen in part because of its resistance to black sigatoka, a devastating fungus that reduces worldwide banana yields by as much as 50%. Network researchers, who plan to make sequence data freely available as they are produced, say they hope the variety's resistance traits can be used to improve commercial bananas.

    The banana would be the third major plant genome sequenced, after the wild mustard Arabidopsis, completed by a public consortium in 2000, and rice, undertaken by a private company and a public consortium. With 500 million to 600 million base pairs, the banana would be the biggest plant genome cracked yet.

    To be revised.

    International group seeks disease-resistant banana genes.


    But some members of the plant genomics community are skeptical of the project. They say bananas are intractable in the laboratory, which will make it hard to take advantage of the genomic data. “You can count the number of people working on banana on one hand,” says Chris Somerville, a pioneer in Arabidopsis research who works at Stanford University. “The sequence is of limited utility if no one is working on the plant.”

    Backers of the project acknowledge that the ranks of banana geneticists are thin but point to recent accomplishments such as the identification of black sigatoka resistance genes. They argue that the banana is a very important crop even though it is not ideal for genetic research. “Just because something you sequence isn't a perfect model organism doesn't mean that its genome isn't useful,” says University of Chicago plant geneticist Daphne Preuss.


    Funding Backlog at NSF Sets Off Free-for-All

    Jeffrey Mervis

    Elbowing your way to the head of a line that has been forming for years isn't polite. This year, however, it could be a winning strategy for several groups seeking major new research facilities from the National Science Foundation (NSF). The latest evidence came earlier this month in separate votes by both House and Senate spending panels on increasing NSF's $4.4 billion budget.

    The panels approved overall increases for NSF in 2002 of 9.4% and 5.5%, respectively—more than the 1.5% the Bush Administration requested but less than the 15% sought by research advocates. Those middle-of-the-road numbers hide some controversial decisions on individual projects, however. The Senate panel included funds for an underground laboratory in South Dakota, whereas the House panel approved money for an expanded search for neutrinos at the South Pole and a high-altitude research plane. None of these projects were included in NSF's budget request to Congress. Although Congress must still reconcile those bills this fall, the trend is clear: Pushiness pays off.

    NSF's budget has traditionally been nearly free of congressional earmarks—projects not requested by the agency and often advanced by politicians on the urgings of their constituents. But a decision in February by the Bush Administration to eliminate any so-called “new starts” from the foundation's capital budget, combined with congressional rejection of two projects contained in last year's budget request, has created a backlog of pent-up demand. That delay has prompted some researchers whose projects are stuck in NSF's pipeline to plead their cases to Congress.

    The result is undermining the orderly process for setting priorities on big projects. The process begins with scientific reviews, runs through approval by the National Science Board (NSB), NSF's governing body, and culminates with a decision to include an item in the agency's budget request to Congress. Any major deviation, say most scientists, invites chaos and a misallocation of limited resources. “NSF [decision-makers] can see the big picture, and they set their funding priorities based on what's best for the entire community,” says Anne Meltzer, a seismologist at Lehigh University in Bethlehem, Pennsylvania. She's involved with one of the delayed initiatives, a 10-year, $350 million geosciences project called EarthScope that includes a network of seismic monitoring stations and a 4-kilometer hole drilled into the San Andreas fault. Although “it's all good research,” says Meltzer about the projects Congress has favored, “it's very disappointing when people don't respect the process.”

    The science board hasn't even been briefed on one of the major projects that Congress wants the agency to take on: building a national underground laboratory in an old gold-mine shaft in South Dakota (Science, 15 June, p. 1979). Last week, the Senate panel that funds NSF included $10 million to examine the feasibility of the $280 million project, which was submitted to NSF for scientific review only last month. Proponents, who have the ears of Senate Majority Leader Tom Daschle (D-SD) and panel member Senator Tim Johnson (D-SD), say that impending plans to flood the former Homestake mine forced them to act quickly.

    In contrast, the antarctic astrophysics project, dubbed Ice Cube because of its 1-kilometer-per-side dimensions, has worked its way through NSF's review process. The proposed $243 million muon and neutrino detector would vastly expand an existing array beneath the South Pole. The expansion was approved last year by the NSB and made it into NSF's draft budget last fall. But it was booted out by the president's diktat.

    Project leader Francis Halzen and others at the University of Wisconsin, Madison, which would run the facility, didn't give up, though; they won over Representative David Obey (D-WI), the ranking member of the House Appropriations Committee. At his urging, the committee approved a $15 million down payment on the project. “There's a long backlog of projects, and I've been told that we were top ranked,” says Halzen, explaining his decision to seek help from a powerful friend. “There's also a European project coming along, and we want to push forward and retain our advantage.”

    This year's final spending bill is also likely to include money for a research plane that was endorsed by the NSB in 1998 but never included in NSF's budget request. The $80 million project, called High-Performance Instrumented Airborne Platform for Environmental Research (HIAPER), would be operated by the National Center for Atmospheric Research in Boulder, Colorado, which persuaded Congress to put in $20 million over the last 2 years. In keeping with the panel's philosophy of finishing what it starts, the House this year added the remaining $35 million needed to start bending metal. Last week, NSF announced that it had begun negotiating with Gulfstream to build the plane.

    Left behind in this free-for-all are supporters of EarthScope and the $100 million National Ecological Observatories Network, a collection of 10 instrumented field stations. These two projects were part of NSF's 2001 budget request, at $17 million and $12 million, respectively, that legislators declined to fund. And they were bumped from NSF's draft 2002 budget.

    Unlike the teams involved in Ice Cube, HIAPER, and the underground lab, the earth scientists who hope to carry out EarthScope have agreed to bide their time until NSF is able to reinsert them next year into its major research equipment and facilities account, at a proposed $32 million. However, so far that strategy has netted them nothing but a pat on the back. “They agreed to play by the rules, and so far they have lost out,” notes one NSF official.


    Shutdown at Hopkins Sparks a Debate

    Eliot Marshall

    The federal government abruptly suspended all U.S.-funded human studies at Johns Hopkins University in Baltimore last week. Then, in a turnabout 3 days later, it allowed about 700 of the more than 2400 that were affected to resume, although with restrictions. The crisis sent a shock through the clinical research world, mainly because Hopkins is regarded as one of the country's most conscientious overseers of research. And it seemed likely to raise questions anew about how best to protect research subjects.

    These events grew out of an inquiry into the death on 2 June of a healthy volunteer in a Hopkins asthma study. Hopkins completed its own investigation of the tragedy on 16 July, finding relatively minor flaws in the study (Science, 20 July, p. 405). Then on 16 to 18 July, investigators from the U.S. Department of Health and Human Services (HHS), who had been checking other issues at Hopkins since fall, conducted an urgent site visit. One day later, they found Hopkins deficient on 31 points and issued the shutdown order.* Among the alleged infractions: failure to approve every study in a “convened meeting” of the Institutional Review Board (IRB), lack of proper informed consent, exposure of subjects to a drug not approved for human use, and failure to report an adverse reaction.


    Hopkins Medicine CEO Edward Miller protested a federal shutdown order.


    Unlike a half-dozen other institutions that have been punished in this way, Hopkins didn't meekly swallow HHS's medicine. Instead, university officials contested some of the allegations and lashed out at the regulators. The same day Hopkins received the notice of deficiency from HHS's Office for Human Research Protections (OHRP), Edward Miller, CEO of Johns Hopkins Medicine, appeared before cameras on Hopkins's front steps to protest what was happening. And in a public statement, the university called the research shutdown “unwarranted, unnecessary, paralyzing,” “draconian,” and “outrageous.” University officials also contacted members of Congress. Both of Maryland's senators, Democrats Barbara Mikulski and Paul Sarbanes, responded with faxed protests to HHS Secretary Tommy Thompson.

    Over the next few days, Hopkins and HHS reached an understanding. The university submitted a plan to correct the claimed deficiencies by specific deadlines. And HHS agreed that people could continue to be treated in Hopkins clinical studies if the investigators could verify that this would be “in the best interests” of the subjects. Most studies, however, have been placed in limbo and must be reapproved.

    This compromise, reached over a weekend of hectic activity at Hopkins and in government offices, seemed to cool the rhetoric. But it was clear early this week that Hopkins and OHRP still disagree sharply on important points and that a wider debate may be brewing. Observers differ markedly: Some are exasperated with the detailed reporting imposed by the regulators—which they dismiss as mere paperwork that doesn't necessarily improve the protection of research subjects—whereas others think the oversight system needs even more support and authority.

    After reading the documents, Norman Fost, a pediatrician and ethicist who chairs an IRB at the University of Wisconsin, Madison, says he agrees with Hopkins that some government sanctions were “outrageous,” probably “unfair,” and perhaps even “wrong.” For example, OHRP reprimanded Hopkins for ignoring U.S. rules on the use of drugs in basic clinical research, but Fost says the record shows that the government has never clearly articulated them. In contrast, Mary Faith Marshall, a professor of medicine and bioethics at the University of Kansas Medical Center in Kansas City and chair of OHRP's advisory committee, thinks government inspectors were probably justified in taking strong action. “In my experience,” she says, “they bend over backward to be fair and do an excellent job.” She notes that an HHS inspector general's report in 1998 warned that IRB review panels were overworked and underfunded—putting human subjects at risk.

    The university, meanwhile, is trying to recover from the setback. If OHRP-ordered clinical reviews at other sites are any guide, it will cost Hopkins well over $1 million and months of labor to get its clinical studies fully on track.


    Research Toll Is Heavy in Time and Money

    Mark Sincell*
    *Mark Sincell writes from Houston.

    HOUSTON—The final group of researchers returned last week to labs at the Texas Medical Center, nearly 6 weeks after Tropical Storm Allison temporarily turned the campus into a lake (Science, 22 June, p. 2226). But no one was celebrating. Widespread power outages had destroyed years of work, and the basement laboratories of the University of Texas Health Science Center (UT-HSC) and the Baylor College of Medicine (BCM)—once home to over 35,000 animals that drowned—were declared a total loss.

    “This is a real, honest-to-God disaster,” says Jim Patrick, BCM's vice president of research. Several large pieces of equipment, including two multimillion-dollar electron microscopes, were completely destroyed by water and floating debris, and Baylor officials are still tallying the damage. The total cost to UT-HSC is expected to be about $205 million, including a ruined $50 million cyclotron used for positron emission tomography, says George Stancel, the dean of the graduate school of biomedical science at UT Houston. With insurance expected to pay for only a fraction of the replacement value, both schools are seeking state and federal funds for the restoration work.

    Damaged goods.

    Floodwaters destroyed equipment in the basement of the Texas Medical Center.


    Researchers are more concerned about the lost time. Morteza Naghavi saw almost 4 years of work float away when the flood drowned all his research animals. “I lost everything: 800 mice and 35 rabbits,” says Naghavi, a cardiology researcher at UT-HSC, about animals bred to be apoE deficient, which predisposes them to heart attacks. It was even worse for Lance Gould, leader of the team that built the ruined cyclotron. “It took 20 years to build that machine,” he says.

    Allison's destructive clouds have a faint silver lining, however. Gould's team hopes to build a better cyclotron than the one they had, for less money because of improvements in technology, with support from private donors. “The storm was an unmitigated disaster,” says Gould. “But we are turning it creatively into an advantage.”

    The researchers back on the job have shifted to higher terrain, either elsewhere in the hospital or in nearby temporary facilities. And neither institution plans to move them back underground. “That is history,” says Patrick. But their upward mobility has created a space crunch. The two hospitals must each relocate employees in nonessential services, now occupying thousands of square meters, before they can resettle the labs in permanent homes.


    New Data Reveal the Sisterhood of Lions

    1. Elizabeth Pennisi

    In any pride of lions, a “lion king” typically sires most of the cubs. But whereas male pridemates have a strict pecking order, a new study reveals that there's really no such thing as a “lion queen.” On page 690, behavioral ecologist Craig Packer and colleagues Anne Pusey and Lynn Eberly of the University of Minnesota, Twin Cities, report that female lions all bear young with about equal success—an unusual behavior for social mammals.

    Packer and his colleagues have been observing lions in Tanzania since the 1960s. A decade ago, their DNA analysis showed a big “reproductive skew” among males, with most of the offspring belonging to one or two dominant males. In this study, they analyzed 36 years' worth of birth records from some 31 prides, tracking every cub that reached its first birthday and identifying its mother. “That they looked across a large number of groups over a long time makes this a powerful [study],” notes Jeffrey French, an animal behaviorist at the University of Nebraska, Omaha.

    The number of young varied from pride to pride: In some, the females had just one or two cubs a year, whereas in others, they tended to have three or four and occasionally more. But within a pride, Packer says, “there was no hint [that] any females were systematically getting more reproduction than others.” Indeed, the more mothers in a pride, the likelier the cubs were to survive.


    In contrast to males, female lions in a pride have about equal reproductive success and even cooperate in raising their young.


    Such behavior is atypical for social mammals, in which it is common for one female to hold the reproductive reins and actively sabotage the reproductive efforts of others. As a result, subordinate females stop breeding and instead help a more dominant sister or mother with her young. But Packer and his colleagues offer several reasons why this behavior does not occur in lionesses.

    For one, the fighting required to set up a pecking order that would prevent low-caste females from breeding would lead, Packer says, to “mutually assured destruction” from the animals' massive claws and teeth. “It's too risky to try to control other females directly,” agrees Tim Clutton-Brock, an evolutionary biologist at the University of Cambridge, United Kingdom.

    Lionesses also avoid another strategy sometimes employed by alpha females seeking to thwart breeding by potential rivals: killing the rivals' newborn young. Such infanticide typically occurs in species whose females give birth in communal locations. But in behavior typical of felines, lionesses go into hiding to give birth and don't rejoin the pride until the cubs are 6 weeks old and much less vulnerable.

    Returning mothers then raise the cubs communally and together fend off raids from lions in other prides. “Females benefit from each other's presence,” notes Clutton-Brock. In short, Packer adds, “the queen of beasts is a democrat.”


    Galápagos Takes Aim at Alien Invaders

    1. Jocelyn Kaiser

    Scientists are waging a two-pronged assault on the goats, rats, weeds, and other exotic species that threaten the fabled archipelago's flora and fauna

    Venturing across a lava isthmus, the first goats reached northern Isabela Island, an uninhabited section of the largest island in the Galápagos, in the early 1980s. Since then, that handful of invaders has exploded into a marauding troop of 100,000 that has devoured the lush vegetation carpeting the flanks of Alcedo Volcano. Their food source converted to arid grassland, the local population of 3000 giant tortoises is facing an uncertain future. But the goats' days are numbered. Early next year, a SWAT team of up to 100 sharpshooters plans to descend on the volcano and kill every last goat—even if it means dangling from helicopters to get a clear shot.

    Although animal-rights activists may wince, to biologists, the planned Isabela goat removal is no less than a battle to save the Galápagos' largest remaining population of giant tortoises, an icon of the famous archipelago 965 kilometers off Ecuador's coast. Indeed, a war is raging in the Galápagos, and feral goats are not the only species targeted. Also in the cross hairs are imported cats, pigs, fire ants, weeds, and many other invasives that are considered the number one threat to the extraordinary diversity of birds and reptiles that inspired Darwin's theory of natural selection.

    The brainchild of scientists and managers at the Galápagos' Charles Darwin Research Station and the Galápagos National Park Service, this project, developed over the past 5 years, is one of the most ambitious ever mounted to combat invasive species. With $18 million over 6 years from the United Nations and World Bank-run Global Environment Facility (GEF) and an expected $19 million from other sources, biologists and park staff will use a combination of brute force, high-tech gadgetry, and cutting-edge science to wipe out some alien species, make a dent in other populations, and bolster controls to keep other exotics out.

    Trouble spots.

    Some of the exotic species plaguing biodiversity in the islands that inspired Darwin's theory of natural selection.


    The scientists believe that eradicating the most troublesome species all at once—different species are a problem on different islands—is the only way to stop the destruction of the Galápagos. And they're attacking the problem from all angles, from supporting a clampdown on imported goods that could carry in new species to enlisting the help of the islands' villagers. They know some steps will be controversial but hope this holistic approach will set an example for other places battling invasives, especially developing countries, where poverty poses an especially big challenge to conservation. “There's a tremendous chance to see how all the elements of this package can work together,” says Robert Bensted-Smith, director of the Charles Darwin Research Station. It also offers ecologists a chance to see what happens when major players are suddenly plucked out of an ecosystem.

    Prominent biologists are cheering them on. “It certainly appears to be an important project … in view of the biological uniqueness of the Galápagos and what is at stake,” says ecologist Hal Mooney of Stanford University. But some experts question Ecuador's odds of success. “It's not a simple problem, and they need a very long-term commitment by the government. But I don't know if they can necessarily afford it,” says Clifford Smith, a botanist who recently retired from the University of Hawaii.

    Space invaders

    The first invasives arrived with sailors and whalers in the early 1600s—goats, rats, dogs, and cats carried onboard. Settlers in the 1800s added more animals to the mix. As they escaped into the wild, feral populations grew, sometimes popping up on several islands, as the goats did. The damage they inflict is enormous and varied. When pigs and goats destroy native vegetation, for example, they wipe out not only food sources but also foliage that shades temporary rain pools and regulates temperatures crucial to reptile egg development. Pigs, dogs, rats, and cats also feast on lizards and the eggs and hatchlings of imperiled endemic species such as dark-rumped and Galápagos petrels, mangrove finches, tortoises, and snakes. Even more insidious in the long run are such insects as fire ants and weedy plants that can quickly blanket a patch of land and “change the whole basis of the ecosystem,” says Bensted-Smith.

    The archipelago is lucky in one sense: Unlike Hawaii and other islands that have also been inundated with exotic species, it was invaded relatively recently. So although several tortoise species were nearly extinct by the time of Darwin's 1835 visit, 95% of all original species remain. Also working in its favor is that the Galápagos has a relatively small population—16,000 people on just four of the 19 islands—and 97% of the land is park, protected from human activity.

    The park service and research station have been battling invasives since they were set up in the 1960s, occasionally ridding small islands of goats or dogs. Erratic funding meant they weren't always able to finish the job, however. In the last 15 years, moreover, the exotic species problem has worsened as immigrants, attracted by the booming tourist trade and fisheries, flood in from mainland Ecuador. The traffic in tourists and immigrants brings more cargo and fresh produce that can carry stowaway species. The latest newcomers include the cottony cushion scale insect, a notorious pest that's now attacking at least 50 species of endemic plants. Frogs, aided by an unusually wet El Niño year that helped them survive in cargo, showed up in 1997 “for the first time since the islands arose out of the sea millions of years ago,” says Marc Patry of the station. Over the last decade, the number of problem invasives has climbed to at least 60 plants, 15 vertebrates, and six insects.

    Battle lines drawn

    The new war plan has been gathering steam for the last few years, and researchers began testing it in February 2000 with several pilot projects funded with $4 million from Ted Turner's U.N. Foundation. Now, with $18 million on its way from GEF through the U.N. Development Programme, scientists and park managers are concentrating on the most destructive species. Topping the most-wanted list are Isabela Island's goats. Park staff plan a $6 million eradication “on a larger scale than has been attempted anywhere else,” says Bensted-Smith. They're borrowing techniques from New Zealand and Hawaii, which also have a long history of coping with island invaders. Over several weeks, riflemen will shoot the easiest targets and use the Global Positioning System to map goat positions. Sharpshooters in helicopters will then scour the rocky volcanic terrain, and trained dogs and radio-collared “Judas” goats will track down missed animals.

    Also under fire are feral cats on Baltra Island and black rats on several islands. Some cats will be radio collared so scientists can find where their brethren are hiding. The invasives team hopes to rid four islands of two alien birds, rock doves—which live in towns—and smooth-billed anis. These intruders are competing with native birds for food and are suspected of spreading diseases. All will be put down in “as humane a way as possible,” says station vertebrate biologist Howard Snell, using high-powered air guns, traps, and nonpersistent poisons. Team members will come back every few months to check for surviving aliens. Poisons, together with brush clearing, will also be used to eradicate some of the worst invertebrates, such as the little fire ants that swarm over 18 hectares on Marchena Island, pushing out other insects and threatening the nests of masked boobies.

    The project has a heavy research component as well. In beach areas where wild pigs prey on the nests of green sea turtles and giant tortoises, scientists are comparing several approaches to boosting reptile numbers: killing pigs, incubating eggs in cages, and simply protecting nests with netting. And to kill black rats without harming the native rice rats, the researchers have designed an elevated bait station that only black rats seem to climb.

    The invasives team faces an uphill battle in trying to wipe out quinine, a tree native to the Ecuador mainland that has covered the highlands of Santa Cruz with a dense canopy. Eradicating a well-established plant invader would be a first in the fight against exotic plants, says station botanist Alan Tye: “We are trying to find out whether it's feasible.” His team is studying the reproductive rate of quinine and how long its seeds remain viable in soils; depending on the answer, regular weeding and spraying with an experimental brew of herbicides might eventually eliminate all seedlings once the adult plants are gone. “It may take 5 years, it may take 15 years,” Tye says. Far easier, hopes Tye, will be tackling 30 smaller plant invasions of such species as kudzu and water hyacinth.

    Stemming an invasion.

    Park scientists and staff hope to wipe out destructive nonnative species such as black rats (top), iguana-eating cats, and vegetation-destroying goats and pigs.


    The second prong of the government's attack is to put teeth into a 1998 law that limits immigration to the islands and sets up a quarantine and inspection system to keep invasives out (Science, 20 March 1998, p. 1857). But with 36 inspectors still learning the job—the first was hired in 1999—the fortress is far from impregnable. Inspection of travelers' bags on the mainland and in the islands is “turning up an awful lot of stuff” such as stashed fruit and a few guinea pigs, says Snell, “but there's still some getting through.”

    The plan won't work, Smith and others agree, without the participation of the local populace. The scientists of Galápagos know that all too well: Their station has been attacked three times over the past 6 years, most recently last December, by fishers protesting fishing quotas for sea cucumbers and spiny lobsters (Science, 15 December 2000, p. 2059).

    So far, the reaction to the invasives program has been decidedly more positive. There has been some grumbling about the inspection system, but mostly because inspection officers need more “public relations” training, says Michael Bliemsrieder, coordinator for the station's UN Foundation project. The team has also launched a pilot educational effort to encourage Galápagos farmers to grow more produce, which should both cut down on imports of produce that can bring in aliens and keep invasive weeds from spreading beyond little-used fields. One sign of success, Snell says, is that last month villagers cooperated when the invasives team eliminated every rock dove they found in one town on Santa Cruz, except for a few owned by one holdout family.


    Even with local support, the challenges are enormous. Poverty is a constant pressure. And perhaps no country has attempted anything this comprehensive before, says Bensted-Smith, with a program that not only seeks to wipe out major invasive species populations but also to halt human immigration. Then again, no developing country has had so much money for the job before, either. “The added funding should make a huge difference in what Galápagos can manage to do,” says Lloyd Loope, an invasive-plant expert with the U.S. Geological Survey in Hawaii. And to make sure these actions don't die out in a few years for lack of funds, the plan calls for setting up a $15 million endowment for the station's conservation efforts from the GEF grant and other sources.

    And what will happen to the Galápagos ecosystems if the project succeeds? Some should spring back, says Bensted-Smith. Once the cats are gone on Baltra, for instance, snakes, land iguanas, and native doves should flourish. And vegetation on northern Isabela should come back right after the goats are gone. However, on other islands that have both goats and invasive plants, the weeds could take over after goats are removed if land managers don't intervene, perhaps by planting native species. And on Pinta Island, the “solution” has created a new problem. After the goats were removed, vegetation returned—with a vengeance. The island historically was grazed by giant tortoises—but only one is left, Lonesome George. So the biologists are debating whether to put captive-bred tortoises of a different subspecies on the island so it will have enough herbivores.

    Even if the plan is a resounding success, Charles Darwin Station scientists say they don't expect to claim victory. For one, their plan targets only “a small percentage of the invasive species that occur on the Galápagos,” says station biologist Brand Phillips, albeit the most destructive ones. And unlike an oil spill, he says, keeping invasive species in check “is a permanent situation.”


    Fossils With Lessons for Conservation Biology

    1. Erik Stokstad

    BERKELEY, CALIFORNIA—This quadrennial meeting regularly runs the gamut of life's history, but this year's convention, from 26 June to 1 July, featured an emphasis on research relevant to problems of today, such as invasive species and the decline of coral reefs.

    Big Clams Make Good Invaders

    When exotic species invade new territory, trouble may ensue—or it may not. In the sea, some creatures that humans intentionally or unwittingly moved to new homes have turned conqueror, wreaking ecological devastation on marine habitats. Others, however, fail to thrive. Why the difference? Studies haven't found a hallmark of modern marine invaders that can predict their success. At the meeting, however, a trio of paleontologists showed that—for ancient clams, at least—a warning sign of a potentially successful invader may be its size.

    By analyzing millions of years' worth of invasions recorded in the marine fossil record, David Jablonski of the University of Chicago and his colleagues—Kaustuv Roy of the University of California (UC), San Diego, and Jim Valentine of UC Berkeley—found that bulkier clams were more likely than small clams to have expanded their geographic range. That held true both during the ice ages of the Pleistocene and after the Cretaceous-Tertiary mass extinction 65 million years ago. “This suggests we're picking up a general pattern, and that range dynamics of fossil species can provide insights and perhaps predictions for human-mediated biotic interchanges as well,” Jablonski says.

    Jablonski, Roy, and Valentine started by compiling a database of sites where 216 species of marine clams along the California coast were known to have lived during the middle and late Pleistocene. Since then, 26% of the species have changed their range by at least 1 degree of latitude, as the team will describe in an upcoming issue of Ecology Letters. Then the paleontologists looked for ways in which successful invaders differed from other species in their native habitat that didn't spread out.

    Muscling in.

    Large bivalves, such as Mytilus edulis, appear to have an edge in invading new habitat.


    What made the invaders unique, it turned out, was not their life habits—for example, whether they lived on the sea floor or embedded in the sediment—or even reproductive traits such as spawning free-floating larvae. But on average, the invasive taxa were larger than species that didn't expand their turf. “It seems that size really does matter when it comes to geographic range shifts in the Pleistocene,” Jablonski says.

    The same pattern held when the group examined biological invasions that took place in the Gulf Coast after the Cretaceous-Tertiary mass extinction, 65 million years ago. It was also true when they compared the median size of 25 successful recent marine bivalve invaders, compiled by Jim Carlton of Williams College in Williamstown, Massachusetts, to 914 marine bivalve species native to the northeast Pacific shelf. All together, Jablonski says, the three data sets imply that large bivalves are better than small ones at exploiting good habitat as it opens up. A possible reason for their success is that larger bivalves tend to produce more eggs and gametes—and consequently more larvae that can colonize new habitat.

    “This is really a first for the marine realm,” says Ted Grosholz, a marine and invasion biologist at UC Davis. Most studies of modern marine invaders, Grosholz says, have compared the size of invaders to that of other taxa in their new habitats—not the ones they left. That's misleading, because exotic marine species of all sorts tend to get bigger after invading a new ecosystem. And that, in turn, makes it hard to say whether size gave them an advantage to start with. One of the few studies that did look at preinvasion size seems to contradict Jablonski's results. In an unpublished dissertation, the Smithsonian Environmental Research Center's Whitman Miller, then at UC Los Angeles, examined 41 types of East Coast bivalves that had probably been transported across the United States with commercial oysters in the 19th and 20th centuries. Size, he concluded, did not predict success.

    Some paleontologists are also skeptical about the bigger-is-better scenario. “If I look at patterns of taxa I know to have spread, nothing jumps out at me as being size-related,” says Geerat Vermeij, who studies fossil mollusks of the Pacific. “I'd be surprised if there was a pattern.”

    Yet the chance that the pattern may hold is enough to excite conservation biologists. A simple trait to predict the tendency to take over a new ecosystem would be very useful to those attempting to prevent such disasters, Grosholz says.

    Humans to Blame for Coral Loss

    The most important species of coral in the Caribbean have been dying since the 1970s at tremendous rates, and the once-majestic reefs now are overgrown with algae. Among the stresses are warmer ocean temperatures and what appears to be disease, but the suspects also include human activities. Pollution can damage the coral, and overfishing removes an important control on algae.

    Now, a paleontologist has bolstered the evidence that humans bear much of the blame, by showing that Caribbean reefs consisted of nearly the same mix of coral species for much of the past 220,000 years. This long-term baseline shows that the recent devastation is “a profound change that's unprecedented in recent geologic history,” says John Ogden, a marine ecologist at the University of South Florida in St. Petersburg. “It's difficult to lay that at the feet of any cause other than humans.”

    To identify past trends in coral reef ecology, John Pandolfi of the Smithsonian Institution's National Museum of Natural History in Washington, D.C., examined ancient reefs in the Caribbean that were left high and dry when sea level dropped during ice ages and the islands rose. He made numerous 40-meter-long surveys of fossil reefs on San Andres, Curaçao, and Barbados, counting every coral species. The surveys covered the reefs' leeward crest, a shallow environment that is relatively easy to identify in fossil reefs.

    Almost gone.

    Elkhorn coral used to be very common but is now rare in the Caribbean.


    On Barbados, Pandolfi found, the communities had the same composition of coral species at four time intervals: 220,000, 190,000, 125,000, and 104,000 years ago. About 80% was Elkhorn coral (Acropora palmata), a species that has become exceedingly rare in the Caribbean since the early 1970s. Another 15% of the coral community consisted of five other species—always the same ones—in every survey. The community structure was so stable that the 25 rare species that made up the remaining 5% showed up every time, too. “I almost fell off my chair when that came out,” Pandolfi says.

    Reefs have come and gone in the Caribbean during the past 220,000 years, but they have reappeared with the same community structure each time. That demolishes the argument that reef ecology may change all the time, Pandolfi says: “People can't say we don't know what's normal.” And several researchers note that the fact that community structure survived the vicissitudes of climate change in the past—such as swings in sea level, temperature, and carbon dioxide levels—suggests that the current problems really are unnatural.

    The new data extend the persistence of stable community structure from a 3000-year-long record from Belize, published last May in Nature by Richard Aronson, a marine biologist and paleobiologist at the Dauphin Island Sea Lab in Alabama. Whereas that study focused on the reef lagoon, a slightly different environment, it found essentially the same trend of stability—until the massive die-offs of the last few decades. “This tells you that what we're seeing today is not some random fluctuation,” Aronson notes. “That's very powerful ammunition,” he adds, for trying to muster the political will to fix some of the problems that people have caused.


    Can SNPs Deliver on Susceptibility Genes?

    1. Trisha Gura

    Minor differences in people's DNA ought to predict their risk of certain diseases. Is research on so-called SNPs living up to its promise?

    When genetic markers called SNPs burst on the scene several years ago, scientists hailed them as a salvation for weary gene hunters. For decades researchers had been trying to track the genes involved in major killers such as heart disease and cancer. These complex diseases are not caused by a defect in a single gene, as is cystic fibrosis; rather, they arise from the interaction of multiple genes and environmental factors. But these so-called susceptibility genes emit such weak signals that chasing them has frustrated even the most seasoned geneticist.

    As DNA sequencing capabilities soared in the late 1990s, researchers hit upon a new strategy: tracking down risk genes by using single-nucleotide polymorphisms (SNPs), fondly known as “snips.” SNPs are simply locations along the chromosomes where a single base varies among different people. Where some have a guanine in a given string of nucleotides, for instance, others might have a cytosine.

    The premise of SNP mapping is simple: Common diseases such as cancer must be caused in part by common mutations. And the most common mutations in the genome are SNPs, which occur about every 1000 bases. All scientists have to do, the reasoning goes, is find enough SNPs on the human genome map and see whether distinct SNPs occur more often in people with a given disease. If so, these SNPs are either implicated in the disease or near another genetic variation that is a possible culprit. Now, several years, tens of millions of dollars, and millions of SNPs later, how goes the hunt for susceptibility genes?

    Slowly and painstakingly, is the word from the trenches. One obstacle is that only about 15% of candidate SNPs are ready for prime time—the others haven't been characterized well enough for meaningful studies. Another is that researchers have to analyze huge numbers of SNPs in lots of people to find true links between DNA and disease. Finally, SNPs are not distributed as randomly in the genome as clean statistical analyses demand, and they don't always stay put. Thus, trying to scan the entire genome for disease genes using SNPs as signposts is a feat now considered nearly impossible.

    In response, many researchers are focusing their SNP studies, using traditional methods to narrow down suspect chromosomal regions. This strategy has yielded success recently in studies of diabetes and the gastrointestinal ailment Crohn's disease. But even after taming their wild SNP hunts, researchers tell battle stories of endless assays, false leads, and depressingly high costs in time and money.

    The technique may still pan out, many gene hunters say, provided researchers fine-tune the map of SNPs in the human genome; develop cheaper, faster, and more sophisticated methods for analyzing SNPs; and, perhaps, augment the SNP map with yet another type of genome map, called a haplotype map, an idea researchers explored last week (see p. 583).

    “Researchers are going after SNPs because all other approaches have worked horribly,” explains molecular geneticist Pui-Yan Kwok of Washington University in St. Louis. “When you study complex disease traits, you need to have all the tools that you can.”

    What makes a SNP a SNP?

    When the hunt for SNPs began in earnest in 1998, researchers weren't sure how hard it would be. That year 10 pharmaceutical companies, five academic centers, and the Wellcome Trust joined forces to create a public SNP map (Science, 19 December 1997, p. 2047). Their goal was to identify 300,000 SNPs and map 150,000 of them along the human chromosomes by 2001—a goal they met handily. SNPs proved remarkably easy to find. Nearly 3 million putative SNPs have been deposited into the public database; Celera Genomics of Rockville, Maryland, boasts of its own map with at least that many; and other companies such as Incyte Genomics in Palo Alto, California, and GlaxoSmithKline in Greenford, U.K., are stockpiling their own SNPs.

    But determining whether these genetic variations are true SNPs has proved more daunting. To live up to the name “polymorphism,” a single-nucleotide change at a given location must be shared by at least 1% of a specific population, such as African Americans, say, or Pima Indians of the southwestern United States. This somewhat arbitrary distinction between a polymorphism and a mutation—as even rarer oddities are called—helps researchers zero in on those variations most likely to reveal a link to a specific disease in a given population.

    “You have to know whether that SNP is relevant in the population you are interested in,” says Rick Lundberg of Celera. But doing so requires researchers to check hundreds or thousands of DNA samples from the group they are studying to see whether a candidate SNP reaches the 1% benchmark.

    Geneticists call this process genotyping and traditionally do it by sequencing DNA from each sample. Genotyping can be managed for a handful of SNPs, but looking for millions of SNPs in thousands of individuals—at 20 cents to $1 per SNP per DNA sample—is simply too expensive for many studies, says Kwok.
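    The scale of that cost barrier is easy to sketch. As a rough illustration—the function and the study sizes below are hypothetical; only the 20-cents-to-$1 per-assay price comes from the figures above—the arithmetic works out as follows:

```python
# Back-of-the-envelope genotyping cost: total price scales as
# (number of SNPs) x (number of DNA samples) x (price per assay).
def genotyping_cost(n_snps, n_samples, cost_per_assay):
    """Total cost of typing n_snps across n_samples at a per-assay price."""
    return n_snps * n_samples * cost_per_assay

# A focused candidate-region study: 100 SNPs in 2,000 samples,
# at the low end of the quoted 20-cents-to-$1 range.
focused = genotyping_cost(100, 2_000, 0.20)            # $40,000

# A hypothetical genome-wide scan: 1 million SNPs in the same
# 2,000 samples at the same price.
genome_wide = genotyping_cost(1_000_000, 2_000, 0.20)  # $400 million

print(f"Focused study: ${focused:,.0f}")
print(f"Genome-wide scan: ${genome_wide:,.0f}")
```

    Even at the cheapest quoted price, a whole-genome scan costs thousands of times more than a focused study—which is why researchers narrow the search to candidate regions first.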

    Complex disease marker?

    SNPs are single-base differences in DNA.

    To keep costs down, many investigators are concentrating on validating just the 60,000 or so SNPs known to reside within genes, as opposed to those in noncoding regions. To do that, mappers from the original SNP mapping project and others are genotyping a standard panel of DNA samples (maintained at the Coriell Institute for Medical Research in Camden, New Jersey) from anonymous donors from around the world.

    Paleo SNP studies

    One of the pioneers of SNP expeditions, geneticist Graeme Bell of the University of Chicago (UC), has suffered many of the setbacks that plague SNP research today. In the mid-1990s, he and UC colleague Nancy Cox began a search for genes related to adult-onset diabetes in a population of Mexican Americans in Starr County, Texas. Some people in this population run an increased risk of diabetes due to an unknown gene or genes somewhere at the end of chromosome 2. The researchers made their own SNP map, as others had done in smaller, highly focused studies. The UC team sequenced a 1.7-million-base region of DNA from 10 people with diabetes and then looked for SNPs common to that group but not to controls. They turned up a SNP that seemed to fit the bill—a guanine in place of an adenine at position 43.

    But SNP43's location—in an intron, or noncoding region—didn't make sense. So Bell's team tried again … and again. Over the course of another year, they resequenced a 66,000-nucleotide stretch in and around SNP43 and came up with about 180 additional polymorphisms. When they compared the patterns of 98 SNPs in 100 diabetics and 100 controls, the team hit pay dirt. A combination of three SNPs marks a heightened susceptibility to diabetes in this population, the researchers reported in the October 2000 issue of Nature Genetics. The combination also conveys a risk—albeit lower—in two northern European populations, one from Germany and the other from the Bothnia region of Finland.

    When asked what he would do differently if he could perform the study again, Bell answers: “I wouldn't do it.” Indeed, says human geneticist David Altshuler of Harvard Medical School in Boston and the Whitehead Institute for Biomedical Research/MIT Center for Genome Research in Cambridge, Massachusetts, “one of the main things Bell showed us is how tremendously difficult this kind of study can be.”

    Second-generation SNPs

    Diabetes is a complex disease, however, and no one is convinced that these SNPs on chromosome 2 are the only players. Bell's team had to spend so much time pinpointing SNPs that they weren't able to thoroughly blanket the region in search of gene candidates, says Altshuler. The team might have missed some.

    “Graeme is an advertisement for the human genome project,” Altshuler says. “If his lab were starting today, they would go to the Web, click the map, and they would not be wasting time discovering polymorphisms.”

    Spurred by that premise, a team led by Altshuler tried a slightly different approach. The investigators went back to published studies linking other SNPs to diabetes (not including Bell's) and retested 16 reported SNPs in a new group of 333 Scandinavian trios (a child who had diabetes and the child's parents). After genotyping, the investigators found that 13 of the 16 SNPs were common enough in the population to study. Of those, two showed clear associations, increasing or decreasing risk in the same direction as originally reported. The team tested those two in three additional patient populations, including a group of non-Scandinavian Canadians—in all, analyzing the DNA of some 3000 people to find a gene reproducibly associated with a risk of diabetes.

    Only one SNP held up in the target populations: a mutation that changes an amino acid in a hormone receptor that regulates fat metabolism. The SNP association had been reported in 1998, but four out of five subsequent analyses with only a few hundred patients each could not confirm the linkage. The team figured out why: The diabetes-inducing version of the SNP is common—about 85% of the samples carried it—but the polymorphism increased someone's risk of diabetes by only about 25%. Finding such a subtle genetic contribution to diabetes using traditional family-based linkage studies would have required samples from 3 million siblings, says Altshuler. So although it is unwieldy, he says, the SNP approach is an improvement over traditional genetic tools.

    SNP success

    Another team used SNPs successfully without gathering thousands of samples, despite early setbacks. Gilles Thomas of France's biomedical research agency INSERM and Fondation Jean Dausset CEPH in Paris and his colleagues were hunting for genes that might play a role in Crohn's disease, an ailment in which people mount an abnormal inflammatory response to the normal microbes in their gut. In 1996, the team fingered a 20-million-base-long region of chromosome 16 by traditional methods as carrying at least one risk gene. But several rounds of subsequent SNP analyses failed to link Crohn's to two genes that researchers suspected might play a role.

    Thomas's group proceeded to make ever-finer maps of the region. They narrowed it to a 160,000-nucleotide span, using a database search and non-SNP markers, and then turned to SNPs for finer detail. They found 13 candidate SNPs by comparing DNA sequences from Crohn's patients and unaffected people. The team then used the SNPs as markers to analyze 235 families with members harboring the disease.

    The approach worked: Crohn's patients carried at least one of three SNPs more frequently than controls, Thomas's group reported this May in Nature. And the more copies of the bad SNPs they carried, the greater their risk. All three culprits fell in a gene that encodes NOD2—a protein involved in microbe recognition and the body's first line of defense against pathogens. Until now, researchers hadn't demonstrated that the gene contributed to Crohn's disease, so the finding might open new avenues for treating the ailment.

    Tailoring SNP studies

    New technologies may drive down the expense and person-hours necessary for a solid SNP study. To improve speed, for example, some teams are refining methods to perform many DNA amplifications and SNP genotyping reactions simultaneously. Some of these “multiplexing” techniques rely on enzymatic reactions coupled to fluorescent-tagged probes, or mass spectrometry, rather than sequencing a stretch of DNA to find single-nucleotide differences.

    At the analysis end, statisticians are working out more powerful ways to separate disease-related SNPs from the noise of irrelevant genetic variation. One technique, haplotype analysis, is generating almost as much enthusiasm as SNPs did when they were first introduced. A more sophisticated version of SNP analysis, this new approach relies on the fact that certain SNPs travel together on a block of DNA—in other words, they tend to be inherited together. Researchers can thus focus on just a few SNPs per block, cutting down on the cost of looking for links between segments of DNA and disease. Finding haplotypes is a tricky computational problem, but new computer algorithms can find the SNPs that are inherited as a package.

    Several researchers are lobbying for funds to build such a haplotype map, and the National Institutes of Health sponsored a meeting in Washington, D.C., on 18 and 19 July to explore the idea. But no matter how detailed a SNP or haplotype map researchers draw, linking SNPs to disease is likely to work only on a case-by-case basis, depending on which disease, which genes, and which population each researcher chooses to study. As Kwok says, “Some investigators will be very lucky and others will not.”


    SNP-ing Drugs to Size

    1. Trisha Gura

    In addition to looking for single-nucleotide polymorphisms (SNPs) related to disease (see main text), researchers are also embracing SNPs as a way to find the genetic underpinnings of people's responses to drugs. In one of the first success stories in this new field, called pharmacogenetics, researchers have used SNPs to find genes that predict how quickly people can clear certain types of drugs and how susceptible they are to a drug's side effects.

    A set of enzymes known as CYP3A metabolizes about 50% of all common therapeutics as well as natural biochemicals such as estrogen, testosterone, and bile acids. Pharmacologist Erin Schuetz's team at St. Jude Children's Research Hospital in Memphis, Tennessee, began a search for genes that control the production of these enzymes. The researchers hunted down SNPs in the three genes that control CYP3A levels. The investigators then took liver and intestine samples from organ donors, screened the DNA, and looked at how the SNPs matched up with a person's ability to process different drugs, as measured by test tube assays.

    The team discovered two SNPs that quash production of active enzymes; both ultimately force the production of a shortened, nonfunctional protein. People who carry either one of the culprit SNPs metabolize drugs more sluggishly than do people who harbor other versions of the gene, the team reports in the April issue of Nature Genetics. Schuetz suggests that the variation could be used to fine-tune patients' drug dosages.


    Top Young Problem Solvers Vie for Quiet Glory

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    Once a Soviet-bloc monopoly, the International Mathematics Olympiad now draws competitors from 83 countries worldwide—and counting

    FAIRFAX, VIRGINIA—The Patriot Center at George Mason University has never seen an event quite like this one. On a floor usually used for basketball, a regiment of tables stands in formation, each one equipped with a water bottle, a granola bar, and an identifying placard: “Latvia 001” or “Trinidad and Tobago 003.” In the center of each table lies a sealed envelope.

    Outside, the competitors are gathering: 83 teams of six youngsters each. Some are clad in ordinary T-shirts and jeans; others are wearing team jackets. “Springboks! Yes!” chants a group of boys from South Africa, borrowing a cheer from their national rugby team.

    But the other trappings of a sporting event are missing: There are no vendors, no TV cameras, no frenzied spectators. Once the competitors enter the arena, the exuberance ends and a librarylike hush descends. It's so quiet that you can hear the buzzing of the lights and the pacing of the gray-shirted invigilators.

    At 9:00 a.m. on 8 July, the chief invigilator sounds a deafening air horn. The competitors rip open their envelopes, each of which contains three math problems in one, or in some cases two, of 51 different languages. Then all is hushed again. For the next four and a half hours, some of the world's best high school mathematics students will be absorbed in a mental battle for which they have been preparing for months or even years: the 42nd International Mathematics Olympiad (IMO). Then they will do it again tomorrow.

    “The IMO is frightening in its difficulty,” says John Webb, a former team leader (coach) for South Africa and now the secretary of the IMO's advisory committee. “It is the toughest intellectual challenge you can expect any high school student to face.”

    The exam consists of six brain-wracking puzzles from four areas of mathematics: algebra, combinatorics, geometry, and number theory. In theory, they require only high school mathematics to solve. But even a professional mathematician would have trouble solving all six problems, let alone doing it in 9 hours. Half of the students in this gym, the best the world has to offer, will score 10 points or fewer out of a possible 42.


    The Chinese team (shown with leader Yonggao Chen and deputy Shenghong Li) turned in a solid-gold performance at the IMO.


    For those who excel, the olympiad offers a showcase for their talents and a springboard toward a scientific career. Many IMO medalists have gone on to become top researchers in mathematics or related sciences. Five of the last eight winners of the Fields Medal—often considered the mathematical equivalent of the Nobel Prize—made their first mark at the IMO.

    This month's IMO is the first held in the United States in 20 years. In that time the number of participants has doubled, and the budget—$3 million this year, funded largely by contributions from the National Science Foundation and the Akamai Foundation, the charitable arm of a company in Cambridge, Massachusetts, that provides Internet routing services—has ballooned by a factor of 10. (Teams pay their own travel costs to and from the competition, but the host country covers all expenses while they are there.) Yet IMO organizers say there is still room to grow. Even after 42 years, the oldest and largest international academic competition for students of high school age is eager to bring new countries into the mathematical fray.

    By 1:30 p.m. on 9 July, the contest is over. Hundreds of teenagers burst forth from the arena doors, some with smiles, very few with tears. Clustering around their coaches, they turn into ordinary kids again, hungry for lunch.

    Math guru.

    Coach Titu Andreescu thinks schools should offer Olympic-style training.


    For the U.S. team, a miniature media crush awaits—reporters from CBS, U.S. News and World Report, and Technology TV. Half an hour later, as the rest of the team fidgets, 18-year-old Gabriel Carroll of Oakland, California, fields the last question: “So why is mathematics important?”

    “I don't do mathematics because it's important,” Carroll says. “I do it for aesthetic reasons. Math is an art.”

    Off camera, Titu Andreescu nods approvingly. The U.S. team leader doesn't exactly beam—with eyes that look like the dark hollows in an ancient oak tree, he never beams—but he does say, “Good answer, Gabe.”

    The coach

    A former contestant himself, Andreescu coached the Romanian team from 1981 to 1990. Then he emigrated to the United States, where he started teaching at the Illinois Mathematics and Science Academy in Aurora. Within 3 years, he was coaching again—this time, as a deputy team leader for the United States.

    Like a more famous émigré from Romania—Bela Karolyi, the gymnastics coach from the “other” Olympics—Andreescu is known as both a stern taskmaster and a gentle father figure. Every summer, he leads an intensive 4-week training program for the top 30 finishers in the USA Mathematical Olympiad, including this year's team members and next year's hopefuls. Based this year at Georgetown University in Washington, D.C., the program features lectures from visiting mathematicians and constant practice—three exams a week under the same conditions as the IMO. Most top countries at the olympiad have similar training programs, lasting from a weekend to a month or more.

    The trend toward specialized training appalls some mathematicians. The IMO “should be about inventing things,” says Béla Bollobás, who competed for Hungary in the very first olympiad and now teaches at Cambridge University. “If it turns into which of 125 tricks you apply, it's wrong. … It's against the spirit of mathematics.” But Andreescu says his training program does teach real mathematics, and former team members agree. “I think I learned mathematics at a greater rate [at the training camp] than at any other time in my life,” says Bjorn Poonen, a silver medalist in 1985 and now a math professor at the University of California, Berkeley.

    The benefits of such training need not be limited to IMO participants, Andreescu says. “With $1 million, I could establish a network in the U.S. and train teachers to go back to their communities and do what we do here.” Although that kind of expansion may be a long time coming, a smaller but significant one is already under way: Next year, Andreescu's mathematical boot camp will quadruple in size to train about 120 students at new, permanent quarters in Lincoln, Nebraska, thanks to a grant from the Akamai Foundation. “These are exciting times,” Andreescu says. “The sky is the limit.”

    Beyond the Cold War

    In a sense, the mathematics olympiad itself is another Romanian émigré. The first IMO was held in Romania in 1959. For several years, it was strictly a Soviet-bloc affair, as were the similar scientific competitions it inspired (see sidebar on p. 599). In the late 1960s, however, Western European countries started to join in the IMO. In the détente year of 1974, the United States sent its first team to Erfurt, then in East Germany. Members included Eric Lander, now director of the Whitehead Institute Center for Genome Research in Cambridge, Massachusetts.

    “We had a great time hanging out with the Russians,” Lander recalls. The two teams bonded over political jokes and lobbed water balloons out of the institute where the competition was held. “The Russians had a sense that they couldn't really get in trouble. The East Germans weren't going to throw the book at them.”

    Beforehand, some American mathematicians had feared that the United States could not compete with Eastern Europe, with its long tradition of competitions and excellence in math education. “There was a good deal of soul-searching before the United States sent a team,” says the IMO's Webb. “It was viewed as totally unacceptable for the U.S. to be at the bottom.” But the Americans surprised everyone by finishing second behind Russia and ahead of such traditional powerhouses as Hungary. (There are no official team standings, but in reality everyone is aware of them.)

    Casting the net wider

    Since then, the roster of IMO competitors has spread far beyond the old Cold War-era rivals. Teams from less developed countries regularly turn in impressive performances. Their formulas for success vary. Vietnam, a perennial top-10 finisher, has special schools in each province from 10th to 12th grades in which students study advanced mathematics along with their regular courses. Colombia, whose team usually finishes first or second among Latin American countries, traces its success to one person: Maria Falk de Losada, who organized the country's first IMO team in 1981 while serving as a Peace Corps volunteer and is still at it 20 years later.

    In the world map of the IMO, one giant terra incognita remains. This year's competition included only three teams from African countries: Tunisia, Morocco, and South Africa. As secretary of the IMO's advisory committee, Webb is working to bring more countries from what he calls “the birthplace of mathematics” into the fold.

    Webb believes that developing countries need to build from the bottom up—first establishing a sound national infrastructure, then holding a national olympiad, and finally joining the IMO. In this way, although the IMO may serve only a few students, it will indirectly benefit many. “It's the cherry on top of the cake,” Webb says. “Most people don't get a bite of the cherry, but they do get a slice of the cake.”

    Unfortunately, Webb says, not everyone is eager to compete. A team that comes home with no medals, or worse, a string of zeroes, can be a serious embarrassment to its ministry of education. As de Losada says, “countries rule themselves out” rather than risk that kind of humiliation.

    Bollobás suggests organizing a two-tiered olympiad, with easier problems for the second “league.” But according to Adam McBride of the University of Strathclyde, U.K., the organizer of IMO 2002, that idea stands no chance of getting past the team leaders. No country wants to deny its students a chance to compete on an even footing with China, Russia, and the United States.

    A hard act to follow?

    When the last paper was graded—at 2:00 in the morning of 12 July—half of the contestants had won gold, silver, or bronze medals. Four students—Liang Xiao and Zhiqiang Zhang of China, and Gabriel Carroll and Reid Barton (see sidebar on p. 597) of the United States—achieved perfect scores. China dominated the unofficial standings, followed by Russia and the United States (tied for second and third), Bulgaria and Korea (tied for fourth and fifth), and Kazakhstan, India, Ukraine, Taiwan, and Vietnam. All six members of China's team won gold medals. Barton won his fourth gold medal in 4 years, a feat believed to be an IMO first.

    The next day, in the marble halls of the John F. Kennedy Center for the Performing Arts, Andrew Wiles, the man who proved Fermat's Last Theorem, presented gold medals to 39 contestants. University of Nebraska, Lincoln, mathematician Walter Mientka, organizer of this year's IMO, then symbolically passed the Olympic flag—an interlocking circle and infinity symbol—to McBride, next year's host. McBride confidently predicts that the 2002 olympiad in Glasgow will live up to the high standards set by this one. “We need 50 [graders], and we already have 80 names. The problem-selection committee has absolutely superb people. Assuming we can get money, we're quite confident that we'll put on an excellent show.”


    IMO's Golden Boy Makes Perfection Look Easy

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    Even in the rarefied world of mathematics competitions, Reid Barton is one of a kind. This year, the soft-spoken, unassuming 18-year-old senior from Arlington, Massachusetts, became the first person to win four gold medals in 4 years at the International Mathematics Olympiad (IMO). “Since 10th grade he [has written] proofs like a professional mathematician,” says team coach Titu Andreescu. “You could take his solutions to the IMO problems, written in just four and a half hours, and publish them in a book without editing them. It's remarkable.”

    The son of two environmental engineers, Barton began discovering arithmetic, inventing his own notation, at age 4. In first grade he noticed π on a calculator and asked what it meant. The discovery motivated him to learn the Greek alphabet and start taking lessons in classical Greek at age 7. By second grade, he had finished sixth-grade math, and his grade school teachers said there was nothing more they could offer him. In third grade, he studied game theory with a graduate student tutor, devouring Douglas Hofstadter's Gödel, Escher, Bach and Elwyn Berlekamp, John Conway, and Richard Guy's Winning Ways for Your Mathematical Plays. The next year it was on to calculus; at age 10, Barton took the Advanced Placement exam, a U.S. standardized college-placement test, and, of course, scored a perfect 5.


    Reid Barton's aesthetics of excellence go beyond the IMO.


    Meanwhile, he was accelerating in other subjects as well: chemistry at Tufts University in fifth grade, physics in sixth grade, and in subsequent years Swedish, Finnish, French, and Chinese. “When I took chemistry at Tufts in fifth grade, I was too young to realize how strange it was,” he says. “After that I was used to it.” Although he has officially been home-schooled since third grade, he still enjoys taking chamber music and orchestra at Buckingham Browne & Nichols School. He listens only to classical music and is a fan of Chopin.

    For the last 4 years, Barton has had a part-time job in the laboratory of Charles Leiserson, a computer scientist at the Massachusetts Institute of Technology (MIT). At first he worked on Leiserson's chess-playing program, Cilkchess, which is one of the two or three strongest in the world. In 1999, Leiserson began a 2-year leave of absence to work at Akamai Technologies Inc. in Cambridge, Massachusetts, as director of research, and Barton came with him. Leiserson says that what distinguishes Barton is “his excellent sense of aesthetics. His code is clean, well organized, simple, and easy for other people to modify. It's unusual to find that ability in someone so young.”

    The day after the IMO ended, Barton hopped on a plane to join the American team at the International Olympiad in Informatics (IOI) in Finland. Last year, he pulled off a similar “double,” earning a gold medal at the IOI in Beijing and the IMO in Korea. This year, he scored first in the informatics competition, with 580 points out of a possible 600—55 points ahead of his nearest rival.

    This fall Barton will start college at MIT as an undergraduate. He could have gone straight on to graduate school, Leiserson says, but didn't want to miss out on the undergraduate experience. “Reid is a hell of a nice kid,” Leiserson says. “He has a very good chance of avoiding burnout issues, which he will start to face as he matures.”


    Science Olympiads Offer a Variety of Arenas for Overachievers

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    The International Mathematics Olympiad is the oldest and largest scientific olympiad for high school students. But it has distinguished siblings, each with its own quirks. The biology, physics, and chemistry competitions have laboratory components. The International Olympiad in Informatics (IOI, for computer science) is by far the easiest to grade: The students' programs are simply fed into a computer with certain sets of test data, and the results are available within seconds. Some exams, such as those for biology, include multiple-choice questions; others, such as mathematics, do not. The American Chemical Society, which sponsors the U.S. team for the International Chemistry Olympiad, does not let students compete again after they win a gold or silver medal; the corresponding organizations in math, physics, and informatics do allow contestants to go for multiple gold medals.

    View this table:

    The United States has never sent a team to the International Biology Olympiad (IBO), a curious absence for a country that devotes so much money to biological and medical research. That situation may change next year. For 3 years Ravi Vikram Shah, a senior at Harvard University, has directed the Harvard Biological Sciences Olympiad, a small high school competition run entirely by undergraduates without faculty help. Next year, Vikram Shah hopes to extend it to a national level and select a team for the IBO. He paid his own way to Brussels, Belgium, to observe this year's IBO and qualify the United States to send a team next year. He reports that he was most impressed by the attitude of the IBO participants. “Rather than mulling over defeat or flaunting [their] success, the groups reveled in their own accomplishment of getting to this level,” he says. “In fact, the mentors of countries not receiving a medal seemed more disappointed than the students themselves.”

    Vikram Shah is getting some help from Elizabeth Wissner-Gross, the mother of a former IOI participant from Long Island, New York. “My younger son was interested in biology and wanted to know why there wasn't a biology olympiad,” Wissner-Gross says. For the last year, she has been trying to drum up interest and funding for a USA Biology Olympiad. The first part was easy—more than 100 high school biology teachers have contacted her. The second part has been unsuccessful so far, but Wissner-Gross remains optimistic. “We went to several companies at the beginning of the year, when their budgets were already set,” she says. “A number of them said to please come back this fall.” She says that the National Institutes of Health has also expressed interest in funding the competition.

    An Experiment for All Seasons

    1. Jocelyn Kaiser

    In probing fundamental ecology and forcing scientists of all stripes to work together, the NSF's LTER network has proved a smashing success

    Eleven years ago, Mark Harmon drove deep into an Oregon forest, dug some pits, and buried 500 purse-sized mesh bags stuffed with pine needles, sticks, roots, and leaves. He wasn't the only one engaging in this bizarre behavior. At the same time, colleagues were burying similar bundles at 28 sites across the continent. Every so often, somebody exhumes one of these now-redolent bags and ships the contents off to Harmon's lab at Oregon State University in Corvallis.

    An odd variation on the children's game of buried treasure? Not quite. Harmon and his colleagues are ecologists engaged in an experiment that has yielded an important result: Leaf litter in North American forests retains more carbon than anyone had expected. That discovery should lead modelers to tweak estimates of how much carbon dioxide land plants are capable of sopping up, a crucial factor in global warming predictions. Just as important, Harmon says, is the transformation he and his colleagues have undergone: “We proved that people actually would work together at this scale.”

    Mountain of data.

    Research at the Niwot Ridge LTER in Colorado has yielded decades' worth of insights into an alpine ecosystem. In this 1953 photo, scientists collect weather data at the 3743-meter site in the Rockies.


    The litter-decomposition study is a product of a big-science approach to ecology: the Long Term Ecological Research (LTER) network and its grand ambition of understanding ecology's sweep through time and across space. The largest single project in ecology, involving over 1200 scientists and students, the LTER network of 24 sites includes habitats as diverse as a tropical rainforest in Puerto Rico, Antarctica's dry valleys, prairie in the U.S. heartland—even the inner cities of Baltimore and Phoenix. Findings tend to emerge after many years and require untangling short-term perturbations such as hurricanes and pest outbreaks from long-term imprints such as global warming. The LTER network “has moved long-term change of ecosystems front and center on the ecological agenda,” says Stephen Carpenter of the University of Wisconsin, Madison.

    View this table:

    By all accounts, the 21-year-old program has been a big hit, churning out high-impact studies on everything from the effects of global warming on Arctic tundra and Western grasslands to how a glut of nitrogen pollution is altering forest ecosystems (see table). “LTER has already paid enormous dividends,” says Bill Heal, who recently retired from the U.K. Centre for Ecology and Hydrology in Scotland.

    LTER has also been a huge sociology experiment. It has forced scientists to pool data, glean patterns across habitats, and forge ties between the natural and the social sciences. The megacollaboration has produced a “healthy tension” between independent-minded scientists and those who thrive in packs, says Ingrid Burke of Colorado State University in Fort Collins. As a result, the network's full potential has yet to be tapped. Still, LTER is having a lasting effect on the field of ecology, spurring it toward a new, more open culture. As Harmon says, “It's been quite a profound change.”

    All together now

    The grandfather of long-term ecological research sites is Rothamsted Manor in England. Set up in 1843 as an experimental farm, the project has since blossomed into an important research effort on grassland biodiversity (see facing page). It wasn't until the next century that long-term ecology began to catch on in the United States. One influential site was Hubbard Brook in New Hampshire, where in the 1960s researchers began studying how ions moved from rainwater through land to streams in a logged watershed—findings that led to the discovery of acid rain. Then around 1970, the International Biological Program (IBP), an ensemble of studies in 44 countries, including research on five biomes in the United States, produced the first detailed analyses of what controls how nutrients such as carbon and nitrogen move through ecosystems—information still used in global carbon models. The IBP, says forest ecologist Jerry Franklin of the University of Washington, Seattle, offered “incredible lessons in the need to be able to look at responses over a long period of time.”

    Whereas most countries lost enthusiasm for funding such work once the IBP ended, Franklin and the U.S. National Science Foundation's Tom Callahan persuaded NSF top brass in the late 1970s to continue supporting an IBP-like effort. Their argument was that scientists would pursue long-term experiments only if they had long-term funding. NSF held several planning workshops and in 1980 christened a new network of five sites the LTER.

    Going global.

    U.S. ecology sites have inspired sites worldwide, including Taiwan's subtropical Fu-shan Forest.


    Each of the now two dozen sites must collect data on basics such as weather conditions, nitrogen and carbon levels, and vegetation growth measured by clipping and weighing leaves, twigs, and roots. Beyond these core tasks, the sites undertake research tailored to their regions: probing the ecology of hantavirus in the southwestern U.S. desert, for instance, or studying how increased water flow from the Florida Everglades restoration will change the region's food web.

    A 10-year review chaired by Oregon State University ecologist Paul Risser urged the program to involve social scientists to probe more assiduously how humans alter ecosystems as well as search for solutions to environmental problems (Science, 15 October 1993, p. 334). In response, NSF added the Phoenix and Baltimore sites to plumb such questions as how neighborhoods contribute to watershed pollution and which species thrive in cities (Science, 22 October 1999, p. 663). Four coastal sites were also folded into the LTER program to give it more depth.

    Prodded by the Risser panel, the LTERs have stepped up efforts to exploit their massive data troves. Some of the current work is aimed at developing software that can yoke together disparate databases, to eventually allow scientists to type in search terms related to topics such as climate and gather information from all the sites. The network also agreed to a policy in 1997 requiring investigators to post most data sets on the Web within 2 to 3 years. Anyone who wants to use them needs only to notify the owners by e-mail and cite the source. “People had a lot of concerns about whether we could do that” without scientists losing credit for having compiled the data sets or losing out to rivals who interpreted the results more quickly, Burke says. But the system seems to be working. Seeing a paper come out—without her name on it—based partly on her soil data from the Shortgrass Steppe site in Colorado took some getting used to, Burke says, but she's glad to see it help advance the science. “It's more valuable if you let it go,” she says.

    Perhaps the most far-reaching change brought by the Risser panel is a more panoramic view: research projects that span several LTER sites. Such projects often come at a premium, in both cost and good will. For instance, the leaf-rotting experiment, known as LIDET, gave NSF reviewers “sticker shock” because of the hundreds of thousands of dollars necessary to analyze the rotting vegetation, Harmon says. Then, after the experiment got the thumbs-up, he and others had to persuade reluctant colleagues to sign on to multiauthor papers. “You used to work in your own little universe. Now if it stops with your own, you're missing the boat,” Harmon says. “It's a really important new way to think about ecosystems.”

    Lonely hearts club

    The push toward collaborative research has met with some resistance. “There's a very healthy and active tension between top-down and bottom-up: how much NSF should be dictating, and how much freedom” sites should be given, says David Foster of the Harvard Forest LTER, a trailblazing project in historical ecology (see sidebar on p. 626).

    Indeed, not every site leader shares the communal spirit. David Tilman of the University of Minnesota, Twin Cities, admits he shuns most network meetings and leaves collaborative studies to others at his Cedar Creek site. “I personally believe that creativity in science is more of an individual than a group effort,” says Tilman, who points out that the LTERs were conceived to operate independently. Adds William Schlesinger of Duke University, a soil biogeochemist at the Jornada Basin site in New Mexico, “I'm a little old-fashioned. I came up in the ranks [of people who] did everything individually.” But Schlesinger says he applauds efforts by younger scientists at the sites to join forces.

    And some scientists argue that the payoff of studies performed across several sites is overblown. “I'm cautious. I just don't think that's where the science is,” says John Hobbie of the Marine Biological Laboratory in Woods Hole, Massachusetts. He thinks the LTER network's greater value is to bring together scientists “with common interests” to solve problems and generate new ideas for their own sites.

    A more nagging problem, perhaps, is the perception among the broader ecology community that the LTER sites get more than their fair share of attention and funding from NSF. The reality, claims Jim Gosz of the University of New Mexico, Albuquerque, chair of the LTER coordinating committee, is stagnant budgets spread thinly over many sites that scientists must supplement with other grants. Nor do all sites have the right stuff. Reviews led three sites—Okefenokee in Georgia, North Inlet in South Carolina, and Illinois Rivers—to be shut down several years ago. “The sites are being asked to do far too much,” says Washington's Franklin, with requirements ranging from intense data management to precollege education programs. He's hoping that a 20-year review, headed by biologists Kris Krishtalka and Frank Harris and due out by December, will recommend a boost to site budgets.

    Still, the LTERs have developed a cachet that may skew some decisions in their favor. “There's a sense that every new idea that comes along is best done at one of the LTERs”—such as integrating social and ecological science—“when there may actually be a better place,” says Stanford ecologist Pamela Matson, who's not affiliated with any of the sites. She hears concerns that when new funding comes along, “doors are too easily opened” to the LTERs compared to other long-established ecological research stations such as Stanford's own Jasper Ridge. While not questioning the validity of awards won by LTERs, Matson says it's “more of a worry about how things will go in the future.”

    Hands across the water

    Despite its limitations, the LTER system has inspired similar projects—and new collaborations—in 21-and-counting countries. One of the first efforts at global outreach began several years ago, when U.S. and Hungarian researchers joined forces on a study of grassland biodiversity. The pooled data revealed that more arid grasslands harbor fewer plant species, firming up models predicting deleterious effects of global warming on arid plant communities.

    Although one aim is to collect the same basic data, not all these international LTERs are carbon copies of U.S. sites; some, such as those in the United Kingdom and Canada, are focused more on monitoring than on research. Others, including China's and Taiwan's, study problems tailored to national priorities. China secured a $25 million World Bank loan in 1993 to build its 29 LTERs, which focus on helping farmers reduce erosion and boost crop yields. Western Europe has lagged behind, although French ecologists expect 10 sites to join the international network by fall.

    The LTER concept may have taken off elsewhere in the world, but it has fallen short on its home turf in one big way. The Risser report urged other U.S. agencies that run ecology sites to emulate the model. Twenty-four LTERs “are not sufficient to explain continental science,” explains Gosz, who thinks that about 50 sites could make greater inroads into questions such as how ecological processes change with scale, or across several types of lakes. But the advice came with no funding, and an über-network never arose.

    Long-time LTER boosters say such shortcomings should not dim the program's luster. “It was an extraordinarily innovative program when NSF began it, and it has accomplished a tremendous amount of innovative science,” Franklin says. “It's been a good investment scientifically. It hasn't achieved everything people expect. But holy smokes, you can't do it all.”

  17. Where the Grass Never Stops Growing

    1. John Pickrell

    HARPENDEN, U.K.—A 16th-century manor house with manicured lawns and nonchalant tabby cats in the heart of rural England seems an unlikely place for one of the world's longest running experiments. But Rothamsted Manor and its ecological research station are far from ordinary. “There are no other long-term studies of this kind in existence,” says David Tilman, director of the University of Minnesota's Cedar Creek Natural History Area.

    One experiment, in particular, has inspired Tilman and generations of ecologists that came before him. The Park Grass experiment, which analyzes how grassland communities respond to variations in nutrient levels, has been paying scientific dividends since the 19th century. “Any ecologist who has wandered through Park Grass in summer couldn't help but generate a whole series of novel ecological hypotheses,” says Tilman.

    Park Grass was the brainchild of Rothamsted's former owner, John Lawes, who had made a fortune from a patented process for producing phosphate fertilizer. He started nine long-term ecological and agricultural experiments between 1843 and 1856. While one experiment was abandoned in the late 1800s after a severe nematode infestation, the other eight have continued to this day.

    Lawes divided the 2.8-hectare Park Grass plot, originally native grassland, into sections to test the nourishing effects of inorganic fertilizers such as sodium nitrate and ammonium sulfate against those of traditional farmyard manure. These treatments have remained largely unaltered since 1856, although some plots have been further subdivided and limed or have had treatments halted to assess whether they might revert to wild grassland. Realizing that his experiments appealed to scientists as well as farmers, Lawes left a substantial endowment upon his death to keep the work going. “He was an incredibly foresightful man,” says Tilman. The endowment still funds the experiments.

    The fertilizers have had a profound effect on the diversity of the plant communities, says Peter Lutman, a weed ecologist at the Institute of Arable Crops Research at Rothamsted. Plots with heavy applications of nitrogen now have few species, sometimes only one or two grasses. Untreated plots, meanwhile, have maintained 50 to 60 species of grasses, broad-leaved plants, and leguminous plants.

    Park Grass's longevity has taught ecologists some unique lessons. “Plenty of so-called ‘long-term experiments’ are in fact showing transient dynamics,” says Jonathan Silvertown, a plant ecologist at the Open University in Milton Keynes, U.K. Park Grass, on the other hand, has shown that a grassland community, after its nutrient balance is altered, takes up to 60 years to reach equilibrium. Says Silvertown, “This is very frustrating for the average researcher experiencing a normal [career] of 40 years!”

    Deep roots.

    This 1932 map of the Rothamsted estate depicts Park Grass and other experiments that continue to this day.


    Park Grass and other Rothamsted experiments also offer invaluable archives of dried plant material. “It's like a well that ecologists can dip into whenever they want to test a new idea,” such as possible mechanisms of competition or theories on ecological stability, says Tilman. Others use archival samples as controls for studies on the accumulation of pollutants such as PCBs; much of the material was collected before such contaminants even existed.

    Most ecologists agree that Lawes's legacy is likely to yield many more insights—making the continuation of the experiments all the more vital. “Long-term data sets of this kind,” says Tilman, “have a tendency to surprise you.”

  18. Divining a Forest's Future From Its Past

    1. Jocelyn Kaiser

    PETERSHAM, MASSACHUSETTS—Standing amid towering hardwoods and pines in a hushed upland forest, one might assume that this New England landscape looks much as it did when the Pilgrims first set foot on Plymouth Rock nearly 4 centuries ago. But subtle clues give away the epochal change this forest has endured. A crumbling stone wall reveals that this spot was once a farmer's field. And stooping beside a meter-wide pit, ecologist David Foster points to the soil, a uniform brown that could have gotten that way only by plowing. These reminders of long-gone human activity explain why white pines, not oaks, are creaking in the breeze: They're the first tree species in the area to colonize an abandoned field.

    “We've come to realize you need to have a very deep sense of history and long-term processes to understand ecosystem structure and function,” says Foster, who heads the Harvard Forest Long Term Ecological Research (LTER) site, part of a network supported by the National Science Foundation (see main text). The imprint of European settlers is felt in everything from the communities of beavers and moose roaming these woods to how long New England's resurgent forests can continue to sop up carbon dioxide (CO2), the primary villain behind global warming.

    The Harvard Forest is a pioneering site in the emerging discipline of historical ecology. Research here started in 1907, when a group led by forester Richard Fisher “immediately began documenting the way the land was used and the history of natural disturbance,” Foster says. They started with 1830, by which time 80% of the central Massachusetts old-growth forest had been cleared for agriculture. Many farmers, driven out by more profitable operations in the Midwest, then left for jobs in cities. Trees have been returning ever since.

    In a former life.

    Fading signs of human habitation suggest how the Harvard Forest has been transformed since the region's agriculture industry peaked in the mid-1800s.


    The region's rich ecological history has many subplots that fascinate researchers. For instance, in 1938 a hurricane cut a devastating swath, spinning north from Long Island before losing strength in Quebec. Its 167-kilometer-per-hour winds flattened trees across a 100-kilometer-wide stretch of New England, where loggers then carried out the largest timber-salvage operation in U.S. history. The Connecticut and Merrimack rivers ran higher than normal for the next 5 years, reflecting the carnage wrought across the watersheds.

    One of the first projects that Foster and colleagues undertook when their site joined the LTER network in 1988 was to reenact the 1938 hurricane by yanking down 250 trees in a stand of hardwoods. Instead of salvaging the timber, they let sleeping logs lie. To their surprise, toppled trees leafed out for years and seedlings sprouted, so the forest continued to evaporate water through leaves. That meant the forest's ability to return precipitation to the atmosphere didn't change much—unlike the dramatic runoff that occurred 60 years ago. The message, says Foster, is that “if you want to maintain ecosystem processes [after a hurricane], the absolutely best way you can do that is to leave the forest intact.” The experiment, says forest ecologist Jerry Franklin of the University of Washington, Seattle, “made my jaw drop. … It's just incredible how it illustrated that natural disturbance works a lot differently than we thought it did.”

    Other studies are sifting the shards of the Harvard Forest's ecological past for clues to future climate. A cable-draped tower set up in 1989 gauges the forest's CO2 appetite by sniffing the gas wafting in and out, a project that spurred a national network of CO2 towers. Elsewhere in the Harvard Forest, dish-plate-sized white plastic rings embedded in the ground capture CO2 venting from the forest floor. The rings are helping researchers untangle the roles of roots and soil microbes in storing carbon. Heaters beneath some plots simulate how much carbon will be released by 5 degrees Celsius of warming. At first the heated plots leaked more carbon, but now, after 10 years, they are stabilizing. If the study had stopped after 3 years—the typical length of a research grant—“our conclusions would be very different,” says Paul Steudler of the Marine Biological Laboratory in Woods Hole, Massachusetts, perhaps painting an overly dire picture of carbon escaping from soils indefinitely.

    This summer, the 40 Harvard foresters are hoping to complete a decade-long odyssey to state archives and town halls across Massachusetts, where they have been collecting maps from an 1830 survey that detailed land use in every township. These records will help inform which habitats are the highest priorities for conservation. “This is an arcane activity, dredging up these maps,” says Foster. “Yet they become a vibrant part of conservation and ecological sciences.” To a historical ecologist, the past is where many answers to tomorrow's problems lie.

  19. The Partitioning of the Red Sea

    1. Carl Zimmer*
    1. Carl Zimmer is the author of Parasite Rex and At the Water's Edge.

    Israeli and Jordanian researchers have embarked on a novel experiment in marine ecology—and in scientific cooperation

    For the past 2 years, marine scientists have been engaged in a remarkable new experiment in the Red Sea. While tensions in the Middle East have escalated in the wake of renewed clashes between Israelis and Palestinians, researchers from Israel and Jordan have embarked on a long-term collaborative effort to monitor coral reefs that straddle the border between their two countries.

    The Red Sea Marine Peace Park aims to protect a unique but imperiled ecosystem at the northern tip of the Gulf of Aqaba. Home to 140 species of stony corals and nearly 1000 species of fish, the gulf's magnificent coral reefs are a marine biologist's delight. Particularly intriguing is why they exist at all: Nowhere else in the Indian or Pacific oceans do reef-building corals grow so far north of the equator.

    Like the Long Term Ecological Research (LTER) sites (see p. 624), the young marine reserve is probing fundamental processes—such as the ebb and flow of nutrients and changes in coral cover—over many years. “There's a slew of basic science questions that this project will hopefully begin to provide some data on,” says Michael Crosby, a science adviser at the Agency for International Development and the National Oceanic and Atmospheric Administration, two U.S. agencies that are helping fund and organize the park's research.

    Under siege.

    Early data show that a fifth of the Red Sea reserve's corals have died off in the past 2 years.


    But the most remarkable aspect of the research effort is the fact that it exists at all. Although separated by only a few kilometers, the Israeli and Jordanian marine scientists had never been in contact until recently. “I remember many days sitting on the beach and dreaming one day we will go to Aqaba and talk to our colleagues,” says Jonathan Erez of the Interuniversity Institute for Marine Science in Eilat, Israel, referring to his Jordanian counterparts at the Aqaba Marine Science Center. “That dream has become reality.”

    Mending a gulf

    The reserve traces its origins to the peace treaty that Jordan and Israel signed in 1994. Not only did the pact call for a formal end to hostilities, but it also included agreements on measures to help establish a permanent peace. These included opening border crossings, establishing free-trade zones, and cobbling together the marine park from the Eilat Coral Nature Reserve, which runs along 2 kilometers of Israeli coastline, and the Aqaba Marine Park, a 7-kilometer-long strip on the Jordanian side.

    The reefs are in danger. Because the Gulf of Aqaba is isolated from the rest of the Red Sea by the narrow Straits of Tiran, its water circulation is sluggish. That leaves the gulf exquisitely vulnerable to agricultural runoff and sewage, and to silt dumped offshore by dredging and landfill operations on both sides of the border. The reefs are also under direct assault from a thriving coral trade and from hulls and keels that graze the reefs.

    Fortunately for this fragile ecosystem, the Peace Park has proved more than a half-hearted confidence-building measure between the two former enemies. Soon after the park's creation, scientists and officials from Jordan, Israel, and the United States began hashing out how research could improve reef management. This dialogue led to a $2 million, 3-year effort, launched in 1999, that's now amassing a trove of data on the reefs.

    Under the U.S.-funded initiative, Israeli and Jordanian researchers are building a high-resolution map of the entire reef ecosystem in the Peace Park, as well as surveying and tracking the corals, algae, and reef animal populations. At the same time, the scientists are measuring the currents, temperature, and chemistry of the gulf to create detailed circulation models that could help explain how deep-water currents supply the shallow-water reefs with nutrients. The information is being pooled into a single LTER-style database that enables results from different teams to be combined seamlessly for ecosystem-wide analysis and modeling.

    The data are also finding a real-world application: gauging the punishment that humans are inflicting on the reefs. Fish farms, which keep millions of gilthead seabream in offshore cages, have started up on the Israeli side of the gulf over the past 2 years. The farms have released vast quantities of nitrogen, phosphates, and other nutrients into the gulf, claims Erez, whose team has reconstructed nutrient flow from their monitoring in the gulf. “It's like a city of 40,000 people sitting under the water, excreting,” he says. Initially, the nutrients sink into deep waters, but during cold winters, the water near the surface cools enough to allow mixing in the water column. This in turn delivers the fish-farm pollution to the reefs, Erez says. The nutrient influx triggers algal blooms that block sunlight from reaching the corals, slowing growth or killing them.

    Erez believes the fish farms are largely to blame for the fact that a fifth of the park's corals have died off in the past 2 years. “Coral reefs and fish farming are not going together,” he says. “We were lucky to monitor all these changes as they occurred.” A committee appointed by the Israeli government recommended this month that the fish farms slash their release of nitrogen into the gulf by raising their juvenile fish on land and reducing the feed for their marine farms, and that the pollution in the gulf be studied more closely. Officials at Ardag, the company that runs the offshore fish farms, claim that the damage to the reefs has come mainly from other sources of pollution. “The fish farms in the gulf have no bad effect on the environment outside the limited area of the farms itself,” says Ardag's Elisha Turniansky.

    Ecopolitical hot spot.

    The marine reserve is bringing together researchers from former enemies Israel and Jordan.


    Riven, but together

    Although relationships between project scientists have remained good, according to both sides, the region's turmoil has placed some constraints on the way research is conducted. For now, researchers must settle for a form of synchronized science. Last summer, for example, Israeli and Jordanian researchers on the gulf measured such parameters as water temperature and salinity. But rather than piling into a single boat, each team met at the border offshore to calibrate equipment before plying its own waters separately. Only later were the data pooled.

    Despite the cumbersome procedures dictated by politics, the project has done “extremely well,” says Geoffrey O'Sullivan of the Marine Institute in Dublin, who helped review the park's research for the U.S. government. Under the circumstances, he says, “it is a credit to all involved that they have made progress at all.” Adds Crosby: “We're clearly moving in a positive direction. But we're not there quite yet.”

    Also encouraging is the fact that political support for the project is holding strong in both countries—with a commitment, it would appear, to continue the research after the U.S. money runs out in 2003. “I see it as a long-term relationship,” says Bilal Bashir, commissioner for environment, regulation, and enforcement with the Aqaba Special Economic Zone Authority in Jordan.

    If it does live up to its promise, the Peace Park may even serve as a model for other parts of the world, where coastal ecosystems are shared by countries that want to move from conflict to cooperation—from the river deltas between Pakistan and India to the Adriatic coastline of the former Yugoslavia. “There are so many areas that are potentially ripe,” says Crosby, for both political and ecological healing.
