News this Week

Science  06 Oct 2000:
Vol. 290, Issue 5489, pp. 22
  1. SCIENTIFIC COMMUNITY

    DOE Softens Bite of Tighter Security Rules at Labs

    1. David Malakoff

    An increasingly public backlash by scientists angry about recent security crackdowns at Department of Energy (DOE) labs is having an effect. DOE is preparing to streamline some rules and is rolling out an array of initiatives it hopes will boost the spirits of government scientists oppressed by burdensome and confusing regulations.

    The pendulum swing toward scientific openness gained momentum on 13 September, when the government's case against former Los Alamos National Laboratory physicist Wen Ho Lee virtually collapsed; he was freed from jail, where he had spent 9 months, after pleading guilty to a single charge of mishandling government material (Science, 15 September, p. 1851). Then, last week, retired Senator Howard Baker and former Representative Lee Hamilton concluded after a review of a separate incident at Los Alamos that “the current negative climate is incompatible with the performance of good science.” A few days later, presidential science adviser Neal Lane said that the Baker report suggests the government has “overshot the mark” on security. Lane was speaking at a workshop held by a National Academy of Sciences panel* convened to study the issue.

    The new DOE initiatives include simplifying restrictions on foreign visitors to classified facilities and on travel abroad, holding “science days” at DOE's weapons laboratories to shine a positive spotlight on the labs' research, and appointing a new blue-ribbon commission on science and security. These initiatives are seen as an olive branch from Energy Secretary Bill Richardson to a community under heavy fire from politicians since a 1998 congressional report on Chinese espionage argued that the labs had become easy pickings for spies.

    Time delay.

    A 1999 law tightened rules on background checks for lab visitors from sensitive nations, causing long delays that DOE has promised to shorten.

    The tension between science and security, never far from the surface, erupted in March 1999, when The New York Times publicized allegations that Lee had stolen weapons data for China. Amid the ensuing controversy, Congress created the semiautonomous National Nuclear Security Administration (NNSA) within DOE to oversee security at the three weapons laboratories, ordered polygraph tests for thousands of scientists, and imposed a moratorium on foreign visitors to DOE's 15 national labs, of which just six are heavily involved in classified work. DOE also released an array of revised security measures, from new rules on when employees had to report overseas sexual liaisons to the restoration of a badging system that requires foreign nationals to wear bright red tags. In slightly more than 1 year, notes Jonathan Dorfan, director of the Stanford Linear Accelerator Center (SLAC), the number of DOE security directives grew from 11 to more than 50 final and draft regulations.

    “We spent a lot of time and money fighting off ill-conceived, one-size-fits-all security directives,” says Dorfan, who leads one of DOE's Tier 3, or nonclassified, laboratories. Among the proposals were rules that imposed lengthy background checks on foreign visitors from 17 sensitive nations (including Russia, China, and India), required scientists to receive security briefings before and after trips abroad, and called for erecting thick digital walls around SLAC's computer system. Although such precautions may be appropriate at the top-secret weapons labs, says Dorfan, “they would shut down science at SLAC.”

Other lab officials who attended the academy workshop reported similar problems. At DOE's Pacific Northwest National Laboratory in Richland, Washington, which conducts both unclassified and classified research, officials had to cancel meetings abroad and prevent foreign students in a visiting university group from touring the lab, said the lab's Gerald Stokes. The alternative—background checks—would have taken weeks, he noted, and lab officials were leery of issuing waivers that might attract criticism. At Lawrence Livermore National Laboratory in California, scientists are drowning in a sea of often vague rules regarding the handling of nearly a dozen kinds of unclassified but “sensitive” technical information, reported Eileen Vergino of Livermore's Center for Global Security Research, who has surveyed employee reaction to the clampdown.

    Things went from bad to worse last summer at Los Alamos when two computer hard drives filled with weapons secrets went missing for several weeks (Science, 23 June, p. 2109). DOE's handling of the investigation has angered scientists. “It's devolving into a witch hunt, where people are afraid to admit mistakes because of the risk of criminal prosecution,” says one DOE scientist, echoing a common view. An aide to NNSA head John Gordon says that “morale is much worse than we expected.”

    DOE security officials conceded at the academy meeting that they have won few friends in the past year. Lab researchers should “give us a chance to make things work,” pleaded Marshall Cumbs, a DOE official who is shepherding new, streamlined rules for vetting foreign visitors through the bureaucracy. DOE officials are working to reduce the time needed for headquarters to complete background checks (see above), which were previously handled by each lab.

    Also in the works are plans for Gordon and for Mildred Dresselhaus, a Massachusetts Institute of Technology engineer and the new head of the Office of Science, to host “science days” at the Sandia, Los Alamos, and Livermore weapons labs. The events, says an aide, would emphasize Gordon's belief that “security must not jeopardize DOE's science mission.” That goal also underlies the new commission, to be stocked with high-profile independent scientists and security experts appointed by Richardson.

    Such initiatives, however, may not be enough to win back researchers' confidence. Dorfan “is reserving judgment” but says he is “impressed with Gordon's commitment to science.” A Los Alamos scientist quoted in the Baker report urged DOE officials to be guided by common sense rather than political expediency. “If you give nerds absurd guidance, it won't work,” said the anonymous researcher. “Give them sensible and consistent guidance, and they'll do their best.” Scientists are hoping that DOE will meet that standard.

    • *Scientific Communication and National Security, Washington, D.C., 27–28 September.

  2. LABORATORY ANIMALS

    Researchers Fight Plan to Regulate Mice, Birds

    1. David Malakoff

The U.S. government has decided that laboratory mice are animals. This week a federal judge is expected to accept an agreement between the U.S. Department of Agriculture (USDA) and animal-welfare advocates to regulate the use of mice, rats, and birds in scientific research. The pact, which would settle a pending lawsuit, would reverse a nearly 30-year-old policy that exempts 95% of all experimental animals from the federal government's legal definition of “animal.”

    Biomedical research groups are furious. They predict that new rules will drive up animal-care costs, force small colleges to stop using live animals in classes, and spawn more lawsuits. “Settling this suit without taking into account the deep concerns of the research community is a serious mistake,” says Jordan Cohen, president of the Association of American Medical Colleges (AAMC), one of several groups that fought unsuccessfully to sink the deal.

    The controversy stems from a 1972 decision by the USDA's Animal and Plant Health Inspection Service to exempt mice, rats, and birds from Animal Welfare Act (AWA) regulations that spell out everything from annual inspections to cage sizes. That decision came under legal attack in 1992 after a federal judge ruled that USDA's justification for the exemption—that Congress never intended the law to apply to the three animals—was “strained and unlikely.” But an appeals court later threw out the case, ruling that the Humane Society of the United States and other plaintiffs lacked standing to sue since they had not been directly harmed by USDA's action.

    In 1997 the Alternatives Research & Development Foundation (ARDF) of Eden Prairie, Minnesota, sought a way around that ruling by suing on behalf of an undergraduate student who claimed she suffered emotional and aesthetic harm from working with mistreated rats in a college psychology lab. This summer, after a judge ruled that the student had standing, USDA moved to negotiate an out-of-court settlement (Science, 21 July, p. 377).

    The decision alarmed biomedical research advocates, including AAMC, the National Association for Biomedical Research (NABR), and the Federation of American Societies for Experimental Biology. In a blizzard of faxes, they asked Agriculture Secretary Dan Glickman to intercede, arguing that researchers should not be frozen out of negotiations. They also worried that USDA's refusal to challenge the decision could lead to dozens of new animal-rights cases.

    Their pleas fell on deaf ears. On 29 September, USDA officials agreed to “initiate and complete a rulemaking on the regulation of birds, rats, and mice within a reasonable time.” A jubilant John McArdle of the ARDF says that “we wanted to be certain that alternatives are considered for all laboratory animals,” as required by the AWA. He also notes that the process will allow “everyone to have their say.”

    Observers are divided over the practical impact of any new rules, which could take years to finalize. Animal-care experts say large breeders and research universities, which typically already meet widely used voluntary standards and other government requirements, should have little trouble complying with new regulations. But NABR has estimated that compliance could cost researchers $280 million or more, in addition to requiring a bigger USDA budget for enforcement. Smaller teaching institutions won't be able to afford to keep rats in mazes in psychology classes or raise chicks in development courses, the group predicts. Research funds, says Cohen, will be “frittered away on senseless and duplicative bureaucratic hoops that are driven by ideology and not reality.”

    Opponents have vowed to fight any changes. NABR and the Johns Hopkins University in Baltimore, Maryland, have asked to be included in the case so that their voices can be heard. If the judge signs off, however, the groups may seek a statement from Congress on whether it intended to regulate mice, rats, and birds when it passed the AWA. Any final rule could also be challenged in court. “The battle will now be joined in the regulatory and legislative arenas,” says Cohen.

  3. NEUROSCIENCE

    A Possible Target for Better Benzodiazepines

    1. Laura Helmuth

    Valium and other benzodiazepines are among the world's most prescribed drugs. They ease anxiety, relax muscles, control epilepsy, and help people sleep—but so far it's impossible to enjoy just one of the drugs' benefits without the rest of the package. Benzodiazepine users also risk some troubling side effects, such as scrambled memories and clumsiness. The problem, says behavioral neuroscientist David Stephens of the University of Sussex in the United Kingdom, is that the drugs “shut down the brain fairly generally.” Pharmacologists have tried for years to distill the drugs' desirable effects from the bad; now a team of neuroscientists has shown that such a strategy could well succeed.

    Focusing anxiety relief.

    α2 receptors (light, top) ease a mouse's anxiety if it's given Valium. α3 receptors (bottom) don't appear to help.

    CREDIT: JEAN-MARC FRITSCHY

    Benzodiazepines work by amplifying the action of the neurotransmitter GABA, the brain's main “off” switch. They do this by binding to and sensitizing some of the receptors that register GABA's inhibitory signal. Now, in work described on page 131, a group led by Uwe Rudolph and Hanns Möhler at the University of Zürich in Switzerland has pinpointed one particular subtype of the GABA receptor as the source of benzodiazepines' anxiety-reducing powers. Knowing this, says neuroscientist Richard Olsen of the University of California, Los Angeles, “we can and should be able to design drugs that selectively lower anxiety without putting you to sleep or impairing learning and memory.”

An individual GABA receptor is assembled from five protein subunits, which come in a variety of types. Benzodiazepines bind between the α and γ subunits, changing the receptor's shape subtly so that it responds more enthusiastically to GABA. There are six possible α subunits, just four of which allow a receptor to respond to Valium.

“We knew about the diversity of [GABA receptors] in the brain, but it was unclear whether that would correspond to functional diversity,” says Rudolph. The first indication that the α subunit's identity might influence the type of effects the drug produces came last year. In the 21 October 1999 issue of Nature, Rudolph's team reported that when they mutated the gene encoding the α1 subunit in mice, Valium no longer bound to the receptors that should have embraced it. What's more, the mice weren't sedated by the drug, even though it retained its ability to reduce anxiety and relax muscles in the animals. Those results were subsequently confirmed by Ruth McKernan and colleagues at Merck Sharp & Dohme Research Laboratories in the United Kingdom.

    In the current work, Rudolph's team has tracked down the site of Valium's anxiety-reducing action. The researchers focused on receptors composed of the α2 and α3 variants of that protein because of where the receptors are located in the brain—in regions that previous research has shown participate in fear.

    Karin Löw, a former graduate student in Rudolph's lab who is now at the University of California, San Diego, created mice with a point mutation in the gene for the α2 receptor subunit; meanwhile Ruth Keist of the University of Zürich bred mice with mutated α3 genes. “What's really clever about these mice,” says Stephens, who was not involved in the research, is that the mutations affect only the receptor's sensitivity to benzodiazepines.

Indeed, the mice appeared to be perfectly normal. Their GABA receptors continued to respond to the neurotransmitter, and the mice also responded to other anxiolytic drugs that act on non-α-dependent binding sites. Administering Valium produced a different picture, however. In standard lab tests of anxiety, which measure how much an animal explores new wings of a maze or ventures into the light, the animals with α2 mutations—although not those with the α3 mutation—acted as though they hadn't received a dose of the drug. Thus, it appears that only the α2 receptor is needed for Valium's antianxiety effects, even though α3 receptors are also located in fear areas.

Stephens isn't prepared to rule out α3 as a mediator of Valium's anxiolytic effects, however. The tests the Zürich group chose, he points out, measure unconditioned fear. Other sorts of fear, such as learning to avoid a certain place or scent, might be better models of human anxiety and could rely on different receptor subtypes. Even so, Stephens says, drugs that target α2 receptors are a good bet for stopping anxiety. The α2 receptors tend to be clustered around the base of a neuron's axon, a region through which any nerve signal must pass, and so they are well positioned to inhibit neuronal firing. “There's nothing subtle about” α2 receptors, he says; they're an “emergency off switch.”

  4. PLANETARY SCIENCE

    Giant 'Planets' on the Loose in Orion?

    1. Robert Irion

Strange, dim objects in the constellation Orion have left astronomers hunting for words. In a young star cluster perched near Orion's belt, a team at the Astrophysics Institute of the Canary Islands has spotted nearly a score of what appear to be balls of gas several times as massive as the planet Jupiter. Unlike planets, however, the objects—described on page 103 of this issue—are celestial free agents, drifting through the cluster rather than orbiting stars. Astronomers disagree about how they got there and what to call them. Are they “a new kind of giant planet,” as their discoverers would have it? Or are they something else entirely—perhaps unusually puny brown dwarfs?

    Brown dwarfs, objects lighter than 75 times the mass of Jupiter, never grow hot and dense enough to ignite stable furnaces of hydrogen fusion at their cores. Rather, some glow feebly by fusing atoms of deuterium, a heavier and rarer isotope of hydrogen that requires less energy to burn. Below 13 Jupiter masses, dwarfs lack enough heft to sustain even that reaction. Instead, they shed the heat produced by their gravitational contraction until they fade to invisibility.

Recent surveys of other young star clusters indicate that such small objects are common. For example, astronomer Joan Najita of the National Optical Astronomy Observatories in Tucson, Arizona, and her colleagues reported in the 1 October issue of the Astrophysical Journal that the cluster IC 348 in Perseus is richer in low-mass brown dwarfs than in high-mass ones. The team's Hubble Space Telescope observations captured objects as small as 15 Jupiter masses, but Najita's data hint that the trend continues to even smaller sizes. “The process that makes stars shows no sign of pooping out at these lower masses,” she says.

    Rogue Jupiters?

    New observations of a young star cluster in Orion reveal 18 free-floating red objects, including the three shown here, that resemble giant planets with masses five to 15 times that of Jupiter.

    CREDIT: GABRIEL PEREZ/INSTITUTO DE ASTROFISICA DE CANARIAS

    Now, the latest study strengthens that suspicion. The Canary Islands team, led by astronomer Maria Rosa Zapatero Osorio, used long exposures with two Spanish telescopes to find 18 faint, red objects in the Sigma Orionis cluster. The team's analysis suggests that most float freely amid the cluster itself rather than in the background or foreground. Spectra of three of the objects from the Keck telescopes in Hawaii revealed temperatures of 1700 to 2200 kelvin.

Because the Sigma Orionis cluster is so young—just 1 million to 5 million years old, according to other studies—the team calculated that the objects must be very small indeed to have cooled off so quickly from the heat of their formation. By examining three models of how such objects may evolve, the researchers derived a range of five to 15 Jupiter masses for their quarries. “Less massive objects cool down very rapidly and would be too faint for our survey to detect,” says Zapatero Osorio, who is now at the California Institute of Technology in Pasadena.

    Other astronomers find the detections convincing. But they caution that it's not clear whether the stellar evolution models are valid for such tiny objects. “None of the models have been tested at very low masses and very young ages,” says astronomer Gibor Basri of the University of California, Berkeley. If the models are shaky, Zapatero Osorio's objects may not be so lightweight after all.

    Large or small, the cosmic rovers still might not qualify as planets. Most astronomers reserve the “p word” for bodies that form within a planetary system and orbit stars, says theorist Alan Boss of the Carnegie Institution of Washington in Washington, D.C. “They should call them ‘planetary-mass brown dwarfs,’” says Boss, whose calculations show that, depending on circumstances, clouds of molecular hydrogen may either condense into full-fledged stars or fragment to form dwarf objects as small as three Jupiter masses. The same semantic umbrella should cover all such bodies, he maintains.

    However, Basri and theorist Jack Lissauer of NASA's Ames Research Center in Mountain View, California, point out that Boss's way of distinguishing “planets” and “stars” is imprecise, too. Both form in accretion disks, they note. Furthermore, gravitational tugs from other massive planets or stellar interlopers can eject large planets from a system. A few such wanderers might be drifting through Sigma Orionis, Lissauer says. “We won't be able to figure out how every object formed,” he notes. “Classifying planets solely on some useful basis like mass or lack of fusion in their cores has some merit.”

    Najita thinks the name debate is inconsequential compared with the science at hand. “We should use observations of all of these low-mass objects to learn new things about how planets and stars form,” she says. “That's the real strength of these studies.”

  5. ASTROPHYSICS

    Lucky Star Sheds Light On Gamma Ray Burst

    1. Mark Sincell*
    1. Mark Sincell is a science writer in Houston.

    Spill a clear drink on this page, and the drops of liquid will magnify the letters into a jumble of swelling arcs and dots. Astrophysicists think an analogous effect deep in space has helped them glimpse a much harder-to-see letter: the expanding O left by the afterglow of a distant gamma ray burst. A well-placed star, they believe, accidentally acted as a telescope, focusing light from the O so that more of it reached Earth. The resulting “microlensing” may have given scientists their first direct evidence that gamma ray bursts (GRBs) blow fiery bubbles into the cosmos.

    “This is an amazing confirmation of a surprising prediction,” says astronomer Peter Garnavich of the University of Notre Dame in Indiana, part of the team that made the discovery. To prove it, though, Garnavich and colleagues must show that the lensing star exists, and that won't be easy.

    About once a day, a sudden explosion of gamma rays pours down on Earth from a random corner of the universe. Theorists believe the initial explosion powers an expanding spherical shock wave that crashes into the surrounding gas at nearly the speed of light. The collision lights a cosmic fire at the sphere's surface that, if you could see it, would look like a glowing ring. As the wave expands and the fire fades, the afterglow changes “color” from x-ray to optical light to radio wave. Although a worldwide network of telescopes has captured the rapidly fading glow of about 20 bursts in the past 3 years, none has seen the predicted ring of fire. That's no surprise, theorists say; such a ring would be at least 1 million times too small to resolve with the most powerful telescopes.

    Last March, the gamma ray burst GRB000301C changed all that. The burst occurred about 10 billion light-years away, in the constellation Corona Borealis. Routine follow-up observations with radio and optical telescopes caught an unexpected sudden brightening in the afterglow's otherwise smooth fade-out. “Since gamma ray bursts are usually so well behaved, this really stood out,” says radio astronomer Dale Frail of the National Radio Astronomy Observatory in Socorro, New Mexico. Frail and his colleagues speculated that the shock wave brightened when it overtook a lump of interstellar gas.

    Then, a closer look at the compiled radio and optical frequency data by Garnavich and by Kris Stanek of the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts, turned up a surprise: Within the small observational uncertainties, brightness increased evenly at all frequencies. Shock waves colliding with interstellar gas rarely produce such achromatic changes. Instead, Garnavich, Stanek, and CfA astrophysicist Avi Loeb argue in a paper accepted for publication in the Astrophysical Journal Letters, part of the expanding ring must have passed behind a star located exactly between Earth and the ring itself. When that happened, the star's gravity would have focused the light from the ring, bending each frequency by the same amount while increasing the intensity by a factor of 2—precisely as Loeb and his student Rosalba Perna had predicted in a 1998 paper. The duration of the flare-up implies that the width of the ring is between 7% and 20% of its radius, Stanek says.

The data are too sparse to prove unambiguously that microlensing caused the curious brightening of GRB000301C, Frail says, and there is no way to go back and get more. “Gamma ray bursts are a one-shot deal,” he laments. Help may come from the HETE-2 orbiting GRB observatory, scheduled for launch on 7 October, which is expected to spot dozens of new afterglows a year. With more observations, says Princeton astrophysicist Bohdan Paczyński, GRB microlensing may become as well established as so-called galactic microlensing, in which one star brightens achromatically as it passes behind another. “At first, everyone called them candidate microlensing events,” Paczyński says. “But after many more were discovered, they stopped saying ‘candidate.’”

  6. DEPARTMENT OF ENERGY

    Science Wins Out in Latest Budget

    1. David Malakoff

    Science has emerged a winner in this year's struggle over the Department of Energy's (DOE's) budget, erasing fears earlier this summer of severe cuts in several high-profile programs. Congress this week gave the agency's civilian science programs a 13% boost, to $3.2 billion, slightly more than the Administration had requested. The $24 billion bill also includes the extra cash needed to keep the world's largest laser project on track and restores funds that the directors of DOE's national laboratories can award to hand-picked projects. Even a threatened veto by President Clinton due to an unrelated issue is not expected to alter the research numbers.

    Such an upbeat result seemed unlikely just a month ago, after both the House and the Senate approved budgets that would have punched major holes in research programs at DOE, the federal government's third-largest funder of basic research. The House, for instance, had severely cut funding for the Spallation Neutron Source (SNS), a $1.2 billion materials science accelerator that DOE is building at the Oak Ridge National Laboratory in Tennessee. The Senate, in turn, fully funded the Administration's $279 million request for SNS, but only by cutting the budgets for high energy and nuclear physics. The shortfalls prompted an all-out lobbying push by a coalition of university presidents and scientific societies.

That campaign, along with projections of a growing federal budget surplus, convinced legislators to match or exceed the Administration's request in nearly every field. The spallation source received its full request. A thicker wallet also paid for nearly $60 million in academic pork-barrel projects, including $3 million for a new nanotechnology research center at the University of Notre Dame in South Bend, Indiana, and $2 million for a Digital Millennium Center for high-speed computing at Tulane University in New Orleans, Louisiana. There is also $11 million earmarked for research in functional brain imaging at locations to be determined.

    Even the troubled National Ignition Facility (NIF), a $3.8 billion laser under construction at Lawrence Livermore National Laboratory in California, escaped the ax. Responding to revelations of mismanagement and massive cost overruns, the Senate had voted earlier to deny the Administration's request for a $135 million increase this year for the megaproject, which will allow researchers to study nuclear weapons without testing them and to explore the feasibility of fusion energy (Science, 18 August, p. 1126). But the final bill gives NIF $200 million, just short of the $210 million request. Congress did attach some major strings, however, including a directive to commission the National Academy of Sciences to review the project, a requirement that Livermore pay for some of the overrun out of its own operating budget, and a DOE study of scaling back the project. Livermore chief Bruce Tarter said he was “very pleased” that the laser had survived.

    Other lab chiefs were buoyed by the restoration of their internal grant programs, officially known as Laboratory Directed Research and Development (LDRD) funds. Last year, Congress had slashed the decentralized accounts, which many labs use to seed promising research, after concerns that some labs were misusing the money (Science, 5 November 1999, p. 1064). But the new spending bill allows directors once again to channel up to 6% of their core budget to LDRD grants. At Livermore, that means a jump from $35 million to $52 million. “It's a big relief,” says lab spokesperson Susan Houghton.

  7. CLINICAL TRIALS

    Panel Proposes Rules for Research Abroad

    1. Gretchen Vogel

    Before scientists begin a clinical study in the developing world, they should make sure any successful treatment that results will be made available not just to trial participants but to the whole host country, according to a controversial recommendation from a presidential panel. The U.S. National Bioethics Advisory Commission (NBAC) on 29 September released draft guidelines* that would set this high bar for clinical research in foreign countries. NBAC took up the issue last year in response to controversies over placebo-controlled trials involving HIV-infected mothers and international trials of AIDS vaccines.

    Ethicists and researchers have vigorously debated whether researchers from a wealthy country like the United States must provide the same standard of care to research subjects in foreign countries—even if they would otherwise have no access to such treatment. In the best known example, researchers came under attack for conducting studies that proved the effectiveness of a simple and cheap AZT therapy for HIV-infected pregnant women (Science, 27 February 1998, p. 1299). Some women received a placebo, even though AZT is effective and is standard treatment in the United States. The researchers considered this reasonable because the standard course of AZT is too expensive for most poor countries.

    The NBAC panel acknowledges such dilemmas. The report says that researchers and sponsors should provide “established, effective treatment” to all study participants, whether or not it would usually be available. However, the guidelines allow exceptions. For example, if a researcher can explain to an ethical review board why providing treatment would render a study irrelevant to the host country, then a trial without standard therapy might be acceptable. Offering such flexibility is a step in the right direction, says physician and bioethicist Robert Levine of Yale University School of Medicine. The requirement that all studies provide the best known treatment is “out of touch with the realities.”

    The NBAC report would permit some flexibility on informed consent as well. Researchers have complained that a traditional U.S. requirement—that each volunteer must sign a written document that outlines possible risks and benefits—is meaningless in countries where few people read or write. Although individual informed consent is required, the report says, a written document may not be. In places where a request to sign a document may seem threatening, for example, ethics review boards could allow researchers to document verbal consent of some kind.

    The panel's most controversial recommendation involves obligations both before and after a study takes place. Before work begins, the recommendations state, researchers and sponsors should explain how treatments that prove successful will be made available both to research participants and to the country as a whole. Although the principle is laudable, the guideline expects too much of researchers, says Francis Crawley of the European Forum for Good Clinical Practice in Brussels, Belgium. “These are enormously complex discussions,” he says. “Often there is no way [a researcher] can tell how a treatment might be made available.” Bioethicist Norman Fost of the University of Wisconsin, Madison, thinks such a requirement could slow down or prevent important trials. In the developing world, he says, participation in a trial is often a benefit, not a burden. In addition, he says, “there's no moral basis for the claim that individuals who aren't in the study are owed something.”

    NBAC will accept public comments on the draft through 13 November, says executive director Eric Meslin, and it aims to approve final guidelines in December.

  8. ETHICS

    Epidemiologists Wary of Opening Up Their Data

    1. Eliot Marshall

    ATLANTA—Epidemiologists, like journalists, have a tradition of protecting their sources, but now they're confronting demands that they open their files to the public. At the annual meeting of the American College of Epidemiology (ACE) here on 26 September, members debated how to comply with new federal rules that mandate data sharing. Finding a way to do that without jeopardizing subjects' privacy will be hard, many said. Indeed, some researchers warned that privacy concerns are already making it difficult, if not impossible, to recruit participants for some studies. Despite the sometimes heated discussions, Jonathan Samet, chair of epidemiology at Johns Hopkins University in Baltimore, Maryland, reminded the crowd that in reality, “there isn't a debate. There's a law.”

    Samet, ACE's president, was referring to a rule known as the “Shelby amendment,” which passed Congress in 1998. As interpreted by the Office of Management and Budget, it requires federally funded researchers to make available raw data that support results that have been used “by the federal government in developing policy or rules” (Science, 12 February 1999, p. 914). Some researchers say that the best way to deal with potential requests for data is to routinely deposit material in an archive that can be opened to the public when results are published. But this idea was not popular in Atlanta, where, by a show of hands, the audience voted overwhelmingly against it.

    Indeed, defenders of the public archive idea were hard to come by, says debate organizer Gina Etheredge, a clinical epidemiologist at Tulane University in New Orleans. She eventually recruited outside talent: Christine Bachrach, a demographer at the National Institute of Child Health and Human Development in Bethesda, Maryland. Bachrach said in a phone interview that researchers in her field routinely collect data with a plan to make them public. In Atlanta, she argued that public archiving “reinforces open scientific inquiry,” promotes “timely use of information,” encourages people to test new analytical methods and ideas, discourages repetition, and creates data sets that can be used for training. Bachrach warned the epidemiologists that, if someone does make a demand through the Freedom of Information Act, “your life is going to be a lot easier if you have put your data in a public archive.”

Epidemiologist Manning Feinleib of Johns Hopkins wasn't convinced. Feinleib, like Samet, noted that a requirement for more data sharing is a “done deal,” but he explained why many of his peers would prefer to share data in other ways. For one, he said, it's “too much trouble” to label and document every scrap of data that is collected in a way that would make sense to a stranger. Nor do epidemiologists want to give away their intellectual property: “They don't want to be scooped” on their work, he said, particularly if they're just beginning to exploit a database that's taken years to build. They're also leery of getting ensnared in long-running squabbles of minor significance—“a big pain”—which they see as more likely to happen if data are dumped onto the Internet. Mandatory data sharing also may mean that more money and time must be spent on paperwork, he argued. And although Feinleib agreed that public archives would be useful in training Ph.D. candidates, he worried that they might also spawn more secondary analysis and less original field research.

Feinleib also touched on a related subject—the privacy of medical records—that struck a nerve. Although everyone agrees that personal information should be kept anonymous and encrypted, he said, some panels that review and monitor clinical and epidemiological studies are requiring that individuals who join a study be warned that their privacy cannot be guaranteed. Such warnings and other requests for individual approval could become standard soon: The Department of Health and Human Services has published draft regulations, expected in final form next month, that may require individual consent before data from medical files can be screened and rendered anonymous for use in research.

    Imagine, said Etheredge, trying to recruit a subject and saying: “Tell me everything about yourself, and I promise to keep it secret—for a while. And then I'll put it on the Internet.” Vickie Mays, an epidemiologist at the University of California, Los Angeles, who studies HIV and sexual behavior among African-American gay men, warned: “We're really going to regret” adopting mandatory data release and consent forms with scary warnings about privacy loss.

    Observing that information technology is changing everyone's life, Samet said that epidemiologists may experience some especially “painful lessons.”

  9. PHYSICS

    Yoked Photons Break the Light Barrier

    1. Charles Seife

    It seems to flout the laws of physics, but scientists have found a loophole in the rules that govern diffraction. By exploiting entanglement, the quintessential “spooky” phenomenon in quantum mechanics, physicists at the Jet Propulsion Laboratory in Pasadena, California, have come up with a method for drawing tiny features on a microchip that would be impossible according to the classical theory of light. If it proves practical (always a big “if” where quantum effects are concerned), the technique—described in the 25 September issue of Physical Review Letters—could enable chip designers to circumvent the so-called Rayleigh limit, a physical barrier that plagues chip manufacturers much as the sound barrier used to bedevil aerospace engineers. As team member Jonathan Dowling puts it, “Murphy's Law has been repealed, at least in theory.”

    If so, the reprieve comes in the nick of time. Although computer chips are growing ever smaller and more powerful—doubling in speed and halving in cost every 18 months or so—it's getting harder and harder to manufacture those chips. One reason is that most chips are made by photolithography, a process in which the manufacturer shines light through a patterned “mask” onto a chip slathered with a light-sensitive coating called photoresist. The light toughens the coating, allowing the manufacturer to etch away unexposed parts of the chip.

    Sharper image.

    By halving photons' effective wavelengths, quantum entanglement may enable chipmakers to etch much smaller transistors.

    Unfortunately, as microcircuitry grows ever finer, chipmakers run smack into the Rayleigh limit, which dictates that the smallest feature a light beam can write on a chip is half the wavelength of the light. To etch smaller and smaller transistors, manufacturers must resort to shorter and shorter wavelengths—moving from red to blue to ultraviolet to extreme ultraviolet to x-rays. Short wavelengths, however, are both hard to control and tough on chips. The Rayleigh limit ensures that manufacturers pay dearly for smaller transistors.

    To smash the barrier, Dowling and colleagues imagine “entangling” two photons so that when they are shot at a beam splitter from opposite directions, they will always wind up moving together in lockstep. Thus yoked, the photons will remain inseparable until they strike a target—in this case, the chip-in-progress. “This strange quantum-mechanical disembodiment allows them to conspire to arrive at the same atom at the same time,” Dowling says.

If the entangled photons are made out of red light, the optics will bend them just as they bend red light, Dowling says. But when the two photons hit the target together, their combined energy might equal that of a single ultraviolet photon—a particle with a shorter wavelength. “It acts like UV for all intents and purposes,” says Dowling. In fact, if you set up an interferometer, the interference pattern would look like one for ultraviolet photons rather than red ones: The fringes are twice as fine. That should make it possible to etch transistors twice as finely as the Rayleigh limit allows, Dowling says, provided that chipmakers can find a photoresist that still hardens well when it must absorb two of the lower-energy photons at once.
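
    To make the arithmetic concrete, here is a minimal sketch of the halving effect, assuming for illustration a 600-nanometer red source (the article does not specify a wavelength):

    ```python
    # A rough, illustrative sketch (not from the article) of the Rayleigh-limit
    # arithmetic behind entangled-photon lithography. The 600 nm "red" wavelength
    # is an assumed value chosen for illustration only.

    def rayleigh_min_feature(wavelength_nm: float) -> float:
        """Classical diffraction limit: the smallest printable feature is half the wavelength."""
        return wavelength_nm / 2.0

    red_nm = 600.0                                     # assumed red source wavelength
    classical_nm = rayleigh_min_feature(red_nm)        # 300 nm features at best

    # Two entangled red photons arriving together carry the combined energy of one
    # photon with half the wavelength, so the interference fringes are twice as fine.
    effective_nm = red_nm / 2.0                        # pair behaves like 300 nm "UV"
    entangled_nm = rayleigh_min_feature(effective_nm)  # 150 nm features

    print(f"Classical limit with {red_nm:.0f} nm light: {classical_nm:.0f} nm features")
    print(f"Entangled-pair limit: {entangled_nm:.0f} nm features")
    ```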

    “Dowling had a very brilliant idea to use this for lithography,” says Yan-hua Shih, an experimentalist at the University of Maryland, Baltimore County, who is trying to put the scheme into effect. Photoresist is his current stumbling block, he says. “Two-photon absorbing materials are not very sensitive; we're looking for a much better material.”

    Other scientists, however, think it will take more than tinkering to rout Rayleigh. Paul Kwiat, a physicist at Los Alamos National Laboratory in New Mexico, suspects that the difficulty of creating bright beams of entangled light will limit the usefulness of the technique. “But it's good to have people think about these things,” he adds.

  10. POPULATION GENETICS

    Estonia Prepares for National DNA Database

    1. Lone Frank*
    1. Lone Frank is a writer in Copenhagen, Denmark.

    TARTU, ESTONIA—If a nation's most valuable resource is its people, then how precious are its people's genes? For this tiny Baltic state, the opening bid lies somewhere between $100 million and $150 million. That's how much money Estonia expects to raise for a project, set to begin next year, that would compile DNA profiles and health information on 75% of the country's 1.4 million citizens. Officials hope that the database will not only allow researchers to track down disease genes and improve health care but also boost Estonia's budding biotechnology sector.

    Last month the Estonian parliament began considering a bill to regulate the collection of genetic information and database research, and observers predict quick passage. “I expect the final approval before Christmas,” says Minister of Social Affairs Eiki Nestor. The next step would be a $1 million test of the concept on 10,000 volunteers.

    With a pilot project possibly only a few months away, scientists held a meeting here last month for a global audience of colleagues and venture capitalists. “At this point we are interested in ideas and perspectives,” says Jaanus Pikani, chair of the Estonian Genome Foundation (EGF), which began organizing the project last year (Science, 12 November 1999, p. 1262). Prospective investors who attended the meeting think that Estonia should have little trouble finding backers. “Once a legal structure is in place,” says Todd Morrill of Venture Merchant Group in Walnut Creek, California, “success [will depend] on getting the pilot project under way.”

    Estonia hopes to chart a course different from that of a similar, but controversial, Icelandic project (Science, 30 October 1998, p. 859). In January, Reykjavík-based deCODE Genetics received an exclusive license to run Iceland's health-sector database for 12 years, a proprietary lock on the country's health records that allowed deCODE to raise nearly $160 million in a stock offering last July. Critics have complained, however, that the project requires individuals to opt out rather than making the company obtain informed consent ahead of time for health profiles. DeCODE is now negotiating the issue with the Icelandic Medical Association.

Another issue involves the use of the databases. The information in the Icelandic project will be maintained anonymously, meaning that donors will not have access to their own information. By contrast, data and DNA samples in the Estonian project will be identifiable through a coded system. But the database will belong to a nonprofit, state-controlled foundation, and donors must give their informed consent for its use. If donors change their minds and want out of the database, their samples can be destroyed.

    “Valuable lessons from the well-known Icelandic project have been learned,” says University of Montreal law professor Bartha Maria Knoppers, chair of the Human Genome Organization's International Ethics Committee. She believes that the Estonian effort is “the more responsible approach, because it allows citizens to see what research is done with the information they donate. People want to know.” She says it's also important that the project educate the public on the information that is available after health data and DNA are analyzed.

Estonia's decision to make the genetic data accessible to donors themselves means that donors someday may be able to take preventive measures against diseases for which their DNA places them at risk, or receive medical treatments tailored to genetic deficiencies. “The potential for a return for the health care system is substantial,” says Thomas Caskey, CEO of Cogene BioTech Ventures in Houston, Texas, and former president of the Merck Genome Research Institute.

    Participants in the pilot project will fill out extensive health questionnaires and give blood for genotyping. If all goes well, says Pikani, “we can move on to the major effort within a year.” Genotyping would be done on 1 million people over 5 years, using single-nucleotide polymorphism markers, and medical information would be updated continually.

That prospect makes disease-gene hunters salivate. Topping the most wanted list are genes that contribute to major killers such as diabetes, heart disease, and Alzheimer's. The large sample size may allow scientists to home in on genes involved in diseases triggered by the interplay of genetics and the environment, says Max Baur, a medical statistician at Bonn University. “The success of the Estonian project,” he says, “hinges on high-quality medical data, good genotyping, and good data handling.”

Companies are now talking with the Estonian government about how they might support—and profit from—a government-owned venture. Although the database will belong to a nonprofit foundation formed by the EGF and the Ministry of Health, a for-profit subsidiary will have the right to sell access and information. “I believe there will be interest, but investors will have to know exactly what they [are] buy[ing] into,” says Greg Lennon of Veragene, a genomics consulting firm in Maryland. Estonia plans to strike limited, nonexclusive deals that would, for example, allow a company to mine the database for clues to one or more diseases and receive intellectual property rights to treatments derived from its research. Access to data would be given to public researchers at no cost or for a handling fee. Estonian Prime Minister Mart Laar, a big supporter, is aiming for a balance between private and public involvement. “The important thing is that ownership [be] properly regulated,” he says.

    Estonian scientists predict that the project will be a boon to the country's embryonic biotech industry. Much of the massive genotyping, for instance, must be done in Estonia, as DNA samples cannot be exported without a special license from the Ministry of Health and Social Affairs. At the same time, “the database will boost research and make it possible for local scientists to attract funds from outside,” says Andres Metspalu, head of molecular diagnostics at Tartu University, who came up with the initial concept.

    The idea of collecting and storing a nation's health data and genetic profiles has sparked surprisingly little discussion in Estonia. “It has been hard to have a debate with no real opposition around,” observes science editor Tiit Kändler of The Estonian Daily. One reason, he and others suggest, is the country's eagerness to become a player in the world economy and to wipe away all vestiges of its Soviet past. There is much talk that the country needs to find its own Nokia, the phenomenally successful telecommunications giant that lifted the Finnish economy. Says Andrus Kaldalu of Tartu's Asper Biotech, “There is a feeling that biotech could be it.”

  11. PLANT GENOMICS

    Arabidopsis Comes of Age

    1. Elizabeth Pennisi

    Yeast, bacteria, and fruit fly geneticists helped bring molecular biology to plant research; now the complete genome sequence of a “model” plant is almost in hand

Fresh out of graduate school in 1978, Shauna and Chris Somerville escaped to Paris to figure out what to do with their lives. Both had newly minted degrees in hand: Shauna, a master's in plant breeding; Chris, a Ph.D. in molecular biology, specializing in the lab bacterium Escherichia coli. Luxuriating in their temporary jobless state, they spent their mornings at the Institut Pierre and Marie Curie library and their afternoons at various cafés. There, over coffee and croissants, Shauna persuaded her husband to think about applying the tools of molecular biology to plants.

    Two papers helped make her case. One, a 1977 report, chronicled the successful use of a plasmid from a soil bacterium, Agrobacterium tumefaciens, to ferry genes into plants. The other was a 1975 review extolling the virtues of studying a common, mustardlike weed that was easy to grow in the lab. They decided to wed their two professions and use this weed, Arabidopsis, to tackle plant science's more perplexing problems, such as flower development and photorespiration. Their dream, recalls Chris, was to “create a new field,” centered on Arabidopsis, that would bring molecular biology techniques to bear on plants.

    As reports from a September meeting in Miami* make clear, some 20 years later, those café conversations have paid off. The Somervilles—with ample help from researchers ranging from Drosophila geneticists to plant physiologists—have firmly established the lowly weed Arabidopsis as the model organism for plant biology. A remarkably open and collegial community has blossomed around this weed. And, if all goes as planned, by the end of the year Arabidopsis will take its place among the pantheon of select research organisms that have had their full genomes deciphered. The genomes of the first three eukaryotes—the yeast Saccharomyces cerevisiae, the nematode Caenorhabditis elegans, and the fruit fly Drosophila melanogaster—have provided glimpses into such fundamental secrets as the signals that control development and the proteins that make up the cell's molecular motors, to name a few. But among these model organisms, only Arabidopsis offers the promise of elucidating properties unique to plants, such as how they make seeds and flowers, or how they make efficient use of sunlight. Already, there have been surprises: As described in Miami, plants are more similar, genetically, to humans than to yeast, bacteria, or nematodes. And that's just the beginning, says Athanasios Theologis, a plant biologist at the U.S. Department of Agriculture (USDA) Plant Gene Expression Center at the University of California, Berkeley: “Arabidopsis was a tremendous thing to happen in plant biology.”

    Scientists in 20 countries, part of a close-knit international consortium known as the Arabidopsis Genome Initiative, are now working around the clock to complete the sequencing by the end of the year. It will be tight, concedes Joseph Ecker, a plant biologist at the Salk Institute for Biological Studies in La Jolla, California, as the consortium is bent on perfection. Indeed, he adds, the first two chromosomes published set a new standard of excellence within the genome community. “There's just a huge dedication to get this as perfect as possible,” agrees Rod Wing, a plant molecular biologist at Clemson University in South Carolina.

    Modest beginnings

    Not even the Somervilles could have envisioned deciphering the weed's entire genome when they returned to the United States later in 1978 eager to begin Arabidopsis studies. The field was wide open. True, scattered references to experiments with Arabidopsis had appeared in the scientific literature since the late 1800s, and as early as 1943, the German biologist Friedrich Laibach had suggested it might be a useful model genetic organism. A few pioneers had followed that advice, among them George Rédei, a young Hungarian geneticist who brought Arabidopsis seeds with him to the University of Missouri, Columbia, in 1957. There he used radiation to create mutants with stunted growth, which he then hoped to use to track down the genes underlying these changes.

    Thrilled with the ease of working with Arabidopsis, which completes its life cycle in under 6 weeks and grows just a few inches overall, Rédei, too, started writing about the plant's potential. But his colleagues were nonplussed, to say the least. In 1969, “the NSF [National Science Foundation] program director informed me I had to quit Arabidopsis if I wanted to continue to get support,” recalls Rédei, now retired. But Rédei, who had inherited Nobel laureate Barbara McClintock's old lab on the Columbia campus, fortunately had some of McClintock's stubbornness. “I still continued working [on it]; I knew it was worth doing it.”

    Rédei finally found an ally in Maarten Koornneef, who became interested in Arabidopsis as a Ph.D. student at Wageningen Agricultural University in the Netherlands in 1976. Like Rédei, Koornneef began isolating mutants. But he went a step further, using the altered traits and their inheritance patterns to build a detailed genetic map—a key resource for finding specific genes. Even so, for the next several years, Koornneef, his professors, a few other European groups, and Rédei had the field mostly to themselves. They were “the guys that kept the torch burning [for Arabidopsis] during the dark ages,” notes Robert Pruitt, a molecular geneticist at Purdue University in West Lafayette, Indiana. Indeed, it was Rédei's work that the Somervilles came across during their Parisian hiatus.

    Chris and Shauna Somerville began their work at the University of Illinois, heading there from Paris to work with William Ogren, a plant physiologist. Fortunately, they came with fellowships that enabled the couple to finance their Arabidopsis experiments without having to get grants from NSF.

    Never shy, the couple boldly tackled one of the trickier questions in plant biology, namely, where a plant's respired carbon comes from, a research question that was plagued with conflicting and controversial results. While other plant biologists were using physiological or biochemical approaches to sort it out, the Somervilles used genetics, with impressive results. They made Arabidopsis mutants that required higher than normal levels of carbon dioxide in the air to survive. Then, by monitoring where carbon built up in the mutants, they identified once and for all the major source of carbon dioxide. With their 1979 Nature paper, “we solved the contentious problem of the day,” says Chris Somerville. The Somervilles, who are now at the Carnegie Institution of Washington's Department of Plant Biology at Stanford University, then went on to tackle other plant issues—such as the role of plant hormones in regulating growth or responses to infection.

    Grudging respectability

    Over the next 5 years, Arabidopsis—and the Somervilles' work—began to catch the attention of other researchers, many of whom were geneticists studying other organisms. One was Elliot Meyerowitz, a Drosophila molecular biologist at the California Institute of Technology (Caltech) in Pasadena. Curious about plant biology, Meyerowitz was intrigued when one of his students, Pruitt, said he wanted to work on Arabidopsis. “In the beginning, [Meyerowitz] said ‘Only work on it on the weekends,’” Pruitt recalls. “He saw it more as a distraction from what I was supposed to be doing.” But interest within the lab quickly grew, and by 1985, Pruitt and Leslie Leutwiler were working full-time on Arabidopsis. In a critical discovery, the pair figured out that the Arabidopsis genome was significantly smaller than tobacco's—another favorite of plant biologists. At the time, they estimated that Arabidopsis had 70 million to 150 million base pairs (it is now estimated to be 117 million), compared to 1.6 billion in tobacco; what's more, they found it had relatively little repetitive DNA, the bane of gene hunters.

The next year, Caren Chang in Meyerowitz's group cloned the first Arabidopsis gene. “We found out that plant DNA was just like any other DNA,” Meyerowitz recalls, and thus the molecular techniques that had been developed for other organisms would also work on plants. With these findings, Arabidopsis's stature as a model organism rose, at least at Caltech. By 1989, Meyerowitz's group had dropped Drosophila altogether in favor of this tiny plant.

    Meanwhile, plant biologists were watching with envy as geneticists working on such model organisms as the fruit fly and the worm reported a series of stunning advances. Complex signaling pathways were worked out in the fly, for instance, while the fate of every cell was traced throughout the life cycle of the nematode. Much of this progress, an increasing number of plant biologists realized, came from a concentrated push on one particular species. At a plant genetics meeting in 1985, “it was clear for the first time that we needed a plant that we could do genetics and molecular biology [on],” Koornneef recalls. But the community needed to pick one.

    Over the next few years, while Shauna took on Arabidopsis to study plant defenses against pathogens, her husband and Meyerowitz took up the case for Arabidopsis, arguing in favor of its benefits over petunia or tomato—two other plants in contention to be a model system. Not only was its genome suitable and its generation time measured in weeks, not months, but Arabidopsis also produced thousands of seeds and was self-fertilizing, making genetic studies easier to do. They proved able salesmen and mentors, churning out dozens of young Arabidopsis researchers and converting many more. In 1984, just 36 scientific papers dealt with Arabidopsis; by 1989 the number shot up to 216 and the number of Arabidopsis labs climbed from a half-dozen to more than 100.

    A new community began to emerge around Arabidopsis. It included not only plant biologists who previously worked on crops but also researchers from labs working on yeast, bacteria, and fruit flies who opted instead to work on this “seed machine,” as McClintock liked to call it. Noted yeast geneticists Gerald Fink at the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, and Ron Davis of Stanford were among the heavyweights to shift into the plant world, bringing with them their expertise in yeast molecular genetics. At Oklahoma State University in Stillwater, David Meinke developed a series of mutants that greatly expanded Arabidopsis's utility for developmental studies.

    From the outset, the Arabidopsis community adopted a culture of openness, sharing data and creating community resources. An international news group, established in the pre-Internet era, “took off like gang-busters,” Chris Somerville recalls. By 1992, it had some 1500 subscribers who regularly exchanged information on new mutants and protocols. Two new seed banks were established in 1991—one in Columbus, Ohio, and another in Nottingham, U.K.—relieving the pressure on Albert Kranz, who for decades had been distributing seeds to individual researchers from J. W. Goethe University in Frankfurt, Germany.

    Later, in 1993, when Chris Somerville and colleagues began isolating 30,000 expressed sequence tags, bits of genes that help researchers track down the complete gene, they put the data in GenBank, the public database, before they even looked at them. The effort, funded with a $1 million grant from NSF, would help the entire community, Somerville says, and he didn't think his group should have an unfair advantage. That tradition continues today. Even before the sequencing is complete, the collaborators are developing the next set of joint resources, such as mutants and microarrays necessary for interpreting genomic data. These, along with the genome data, are “leading to the democratization of plant biology,” says Jeffrey Dangl, a plant pathologist at the University of North Carolina, Chapel Hill. “Any student in any university with a good idea can access the resources to make discoveries.”

    The genome era

    In the late 1980s, Arabidopsis was swept up in the excitement and controversy of the fledgling Human Genome Project. By 1990, the Human Genome Project was well under way, with the express goal of sequencing not only the human genome but also those of several model organisms to enable comparative analyses. (It turns out it is far easier to figure out what a gene does in a fly, say, than in a person—and both, fortunately, share many of the same genes.) James Watson, who then ran the Human Genome Center at the National Institutes of Health, thought that Arabidopsis should be sequenced—but he didn't think NIH should foot the bill. So he turned to Mary Clutter of NSF. Intrigued, the agency sponsored a series of workshops that in 1990 resulted in a long-range plan for an international Arabidopsis genome project. The goal was to determine the function of every gene.

    But how that should be done proved contentious. Some, like Meyerowitz, wanted to take a functional approach and track down each gene through mutant screens and classical genetics approaches. Others wanted to follow in the footsteps of the Human Genome Project and sequence the entire genome, predict where the genes were, and then analyze their functions. There were concerns, too, that sequencing would be too taxing for plant science's meager budgets. The functional approach won out for the time being. In its 1991 budget, NSF asked for an additional $5 million to begin those studies. Also that year, the European Commission gave 3 million ECUs to 29 Arabidopsis labs. Their goal was to build another type of map of the genome, a physical map consisting of pieces of DNA, and develop new methods for identifying genes. “For [the] first 6 years, genome sequencing was not part of [the program],” says Salk's Ecker.

    But enthusiasm for sequencing was growing. Over beers one evening at Cold Spring Harbor Laboratory (CSHL) in New York, Ecker, along with Arabidopsis experts Rob Martienssen of CSHL, Joanne Chory of Salk, USDA's Theologis, and Meyerowitz, decided to push for all-out sequencing. They took their case to Watson, who helped them set up meetings the following year that brought together the funding agencies, sequencing experts, and Arabidopsis researchers. “The [1994] meetings were very crucial,” says Theologis. At one, he recalls, yeast biologist Mark Johnston of Washington University in St. Louis told the group that sequencing was the only way to find all the genes; classical genetics just wouldn't cut it. They also heard that sequencing was getting faster and cheaper. “That convinced everyone that [the sequencing] had to be done,” Theologis recalls.

    A European consortium got off the ground first, funded by the European Union. Building on their experience in mapping chromosomes 4 and 5, 10 labs joined forces in 1995 to sequence chromosome 4. In the United States, NSF, USDA, and the Department of Energy set up a sequencing program that awarded its first grants in 1996. Also that year, the international Arabidopsis Genome Initiative was formed, providing a formal name for the informal collaboration that has worked so well over the years. The group meets yearly and talks far more often to keep the effort running smoothly.

    By late 1999, the European Arabidopsis Genome Sequencing Consortium, with a little help from U.S. labs, finished chromosome 4. Right with them was The Institute for Genomic Research, which sequenced chromosome 2 in record time, thanks to assistance from Celera, a new genomics company down the road in Rockville, Maryland. When the two chromosomes were published back to back in the 16 December 1999 issue of Nature, they raised the bar for sequencing excellence. In contrast to, say, the Drosophila genome, which has lots of missing bits of sequence, or the human genome, in which just gene-rich parts of chromosomes are sequenced, the Arabidopsis group sequenced both entire chromosomes, including some of the notoriously tricky centromere region, which most sequencers have purposefully avoided in other organisms because of its low gene content and technical difficulty. “We did Arabidopsis right,” says W. Richard McCombie, who has been sequencing Arabidopsis at Cold Spring Harbor Laboratory. “It's very satisfying to biologists. And having that information from a very manipulable organism is exceedingly valuable.”

    Biology transformed

    Getting a first glimpse of those sequence data converted any remaining naysayers. “As soon as we started putting the data out there, people got it,” says Chris Somerville. Suddenly, the time needed to actually track down a gene dropped from years to months, even weeks. Moreover, the genome has transformed plant biology, says Mary Lou Guerinot, a plant biologist at Dartmouth College in Hanover, New Hampshire. “People aren't thinking about [finding] one gene at a time; they are thinking about gene families, and if there's a [match to the gene] in other organisms,” she notes.

    Already, biologists are concentrating on the 8000 or so newly discovered genes that have no known function or matches in other genomes. North Carolina's Dangl, for example, has identified 10 genes that seem to be activated by infection or insect attack—genes that had eluded discovery by his and a dozen other labs for the past 15 years. Now, thanks to ongoing efforts by Ecker and others to create a mutant strain of Arabidopsis for each gene, it may soon be possible to work out the function of each of those genes.

    A few biologists are also trying to trace the evolution of plant biochemical pathways by looking for the genes involved. By scanning the existing genomic data, for instance, Anthony Bleecker of the University of Wisconsin, Madison, and his colleagues discovered that an Arabidopsis ethylene receptor gene is very similar to one in the cyanobacterium Synechocystis, a simple photosynthesizing microbe. Because this microbe is thought to be close kin to the microbe that was incorporated into ancient plant cells and became the chloroplast, the similarity suggests that the Arabidopsis gene migrated from the chloroplast genome into the Arabidopsis genome and subsequently evolved a new role in controlling plant development. “It gives us an idea about how plants got to be so different [from animals],” Meyerowitz explains. “All the sudden we have insight into how things came to be.”

    Not bad for a plant that few biologists had heard of a decade ago. Given these and other insights, plant biologists are clamoring for the sequence—but finishing it by the end of the year, when a paper is due at Nature and a party is planned at Cold Spring Harbor Lab, is going to be a stretch. “We're close, but we're not done,” concedes Michael Bevan, a plant biochemist at the John Innes Centre in Norwich, U.K. Meanwhile, in Japan, the United States, and Europe, computers and sequencing machines are running full tilt, polishing off the last pieces and filling in gaps. At the same time, two of the labs are feverishly scanning the entire genome, predicting and classifying genes. It's a frantic pace, involving scores of people—quite a contrast to the life the Somervilles were enjoying when they first envisioned this plant's future 23 years ago in Paris.

    • *The 12th International Genome Sequencing and Analysis Conference, 12–15 September, Miami Beach, Florida.

  12. ECOSYSTEM RECOVERY

    Ghost Towns Tell Tales of Ecological Boom and Bust

    1. Kathryn Brown*
    1. Kathryn Brown is a writer in Alexandria, Virginia.

    Scarred desert ecosystems are recovering at the sites of some abandoned boomtowns, but are slow to heal at others. Soil age appears to be the key difference

    The greatest mining scandal in U.S. history struck Greenwater, California, in 1906. A miner discovered copper in Death Valley's Black Mountains—and within months, this boomtown exploded with saloons, shacks, and 2000 treasure hunters. “Greenwater is destined to be the richest mineral-producing city on the whole globe,” declared one flyer.

    By 1908, the town was empty.

    Greenwater turned out to be more bust than boom. Only flecks of low-grade copper graced the mountains, while swindlers made a fortune selling shares in nonexistent mines. But today, Greenwater is finally paying off—with ecological data. Robert Webb, a hydrologist with the U.S. Geological Survey (USGS) in Tucson, Arizona, is studying eight Mojave Desert ghost towns, including Greenwater, to see how their long-abandoned soils have recovered from the pounding they took during the mining boom. Sprawling creosote, burrobush, and desert sage are reclaiming these boomtown sites, turning them into outdoor labs for desert recovery—and challenging popular ecology theory along the way.

    For at least 25 years, ecologists have described the parched desert landscape as “easily scarred and slowly healed.” Conventional wisdom once held that disturbed desert soils never fully recover. Without question, the landmarks are stark—in parts of the central Mojave, for instance, tank tracks left from World War II training exercises still gouge parts of the desert floor. But, popular wisdom aside, Webb and other scientists are also witnessing desert recovery.

    In Webb's study, disturbed soils at some ghost town sites have rebounded, defiantly blooming in less than a century—while others sport much sparser plant populations. By comparing the towns' geology and history, Webb is uncovering features, such as soil age, that determine rates of plant recovery. The emerging data could help the managers of the Mojave's six military bases and four national park areas make ecologically sound management decisions. “All landscape is not created equal,” Webb says. “If you have to route a road, if you have to run tanks over it, let's use our scientific sense about how to do it.”

    Resource managers say insight from the Mojave is badly needed. “There's a lot of pressure on park managers to make quick decisions, and we're starving to find immediate answers to problems,” says vegetation specialist Jane Rodgers of Joshua Tree National Park, which spans parts of the Mojave and Colorado deserts. “If we're looking at where to put a new campground, for instance, this information could be really useful.” The ghost towns also offer a rare look at desert recovery in action, adds Joseph McAuliffe, research director at the Desert Botanical Garden in Phoenix, Arizona. In more forgiving locations, abandoned towns have often been grazed by cattle or transformed by developers. Not so these sites in Death Valley, which sit empty and baking in the harsh sun. “This is one unique set of records,” McAuliffe says.

    Ghostly images

    The smallest of four North American deserts, the Mojave is a mercurial place. A mix of shadowy mountains and flat basins, the desert sits mostly in California, although its borders creep into Nevada and Arizona. Low valleys in the Mojave are very dry, as surrounding mountains block the moist Pacific air. The mountains of Death Valley, for instance, exist on just cups of rain, about 200 millimeters a year. But this is no wasteland. Creosote bushes cloak much of the rocky valley in evergreen, and in spring, native primrose, daisies, and sagebrush can splash color everywhere.

    There's another prominent feature on the landscape: people. The early mining boom set a frenetic pace for desert settlement, and it hasn't slowed yet. Today, the Mojave hosts military bases such as Twentynine Palms, China Lake, and Fort Irwin, where troops simulate desert warfare. Wilderness areas include Death Valley National Park and the Mojave National Preserve, among others. And in between, gas pipelines, off-road vehicles, and homeowners stake their claims on the land. “This caldron of the desert is full of values and conflicts,” remarks geographer Leonard Gaydos of NASA's Ames Research Center in Moffett Field, California.

    As the Mojave has grown more crowded, conservation has become a hot-button issue. “At one time, these military installations were in the middle of nowhere,” notes Ruth Sparks, a resource manager at Fort Irwin. “Now there are people living right next door, and they want to know how we're going to protect the environment.” The National Park Service, too, has come under pressure to preserve ecosystems inside park boundaries (Science, 7 April, p. 34). In the Mojave, the question is how. What, exactly, scars the desert landscape? And just how long do plant populations need to recover from a new road or a rolling tank?

    Webb of USGS has long wondered the same thing. In the 1970s, while studying off-road vehicle tracks in the Mojave, he decided to hunt for evidence of desert comebacks at old Mojave sites that had been heavily used and then abandoned. He found a 1961 paper in Science by Philip Wells (8 September 1961, p. 670), who had mapped the plants poking out of the ground in a former boomtown, Wahmonie, Nevada. Wells reported that a markedly new crop of pioneer plants, from bunch grass to weedy shrubs, dominated the Nevada site just decades after the boomtown's bustle had faded away. Inspired, Webb searched for other ghost town sites to study.

    He didn't have to look far. On the west side of Death Valley, the Panamint Range hosted a successful boomtown, Skidoo, from 1906 to 1917. At lower elevations on the east side, the Black Mountains housed Greenwater and other towns. Little trace of the towns remained by the 1970s, but old plat maps and town photos revealed their layout and vegetation during the boom years, 7 decades earlier. Webb surveyed the plants and studied the soil on the town sites.

    Sure enough, the disturbed sites looked starkly different than before the mining boom. A new crop of colonizers—fast-breeding, short-lived plants, like cheesebush—had begun moving in. The greenery was patchy, and long-lived plants like creosote, which had dominated the predisturbance landscape, were sparse. But what really interested Webb were differences between the sites. The Skidoo sites showed the stirrings of plant succession, with short-lived invaders giving way to a mix of species, including their longer lived peers. By comparison, the Greenwater site had barely begun recovering. Some 70 years after the town had collapsed, only the first colonizing plants peeked aboveground. Maybe Skidoo was coming back. Maybe Greenwater was damaged for good. But why?

    Soil strategy

    After 2 decades—and three return trips to the ghost towns—Webb has found some answers. Today, Skidoo has recovered almost completely. Spiny hop-sage shrub dominates the site, with ephedra and other pre-disturbance shrubs recolonizing their old haunt. From the air, Webb says, you'd never know a boomtown once sat at Skidoo. Greenwater hasn't fared so well. The total amount of green cover at the site is approaching that of its preboomtown days, but the mix of species is very different—almost completely short-lived, early invaders like cheesebush and Cooper's goldenbush. The changes in vegetation still mark a clear outline of the old Greenwater town site.

    All these differences, Webb says, come down to one basic factor: soil age. Greenwater's soil is at least 100,000 years old, according to calculations of soil formation rates in the region. By contrast, Skidoo's soil is less than 4000 years old, Webb says. Skidoo was built on a debris flow—a sludgy avalanche of sediment that had raced down the rainy mountainside and then settled in the dry canyon below. When the boomtown hit, this relatively fresh soil had little structure and an early cast of colonizing plants. “Young surfaces have vegetation in an early successional stage,” explains Webb. “And it's much quicker to get back to that stage than to the types of vegetation on older surfaces.”

    Based on the abandoned boomtowns, Webb estimates that after moderate disturbance, young Mojave soils can grow a new layer of perennial cover in an average of 80 years—but the original mix of plant species can take several millennia to return, if they return at all. That may seem slow, but it's comparable to recovery timing in other ecosystems, like old-growth forests leveled by logging, Webb says.

    The relatively quick recolonization of young desert soils makes sense to ecologist Jayne Belnap of USGS in Moab, Utah. “In the desert, the geomorphic age of soil means a whole lot,” Belnap says. For one, she notes, young desert soils have a coarse, gravelly texture that soaks up and holds water better than aged soils, which have grown fine or become clogged with clay. McAuliffe—who has found a similar recolonization of young soils at a military range in the Sonoran Desert—adds that the driest soils pack silt and clay beneath a stony top. “When tanks roll over these desert pavements, the scars essentially last forever,” McAuliffe says.

    Such vastly different soils suggest that the desert is not one simple, delicate landscape. “The desert is a patchwork of soils, and recovery may largely revolve around the geomorphic age of the surface below you,” Webb says. For resource managers in the Mojave, he adds, the conclusion is simple: If you have to disturb the desert, go for young soil sites.

    That strategy will come in handy, predicts Jennifer Haley, a resource manager at Lake Mead National Recreation Area. “We often have to choose which areas to develop,” Haley says, “and this information could help us make better choices.” And the effects of bad decisions linger, she adds. Fifty years ago, while building the Hoover Dam, crews carved a two-track road at Lake Mead. “Even now, we only have one or two plants at the site,” Haley says. “They just don't come back.” Still, turning such basic soil science into decisions about desert use is tough, cautions Dawn Lawson, a Navy natural resources specialist who works on Mojave military bases. “Can you develop a training scenario that meets military standards, incorporates federal regulations, and takes into account the age of the soil?” Lawson asks. “It's possible—but not always easy.”

    Now, with the Mojave ghost stories in hand, scientists hope to study other desert soils to see whether the plant recovery trends hold up. “We've got to make the leap from site-specific findings to the broader landscape, with models, to make the information more practical,” says Belnap. Jeff Herrick, a soil scientist at the U.S. Department of Agriculture's Jornada Experimental Range in Las Cruces, New Mexico, adds that it's important to find out whether the most fragile, or “easily scarred,” landscapes are also the slowest to heal. Resistance to damage and resilience afterward are different challenges for a patch of soil.

    Researchers also need to define “recovery” carefully, Herrick adds. There's a big difference between an emerging cover of green and the return of something close to the original mix of plant species. Webb agrees. As more people move into the desert, he says, a better understanding of desert ecosystems is urgently needed. The ghosts of the past may have a lot to teach us.

  13. ECOSYSTEM RECOVERY

    Ecologists Spar Over Population Counts of Threatened Desert Tortoise

    1. Kathryn Brown

    With a size that rivals Scotland's, you'd think the Mojave Desert had space for everyone. But this arid neighborhood packs a lot of personalities—with six military bases, four national park areas, sprawling suburbanites, and off-road riders, there's considerable jostling for elbow room. One unassuming neighbor, in particular, has been the focus of many real estate fights: the desert tortoise. And the shell-slinging is starting again.

    In 1990, the Mojave tortoise was listed as a “threatened” species under the U.S. Endangered Species Act, a designation that has put it in the way of church leaders, military managers, and utility officials. All have had plans delayed—or blocked—in developing desert areas that might interfere with the animal or its habitat. Now, a controversial study in the October issue of Conservation Biology claims the tortoise's threatened status is based on inaccurate population counts. In the 1991–96 study, ecologist Jerome Freilich, now at the Nature Conservancy in Lander, Wyoming, and volunteers did weekly spring surveys of captured and marked tortoises in a 2.6-square-kilometer plot inside the Mojave's Joshua Tree National Park. Using a mathematical model to estimate tortoise populations, they found hugely varying tortoise numbers—between 43 and 97 animals—in different years, depending on rainfall. But the average estimate came out to 67 adult tortoises at the site—three times more than reported in a 1978 survey of the same spot.
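
    The article does not say which model Freilich used, but the logic of mark-recapture estimation can be illustrated with the simplest such model, the Lincoln-Petersen estimator: the fraction of marked animals in a later sample is assumed to mirror the fraction of the whole population that has already been counted. The Python sketch below is a hypothetical illustration only; the function and the numbers are invented and are not Freilich's data.

        # Hypothetical illustration of a two-pass Lincoln-Petersen mark-recapture
        # estimate; not necessarily the model used in the Conservation Biology study.

        def lincoln_petersen(marked_first: int, caught_second: int, recaptured: int) -> float:
            """Estimate population size from a two-pass mark-recapture survey."""
            if recaptured == 0:
                raise ValueError("No recaptures; the estimate is undefined.")
            # The marked fraction of the second sample is assumed to match the
            # marked fraction of the whole population.
            return marked_first * caught_second / recaptured

        # Invented counts: 30 tortoises marked, 25 caught on a later pass, 11 recaptures.
        print(round(lincoln_petersen(30, 25, 11)))  # roughly 68 animals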

    Tortoises, Freilich explains, hide in underground burrows during drought years. And that may have biased the studies used by the U.S. Fish and Wildlife Service (FWS) to list the tortoise, because they were conducted during a severe drought in the 1980s. “In some of their data points, there were hardly any tortoises,” Freilich says. “Were they truly gone, or were they just waiting for that nice rain in 1991? And did Fish and Wildlife go the extra distance to check this data?”

    Faster than a fleeting desert sunset, Freilich's study is drawing hostile fire. “Drought is part of the desert,” responds Kristin Berry, an ecologist with the U.S. Geological Survey who did some of the primary studies Freilich casts doubt on. “Some sampling is done in wet years, some in dry, and the differences are taken into account.” She adds that Freilich's paper contains “major flaws.” Among them: Freilich incorrectly described the 1978 survey at the Joshua site as a standard 60-day survey; in fact, says Berry, the survey was done on just 25 days, scattered from spring to fall.

    Freilich stops short of saying the tortoise's “threatened” status is wrong. Instead, he argues that a newer survey method, called distance sampling, should be used to verify the animal's numbers in the Mojave. With this technique, researchers map tortoises to given points and then estimate populations based on the animals' distribution. But Berry cautions that distance sampling is costly and may work well only in areas with high tortoise numbers.

    The tortoise flap doesn't surprise FWS biologist Ray Bransfield. “If you're in the tortoise community, there's a lot of sniping about this or that person's data,” says Bransfield. “Sometimes it's hard to be in the same room with these researchers.” At least the tortoises are agreeable, he adds: “These animals have a lot of charisma.”

  14. PROFILE

    For 'Father' of Abortion Drug, Vindication at Last

    1. Michael Balter

    Étienne-Émile Baulieu has campaigned tirelessly for approval of RU-486; now, the United States is about to join Europe and Asia in making the drug available

    BICÊTRE, FRANCE—Étienne-Émile Baulieu's office at the Bicêtre Hospital outside Paris is cluttered with keepsakes from a career in sex-hormone research. On a shelf above a row of industrial-sized drums of synthetic steroids is a photo of Baulieu with Gregory Pincus, his mentor and the creator of the first oral contraceptive. And propped against a few books nearby is an unframed 1989 commendation from the San Francisco Board of Supervisors, which reads: “For his discovery of the contragestion pill, RU-486, and with the fond hope that it may very soon be available to women throughout the world and particularly in the United States and San Francisco.” Those hopes have finally been realized. For Baulieu, the father of the “French abortion pill,” last week's approval of RU-486 by the U.S. Food and Drug Administration (FDA) vindicates his long crusade to make the drug available to women worldwide. “Enfin!” he says: At last!

    Since 1988, when RU-486 (mifepristone) first came on the market in France, it has been prescribed to hundreds of thousands of women in Europe and millions in Asia, particularly in China. It is now used in 22% of abortions in France, although overall abortion rates have declined in the last decade. Baulieu argues that the FDA's decision will bring benefits to American women, including “the right to privacy” in decisions about abortion. He also sees it as a boon to scientists. “This decision will open up research into other possible uses of RU-486 and similar drugs,” Baulieu says, citing recent work showing that the drug might be useful in easing difficult births and effective against endometriosis and even some kinds of cancer.

    Baulieu, still rugged and square-jawed at the age of 73, sprawls in his easy chair by a window overlooking the hospital's tree-lined grounds. Every few minutes his cell phone rings, bringing congratulations from well-wishers or details of an upcoming trip to the United States, where he looks forward to basking in the glow of victory. For more than a decade, Baulieu has campaigned—in books, articles, and interviews—for women to have the opportunity to use RU-486. Stanford University chemist Carl Djerassi, who synthesized the oral contraceptive, sees a similarity between Baulieu and Pincus: “[Baulieu] is very charismatic and a fantastic champion and entrepreneur, in the best sense of the word.” Baulieu always knew that the drug would become enmeshed in the politics of abortion, Djerassi says, and his unique contribution was his “bulldoggish perseverance.”

    The habit of crusading for what he thinks is just came early to Baulieu. As a teenager in Grenoble during World War II, he was active in the resistance against the Nazi occupation of France and eventually joined the French Communist Party, although he quit the party after the Soviet invasion of Hungary in 1956. After the war he went into medicine, but he soon veered toward basic research and in 1959 discovered a soluble form of the adrenal gland hormone DHEA. Pincus's work at Boston University on the oral estrogen contraceptive, Baulieu says, inspired him to pursue research on human reproduction. This work paid off in the late 1960s, when he isolated a key receptor for progesterone, a hormone that prepares the lining of the uterus for implantation of the developing embryo. The next step was to find a way to block it—RU-486 does just that.

    Although Baulieu is often called the inventor of RU-486, he is quick to credit his collaborators, especially the chemists at the French pharmaceutical company Roussel-Uclaf who synthesized the drug in the early 1980s. Baulieu developed the drug while working as a consultant to the company, but he says he hasn't received a single franc in royalties from the sale of RU-486, nor does he expect to. Asked if he regrets not having claimed a stake in the profits, he smiles wistfully and says, “I don't know what I would do if it were now.” But he has received other forms of remuneration: RU-486 has made him one of France's most famous and respected scientists, and in 1989 he won the Albert Lasker Prize, often a forerunner to the Nobel Prize, for the discovery of RU-486.

    Baulieu's praise for the scientists at Roussel-Uclaf does not extend to the company's higher echelons. He says he's still bitter because the company often knuckled under to protests by antiabortion groups. Indeed, the French government had to force Roussel-Uclaf to register RU-486 for use in France. And he criticizes Roussel-Uclaf's decision in 1988 not to try to market RU-486 in the United States. “That was such foolishness, and lack of courage, and even lack of commercial perspicacity,” Baulieu says. In the end, Roussel-Uclaf gave away the rights to RU-486. Its former president, Edouard Sakiz, who encouraged Baulieu to develop the drug, now owns the patent in France, while the company donated the U.S. patent to the nonprofit Population Council in 1994. The next year a group of investors formed the New York-based Danco Group to market and distribute mifepristone and to find a manufacturer for the drug. “No major pharmaceutical company has wanted to touch RU-486,” Baulieu says.

    In recent years Baulieu has turned his attention back to DHEA. A series of studies in his own and other labs suggests that oral doses of DHEA—concentrations of which diminish sharply as we age—might counteract effects of aging, such as degradation of the bones and skin or loss of sexual function. These findings have led to DHEA's popularity in the United States as an over-the-counter health supplement. Concerned about this trend, Baulieu and several colleagues argued in a paper in the 11 April Proceedings of the National Academy of Sciences that only rigorous clinical trials can determine whether DHEA should be marketed as an antiaging drug.

    Although Baulieu's direct involvement in RU-486 ended years ago, bitter opposition to the drug's use in the United States has made him a devil to antiabortion groups, while pro-choice advocates view him as something of an angel. “This drug,” he says, “is both admired and despised at the same time.”

  15. Computers Aid Vaccine Design

    1. Michael Hagmann
    1. Michael Hagmann is a writer based in Zürich, Switzerland.

    Immunologists are using computers to help them identify the antigen fragments that trigger immune responses and might thus provide vaccines for diseases ranging from malaria to cancer

    Immunology may seem like the archetypal “wet” science, but, like most areas of research these days, it is rapidly going silicon. Immunologists are using computers in pursuit of one of the holy grails of their field: how to predict which snippets of a protein, out of hundreds or sometimes even thousands of possible candidates, are most likely to spark a strong immune response. The goal is to produce a vaccine based on tiny fragments of an antigen, such as a piece of protein from an invading pathogen or a cancer cell. Such “subunit vaccines,” immunologists hope, will be less dangerous than those using whole pathogens or cancer cells.

    The strategy is based on years of evidence showing that the immune system chops foreign proteins into small peptides, each containing about 10 amino acids. Some of these peptides, or epitopes, are then displayed on “antigen-presenting cells,” where they are held in place by proteins of the major histocompatibility complex (MHC). These MHC proteins with their antigen fragments act like red flags, drawing the attention of the immune system's T cells, which either kill cells carrying the antigens outright or orchestrate an attack by various other immune players.

    Only a few peptide fragments have the right shape to fit into the MHC proteins, however, and the challenge is to figure out which ones will lock into place. Finding these epitopes, says immunologist Vladimir Brusic of Kent Ridge Digital Labs in Singapore, is “one of the bottlenecks in vaccine research.” To complicate matters even further, MHC proteins are highly diverse, with hundreds of slightly different forms that can vary in their peptide-binding preferences. And because an individual inherits only one variant from each parent, a vaccine based on just one peptide may not work for everyone. Immunologists have now enlisted computers to help them home in on the most promising pieces in this complex three-dimensional (3D) puzzle.

    The idea is to develop computer algorithms that use information immunologists have already gathered about how peptides fit into an MHC protein to predict which epitopes in an untested protein will bind. The best candidates would then be tested in cell cultures or animals. “The alternative, a brute-force approach where you test every single peptide [in a given protein], is obviously not possible for financial and logistical reasons,” says James Kazura, a malaria immunologist at Case Western Reserve University School of Medicine in Cleveland who is working with Brusic to find peptides for a badly needed malaria vaccine.

    More and more vaccine researchers are beginning to apply these predictive computer tools to their favorite diseases. They've already used them to identify peptides that trigger immune responses to the malaria parasite and to various types of cancer cells, as well as those that spark allergic reactions. And some of these peptides are beginning to move into clinical trials as antimalaria or anticancer vaccines.

    What makes for a good epitope?

    Computational approaches to epitope prediction vary, but they all have their roots in the late 1980s, when researchers got the first 3D structures of MHC proteins. These revealed that the molecules have clefts on their surfaces that hold the antigenic peptides—a discovery that opened the door to uncovering the rules of engagement between MHC proteins and peptides. Although “the early pioneers got the details wrong,” Brusic says, “their observations spurred a lot of interest in the field.”

    Over the following years, Hans-Georg Rammensee of the University of Tübingen in Germany compiled a huge database of the sequences of natural epitopes that he washed off different MHC class I proteins, which trigger the killer T cells of the immune system. Alessandro Sette, now at Epimmune, a biotech start-up in San Diego, California, did the same for epitopes for MHC class II proteins, which activate T helper cells, the master coordinators of the immune system.

    These databases provided the fodder for the first computer models to predict epitopes. Based on the sequences of natural epitopes, Rammensee, Sette, and others in their wake extracted so-called binding motifs for individual MHC molecules. Akin to a signature, such a motif consists of two or three key amino acids within a peptide that are thought to be essential for binding to a particular MHC molecule. But although these motif-based approaches pick up the typical mainstream binders for each MHC molecule, Brusic says they may miss oddball peptides that don't conform to the consensus motifs.
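
    To make the motif idea concrete, the sketch below scans a protein for 9-amino-acid peptides that carry required anchor residues at fixed positions, which is one simple reading of what a binding motif is. The motif and the protein sequence are invented for illustration and are not drawn from Rammensee's or Sette's databases.

        # Minimal sketch of motif-based epitope scanning. The anchor positions and
        # allowed residues below are invented, not a real MHC binding motif.

        EXAMPLE_MOTIF = {1: {"L", "M"}, 8: {"V", "L", "I"}}  # 0-based anchor positions

        def matches_motif(peptide: str, motif: dict) -> bool:
            """True if every anchor position holds one of the allowed residues."""
            return all(peptide[pos] in allowed for pos, allowed in motif.items())

        def scan_protein(sequence: str, motif: dict, length: int = 9) -> list:
            """Slide a window along the protein and keep peptides that fit the motif."""
            return [sequence[i:i + length]
                    for i in range(len(sequence) - length + 1)
                    if matches_motif(sequence[i:i + length], motif)]

        # Toy protein sequence, invented for illustration.
        print(scan_protein("MALWTRLLPLLALLALWGPDPAAA", EXAMPLE_MOTIF))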

    To get around this potential shortcoming, Brusic and his colleagues have developed a computer algorithm based on artificial neural networks, an assembly of interconnected computer units akin to a simplified version of the human brain. When fed the sequences of numerous peptides known to bind to a given MHC protein, the neural net “learns” the features a peptide must have to be a good binder. Brusic then runs the sequence of an untested protein through the network, and the computer algorithm picks out the peptides that are likely to be bound by that particular MHC protein. Because neural networks do not simply recognize binding motifs, they “are particularly good at finding atypical epitopes,” says Brusic.
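
    A minimal sketch of this kind of predictor, under the assumption of one common setup, is shown below: fixed-length peptides are one-hot encoded, and a small feed-forward network is trained on examples labeled as binders or nonbinders for a single MHC variant. It is not Brusic's actual software or architecture, and the training peptides and labels are invented.

        # Minimal sketch: train a small neural network to score 9-mer peptides.
        # All peptides and labels below are invented training data.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

        def one_hot(peptide: str) -> np.ndarray:
            """Encode a 9-mer as a flat 9 x 20 binary vector."""
            vec = np.zeros((len(peptide), len(AMINO_ACIDS)))
            for i, aa in enumerate(peptide):
                vec[i, AMINO_ACIDS.index(aa)] = 1.0
            return vec.ravel()

        train_peptides = ["KLLEPVLLL", "YMLDLQPET", "GILGFVFTL",
                          "AAAAAAAAA", "GGGGGGGGG", "PPPPPPPPP"]
        labels = [1, 1, 1, 0, 0, 0]  # 1 = binds the (hypothetical) MHC variant

        X = np.array([one_hot(p) for p in train_peptides])
        net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        net.fit(X, labels)

        # Score an untested peptide: estimated probability that it binds.
        print(net.predict_proba([one_hot("FLKEPVHGV")])[0, 1])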

    By comparing his predictions for several proteins with experimental binding data for all the possible peptide fragments they contain, Brusic found that his neural nets can correctly predict up to 80% of the binding peptides. What's more, he adds, “they can improve as more data become available.” A big disadvantage, though, is that training requires binding data from hundreds of peptides for each MHC variant, and the nets can't immediately identify a peptide that will bind to many different MHC variants—a must for a widely applicable vaccine.

    TEPITOPE, an algorithm developed by Jürgen Hammer of Hoffmann-La Roche Inc. in Nutley, New Jersey, and his colleagues, does have that capability. The binding cleft of an MHC class II protein contains nine so-called “pockets,” minute indentations at the bottom of the cleft, each of which holds in place one of the binding peptide's amino acids. Comparing the sequences of different MHC variants, Hammer realized that their clefts contain only a limited number of these pockets in various combinations. In a painstaking series of some 10,000 binding experiments, published in the June 1999 issue of Nature Biotechnology, the Hammer team determined the affinity of 35 of these binding pockets for each of the 20 naturally occurring amino acids.

    The researchers then use the TEPITOPE algorithm, which assembles these pocket affinities into binding matrices, to calculate the binding strengths of all the peptides in a protein antigen for the 51 most common MHC variants, each made up of a different combination of the 35 pockets tested. Those peptides calculated to have the greatest binding strengths would be presumed to be the immune-stimulating epitopes. This matrix-based method also allows the identification of so-called “promiscuous” peptides that bind to several different MHC variants and are therefore the most promising candidates for a vaccine.
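
    As a rough illustration of matrix-based scoring (using invented numbers, not the published TEPITOPE matrices), the sketch below gives each of the nine cleft pockets an affinity value for every amino acid and scores a 9-mer by summing those values across its positions. Running the same peptides against matrices for many MHC variants and keeping those that score high everywhere is, in essence, how promiscuous candidates are flagged.

        # Minimal sketch of matrix-based peptide scoring. The pocket-affinity values
        # and the protein sequence are invented for illustration.

        import random

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
        random.seed(0)

        # Hypothetical matrix for one MHC class II variant: 9 pockets x 20 amino acids.
        matrix = [{aa: random.uniform(-2, 2) for aa in AMINO_ACIDS} for _ in range(9)]

        def score_peptide(peptide: str, matrix) -> float:
            """Sum the affinity of each pocket for the residue sitting in it."""
            return sum(pocket[aa] for pocket, aa in zip(matrix, peptide))

        def best_epitopes(protein: str, matrix, top: int = 5) -> list:
            """Score every 9-mer in a protein and return the highest-scoring ones."""
            peptides = [protein[i:i + 9] for i in range(len(protein) - 8)]
            return sorted(peptides, key=lambda p: score_peptide(p, matrix), reverse=True)[:top]

        # Toy protein sequence, invented for illustration.
        print(best_epitopes("MKTIIALSYIFCLVFADYKDDDDK", matrix))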

    The work of Darren Flower, a bioinformaticist at the Edward Jenner Institute for Vaccine Research in Compton, U.K., exemplifies a fourth type of approach to epitope prediction, based on computer modeling of the 3D structures of peptide-MHC pairs. Because MHC molecules all have a very similar 3D architecture, his group, established just this summer, tries to model previously uncharacterized MHC structures and fit epitopes into their clefts. Flower says he hopes “to generate new information this way, especially for the large number of MHC alleles that haven't been properly characterized [by binding assays].”

    Putting prediction to the test

    Although this new, structure-based approach hasn't been tested yet, other prediction tools have already begun to prove their mettle. For example, researchers have used them to identify peptides that may be useful as cancer vaccines. Some of this evidence comes from cancer immunologist Walter Storkus of the University of Pittsburgh School of Medicine, who collaborates with Brusic. Storkus is focusing on several protein antigens that seem to distinguish melanoma cells from normal cells. Using Brusic's neural net, Storkus and his colleagues identified some 40 candidate epitopes from five different tumor antigens.

    Not all of them were hits. Out of 36 peptides tested, only 11 activated T cells in lab culture or triggered a hypersensitivity reaction in skin-prick tests. His team has just begun a clinical trial with 20 metastatic melanoma patients in whom previous therapies had failed, to see whether various combinations of the epitopes that tested positive can boost the patients' tumor-specific immune responses and lead to tumor regression.

    Similarly, Maria Pia Protti of the Scientific Institute H. San Raffaele in Milan, Italy, has applied Hammer's TEPITOPE algorithm to analyze a protein called MAGE-3, which is found on melanoma cells and also on lung and bladder cancer cells. It helped her identify 11 epitopes, all of which turned out to bind to all major MHC alleles. What's more, nine of the peptides also kicked off a strong T cell response when tested in lab cultures. Protti plans to start clinical trials in melanoma patients with some of her predicted peptides early next year.

    It may even be possible to marry epitope prediction to one of the hottest technologies around: DNA microarrays—chips or slides containing DNAs from thousands of gene snippets that are used as probes to determine what genes are expressed in cells (see p. 82). Hammer, working with Protti and cancer immunologist Ugur Sahin of the University of Mainz in Germany, has used such a DNA microarray, harboring almost 20,000 human genes, to compare the gene expression patterns of tumor samples from 20 colon cancer patients with those of healthy colon tissue. The team found 34 genes that were highly active in at least half the patients, indicating that their protein products might be good vaccine targets.

    However, the proteins made by those genes contain 19,000 overlapping peptide fragments—far too many to analyze to see whether they can evoke an immune response. But in less than a day TEPITOPE slashed that unworkable number to 130 candidate epitopes, all predicted to bind to many different MHC alleles. These can now be tested to see whether they evoke an immune reaction.

    Computational immunologists have other diseases in their sights in addition to cancer. One prime target is malaria, for which researchers so far have been unable to come up with a vaccine, despite years of searching. In the July Journal of Immunology, Sette, in collaboration with Stephen Hoffman of the Naval Medical Research Center in Silver Spring, Maryland, reported that Epimmune's proprietary Epitope Identification System, a matrix-based algorithm similar to Hammer's, had turned up 11 epitopes from five proteins of the malaria parasite Plasmodium falciparum. All 11 triggered T cell responses in malaria-exposed individuals from Indonesia, Kenya, and the United States, three populations that differ greatly in their MHC proteins. “We were aiming at as broad a population coverage as possible, because for a vaccine to be acceptable you need to provide answers to global problems,” Sette says.

    In as-yet-unpublished work, Case Western's Kazura, using Brusic's neural networks, has collected a similar assembly of malaria epitopes, but geared toward the predominant MHC allele in malaria-infested Papua New Guinea. To find out which peptides actually confer resistance to P. falciparum, he wants to rid 200 to 300 individuals of the parasite, determine which epitopes their T cells recognize, and record how long it takes until each individual contracts the disease again. “The theory is that if you have a strong T cell response against peptide X and it takes a long time before you're reinfected, then peptide X is a really good candidate for conferring protective immunity,” Kazura says.

    Allergies and autoimmune diseases, in which the immune system attacks the body's own tissues, are coming in for their share of attention as well, although here researchers want to find peptide vaccines that damp down immune responses instead of beefing them up, a strategy that seems to work in animals with experimental autoimmune diseases. The computer models are now helping immunologists identify peptides that might be useful. Initial studies over the last year or so have already picked up immunogenic epitopes triggering pollen allergies, diabetes, and Lyme arthritis. “The clinical follow-up would actually be very simple. One just has to compare T cell responses to the natural allergens with and without epitope [vaccination],” says Paola Panina-Bordignon, an immunologist at Roche Milano Ricerche, who used Hammer's binding matrices to get at the allergy-causing epitopes in ryegrass.

    With more and more fields beginning to use the various algorithms, Brusic predicts that “in a few years' time this will be a standard methodology for epitope [identification].” For Sahin and his fellow users, the tools have already become standard. “They're indispensable,” he says.

    There's still plenty of room for improvement, though. “On average only about 20% of all predicted epitopes are actually recognized by cytotoxic T cells,” says Hansjörg Schild, a colleague of Rammensee's at Tübingen. One possible reason is that the unrecognized peptides are not produced by the cell's protein-cleaving machinery. Schild and Rammensee have recently developed another computer program that might help—an algorithm for predicting how a protein will be cleaved. Combining that with epitope prediction should further reduce the number of epitopes that need testing, says Schild.

    Immunologists caution that, despite the number of promising preclinical studies, the algorithms still have to prove their worth on the real-life battlefield between pathogen and host, by pointing the way to new vaccines. Says vaccine researcher Anne DeGroot of Brown University in Providence, Rhode Island, who has herself developed a prediction algorithm based on MHC binding motifs, “You can do a lot of neat things on your computer, but ultimately you have to put [the predicted peptides] in vaccines and test whether they protect [from infection].” With clinical trials picking up speed on several fronts, the first answers are likely to come in before too long.

  16. Doing Immunology on a Chip

    1. Michael Hagmann*
    1. Michael Hagmann is a writer based in Zürich, Switzerland.

    Using microarrays to measure gene expression patterns may reveal the mysteries of normal immune cells and also of diseases in which immune cells go astray, such as autoimmunity and cancer

    For Louis Staudt, an immunologist at the National Cancer Institute (NCI) in Bethesda, Maryland, the key to the enigmas of the human immune system lies buried in what looks like a miniature painting by Piet Mondrian, the Dutch abstractionist who elevated colored rectangles to high art. Dubbed Lymphochip, Staudt's work of art consists of thousands of tiny squares that light up in different colors, each representing a different human gene. This chip allows him to monitor at one time which genes are turned on or off when, say, killer T cells are activated by pathogens, or B cells turn into life-threatening leukemias.

    Used by a small but growing number of immunologists, the Lymphochip and similar DNA microarrays mark the entry of immunology into the presumably golden age of genomics, a quantum leap that promises a new level of understanding of the genetic programs underlying immune responses, both normal and abnormal. “This approach allows you to look at the whole system at once instead of looking at one gene [at a time]. It's a ‘Let the system tell you what's going on’ approach,” Staudt says.

    Immunologists are already intrigued by what they are seeing. Microarray analysis is helping define the changes in gene activities that cause certain leukemias as well as those that govern the immune system's response to a range of pathogens. Microarrays are also illuminating the developmental paths leading from immature precursor cells to a plethora of immune warriors such as B and T cells. Indeed, says Lars Rogge, an immunologist at Roche Milano Ricerche in Milan, Italy, “this will change the way immunology is done. You can use arrays for any question you might want to ask.” Information gleaned by microarray analysis should also help to both diagnose diseases and find new therapies.

    Take Staudt's work. He and his colleagues have been focusing on identifying the gene changes that distinguish various malignancies of immune cells, or lymphocytes, from one another and from healthy cells. With this in mind, the NCI team, in collaboration with Stanford's Pat Brown, a founder of DNA chip technology, began developing the Lymphochip in 1998. The researchers collected 15,000 genes that are highly active in immune cells at various stages of development as well as in some leukemias. They added another 3500 genes known to be important in lymphocyte or cancer biology and started profiling a variety of B and T cell tumors.

    The initial results have been surprising. Staudt and his colleagues reported in the 3 February issue of Nature that diffuse large B cell lymphoma is not one disease, but rather “two separate diseases with two distinct profiles hiding within one clinical category,” says Staudt. What's more, patients with the disease vary in their responses to therapy, and the gene activity profiles correlate with that finding (Science, 8 September, p. 1670). The hope now is that clinicians can use such gene profiling to select the most appropriate treatment options for their patients and perhaps eventually to design better therapies.

    And what works for immune cell cancers should work for other immune disorders as well, Staudt predicts. “It's fairly obvious that autoimmune diseases or immune deficiencies are pretty heterogeneous, and one can easily surmise that there are different subtypes. For example, only a subset of multiple sclerosis patients responds to interferon β treatment. I think we should look at their gene expression profiles” to see if they determine the patients' responses.

    Infectious disease expert David Relman of Stanford University is pursuing similar goals. “The idea is to recognize specific [hard-to-diagnose] infections based on the gene expression profile in cells from the host,” he says. That could be advantageous, because “the clinical sample doesn't even have to contain the bug you're after,” says Relman. “With current diagnostics, if you're looking at the wrong time or place, you may end up empty-handed.” A glimpse at the host response might also indicate the stage of infection, which could influence therapy.

    Relman says his ultimate goal is to understand more precisely how microorganisms cause disease. In as-yet-unpublished work, he and his colleagues have taken a step in that direction for Bordetella pertussis, the cause of whooping cough. The researchers used microarrays to compare the gene expression patterns of cells infected with either of two different B. pertussis strains, one with and one without the pertussis toxin. Seeing the genes that are turned on by the toxin, Relman hopes, will tell him how the toxin is doing the pathogen's dirty work of causing disease. He plans to do similar analyses to track the bacterium's counterresponse to the onslaught of the host immune system. “You'll eventually be able to listen to a two-way conversation” between pathogen and host at different times after their first encounter, he says.

    Other researchers are using microarrays to help uncover the genetic programs that cause lymphocytes to follow one developmental path instead of another or to become activated when they “see” a pathogen during normal immune responses. Such analyses can also provide clues to what goes wrong either in autoimmunity, when an overachieving immune system attacks the body's organs, or in immune deficiencies. An example comes from Rogge's group in Milan, which has applied the technology to the great coordinators of the immune system—the T helper cells.

    These come in two forms: TH1 cells, which mainly regulate cell-mediated immunity, including the activity of killer cells, and TH2 cells, which coordinate antibody production. In a study reported in the May issue of Nature Genetics, Rogge and his colleagues used microarrays to compare the expression patterns of some 6000 genes in the two types of helper cells and found 215 that are differentially expressed.
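
    The paper reports the biology rather than the computation, but one standard way to flag differentially expressed genes from replicate microarray measurements is to combine a fold-change cutoff with a simple statistical test, as in the hypothetical sketch below. The gene names and expression values are invented, and this is not necessarily the analysis Rogge's group performed.

        # Minimal sketch: flag genes whose expression differs between TH1 and TH2 cells.
        # Values are invented log2 intensities from three replicates per cell type.

        from statistics import mean
        from scipy.stats import ttest_ind

        expression = {
            "apoptosis_gene_like": ([9.1, 8.7, 9.4], [5.2, 5.6, 5.0]),  # hypothetical
            "th2_cytokine_like":   ([4.0, 4.3, 3.8], [8.9, 9.2, 9.0]),  # hypothetical
            "housekeeping_like":   ([7.0, 7.1, 6.9], [7.0, 6.8, 7.2]),  # unchanged
        }

        def differentially_expressed(data, min_fold=2.0, max_p=0.05):
            """Return genes passing both a fold-change and a t-test threshold."""
            hits = []
            for gene, (th1, th2) in data.items():
                fold = 2 ** abs(mean(th1) - mean(th2))  # difference of log2 means
                p = ttest_ind(th1, th2).pvalue
                if fold >= min_fold and p <= max_p:
                    hits.append(gene)
            return hits

        print(differentially_expressed(expression))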

    One key difference is that genes involved in apoptosis, or cellular suicide, are up-regulated in TH1 cells. “TH1 cells are more susceptible to apoptosis” than TH2 cells, Rogge says, and this difference may explain why. Because apoptosis helps keep TH1 cells in check, it's also possible, he notes, that this failsafe mechanism may be defective in autoimmune diseases such as rheumatoid arthritis. The chips can now be used to check that hypothesis. In contrast, allergies and asthma are due to overzealous TH2 cells, and the identification of the genes that regulate the two cell types may lead to new treatments for these disorders, Rogge says.

    Autoimmune diseases are also high on the agenda of Christopher Goodnow, an immunologist at Australian National University (ANU) in Canberra. His team is trying to understand the genetic pathways that instruct immune cells to become tolerant of the body's own tissues and that are somehow perturbed in autoimmunity. In a study in the 10 February issue of Nature, Goodnow and his colleagues compared how an antigen affected gene expression patterns in normal and so-called anergic B cells, which, in the course of development, have somehow become refractory to self-antigens. In normal cells, the team found, the antigen switched off a whole series of genes that inhibit cell proliferation, enabling the cells to multiply in response. But that didn't happen in the anergic cells. “B cell activation may be more a question of down-regulating inhibitory genes than turning on [cell proliferation] genes, which is quite a surprise,” says Goodnow.

    What's more, the researchers found that a common immunosuppressive drug, FK506, turned off the inhibitory genes as well as blocking proliferation genes in B cells. “This may be the reason why immunosuppressive drugs don't do a good job at inducing tolerance,” speculates Goodnow. The gene profile of an anergic cell could now be used as a standard “to screen other drug candidates to find one that more exactly mimics the tolerance-inducing process.” Such a drug might only have to be given until the immune system has learned to tolerate, say, a liver transplant—instead of for life, as is necessary with today's immunosuppressives.

    Goodnow is also using microarrays to help him in another venture. Convinced that making real headway in understanding the immune system requires good animal models for all sorts of immune defects, in 1998 he established a brand-new facility at ANU aimed at churning out thousands of random mouse mutants over the next 5 years (Science, 2 June, p. 1572). “There are only five or six natural mouse mutants with defects in the immune system,” Goodnow explains. Analysis of the first 200 mutant mouse pedigrees has already yielded a rich harvest: 26 mutants with disrupted immune systems, including one where T cell development is blocked and another with hyperactive T helper cells. Goodnow is now using microarray analysis to relate clinical symptoms in the mutants to the underlying changes in gene activity.

    But while chips can reveal gene expression patterns, they are “only the first step,” warns Michael Cooke, an immunologist at the Novartis Institute for Functional Genomics in La Jolla, California, and himself a chip enthusiast. “They just give you a list of genes that are up or down [under certain conditions]. The real challenge is to come up with functional assays” that tell what the genes really do.

    Like other chip users, immunologists also face a looming data overload, as every single chip harbors thousands of genes. Distilling the few that actually matter for the biological process under investigation “requires a good team of computer scientists who will present and visualize those data in a way the human mind can comprehend,” says NCI's Staudt. Still, he is optimistic that the problems can be solved: “Very soon DNA arrays will change from being an emerging technology to becoming just another lab technique” for immunologists.
