News this Week

Science  15 Nov 2002:
Vol. 298, Issue 5597, pp. 1014



    A Tussle Over the Rules for DNA Data Sharing

    1. Leslie Roberts

    The matter-of-fact tone of the letter on page 1333 calling for unconditional sharing of DNA data gives no hint of the intensity of the debate that precipitated it. In the letter, the advisers to the international DNA sequence databases reiterate a long-standing policy: Any and all data deposited in GenBank or its counterparts in Japan and Europe are immediately and freely available to any researcher, for any purpose, no exceptions.

    Behind that letter is a bitter and sometimes acrimonious struggle over scientific credit. Some scientists at the big genome centers, including the one at Washington University in St. Louis, Missouri, claim that sequencers are being “scooped” by researchers, usually computational biologists, who snatch the unpublished data off the Web and publish a “global” analysis—defined roughly as a look at a whole chromosome or genome. They protest that they, as originators, should get the first shot at a publication—and the credit. They think they can preserve that right and ensure open access if the databases will let them post a single caveat: No one should publish a “global” analysis without asking for permission. It's scientific etiquette, they say.


    Robert Waterston says scientific courtesy has been lacking.


    Many bioinformaticists are not convinced. The goal of the Human Genome Project was to create this fabulous resource for the scientific community to use, they say. And that's what the community is doing. Yes, sequencers deserve credit, which has sometimes been missing, but no special privileges.

    More than egos are at stake, people on both sides agree. Until the issue is resolved, data will continue piling up on Web sites at various sequencing centers, where some researchers are storing their work pending publication and sharing it only with trusted colleagues or more broadly with some restrictions on its use. “There is a demimonde of data out on Web sites that are not part of public corpuses,” says Ewan Birney, a computational biologist at the European Bioinformatics Institute in Cambridge, U.K.

    This protective response was triggered by a “small number” of abuses, says Robert Waterston, head of the sequencing center at Washington University. “But people are using those abuses as justification not to put up data.”

    “It's a real societal issue the community must grapple with,” says Gerald Rubin of the Howard Hughes Medical Institute in Chevy Chase, Maryland, who led the Drosophila genome sequencing project.

    From the outset, the publicly funded Human Genome Project prided itself on its policy of immediate data release. According to the 1996 “Bermuda conventions,” as a condition of funding, all members of the public sequencing consortium must post their data on GenBank within 24 hours of generation (Science, 16 February 2001, p. 1192). Openness became a key justification for continuing to fund the public project rather than using information from companies such as Celera Genomics, which has more restricted access.

    But sequencing has gotten faster and cheaper since then. A big group can now do a mammalian genome in 6 to 8 months, notes James Battey, director of the National Institute on Deafness and Other Communication Disorders and co-chair of a committee that coordinates genome issues across the National Institutes of Health (NIH). And that creates a dilemma. “If you put it out on the street within 24 hours, then bioinformaticists and others who had no role in producing the sequence can get a paper out before the sequencing center can.”

    Some resent what they consider to be the free riders. “People have increasingly taken advantage and published without consultation or permission,” says Waterston. “I literally reviewed a paper that purported to analyze the human genome before the data were published.”

    As the mouse genome neared completion about a year ago, such concerns prompted Francis Collins, director of NIH's National Human Genome Research Institute (NHGRI), to suggest a fix. He proposed to David Lipman, director of the National Center for Biotechnology Information, which runs GenBank, that the mouse data be freely available—with one condition: No one should publish a global analysis, within a defined time frame, say, 6 months or a year, without permission of the data producers.

    Battey thinks the request is reasonable. “There is no other area of science where those who generate data don't get first right to publish,” he says, adding that the community owes the scientists who sequenced the human genome “a huge debt of gratitude.”

    But Lipman didn't buy it. In the end, GenBank got the mouse data with no special conditions. “I was surprised that [Collins] asked for restrictions on the mouse data, particularly since they had been battling for years that data should be immediately available without restrictions,” says Lipman.

    What's imperative, says Lipman, is for NIH to decide whether sequence production is “research or a resource.” If it's a resource, he adds, then NIH should be “clear up front what the rules on data submission and sharing should be.” Rubin notes that, for the big genome endeavors, the community might be moving toward contract sequencing.

    Like Lipman, the advisory body to the international databases rejected special protection for sequencers. Inundated with requests from both big and small labs, says committee member Daphne Preuss, a molecular geneticist at the University of Chicago, the advisers decided to “remind the community” of a long-standing policy. “I feel data should be released immediately, and any group who wants to analyze and write them up should do so—with appropriate credit,” she explains.

    But the letter signed by Preuss and leaders of the International Nucleotide Sequence Databases is only fueling the flames. Although Birney agrees with its broad outlines, he would prefer a more flexible approach. Data producers, data users, and database managers “have to all get into a room and figure out the best structure” to ensure access but give credit to sequencers, so that they “don't just become unseen supporting cast.” The U.K. biomedical charity Wellcome Trust hopes to do just that at a meeting it is organizing this January.


    GOP Takes Senate, Budget Uncertain

    1. David Malakoff

    U.S. science advocates face a new political landscape in Congress. When voters handed Republicans control of the Senate last week, ending a brief era of divided government, they put President George W. Bush in a stronger position to advance policies—from a ban on human cloning to a permanent tax break for corporate research spending—with implications for scientists. The shift could also delay pending budget increases for the National Institutes of Health (NIH) and other science agencies.

    For the past 18 months, Democrats have held a single-vote majority in the 100-member Senate, giving them control of all committees and the legislative agenda. But the election will give Republicans at least 51 seats when Congress reconvenes in January. Republicans also strengthened their small majority in the House of Representatives.

    The upheaval could temporarily disrupt the flow of grants to researchers if congressional leaders decide to put off final action on spending bills that fund NIH, the National Science Foundation (NSF), and other science agencies until the new Congress convenes. Those bills, which would provide double-digit increases for NIH and NSF, cover the fiscal year that began 1 October, but none have yet been passed and the agencies are running on a temporary spending measure. If lawmakers, back in town this week for a special postelection session, decide to extend the temporary spending measure until the end of January, NIH and other agencies will be forced to delay awards for a slew of new grants—including bioterrorism research—planned for early next year.

    In the long term, lobbyists don't expect the Republican takeover to reverse growing bipartisan support for government spending on science. Key spending panels, for instance, are expected to be led by Republicans with a pro-research slant, including familiar faces such as Senators Arlen Specter (R-PA) and Kit Bond (R-MO).


    Republican control does worry some biomedical research groups that are opposed to a ban on research involving human cloning, however. The White House and the House of Representatives have backed legislation that would ban not just reproductive cloning but the use of cloning techniques to create embryos for research or therapies, but outgoing Senate Majority Leader Tom Daschle (D-SD) helped a bipartisan group block the bill in the Senate. The expected new majority leader, Trent Lott (R-MS), is believed to be more willing to bring the issue to a vote. “It certainly will be easier to get [a cloning ban] on the floor,” worries Anthony Mazzaschi, a lobbyist for the Association of American Medical Colleges in Washington, D.C.

    Republican leaders might also speed up action on other bills of interest to researchers. One creates a new Department of Homeland Security, which would back terrorism-related R&D. Another is a massive energy bill that authorizes extensive new research programs. The Bush Administration has also discussed making permanent an existing tax break for corporate spending on R&D.

    The new Congress will be missing some veteran science advocates, chief among them Representative Connie Morella (R-MD), whose district includes NIH and the National Institute of Standards and Technology. She lost a close race to lawyer Chris Van Hollen. But the House's “physics caucus” remains intact: Representatives Vernon Ehlers (R-MI) and Rush Holt (D-NJ), the body's two academically trained physicists, won reelection easily.


    Leaks Produce a Torrent of Denials

    1. Jon Cohen

    France? That's how many researchers and policy-makers reacted when they read a page one Washington Post story on 5 November that listed France, along with Russia, North Korea, and Iraq, as countries that U.S. intelligence sources believe hold clandestine stocks of smallpox virus. French officials had an even stronger reaction: A statement issued by France's Ministry of Foreign Affairs categorically denied the assertion “in the strongest terms.”

    The World Health Organization (WHO) declared in 1980 that its vaccination program had eradicated smallpox from the human population, and WHO member states agreed to destroy all but two stocks of the virus: one held at the U.S. Centers for Disease Control and Prevention in Atlanta, Georgia, and the other at VEKTOR in Koltsovo, Russia. Experts have suspected, however, that samples of the virus might be in the hands of ill-intentioned people in Russia, North Korea, and Iraq. French officials were outraged to be included in such company.

    The Post story reported that the assessment came from the Central Intelligence Agency's Weapons Intelligence, Nonproliferation and Arms Control Center, which had “high but not very high” confidence in its information about France. One high-level source confirmed to Science that the story accurately reflected the content of the intelligence reports. (And 2 months earlier, U.S. Health and Human Services Secretary Tommy Thompson, in a little-noted 4 September Post story, said, “We can speculate” that France has the virus.) But several experts questioned the accuracy of the reports themselves. D. A. Henderson, who headed WHO's smallpox eradication program and now advises the U.S. Department of Health and Human Services on bioterrorism, says, for example, “I know of no information that would indicate to me that it was being retained [by France].” He adds: “It is not impossible that someone provided information about ‘smallpox,’ not realizing that ‘smallpox vaccine’ is quite another virus. This has happened before.”


    The Post story itself noted that some Administration officials were alarmed that France had been included in the list. As the story said, France holds a key United Nations Security Council seat and “is the linchpin of U.S. diplomatic efforts to establish a legal basis for war with Iraq.” (France did end up joining other members in approving a U.N. Security Council resolution that forces Iraq to disarm or face “serious consequences.”)

    Apart from the alleged French connection, the leaked information could influence intense discussions now raging in the Administration about how widely the U.S. government should distribute smallpox vaccines, which can produce severe side effects. “It's designed to increase the level of anxiety and to influence the opinion of people who are conservative [about mass vaccinations],” complains Kenneth Shine, former president of the Institute of Medicine and now head of the newly formed RAND Center for Domestic and International Health Security in Arlington, Virginia.

    One wing of the Bush Administration has pushed to make the vaccine available to anyone who wants it, whereas another has urged the more cautious approach of vaccinating only health care workers and other “first-line responders” in the case of a bioattack with the virus. Insiders expected a final decision several weeks ago, but indecision has prevailed as the White House continues to wrestle with the possibility that mass smallpox vaccination could do more harm than an actual bioattack with this vanquished killer.


    Report Boosts Work in Physical Sciences

    1. Andrew Lawler

    Physical scientists are striking back. A panel convened by the National Research Council (NRC) argues that areas such as fundamental physics and materials science deserve a prominent place on the international space station alongside experiments in the life sciences. That message is somewhat at odds with tentative NASA plans to revamp research on the orbiting laboratory.

    The panel's report, requested by NASA 2 years ago and submitted last week, promises to ratchet up the competition among different disciplines scrambling for limited time, space, and funding on the station. NRC moved up the report's release in the hope of influencing NASA's 2004 budget request, now under review by the White House. “This gives me ammunition to try and grow the program,” says Eugene Trinh, who heads the agency's physical sciences division. Other physical scientists say the NRC study could help rescue their discipline from second-class status.

    The 15-member panel rated fundamental physics, low-temperature, and precision clock experiments among the most important areas to pursue and those likely to have the highest impact; the collection of thermophysical data on liquids in microgravity was ranked near the bottom. Panel members—a majority of whom do not receive NASA funding—say they were impressed by a dramatic increase in the quality of both the investigators and the results from recent space experiments. “It is clear this research is contributing to a broader field, although most people don't know the microgravity program exists,” says Peter Voorhees, chair of the panel and a materials researcher at Northwestern University in Evanston, Illinois.

    Physical evidence.

    A Russian cosmonaut adjusts instruments to study external particles that might threaten the space station.


    The report appears to contradict one released in July that strongly emphasized biology and applied materials research such as combustion and de-emphasized fundamental physics. That panel, formed by NASA Administrator Sean O'Keefe and chaired by Columbia University endocrinologist Rae Silver, triggered dissents by several physical scientists on the panel who complained that their views had not been taken seriously (Science, 19 July, p. 316). “The conclusion of the [Silver report] is what biologists think, not what physical scientists think,” says Voorhees.

    NASA officials say that physical scientists have overreacted to the Silver report. “We have very good opportunities coming up,” notes Trinh, pointing out that physical scientists have been given about half the experiment slots on the station (and another 20% or so for commercial materials work). Trinh insists that there are no contradictions between the NRC study, which dealt with detailed research areas, and the Silver report, which covered the entire range of science.


    Whole Lotta Shakin' in Alaska, as Predicted

    1. Richard A. Kerr

    Predicting anything about earthquakes is fraught with uncertainty today, but 30 years ago it was a nightmare. So seismologists at the U.S. Geological Survey (USGS) found themselves out on a limb in the early 1970s when they insisted that a quake could shake the proposed trans-Alaska oil pipeline far more violently than engineers were assuming. Eventually, the seismologists got their way, and the pipeline was engineered to be more quake-resistant. Last week's temblor—the most powerful ever known to strike the fault—cut right beneath the pipeline, justifying the seismologists' concerns. And the engineering paid off: Not a drop was spilled.

    “I'm pleased the whole process led to a successful project,” says seismologist Robert Page of USGS in Menlo Park, California, who was involved in setting standards for the pipeline. “It's an example of how science can help reduce natural-hazard risks to society.”

    In the early '70s, it was obvious that there would be risks involved in pumping a couple of million barrels of oil a day down a 1280-kilometer pipe across some of the wildest country in the world. Drawing primarily on skimpy geological evidence and one large earthquake that struck the region in 1912, USGS seismologists inferred that the Denali fault—which sliced across the proposed pipeline route in central Alaska—could unleash a magnitude 8.0 quake. Not a bad estimate: Last week's temblor measured 7.9.

    Not a drop.

    The quake on the Denali fault, which broke through the highway at lower left, could not rupture the trans-Alaska oil pipeline.


    But the contentious issue between USGS seismologists and some engineers in the debate over the pipeline's potential environmental impact was how strongly such a quake would shake the ground near the fault. Seismologists had just gotten their best measurements yet of ground shaking anywhere near a large, rupturing fault during the 1971 San Fernando earthquake in California. The results were sobering. “All I knew was that the ground was shaking harder than the earthquake engineers had been expecting,” says Page. Whether a Denali quake would work the same way and severely test conventional designs remained in contention; at stake was an $8 billion project—the world's largest privately funded project at the time—as well as the design standards for nuclear power plants on the seismically hazardous West Coast. But in the end, USGS seismologists were allowed to set a demanding seismic standard for the pipeline, and engineers designed kinks into the pipe so it could compress, extend, and slide sideways on Teflon-coated pads without failing.

    And survive it did. The pipe crossing the fault slid to the edge of its crossbeams—as intended—and slipped off at only one spot. Some supports failed, but the pipe held. Oil was flowing again after 3 days of inspection and shoring up. “It worked,” says earthquake engineering geologist Lloyd Cluff of Pacific Gas & Electric in San Francisco, who helped develop the final pipeline design. Page and USGS were right to stand by their science, he says; a basic scientific understanding of a fault combined with appropriately conservative engineering can accommodate even the uncertainties of 3 decades ago.


    U.N. Split Over Full or Partial Cloning Ban

    1. Gretchen Vogel

    Efforts to craft an international ban on human cloning stalled last week in the United Nations when 37 countries, including the United States and Spain, refused to support a proposal they said was too narrow. That proposal, sponsored by France, Germany, and 20 other countries, would have banned just reproductive cloning: efforts to implant cloned embryos into surrogate mothers and allow them to develop to term. The United States and its allies said they would support only a measure that banned all forms of human cloning, including so-called research cloning.

    Scientists are in almost unanimous agreement that human reproductive cloning is not only morally questionable but also dangerous for both surrogate mother and potential child. But some argue that research cloning, in which cloned human embryos might be used to produce embryonic stem (ES) cells, could be a boon to medicine. The resulting ES cells could be used to study genetic diseases or—eventually—treat sick patients. Opponents of embryo research argue that such experiments create human life only to destroy it.

    Fine with them.

    Human reproductive cloning advocates (left to right) Panos Zavos, Avi Ben Abraham, and Brigitte Boisselier won't be slowed by any U.N. resolutions this year.


    France and Germany announced 2 years ago that they wanted to craft a ban in the United Nations to block the efforts of some fringe groups to create cloned children. But that proposal ran into opposition from the United States, which offered its own alternative, a convention banning all cloning of human embryos. French and German diplomats argued that opinions varied so much that negotiating a complete ban would take too long. They pushed for an immediate ban on reproductive cloning while leaving open the possibility of eventually hammering out a broader ban.

    The U.N. committee in charge of international law was unable to reach a consensus on whether to support a complete or partial ban and decided on 8 October to postpone any further debate on the subject until next fall. A meeting to discuss the issue further is planned in South Korea next spring.

    In a statement, France and Germany said the failure to move forward on a ban on reproductive cloning “leaves the field open to those working toward giving birth to a cloned human being.” A spokesperson for the German mission to the U.N. says his country supports the idea of a ban on all cloning experiments but believes a ban on reproductive cloning is a more realistic goal. “It's more a difference in how to proceed,” he says. “In our domestic legislation we have prohibited all forms of cloning. On the other hand, we didn't see a chance to pass that here.”

    Research cloning is expressly legal in several countries, including the United Kingdom, Singapore, and the Netherlands. Indeed, Scottish researcher Ian Wilmut, one of the creators of Dolly the cloned sheep, has said he plans to proceed with human cloning experiments with the goal of producing ES cell lines (Science, 4 October, p. 37).

    The United States has no national legislation governing cloning, and several privately funded U.S. groups are proceeding with research cloning experiments. A similar disagreement in the U.S. Senate earlier this year foiled efforts to pass either a ban on reproductive cloning or a ban on all human cloning research (Science, 21 June, p. 2117). However, that situation might change now that Republicans have regained control of the Senate (see p. 1313).


    Misspelled Gene Tames Malaria

    1. Deborah Hill*
    *Deborah Hill is a science writer in Idaho Falls, Idaho.

    Malaria kills about a million people each year. But even in countries where the disease takes a heavy toll, the risk is not the same for everyone: Some people have a remarkable ability to suppress the malaria parasite's debilitating effects. Now, researchers have tied that resistance to a subtle variation in a single gene that can cut by nearly 90% the risk that an infection will become life-threatening.

    The gene mutation causes people to ratchet up production of nitric oxide (NO), a gas that plays a role in a diverse range of physiological processes. Previous studies with rodents had found that NO can protect against malaria and a variety of other diseases, says microbiologist Ferric Fang of the University of Washington, Seattle. But the new study provides some of the best evidence to date that NO plays an important role in disease protection in humans, says Fang, who calls the study a “significant contribution.”

    The study was led by hematologist Brice Weinberg of the Veterans Affairs and Duke University Medical Centers in Durham, North Carolina. Weinberg's team sampled DNA from 185 Tanzanian children—47 of whom had been infected by the malaria parasite but remained healthy, and 138 who were sick with the disease. The researchers looked for mutations in and around the gene that encodes inducible nitric oxide synthase (NOS2), the enzyme that makes NO. They found that a single mutation in which a cytosine replaces a thymine in the NOS2 gene's promoter region—its DNA on-switch—turned up more often in the healthy children. Children with the mutation had higher than normal NO levels in their blood and urine, suggesting that the gas could be protecting them.


    A chance twist in the genetic code can protect against malaria.


    The team then analyzed DNA samples and clinical data from a 5-year study of 1106 children in Kenya run by the Centers for Disease Control and Prevention. They again found that the mutation in the NOS2 promoter had a protective effect. “Overall, the mutation lowered the risk of severe malaria by 88% in Tanzania and 75% in Kenya,” says molecular geneticist Maurine Hobbs of the University of Utah in Salt Lake City, a co-author of the study, which appears in the 9 November issue of The Lancet.

    This isn't the first mutation thought to protect against malaria, but “this study is one of the most compelling because they have demonstrated a connection between genetics, NO production, and clinical status,” says clinical immunologist Brian Greenwood of the London School of Hygiene & Tropical Medicine in the United Kingdom. “The story told by this study is very appealing and logical.”

    Exactly how NO protects is still unclear, however. Researchers have hypothesized that high levels of NO might kill Plasmodium falciparum parasites in the liver, where the parasite first gets a foothold and begins to multiply. But somewhat surprisingly, the new mutation mitigates the severity of the disease without reducing the number of parasites in the bloodstream: Children with the mutation had parasite levels comparable to those of children without it.

    NO might play a variety of other protective roles. Animal experiments have shown that NO reduces expression of adhesion molecules on cell surfaces, which prevents infected red blood cells from sticking to blood vessel walls and causing the restricted blood flow associated with deadly cerebral malaria. NO also limits production of cytokines—proteins that stimulate immune responses and might contribute to tissue damage in malaria.

    Still, although researchers welcome the new finding as a basic insight into human immune defenses, they say it probably won't lead to immediate improvements in malaria treatment. “NO won't be the panacea for malaria; it's just one piece of the puzzle,” says study co-author Nicholas Anstey, an infectious disease specialist at the Menzies School of Health Research in Darwin, Australia.


    Antibodies Kill by Producing Ozone

    1. Jean Marx

    Antibodies have long been known as the immune system's reconnaissance forces: scouts that seek out foreign antigens and summon up the big guns to wipe them out. But evidence now indicates that antibodies may also be killers in their own right.

    In work published online today by Science, Paul Wentworth, Richard Lerner, and their colleagues at the Scripps Research Institute in La Jolla, California, report evidence that antibodies, when provided with appropriate starting materials, catalyze the production of highly active forms of oxygen, likely including ozone. This can not only kill bacterial pathogens directly but might also promote inflammatory and other immune responses. The work puts antibodies in a whole new light, says immunologist Carl Nathan of the Weill Cornell Medical Center in New York City. Because they weren't supposed to have such direct effects, “it will be hard to think of antibodies in the same way [as before].”

    Hints of antibodies' lethal nature began surfacing about 2 years ago. Thanks partly to early work from Lerner's lab, antibodies are known to have catalytic activities. And Wentworth, Lerner, and their colleagues showed that when they are supplied with a reactive form of oxygen known as singlet oxygen, antibodies can generate hydrogen peroxide from water (Science, 7 September 2001, pp. 1749 and 1806). Hydrogen peroxide is a well-known bacteria killer; it's often used as an antiseptic. But because the Scripps team generated singlet oxygen in a highly nonphysiological way and didn't show directly that bacteria die from the hydrogen peroxide produced, “people said, ‘What does this have to do with biology?’” recalls Lerner.

    The new work addresses that issue and suggests a surprising new twist to antibodies' modus operandi. First, the Scripps researchers demonstrated that antibodies can kill bacteria without help from any other immune system forces. In one set of test tube experiments, they showed that antibodies, in conjunction with a singlet oxygen-generating system that could not kill bacteria on its own, wiped out more than 95% of Escherichia coli bacteria. But the experiments turned up a puzzle: The antibodies weren't generating enough hydrogen peroxide to account for all the cell killing. That suggested that some other, more powerful bactericidal agent was also being formed.

    Ozone indicator.

    In the Arthus reaction, skin becomes inflamed at the site of antibody injection (bottom). Biopsies of such inflamed sites (tubes 3 and 6 from left), but not of normal skin (tubes 1, 2, 4, and 5), contain ozone as indicated by their reaction with the dye indigo carmine.


    Evidence from a series of experiments pointed to ozone as the most likely suspect. For example, the researchers found that antibodies provided with singlet oxygen produce an oxidizing agent that splits the dye indigo carmine, just as ozone does. Ozone hadn't previously been implicated in immune responses. “We're now in brand-new territory,” Nathan says.

    The work still left one big question hanging: Is there a plausible physiological source of singlet oxygen? (The Scripps team had again used a nonphysiological source.) Wentworth, Lerner, and their colleagues believe they have an answer. They report evidence that immune cells called neutrophils, which help destroy invading bacteria, can generate singlet oxygen.

    In addition to showing that antibodies can produce hydrogen peroxide and ozone, the team has linked this activity to an inflammatory response called the Arthus reaction in living rats. In this system, inflammation is induced by injecting an antigen—the researchers used bovine serum albumin—into the animals' bloodstream and simultaneously injecting antibodies to it into their skin; the skin becomes inflamed at the injection sites. Analysis of the inflamed skin tissue showed that it, too, contained an oxidizing agent that behaves just like ozone.

    Chemist Chris Foote of the University of California, Los Angeles, describes the work as “amazing.” It shows, he says, that “there's a powerful oxidant there that no one suspected.” He cautions, however, that the current experiments don't totally prove that the oxidant is ozone. Lerner concedes the point but says that other work now under way, including studies of atherosclerosis, will provide definitive proof.

    Meanwhile, Nathan suggests that antibody-mediated ozone production could contribute to a variety of inflammatory conditions, including rheumatoid arthritis and inflammatory bowel disease. He notes, for example, that the inflamed joints of arthritis patients contain something called rheumatoid factor, which is actually an antibody directed against other antibodies, as well as neutrophils. If this leads to antibody-catalyzed production of hydrogen peroxide and ozone, the result could be a double whammy, causing damage to the joint directly and also indirectly by enhancing the activity of neutrophil products. More work will be needed to test these ideas, but antibodies now appear to have more tricks up their sleeves than anyone expected.


    NASA's New Road to Faster, Cheaper, Better Exploration

    1. Richard A. Kerr

    In the wake of its disastrous Mars missions, the U.S. space agency is expanding its commitment to disciplined planetary missions, each led by a lone scientist

    Former NASA Administrator Daniel Goldin had a vision: nimble space missions that would conserve time and money. They would be smaller than the bus-sized Galileo that traveled to Jupiter. They would reach their goals faster than the 10 years needed to launch the Magellan spacecraft to Venus, for far less money than the $3 billion extravaganza of the Cassini-Huygens mission now approaching Saturn. And they would be just plain better—certainly better than failures like the 1993 explosion of Mars Observer as it prepared to go into orbit.

    But when Goldin made his “faster, cheaper, better” vision a reality through his top-down, take-no-prisoners style of management, the results were sometimes disastrous. The dual loss of Mars Climate Orbiter and Mars Polar Lander in 1999 prompted a critical re-evaluation that found the faster, cheaper, better concept to be sound in theory but weak in implementation (Science, 7 April 2000, p. 32). Far from turning its back on faster, cheaper, better, however, NASA is about to expand the concept to encompass a large chunk of solar system exploration and is turning to the individual enterprising scientist to make it work. For most future planetary expeditions, a team led by a principal investigator (PI) will sell its mission to NASA and deliver on its promises of faster, cheaper, and better or perish in the effort.

    NASA started small when it began buying space missions from PI-led teams in the early 1990s. First came the mostly Earth-orbiting satellites in the Explorer program. These are cost-capped at anywhere from $15 million to $150 million per mission. Building on Explorer successes, NASA expanded the approach to the Discovery program of $299-million-and-less planetary missions (Science, 27 May 1994, p. 1244). Now, NASA is planning new efforts that will include Discovery-size missions to Mars, called Scout missions, and double-Discovery-size missions elsewhere in the solar system under the New Frontiers program. Both new types are in the new budget expected from Congress. That's a big boost for PI-led missions. “I think [New Frontiers] is one of the most exciting things in solar system exploration,” says Colleen Hartman, director of NASA's Solar System Exploration Division. New Frontiers, along with Discovery, “is the future of solar system exploration.”

    PI-led missions will loom large in planetary science in part because NASA likes the innovative science that competing teams led by a single investigator come up with and the way such missions consistently come in on time and on or below budget. Planetary scientists like such missions, naturally enough, because they can be in charge and keep the focus on the science despite cost and engineering pressures. And in the end, they're popular because NASA-imposed discipline and PI-induced focus make them work. “This is a great time to be in planetary exploration,” says planetary geophysicist Maria Zuber of the Massachusetts Institute of Technology. “A lot of things are going to happen.” They will, that is, if PI-class missions survive their current growing pains and the vagaries of space exploration.

    Good deal.

    The faster and cheaper NEAR spacecraft recovered from near disaster to orbit the asteroid Eros and then touch down on the surface.


    Mission best buys

    The increasingly popular Discovery-style mission is a package deal. As Wesley Huntress, who was then NASA associate administrator for space science and is now at the Carnegie Institution of Washington's Geophysical Laboratory, put it in 1994 as he launched the Discovery program: “We're asking for PIs to come in with a whole mission. If we like it—if we like your science, if we like the way you're going to manage it, if we like the cost—we'll buy it, pay you, and you do it.” To date, NASA has bought nine Discovery missions, from Mars Pathfinder's renowned Sojourner rover to a yet-to-be-launched mission to search for Earth-sized planets around other stars.

    The rules for the PI-led deal have evolved a bit since Pathfinder's start in 1993, but the essentials are an open competition, a fixed price, and a fixed time to launch. Before Discovery, under the so-called strategic planning approach, a committee of planetary scientists decided, for example, that a spacecraft eventually named Cassini-Huygens would go to Saturn carrying 18 instruments—basically at least one for every planetary science specialty, from the de rigueur cameras to cloud-piercing radar. Congress had to authorize each specific mission and its budget. Most scientists didn't enter the process until they competed to provide and operate the instruments. Overall management of the mission resided with a NASA center, usually the Jet Propulsion Laboratory (JPL) in Pasadena, California.

    In the Discovery mode, Congress writes a check and NASA chooses what to spend it on. When NASA announces that the next chunk of Discovery money will soon be available, individual scientists decide where they want to go, what science they want to do, and how they are going to get there. Gathering a team of scientists, engineers, and managers from universities, industry, and NASA centers, the PI oversees development of a mission proposal that meets NASA's Discovery specifications: a cost cap of $299 million for the mission—from design through launch to data analysis—and a 36-month time limit from detailed design to launch. Then the competition begins. Every time NASA announces a Discovery opportunity, which happens every couple of years, about two dozen proposals come in. NASA whittles them down to one or two winners through parallel scientific peer review and a technical, management, and cost evaluation process. The technical review judges whether the proposers can really get that much science for that little money while running as little risk as claimed.

    Not to be.

    The beautifully functioning CONTOUR spacecraft was blown up leaving Earth orbit.


    So far, the PI approach has worked well. The Explorer program of Earth-orbiting satellites that went this route in the early 1990s has a dozen spacecraft currently studying everything from the chemical composition of interstellar gas clouds to the workings of Earth's magnetosphere. The Discovery program has launched five missions, all on schedule and within budget. Mars Pathfinder landed a rover at one-tenth the cost of a Viking lander in the 1970s. The Near Earth Asteroid Rendezvous (NEAR) spacecraft orbited Eros for a year and as a bonus even touched down on the surface. Lunar Prospector chemically, mineralogically, and geophysically mapped the moon, reporting ice near the pole. At $63 million, it was the cheapest planetary mission ever. Stardust is collecting its namesake samples and preparing to sweep up comet dust in 2004; all samples will be returned in 2006. And Genesis is loitering at Earth's L1 point toward the sun, collecting solar wind particles for return in 2004. Only the Comet Nucleus Tour (CONTOUR) has run into a real showstopper. The spacecraft itself operated perfectly after launch, but something—presumably its rocket motor—blew it apart as it rocketed out of Earth orbit last August.

    To Mars and beyond

    Well before the CONTOUR failure, NASA began expanding PI-class missions beyond Explorer and Discovery. In July next year, it will pick the winning Scout mission from about 25 proposals submitted last August for a 2007 launch to Mars. The proposals range from traditional flybys, orbiters, and rovers to balloons, free-fall penetrators, gliders, and a mission to skim through the upper atmosphere to return a sample of martian dust to Earth. “The [planetary science] community really came out of their socks with innovations,” says the Mars Exploration Program's lead scientist, James Garvin of NASA headquarters. “We're thrilled.” The cost cap is $325 million, on a par with the ongoing Mars Odyssey and Mars Global Surveyor missions now orbiting the planet but less expensive than a single Mars Exploration Rover mission billed at $350 million to $400 million, two of which will be launched next year as part of NASA's strategically planned Mars program (Science, 10 May, p. 1006).

    In addition, PI-led planetary missions will be scaled up for the non-Mars part of the solar system. Like Discovery, New Frontiers will be a new line in the federal budget, so that NASA will not have to go back to Congress for approval of each mission. That smoothes the funding process, although Congress still takes a hand in funding specific larger missions (Science, 18 October, p. 511). The New Frontiers cost cap will be $650 million, with no more than 47 months to launch; the first in the series must leave Earth no later than 2008. With that sort of funding, missions could launch every 3 years or so with more ambitious goals. New Frontiers missions might include a flyby of Pluto and comet nuclei in deep-space storage beyond it or the return of a bit of the moon's mantle from an existing crust-piercing giant crater. Those are among the five “flight mission priorities” recommended last July by the National Research Council's decadal survey of solar system exploration.

    Truly cheap.

    The Lunar Prospector reported ice at the moon's pole in the course of the least expensive planetary mission ever.


    How to please everyone

    The imminent expansion of Discovery-style planetary science suits scientists just fine. “Discovery is great,” says Joseph Veverka of Cornell University, who as PI of CONTOUR just lost his whole Discovery mission. “It's a very healthy process. It's a great idea to extend it to other NASA endeavors. It should be the entire Mars program.” The planetary community's enthusiasm is fueled in part by the frequency of Discovery missions. A new one starts every 2 years or so, whereas flagship missions like Galileo are a once-in-a-decade event. Cheaper, smaller, and therefore more frequent is better, everyone agrees.

    In fact, the frequency of the missions helps keep the size and costs down. In lean times, when missions are infrequent, a committee of scientists might pile instruments on a spacecraft for fear it's the last bus out of town. Even in better times, though, a committee can end up designing a complex mission in order to please as much of the community as possible, says Michael A'Hearn of the University of Maryland, College Park, PI of the Discovery Deep Impact mission to glimpse the interior of a comet.

    In Discovery, the cost cap tends to counter this natural expansiveness. “NASA says, ‘Here's your budget. Do it within this cost or we cancel your mission,’” says William Borucki of NASA's Ames Research Center in Mountain View, California, PI of the new-start Kepler mission to search for extrasolar planets. “And they do cancel missions.” Last January NASA pulled the plug on the Explorer mission FAME—intended to measure the precise position and brightness of 50 million stars—when it was projected to run over its $160 million budget by at least $60 million. And A'Hearn is sweating out a NASA review that is considering whether Deep Impact is pushing its budget limits too hard.

    Bang for the buck.

    The cash-strapped Deep Impact mission promises to blast a comet nucleus with a 32,000-kilometer-per-hour projectile.


    Keeping focused is key to the success of Discovery missions, most observers say, and the lone PI is crucial to maintaining that focus. In the pre-Discovery days, “NASA centers [such as JPL] were everything,” Huntress said in 1998. “Scientists were along for the ride. It was thought that planetary science is too complicated for scientists.” That has changed. “In terms of focus and [making] a host of tradeoff decisions, the role of the PI is absolutely central” in Discovery-style missions, says geophysicist Sean Solomon of the Carnegie Institution of Washington's Department of Terrestrial Magnetism, PI of the MESSENGER mission to Mercury. “In contrast to other solar system missions, you don't have the usual tension between engineers and the science team, because every important decision falls on the PI-scientist. Science informs each decision. It's a very good model for a mission.”

    A watchful eye

    Not that the PI is doing all the rocket science in a third-of-a-billion-dollar mission. “The PI has a great deal of control but also has a management team that knows how to do a mission,” says Borucki. “I'm the person who takes the blame if it fails,” says Stardust PI Donald Brownlee of the University of Washington, Seattle. But it's his project manager, Kenneth Atkins of JPL, and engineers at Lockheed Martin Astronautics in Denver, Colorado, who “really do the nitty-gritty stuff. Difficult things do come up, but we can usually reach a consensus. When we can't, I have the ultimate responsibility.”

    Of course, NASA isn't simply handing money over to an appealing mission. First, it submits the proposal to a rigorous technical review. “NASA is letting a scientist run a $300 million program, but it's saying, ‘Convince us you have a good team, there are no technical showstoppers, and it's low risk,’” says Solomon. “Those elements are not always present in comparable NASA-sponsored missions.” The review of proposals, which is run out of NASA's Langley Research Center in Hampton, Virginia, “is extremely intensive and comprehensive,” says Borucki. “It means we will have spent a great deal of time on design” before actually building the spacecraft. In the case of Borucki's Kepler mission, a decade of repeated Discovery submissions, reviews, rejections, and redesigns finally brought success.

    Big ideas.

    Under the New Frontiers program, PIs could take on larger missions, such as a trip to Pluto and its huge moon Charon.


    The Discovery process has also succeeded in the eyes of NASA, scientists, and engineers because it brings out the best and brightest ideas. “Competition is good,” says Noel Hinners, who recently retired from Lockheed Martin Astronautics. “It does stimulate innovative ways to do new science.” An alternative to strategic planning was essential to Stardust's mission to sweep up the first samples ever of interstellar material and comet dust. “We proposed this mission [to other programs] in a variety of forms and never succeeded,” says Brownlee. It seemingly lacked the broad appeal of less-focused missions. “Discovery opened a real opportunity. It probably never would have flown otherwise.”

    Not without flaws

    Of course, Discovery missions have not been perfect. The CONTOUR loss made that point all too clearly. A formal mission loss report is due out shortly, but most scientists are assuming it was an unlucky break, a failure in a tried-and-true rocket engine that could have happened no matter how risk-averse managers were. There have been other, less catastrophic failures, however. The NEAR mission almost ended when a miscalculation of the jolt from the spacecraft's rocket shut down its firing and caused NEAR to fly by Eros instead of going into orbit. Quick-thinking controllers managed to line up a second approach a year later. But NEAR's near-infrared spectrometer broke shortly after entering orbit, and its gamma ray spectrometer proved to be less sensitive than expected and never returned useful data from orbit. And the essential batteries on Genesis are operating above design temperatures, which could shorten the mission.

    The Discovery cost cap can also limit the science return. NEAR carried lower-cost solar panels and a high-gain antenna that could not be steered; the whole spacecraft had to be turned toward a target, which degraded both infrared and x-ray data. The gamma ray instrument's lack of a boom to hold it away from the spacecraft's contaminating gamma rays added to the instrument's problems. And the laser altimeter on MESSENGER will not be able to trace the topography of Mercury's southern hemisphere because the small size of the spacecraft's onboard rocket forces it into an elliptical orbit.

    Planetary scientists have been largely tolerant of the Discovery program's limitations but have pushed the limits fairly hard. The latest Discovery missions—MESSENGER, Kepler, and Dawn—are at the ultimate cost cap and represent the limit in spacecraft and mission complexity within NASA's intent for Discovery. The supereconomy missions like Lunar Prospector are things of the past. And inflation is starting to pinch Discovery budgets, especially in the realm of launch costs.

    But Scout and especially New Frontiers have come along just in time. Scientists' enthusiasm for competing to lead missions seems undiminished. “It's such a great opportunity to propose for what we consider key scientific questions,” says Larry W. Esposito of the University of Colorado, Boulder. He has frequently proposed and has never won, but “I've become addicted to proposing for missions.”


    HHS Intervenes in Choice of Study Section Members

    1. Dan Ferber

    A grants-review panel on occupational safety has become the latest body to feel the scrutiny of the Bush Administration

    The Bush Administration's efforts to filter the scientific advice it receives have spread into peer review itself, according to agency officials and the chair of a panel that examines grant proposals for a controversial area of public health.

    In a letter in this week's issue of Science (p. 1335), epidemiologist Dana Loomis of the University of North Carolina, Chapel Hill, charges that the Department of Health and Human Services (HHS) is playing politics with the membership of a study section that reviews research grants on physical injuries in the workplace for the National Institute for Occupational Safety and Health (NIOSH). In the past few months, says Loomis, the department has rejected three people who were proposed by science administrators at the National Institutes of Health (NIH), which manages the study section—“at least one” for her support of an ergonomics rule that was overturned last year by the Bush Administration. Knowledgeable agency staffers confirm the account. HHS spokesperson William Pierce declined to discuss specifics, saying that HHS Secretary Tommy Thompson is simply exercising his prerogative to be involved in the choice of advisers.

    The department's role in appointing study section members follows other recent changes to federal science advisory committees, which are set up to provide federal agencies with independent scientific advice for policy decisions (Science, 25 October, pp. 703 and 732). The NIH peer-review process “really works, and I think meddling with it by inserting any sort of political test really endangers the entire endeavor,” says molecular biologist Keith Yamamoto of the University of California, San Francisco, who chaired an advisory panel that helped NIH fine-tune its respected peer-review system. “I can't imagine that this has ever been done before,” says Linda Rosenstock, dean of the School of Public Health at the University of California, Los Angeles, who directed NIOSH during most of the Clinton Administration.

    As HHS secretary, Thompson oversees both NIH and NIOSH, which is part of the Centers for Disease Control and Prevention (CDC). According to knowledgeable agency sources, Thompson's staff rejected three of six names put forward by NIH science administrators to replace members rotating off the 18-member panel. Thompson's office instead has settled on two members recommended by newly appointed NIOSH Director John Howard. The Safety and Occupational Health study section, which last year handled 140 proposals, helps NIOSH decide how to divvy up its budget for research on workplace-related disease and injury, including musculoskeletal disorders such as back injuries and carpal tunnel syndrome.

    Heavy lifting.

    Pamela Kidd says HHS officials wanted to know her views on workplace safety before appointing her to a review panel.


    NIOSH science is no stranger to politics. In 2000, Rosenstock persuaded NIH to take over management of the study section to shield it from an ongoing, heated political debate over the cause of injuries in the workplace. The debate was triggered by a proposed rule from the Clinton Administration requiring businesses to do more to prevent and report work-related injuries. Manufacturers and other business interests attacked the rule and the NIOSH-funded science that supported it, and congressional Republicans tried unsuccessfully in 1996 to kill off the agency. NIOSH survived and issued a final rule on the eve of Clinton's departure in January 2001, after an internal panel and two other blue-ribbon panels had found that the scientific evidence bolstered the link between workplace conditions and injury. Even so, a Republican-led Congress killed the rule 2 months later.

    As this debate ran its course, the two agencies began fighting over how to manage the committee. NIH wanted to retain final say over the appointments, but NIOSH wanted that authority for itself, in keeping with its responsibility to administer the grants. The dispute became so heated that NIOSH threatened to shut down the study section; in September, NIH told NIOSH to take over its operations. Howard says the two agencies are “working through” their differences.

    The rejected panelists—ergonomics experts Laura Punnett of the University of Massachusetts, Lowell, and Catherine Heaney of Ohio State University, Columbus, and Manuel Gomez, director of scientific affairs at the American Industrial Hygiene Association—say they are surprised at the furor surrounding their nominations. Loomis says he had put forward Punnett and Heaney because “we needed the ergonomic expertise badly.” Punnett, in particular, has testified publicly in favor of the ergonomics rule and in lawsuits by carpal-tunnel-syndrome patients against keyboard manufacturers. “I was shocked,” she says about being rejected. “I think it conveys very powerfully that part of the goal is to intimidate researchers and limit what research questions are asked.” Gomez says he is “baffled” as to why his nomination was rejected. Heaney could not be reached for comment.

    One nominee who was recently screened for the panel says that she was asked politically charged questions by a member of Thompson's staff. Pamela Kidd, an expert in injury prevention and associate dean of the College of Nursing at Arizona State University in Tempe, says that the staffer called in September and asked if she would be an advocate on certain issues involving ergonomics if appointed to the panel. “I was intrigued and offended at the same time,” Kidd recalls. “I purposely answered in a way that would not put me on either side.”

    The NIOSH study section is an advisory committee whose members are appointed by the HHS secretary, unlike NIH study sections, which are appointed by the NIH director. Rosenstock says she “never had any influence from the secretary” when choosing members for the study section. But Thompson, Pierce says, is adhering to custom while taking a more hands-on approach to managing HHS. “We're doing the job we're being asked to do,” says Pierce.


    Seeking the Signs of Selection

    1. Steve Olson*
    1. Steve Olson is a science writer in the Washington, D.C., area and is the author of Mapping Human History: Discovering the Past Through Our Genes.

    New genetic techniques are spurring the search for evidence of natural selection at work in human prehistory, and they may offer insight into ancient and modern pathogens

    A woman miscarries; a child dies of malaria; a young man is ravaged by AIDS. Each is a human tragedy that leaves its mark on lives and families. But over the sweep of human history, such tragedies also can leave their mark on the human genome. Genetic mutations that help protect against viral infections, for example, should give those who inherit them an advantage in the reproduction and survival sweepstakes. The theory of natural selection predicts that over many generations, these mutations will spread through populations and so appear in genomes today.

    Scientists have long sought the genetic imprint of natural selection to understand the forces that have shaped human traits. But it's been a bit like trying to solve a crossword puzzle in which the clues have been scrambled. Other demographic events such as migrations, population contractions and expansions, and mating traditions have also left their mark on our genomes, making the effects of selection and history hard to untangle.

    So until recently, researchers had uncovered few solid cases of human genes under selective pressure. But they now have two powerful tools to guide the search: efficient sequencing techniques and the almost complete human genome sequence. These have already helped add several new genes to the list of those affected by selection (see table). And because some of the most potent selective forces have been pathogens, researchers are hoping the search will help them zero in on parts of the genome involved in disease. “We should be able to find disease genes without actually having patients, because we are descended from people who were resistant to diseases, and that resistance is engraved in our genes,” explains geneticist David Reich of the Whitehead Institute in Cambridge, Massachusetts.

    Earlier this month, in the fourth of a series of sesquiannual meetings at Cold Spring Harbor Laboratory,* about 150 anthropologists, geneticists, and pharmaceutical company researchers spent 5 days examining progress in the hunt so far and the implications for biomedicine. “What this meeting represents is a fusion of anthropology, population genetics, and clinical medicine to create a new field, evolutionary medicine,” said meeting co-organizer Douglas Wallace, director of the new Center for Molecular and Mitochondrial Medicine at the University of California, Irvine.

    Direction and balance

    The effort to understand human traits and diseases in terms of natural selection began with Darwin, who sought in The Descent of Man “to see how far the general conclusions arrived at in my former works were applicable to man.” But as the quest moved to the genetic level in the 20th century, the task proved more difficult than expected. To detect selection, researchers first must determine how a genetic sequence would change under neutral conditions in which selection was not a factor. That's easy to do for an infinitely large, randomly mating population—but human populations have never met those conditions.

    Nevertheless, geneticists have succeeded in finding a few clear examples of directional selection, in which a particular version or allele of a gene has been so beneficial that it has spread quickly and widely, thus reducing levels of genetic variation. The allele that allows adults to digest lactose is a good example: The pastoralists who carried it could drink milk as adults, boosting their survival and reproduction, so the allele became common and in relatively short order displaced other versions of the gene in those populations. Geneticists also have found some cases of what's called balancing selection, in which a gene shows more variation than expected because people with two different versions of it—heterozygotes—have an advantage over those who carry two copies of the same allele. The classic example is a hemoglobin allele that causes sickle cell anemia if inherited from both parents but protects against malaria if paired with a normal version of the gene.

    But beyond these well-known examples, the pickings have been slim. “For directional selection, people have been able to find a handful of genes,” says geneticist Michael Bamshad of the University of Utah in Salt Lake City. “For balancing selection, they have found even fewer.”

    But now that geneticists can sequence thousands of base pairs from global samples of hundreds of people, the search has gained new life. For example, Bamshad and his colleagues have sequenced the regions surrounding a cell surface receptor gene called CCR5 in more than 200 people worldwide. This is a crucial receptor because many viruses, including HIV, seem to use it to gain entry to cells.

    In work published 6 August in the Proceedings of the National Academy of Sciences, Bamshad found up to twice as much genetic variation as usual in a noncoding, regulatory region of CCR5, and characteristics of the variation suggest that it has been maintained for a long time. Bamshad believes that those patterns are the result of balancing selection: As in the case of sickle cell anemia, people with two different copies of the gene apparently suffered from fewer or milder illnesses, thus leaving a legacy of variability that continues to have advantages today. Indeed, among people infected with HIV, heterozygosity in the CCR5 gene seems to slow the progression to AIDS. HIV itself is too new to have been the selective force on CCR5, but older candidates include poxviruses, which ravaged human populations in the past, says Bamshad.

    Ken Kidd of Yale University has taken a similar survey of a different set of genes: the ADH genes that produce alcohol-metabolizing enzymes. Many members of eastern Asian populations have a variety of unpleasant reactions to alcohol, including flushed faces, racing hearts, and upset stomachs. Partly as a result, these populations tend to have low levels of alcoholism. These reactions are due to a variety of mutations, some of which hasten the metabolism of ethanol to toxic acetaldehyde and trigger the adverse effects. But any selection seems not to have been acting on human drinking habits; instead, many researchers believe that the reactions offered protection against an unknown parasite that was even more sensitive to acetaldehyde than humans are.

    When Kidd examined the frequency of these mutations throughout eastern Asia, he found that certain mutations often occurred together, creating a unique combination of genetic variants that together promote acetaldehyde production. This combination occurred so frequently that it was unlikely to have spread by chance, says Kidd. Thus although no one knows what parasite might have been involved, the prevalence of the mutations suggests that selection was indeed at work, he says.

    In a more controversial study, Robert Moyzis of the University of California, Irvine, reported on efforts in his lab by Yuan-Chun Ding and others to detect selection in the human dopamine receptor DRD4. A particular version of the receptor—an allele with seven repeats of a 48-base pair insert—has been linked with the personality trait of novelty-seeking and with attention-deficit/hyperactivity disorder (ADHD).

    Moyzis's team found that the seven-repeat allele is surrounded by a large block of sequence in which genetic variations tend to be inherited together, a pattern called linkage disequilibrium. This means that the ancestral chromosomal region on which the mutation first appeared has not yet been broken up by the recombination events that occur each generation, during the formation of sperm and egg cells. That suggests that the allele is much younger than the other common DRD4 alleles, perhaps appearing just 30,000 to 50,000 years ago.

    Normally an allele this young would be relatively rare, but instead it is quite common in some populations. Such common young variants can be a sign of selection, because new mutations favored by selection replace other alleles faster than if they were neutral.
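    The logic can be sketched with a toy neutral simulation (the population size, generation count, and thresholds below are illustrative choices, not figures from any of the studies described here): under pure genetic drift, a mutation that arose only a thousand or so generations ago almost never becomes common, so a young allele found at high frequency stands out.

```python
import numpy as np

rng = np.random.default_rng(0)
N2 = 1000     # 2N gene copies in a toy diploid population of 500
GENS = 1500   # roughly 30,000 years at 20 years per generation
REPS = 2000   # independent new mutations to follow

# Each replicate starts as a single new copy and drifts neutrally:
# each generation, the next count is a binomial draw at the current frequency.
p = np.full(REPS, 1.0 / N2)
for _ in range(GENS):
    p = rng.binomial(N2, p) / N2

surviving = p[p > 0]
common = int((surviving > 0.2).sum())
print(f"{surviving.size} of {REPS} new mutations survive the simulation;")
print(f"only {common} ever become common (frequency > 0.2)")
```

    Under neutrality, almost every new mutation is quickly lost and the rare survivors are seldom widespread; an allele that is both young and common, like the seven-repeat DRD4 variant, therefore suggests something other than drift.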

    But the example is controversial because it's hard to know why an allele that now predisposes people to ADHD might have had a selective advantage, admits Moyzis. He notes that the allele appeared during an “interesting” time in human history, when modern humans expanded into new environments worldwide. He speculates that the allele increased the probability that some individuals would leave their homelands and seek out new challenges.

    Seeking youth and abundance

    As these researchers examine particular genes, others are looking more broadly for signs of selection. For example, Pardis Sabeti, David Reich, Eric Lander, and their colleagues at the Whitehead Institute have developed a technique that spotlights the common young variants that can signal directional selection: They seek common alleles surrounded by extensive linkage disequilibrium (see diagram). The team applied this method to variants of two genes, G6PD and CD40L, known to confer partial protection against malaria. In work described last month in Nature, they uncovered a clear selective signal: Each gene appears to be just a few thousand years old but is much more widespread than expected under a neutral model. Now the team is extending its method to scan the entire genome for other selective episodes that have occurred over the past 10,000 years. (Before that, recombination probably would have scrambled the signal.) “It seems to be a very powerful way of looking for selection and pinpointing important functional variation,” said Benjamin Salisbury of Genaissance Pharmaceuticals in New Haven, Connecticut, which is conducting its own search for selection in 7000 genes.

    Selection stands out.

    Mutations favored by selection are both abundant in populations and surrounded by large blocks of linked DNA (a sign of youth).


    The search for selection is generating considerable interest in the pharmaceutical industry. If a pathogen has exerted selective pressure on a gene, that gene could be a promising target for a new drug or vaccine. “We can use the experiment that nature has already conducted to give us a clue about how to combat a disease,” said Genaissance's J. Claiborne Stephens.

    But many geneticists have cautioned that the new efforts to detect selection will face many of the obstacles that have previously stymied researchers. One problem is that demographic processes and random chance can mimic selection. For example, the seven-repeat allele of the dopamine receptor is common in Native Americans but virtually absent in eastern Asians. But that could be an accident of history, not a sign of selection, if the first people to colonize the Americas just happened to have the allele.

    Also, genes and human traits generally have a complex linkage, not a one-to-one correspondence. “No one has found a variant that explains more than a couple of percent of any common disease, and all of these diseases are going to be highly multigenic,” said geneticist Jody Hey of Rutgers University in Piscataway, New Jersey. “That's bad news for the gene mappers and pharmaceutical companies.”

    For example, Kidd calls the link between certain genetic variants and low levels of alcoholism in eastern Asian populations “one of the strongest associations found in the study of complex diseases.” Yet he acknowledges that alcoholism has social and psychological dimensions too. And although certain alleles might protect against alcoholism, their absence does not boost susceptibility, because the mechanisms of addiction are distinct from those of alcohol metabolism.

    No one expects the search for signs of selection to suddenly become easy. But after years of frustration, researchers are welcoming the new data and methods that might finally yield progress. “We're starting to see a clear connection between the study of history and practical biomedical applications,” says geneticist Stephen Wooding of the University of Utah. “We can generate clinically testable hypotheses that we were never able to generate before.”

    • *“Human Origins and Disease,” 30 October to 3 November, Cold Spring Harbor Laboratory, Cold Spring Harbor, New York.


    Antihydrogen Rivals Enter the Stretch

    1. Charles Seife

    Two teams are vying to probe the properties of mirror-image atoms—a race in which key ideas of physics are at stake

    Antimatter, the fuel of countless science-fiction spacecraft, has catapulted into the front pages twice in the past few months. Two rival teams are hot on the trail of antihydrogen, the antimatter doppelgänger of the simplest element, hydrogen. Known as ATHENA and ATRAP, they use similar techniques, nearly identical equipment, and the same particle beam, an antiproton factory at CERN, the European particle physics laboratory near Geneva. Within a month, each announced that it had created tens of thousands of “antiatoms” cold and slow-moving enough to be studied. The results marked the first surge forward in a race to measure antihydrogen's spectrum—a discovery that could rattle the foundations of physics and will likely net a Nobel Prize for whichever team gets there first. Not surprisingly, the competition is intense.

    Physicists have known for decades that every bit of matter—whether it be an elementary particle like an electron or a composite object like a proton—should have an antimatter equivalent with identical mass and equal but opposite charge. Yet almost all of the known mass in the universe seems to be made out of matter rather than antimatter. Some underlying asymmetry must have led the cosmos to “prefer” matter to antimatter. To learn more, physicists around the world have been waiting for someone to trap and study antihydrogen: the simplest antimatter “element,” an antielectron bound to an antiproton in a combination that nobody has ever spotted in nature.

    The ATHENA and ATRAP teams are fighting to be the first to analyze antihydrogen's properties. Their goal is to make and then capture enough to measure its spectrum, the wavelengths of light it absorbs. Scientists think that antihydrogen's spectrum should be identical to hydrogen's. If it's not, a key principle in physics known as CPT symmetry will have to be discarded, forcing a drastic revision of physicists' understanding of subatomic particles.

    As the two groups close in on that prize, the rivalry is heating up. “There's been some friction between them, and I'm regretful that there has been,” says Daniel Kleppner, a physicist at the Massachusetts Institute of Technology and an expert in dealing with cold hydrogen. “It may diminish each other's pleasure of discovery.” But Kleppner stresses that the tension shouldn't detract from the real story: the production of significant amounts of cold, slow antihydrogen. “The fact that both groups have gotten antihydrogen is a major accomplishment,” he says. “The friction shouldn't deflect from their achievements.”

    Until a few months ago, the smart money was on ATRAP. Its leader, Gerald Gabrielse, a physicist at Harvard University, was the recognized expert in the field of trapping and cooling antiprotons. “He was the first person to do atomic physics with antiprotons, he figured out how to cool them, and he did a marvelous measurement of the [lack of] mass difference between antiprotons and protons,” Kleppner says. “So I view him as having opened up the field.” Others on the 15-member ATRAP team had worked on the experiment that first made hot antihydrogen at CERN in 1995. The new team quickly started racing toward cold antihydrogen production.

    But ATRAP was not the only horse in the race. ATHENA, led by CERN's Jeffrey Hangst, had access to the same beamline, the Antiproton Decelerator (AD) at CERN, which slows antiprotons created at near the speed of light to about 10% of that speed. Drawing on years of experiments with AD's predecessor, LEAR, both groups settled on electromagnetic bottles known as Penning traps to cool antiprotons down to about 4 kelvin, confine them with antielectrons, and induce the two to combine.

    ATHENA and ATRAP follow the same basic recipe for antimatter. Each gets its antiprotons from AD and its antielectrons from a radioactive sodium isotope that emits the particles as it decays. Each captures the antiprotons, cools them to a few kelvin, and shoots them and the antielectrons into opposite ends of a trap where they can mix.

    Going negative.

    “Antiatom” traps use sandwiched potentials (inset) to bring oppositely charged particles together.


    The traps—meter-long cylinders that corral the particles with electromagnetic fields—face a daunting challenge: Because antiprotons and antielectrons have opposite charges, a potential that captures antielectrons repels antiprotons and vice versa. That makes it difficult to build a single trap that can hold both of them, because a trap that appears like a valley to an antielectron looks like a hill to the antiprotons. To bring the particles together, both teams use a trap within a trap—in effect, two hills framing a valley (from the antielectrons' point of view) or two valleys flanking a hill (from the antiprotons' point of view) (see figure). When an antielectron binds to an antiproton (something that occurs with only a handful of the thousands of cold antiprotons in each shot), the resulting neutral antiatom can no longer be easily controlled by electric or magnetic fields. It escapes the trap and floats away.

    That's where the big differences start. To tell that they've created antihydrogen, the ATHENA physicists look for gamma rays that are produced when an antihydrogen atom is annihilated by collisions with ordinary matter. By subtracting the background gamma rays from their total count, they can estimate the number of antiatoms. Gabrielse's ATRAP team, by contrast, lets the untrapped neutral antihydrogens float into an additional trap, which tears apart the antihydrogen atom. The team then counts the liberated antiprotons as they annihilate on contact with ordinary matter. As a bonus, the ionization trap also yields information about how tightly the antielectron is bound to the antiproton.

    The race for the spectrum began with a false start last year, when ATRAP succeeded in cooling antiprotons together with antielectrons but couldn't prove that it had made the particles combine. This September, the underdog ATHENA drew first blood in the duel for antihydrogen. It announced in Nature that it had produced an estimated 50,000 slow-moving antihydrogens (Science, 20 September, p. 1979).

    “ATRAP was leading the way in technology over time. ATRAP was always ahead,” says Steve Rolston, a physicist at the National Institute of Standards and Technology in Gaithersburg, Maryland. “Coming in second was a shock.” Gabrielse acknowledges that he was taken aback. “It surprised me when they got the paper published, but such is life,” he says. At first, he expressed some reservations about ATHENA's results—it can be easy to mistake background events for antiatoms—but he concedes that ATHENA probably created antihydrogen. “They're honest people and did a fairly careful job,” he says. “Right now, I presume they have seen antihydrogen atoms.”

    But Gabrielse was poised to strike back. In October, ATRAP released a paper that will be published in Physical Review Letters that goes beyond ATHENA's work: Not only does it claim the production of about 170,000 cold antihydrogen atoms, but it begins to analyze their properties. With its ionizing trap, Gabrielse's team confirmed predictions that the antielectrons would occupy a high “orbit” around their antiprotons, placing the antihydrogens in a loosely bound, high-energy “excited” state.

    This high-energy state complicates the task of taking antihydrogen's spectrum. An antihydrogen in a highly excited state won't absorb the wavelengths of light that physicists are so interested in. To get much information, physicists have to coax the antiatoms back down to their ground state—a slow process compared with the seconds it takes to mix the antiprotons and antielectrons. Thus, the teams will have to trap a lot of antihydrogen for minutes, hours, or even longer before they can get a reasonable spectrum. Existing equipment might not be up to the job, says Rolf Landua, a physicist on the ATHENA team.

    Another problem is that the teams' source of antiprotons has just dried up. The current run of CERN's beamline has ended, so both teams will have to wait until next year to resume the race. And the budget problems at CERN (Science, 29 March, p. 2341) will interfere with their scramble to collect antihydrogen. “[AD] will be shut down for 1 year, which is a huge disappointment,” says Gabrielse. “Without antiprotons, it's hard to make progress.”

    For now, frozen neck and neck, the rivals can only plan their next round of experiments, fine-tune their equipment, and wait. Says Gabrielse: “I would give a lot for one more week of beam time.”


    Labor Seeks Fertile Ground on Ivy-Covered Campuses

    1. Jeffrey Mervis

    Graduate student unions aren't a new phenomenon at state universities. But their presence at elite private schools is raising the ante for scientists

    CAMBRIDGE, MASSACHUSETTS—When graduate students at Cornell University in Ithaca, New York, overwhelmingly rejected joining the United Auto Workers (UAW) last month, they scotched what would have been only the second student union at a private U.S. university. A week after the 24 October vote, UAW organizer Joan Moriarty, a Ph.D. candidate in labor economics, still shakes with anger as she recounts the bruising 18-month battle for the hearts and minds of her 2300 colleagues, two-thirds of them in science and engineering fields. Fierce opposition from Cornell's president, a vocal antiunion student group, and reports that some faculty members had warned their grad students that a “yes” vote could jeopardize their careers swung the vote against the union, she believes. However, others say that organizers erred by pushing for a vote before they were ready and hooking up with UAW. “It was a setback, not a defeat,” she asserts, tearfully vowing to continue the fight.

    Whatever happens at Cornell, Moriarty won't be alone. Similar organizing efforts are being waged on dozens of U.S. campuses. No longer exclusively blue-collar, unions also represent some 40,000 graduate students at 27 universities around the country. Unlike their counterparts in many countries, U.S. graduate students often carry heavy teaching loads—spending 20 hours or more a week on duties only tenuously related to their graduate training. Their unhappiness over pay, benefits, and job-related working conditions—as well as nonfinancial issues such as inadequate grievance procedures and career counseling—has been red meat for union organizers. Although teaching assistants still dominate most union bargaining units, research assistants (RAs) are becoming more prominent in the wake of a 2000 ruling by the National Labor Relations Board that RAs perform “work” apart from pursuing their degree requirements.

    Ironically, there is a dearth of rigorous, academic research on how graduate student unions affect academia, notes Elaine Bernard, a labor educator who heads the Labor and Worklife program at Harvard University. Earlier this month, her program joined with a network of labor economists to put on a 2-day meeting here to explore scientific workforce issues, including the rise of graduate student unions and the status of postdocs, a traditionally downtrodden class of researchers who have begun to improve their status through cooperative rather than confrontational tactics (see sidebar). The network is funded by the New York City-based Alfred P. Sloan Foundation, which has a long-standing interest in the health of the U.S. scientific work force.

    What's at stake. The first graduate student union was established at the University of Wisconsin in 1969. Most have been formed in the last decade, however—and none without a fight. The early conflicts took place at public universities, which are governed by state labor laws that are often more receptive to unionization. The battleground has now spread to private institutions. One of the longest-running and most bitter fights is being waged at Yale University, where administrators have steadfastly refused to recognize the AFL-CIO-affiliated Graduate Employees and Students Organization (GESO), formed in 1990. This year alone, the results of elections at three elite private universities in the northeastern United States remain in limbo as administrators from Tufts, Brown, and Columbia fight the legitimacy of union drives on their campuses.

    Tufts president Lawrence Bacow, in a statement issued shortly before the contested vote on his campus last April, offered a widely held view among university administrators: “In my view there is nothing a union can add to what graduate students can do for themselves,” Bacow said. “I fear that a union may introduce discord and do damage to both graduate and undergraduate education.” Union organizers scoff at such attempts to brand them as an unwanted, foreign presence on campus. “We're students, and we're doing this because we have nothing left to lose,” says Maris Zivarts, a sixth-year doctoral candidate in molecular biology at Yale and secretary-treasurer of GESO. “We're just trying to make things better for the profession and for academia as a whole.”

    Co-chair of the Sloan network Richard Freeman, a Harvard economist, hopes his colleagues can fill in some of the research gaps that Bernard noted. For example, Freeman and graduate student Emily Jin are analyzing stipend data on graduate assistants from around the country for an upcoming paper on how a union presence affects pay levels. Whatever they find, however, both sides acknowledge that stipend levels are only part of a raft of issues addressed by collective bargaining.

    One major bone of contention is whether unions alter the academic climate. A 2000 survey of faculty attitudes toward graduate student bargaining on five campuses with unions found that unionization had little impact on a professor's ability to advise, instruct, or mentor graduate students. But the author, Gordon Hewitt, now head of institutional research at Hamilton College in Clinton, New York, says there's no evidence the survey results have had any impact on labor-management relations. “It's a political issue, and people will say whatever they have to,” he says.

    Despite the continued labor friction on many campuses, some academic leaders believe that universities need to find a way to accommodate unions. “Five years ago we issued a statement saying that graduate teaching assistants are students only, not employees,” recalls John Vaughn, executive vice president of the 62-member Association of American Universities. “I'm not sure that we could say the same thing today.” Although Vaughn says that he fears the effect on academic life of a continued growth in graduate unions, he also suggests that universities must accept some responsibility for the trend. “In many ways,” he says, “the growth of graduate unions is our fault because of our lack of responsiveness to [graduate students'] problems.”

    That confession might be small consolation to Cornell's Moriarty, who believes so strongly in the union cause that she refused on principle to apply to a university-run emergency fund for a medical condition because she didn't want to accept a handout in lieu of what she regards as a rightful health benefit. But Vaughn's admonishment resonates with some academics. Graduate students “are the embodiment of what makes the U.S. university system so special—the collocation of education and research,” says Harvard mathematician Daniel Goroff, co-chair of the Sloan network. “And they need our attention.”


    Collaboration Pays Off for Postdocs

    1. Jeffrey Mervis

    While graduate students are battling university administrators over their efforts to unionize, postdocs are taking a decidedly less confrontational approach. So far, it seems to be paying off. Take the 4-year-old Stanford University Postdoc Association, which represents 1400 postdocs on campus. The association has worked with the university administration to help its members obtain higher stipends, better health and family benefits, and improved working conditions. Stanford has even created and staffed a postdoc office to serve their needs, and the association also supplies the necessary muscle.

    One recent victory is the university's decision to set a $36,000 minimum salary next year for new postdocs—some $4000 over a de facto standard from the National Institutes of Health. “We didn't give [faculty] a choice,” says cell biologist W. James Nelson, senior associate dean for graduate student and postdoc education at Stanford's medical school. “Postdocs are the engine driving academic research, especially clinical research,” says Nelson, who runs an 18-person lab with five postdocs. “And improving their lot is simply the right thing to do.” This fall Stanford also set a 5-year limit on service as a postdoc, in effect forcing faculty members to find a permanent position—with a competitive salary and full benefits—for any scientist deemed essential to the lab.

    Stanford is one of some 45 U.S. institutions that have ponied up money for administrative offices to meet the needs of postdocs, who in the past decade have formed 48 associations. Until Stanford postdocs organized, says association co-chair Karen Christopherson, a neurobiologist, “we were isolated and disempowered. There wasn't even a formal grievance procedure.” Indeed, until last year Stanford classified postdocs as “nonmatriculating graduate students,” an undignified misnomer whose main benefit, Christopherson jokes, was a movie discount.

    Not child's play.

    Child-care subsidies, a new benefit for postdocs at the Fred Hutchinson Cancer Research Center in Seattle, are a top priority for organizers.


    Campus leaders in the movement, who are laying plans for a national organization, say that they believe their approach serves them better than collective bargaining. “We've gotten a more effective response by not being a union,” says Orfeu Buxton, a sleep and neuroendocrinology researcher at the University of Chicago and a member of the National Postdoctoral Association steering committee.

    “More power to them if they can get what they want through requests and petitions,” says Jelger Kalmijn, a research assistant at the University of California, San Diego, and president of the University Professional and Technical Employees, which represents some 4000 nonstudent research assistants in the University of California system. “But one day they'll come up against an issue they can't resolve that way.” Union organizers also say that any agreement between individuals is subject to change unless it's written into a binding contract.

    That possibility has already occurred to postdocs. A survey last spring found that 28% of Stanford postdocs favored joining a union immediately, with another 42% saying that they thought it might be a good idea. Only 3% said they would never join. “But there's concern that we'd have to give up some control and that it might add an element of mistrust to the relationship,” says Christopherson. The issue is still under debate, she adds, noting that only about 10% of the members took part in the survey.

    Doing the Wave in Many Ways

    1. Charles Seife

    “Kaleidoscopic” doesn't begin to describe the variety of niches coherent waves have come to fill since scientists started giving them their marching orders. This sampler presents a few of the more prominent applications researchers have found for coherent light, sound, particles, and atoms

    Laser Light

    The jewel in the coherence crown is undoubtedly the laser, which has infiltrated every cranny of modern life. Light waves don't naturally march in step—a beam from a flashlight is not coherent—but a laser beam is coherent because it exploits a peculiarity of atoms.

    If you pump the right amount of energy into an atom, you can excite it, causing an electron to flit about at a much higher energy than it ordinarily would. Left to itself, the excited atom would eventually spit out a photon carrying the extra energy and relax back into its ground state. Instead of waiting, though, you can tickle the atom with another photon with that characteristic energy. This induces the atom to disgorge a photon, which then moves in lockstep with the photon you used to tickle the atom. The two photons encounter other excited atoms, which spit out more photons that move in lockstep, which encounter more atoms, and soon you have a powerful, coherent beam of light.
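    The chain reaction amounts to exponential gain. In this toy sketch (the 10% gain per pass and 100 round trips are illustrative numbers, not properties of any real laser), a single seed photon multiplying through an excited medium quickly becomes a macroscopic, in-phase beam:

```python
# Each pass through the excited medium, every photon stimulates on average
# g additional photons, all in lockstep with the ones that triggered them.
photons = 1.0
g = 0.10                 # 10% gain per pass (illustrative)
for round_trip in range(100):
    photons *= 1 + g     # exponential growth of the coherent beam

print(f"one seed photon becomes roughly {photons:,.0f} coherent photons")
```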


    Lasers can scan a price tag, burn out a tumor, or measure the distance to the moon.


    What can you do with that light? Interfere it with itself, measure distances, focus an enormous amount of power on a small spot many meters away, use it to carry messages without having it scatter away, and many other applications. That is why scientists are constantly trying to broaden the types of lasers available: making them out of silicon, getting them to emit shorter and shorter wavelengths, and coaxing them to fire over a shorter and shorter time span.


    Holography

    Ordinary photographs are flat, purely two-dimensional images. The illusion of depth is superficial; if you move your head to the side, your perspective on the scene doesn't change. Holography, by contrast, uses coherent light to give a real 3D view of an object. If you move to the side while gazing at the holographic portrait of a person, a face-on view will become a profile.

    Holography exploits coherence and interference patterns to achieve what an ordinary photograph can't. In one scheme, the holographic “camera” is a fancy interferometer. One beam illuminates the target and is reflected onto a photographic plate, but before striking the plate, it is combined with a “reference” beam that interferes with the light coming from the target. The plate records an interference pattern. Laser light, shone on the plate, reconstructs the original image in three dimensions from that interference pattern. Unlike a photograph, which can't be reconstructed when cropped, a small segment of the hologram contains information about how to reconstruct the entire image (albeit at lower resolution).

    Holography isn't merely about pretty pictures. Some scientists are using short-wavelength beams, such as x-rays or neutrons, to make holograms of objects as tiny as atoms, giving fully 3D views of subatomic structure. Others are using the enormous amount of information in a hologram to store data—with greater densities and faster access times than can be achieved with a magnet-based hard disk.


    Interferometry

    Because coherent beams march in step, they are perfect for interferometry: the measurement of interference between two or more beams.

    Waves behave like a series of crests and troughs. When waves from two sources combine, what happens depends on the waves' phases—the timing of when their crests and troughs strike a given point. If the waves are in phase, then when they are combined, their crests arrive at the same place at the same time. The crests reinforce each other, making bigger crests; likewise for the troughs. The two beams, combined, are brighter than each individual beam. If they are exactly out of phase, though, a crest meets a trough, a trough meets a crest, and the waves cancel each other. The combined beam is dark.

    An interferometer takes advantage of this property by splitting a beam into two identical beams, sending them down two different paths, and then recombining them. If the paths are the same length, the beams will still be in phase when they recombine; the detector sees a bright beam. If one path changes length a fraction of the wavelength of the beam, then the detector sees the beam dim and then disappear. Because visible light wavelengths are a few hundred nanometers, the interferometer is sensitive to changes in distance much smaller than a human could measure by hand.
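    The arithmetic behind that sensitivity is simple. Here is a minimal sketch, assuming two ideal unit-amplitude beams and a 633-nanometer helium-neon wavelength chosen for concreteness:

```python
import numpy as np

wavelength = 633e-9                  # HeNe laser light, metres
k = 2 * np.pi / wavelength           # wavenumber

def detector_intensity(path_difference):
    """Intensity of two recombined unit-amplitude coherent beams."""
    phase = k * path_difference
    # Superpose the two waves, differing only in phase, and square.
    return np.abs(1 + np.exp(1j * phase)) ** 2

print(detector_intensity(0.0))              # equal paths: bright, 4.0
print(detector_intensity(wavelength / 2))   # half a wave longer: dark, ~0
print(detector_intensity(wavelength / 4))   # quarter wave: halfway, 2.0
```

    Shifting one path by a few hundred nanometres swings the detector all the way from bright to dark, which is why interferometers resolve displacements far below anything measurable by hand.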


    Laser Interferometer Gravitational Wave Observatory in Hanford, Washington.


    A CD player uses a laser interferometer to measure tiny pits on the disk, which the player interprets as bits. And the Laser Interferometer Gravitational Wave Observatory, operating simultaneously in Washington state and Louisiana, uses two enormous interferometers in hopes of picking up a gravitational wave, which would change the distance between two mirrors by much less than the diameter of an atom.

    Acoustic Waves

    Sound is a wave, although it's not precisely like a light wave or a water wave. Those are transverse waves, in which the wave motion is up and down or left and right. Sound waves are compression waves, alternating rarefactions and compressions of a material such as air. And a single-frequency sound source such as a tuning fork emits waves that are all in phase: They are coherent, at least until the medium they travel through destroys the coherence.

    By recording sound, amplifying its energy, and sending it back to its source, “time-reversal mirrors” could use the noise of an enemy submarine against it.

    A little less than a decade ago, physicists such as Mathias Fink of the University of Paris VII realized that they could exploit that property to focus an enormous amount of sound energy on a given point in space. All they had to do was “reverse time.”

    Set up an array of detectors in the right place in, say, the ocean, and they can easily pick up the sound of a point source such as a submarine pinging its sonar. Because the ocean is a messy place, littered with silt and temperature gradients and bubbles and such, the waves will get distorted en route to the detectors. Even if the sensors are parked on a sphere around the source, each will detect a slightly different sound wave. But Fink and others realized that if each detector then took what it heard, reversed it, amplified it, and played it back into the ocean, all these different-sounding waves would travel back where they came from and would all converge at the same time, with the same phase, on the original source. The sonar ping would hit the sub with tremendous strength.
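    A one-dimensional caricature of the idea (a toy pulse and sixteen detectors with arbitrary made-up delays standing in for the messy ocean, not a model of any real sonar system):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 400
t = np.arange(T)
ping = np.exp(-0.5 * ((t - 50) / 4.0) ** 2)     # source pulse peaking at t = 50

# Each detector hears the ping after its own unknown propagation delay.
delays = rng.integers(20, 200, size=16)
recordings = [np.roll(ping, d) for d in delays]

# Time-reverse each recording and replay it; the return trip adds the same
# delay again, so every contribution re-aligns in phase at the source.
refocused = sum(np.roll(rec[::-1], d) for rec, d in zip(recordings, delays))

print("refocused pulse peaks at t =", int(refocused.argmax()))
print("peak height = %.1f (all 16 detectors adding in phase)" % refocused.max())
```

    No detector needs to know its own delay; simply playing its recording backward undoes the distortion, and the sixteen contributions pile up coherently at one point, just as the pings converge on the submarine.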

    Sub hunters hope to use these “time-reversal mirrors” to pinpoint enemy subs with great precision, and doctors hope to focus ultrasound energy to break up kidney stones without damaging surrounding tissue.

    Quantum Effects

    Schrödinger's cat: alive or dead? It's a matter of coherence, or more precisely, decoherence. Quantum objects such as atoms and photons behave differently from macroscopic ones such as cats or rocks. Thanks to a property known as superposition, for example, a small thing like an atom can be both spin up and spin down at the same time, but a large thing like a cat (to cite physicist Erwin Schrödinger's classic example) can't be simultaneously alive and dead.


    Decoherence may explain why the quantum properties of familiar objects vanish unnoticed.


    What makes quantum objects quantum and macroscopic objects macroscopic? It seems to have to do with the process by which quantum objects lose their quantum nature: decoherence. (In a sense, a coherent beam of light behaves like a single quantum object.) When a photon or an atom is measured, it is forced to “choose” whether it's spin up or spin down, and at that moment, it behaves like a classical object rather than a quantum one. The quantum state decoheres.

    Decoherence can strike when information flows from the object into the outside world—from a measurement or from the stray bounce of a molecule of air. The bigger and warmer an object is, the more difficult it is to isolate it and prevent information from flowing from it into its environment, making it decohere more and more quickly. This hemorrhage might be what makes big things behave very differently from small things: Macroscopic objects might have a quantum nature, but it disappears too quickly to measure, leaving behind only the grin of Schrödinger's cat.

    Matter Waves

    Light behaves like both a particle and a wave. So does matter. Individual electrons, for example, can interfere with themselves, leaving “dark” and “light” spots on a detector just as photons do. And just as a traditional laser creates coherent beams of light waves, an atom laser would create coherent beams of matter waves.

    Faint matter waves from an “atom laser” could prove handy for fine etching and imaging. CREDIT: MASSACHUSETTS INSTITUTE OF TECHNOLOGY.

    Cool certain types of atoms enough to still almost all of their jittery thermal motion, and their waves will overlap. The atoms lose their individuality and begin to act like a single superatom: a giant, coherent lump of matter known as a Bose-Einstein condensate (BEC). In 1997, Wolfgang Ketterle, a physicist at the Massachusetts Institute of Technology, allowed matter from a BEC to leak out of its cage, dripping down under the influence of gravity. Although hardly a robust beam like a traditional laser, it was an atom laser of sorts: Matter waves, all locked in phase, were marching in the same direction. Since then, researchers have been improving the brightness and robustness of these beams. Even though atom lasers are nowhere near as bright as an ordinary laser, they have a potentially important advantage: The wavelength of a chunk of matter is much, much smaller than that of a beam of anything but the highest energy light. Because of this small wavelength, an atom laser could do finer work than an ordinary laser can. A matter interferometer could sense much smaller distance changes, a matter hologram could image much tinier features, and an atom laser could, in theory, create structures in silicon or other materials that are far too small for lasers to make.
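The wavelength advantage follows from de Broglie's relation λ = h/(mv). A short Python sketch, using an illustrative speed for a slow beam of sodium atoms (the speed is an assumption for the sake of the estimate, not a figure from the article), shows the wavelength landing far below that of visible laser light:

```python
# Hedged estimate of an atom's de Broglie wavelength vs. optical light.
h = 6.626e-34            # Planck's constant, J*s
amu = 1.66054e-27        # atomic mass unit, kg
m_na = 23 * amu          # mass of a sodium atom, kg
v = 30.0                 # assumed speed of a slow atomic beam, m/s

lam = h / (m_na * v)     # de Broglie wavelength, m
print(lam)               # ~6e-10 m, versus ~5e-7 m for visible laser light
```

Even at this leisurely speed, the matter wavelength is under a nanometer, hundreds of times shorter than a visible photon's, which is why an atom laser could, in principle, etch and image finer features.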

  16. Battle to Become the Next-Generation X-ray Source

    1. Robert F. Service

    For the first time in decades, researchers are looking past synchrotron storage rings for their x-ray future

    It's crowded at the summit of x-ray science these days. Some 50 state-of-the-art x-ray facilities, called synchrotrons, dot the landscape worldwide. Another 20 are either under development or on the drawing boards. All are stadium-sized rings that produce bright beams of x-rays capable of the Superman-like feat of peering into the heart of matter to see the atomic structure of molecules. Together they've produced countless discoveries, ranging from atomic-scale maps of proteins that have been linked to disease to insights into the riddle of high-temperature superconductivity. But now, for the first time in decades, x-ray researchers have set their sights on new peaks that promise to raise their science to even loftier heights.

    Researchers and science funding agencies are pondering a range of new x-ray sources, some of them add-ons to existing synchrotrons, others major new facilities in their own right. If built, these machines will generate much shorter x-ray pulses than top-of-the-line synchrotrons do today. Researchers hope to use those shorter pulses—possibly as short as 10 to 100 quadrillionths of a second, or femtoseconds—as ultrafast strobe lights to see not only the atomic structure of molecules but also the dance of atoms as they make and break chemical bonds. And one class of machines—called free electron lasers (FELs)—promises to give x-ray researchers something they have never had before: powerful beams of coherent high-energy x-rays. Like light waves from the laser at a supermarket checkout stand, coherent x-ray photons would travel together in perfect unison, rising and falling in lockstep. That regular behavior is expected to create x-ray beams billions of times more powerful than those available today, an accomplishment that could make it possible to collect entire data sets with just one blast of photons rather than the hours or days of beamtime needed today.

    X-ray scientists insist that the new x-ray sources won't compete with one another. “The … facilities are complementary in terms of the science that can be done,” says Roger Falcone, a physicist at the University of California, Berkeley. But because money is always tight, proponents will likely have to square off to persuade backers and funding agencies to support their projects. Still, no matter which camp scores first, researchers should gain access to x-ray beams unlike any seen before. “It's a very exciting time in pushing x-ray science into directions it hasn't been able to go to before,” says Eric Rohlfing, who oversees atomic, molecular, and optical sciences research at the U.S. Department of Energy (DOE) office in Germantown, Maryland.

    Next big thing?

    DOE must decide whether designs such as this add-on to the Stanford Linear Accelerator will make it off the drawing board.


    Stepchild to wunderkind

    Not that the development of synchrotrons has been dull. The first synchrotrons were little more than particle accelerators built for high-energy physics experiments. Those accelerators whipped charged particles such as electrons to near light speed in giant ring-shaped machines and then smashed them into one another. Physicists then surveyed the wreckage for clues to the underlying structure of matter. But as electrons whip around storage rings, they shed x-rays, a side effect that bleeds away their energy. Particle physicists bemoaned that energy drain and tried to find ingenious ways to avoid it. But other researchers soon figured out that the x-rays could be useful in their own right. So they built second- and third-generation synchrotrons optimized to create far more powerful x-ray beams.

    The problem, says Robert Schoenlein, a condensed matter physicist at Lawrence Berkeley National Laboratory in California, is that synchrotrons are somewhat inflexible when it comes to the length of the x-ray pulses they provide. A major reason is that when synchrotron operators inject bunches of electrons into the storage ring, their primary goal is to keep them circulating and delivering x-rays to users for as long as possible. That means fiddling with the electron bunches as little as possible. One result, says Schoenlein, is that modern synchrotrons tend to produce trains of x-ray pulses each lasting tens to hundreds of picoseconds, and there is little that users can do to change that timing.

    That might not sound like much time. But compared with the movement of electrons around an atom, it's an eternity. Trying to see motion on that time scale is like trying to capture an image of a speeding bullet with an Instamatic camera: At best all you see is a blur. “Worldwide, there is a tremendous [interest] in being able to probe very fast time scales for very small structures,” says Keith Hodgson, associate director of the Stanford Synchrotron Radiation Laboratory in Palo Alto, California. But because synchrotrons can't shorten their pulses, they aren't likely to take researchers where they want to go. “Conventional synchrotrons are near the end of the road,” says Schoenlein. “To make major advances now will require new sources.”

    Four such sources are now vying for attention and funding. All are based on schemes for extracting x-rays from powerful particle accelerators, although researchers are also making steady progress in making x-ray beams from tabletop lasers (see sidebar).

    The first is a synchrotron add-on called a slicing source. The approach uses an ultrafast laser to slice out a wedge of electrons from a pulse traveling through a synchrotron. This smaller slice of electrons then produces a shorter pulse of x-rays, possibly as short as 100 femtoseconds, says Falcone. Relative to the cost of a new facility, slicing is cheap. DOE is currently considering adding a slicing source to the Advanced Light Source in Berkeley, California, a project expected to cost about $5 million. The downside of the technique is that a slicing source discards most of the electrons in the original pulse. As a result, the x-ray beam it produces might be only a thousandth as strong as a typical synchrotron pulse.

    Dimness won't be a problem with the Short Pulse Photon Source (SPPS), a separate add-on proposed for the Stanford Linear Accelerator Center (SLAC) in California. Instead of working with a ring-shaped synchrotron, SPPS is designed to make short pulses of x-rays from electrons fired down a straight-line linear accelerator, or linac. Like synchrotrons, linacs produce and accelerate intense short pulses of charged particles. Because researchers need not worry about keeping particle pulses stable for long periods, they can use a series of tricks to coax the electrons into creating shorter, denser bunches.

    SPPS will turn these electron bunches into x-rays with the help of a common synchrotron device called a wiggler, a several-meter-long piece of equipment that houses an array of magnets with alternating polarity. As an electron beam hurtles through a wiggler, the alternating magnetic fields make it veer slightly back and forth, like a skier on a slalom course. With each turn the electrons shed x-rays, which by the end of the course pack as many as 100 million photons into a pulse lasting between 100 and 200 femtoseconds. Like slicing sources, the add-on is cheap; its expected price tag is just a few hundred thousand dollars.

    SPPS has its limitations, though. When possible, x-ray researchers prefer “tunable” x-ray sources that let them choose the wavelength that works best for their experiments. But the x-rays SPPS spits out all have the same fixed energy. What's more, it can fire only about 10 pulses a second, too few to capture more than one frame of a high-speed event. Even so, says Falcone, “it creates an intermediate solution for the field” between today's third-generation sources and a possible fourth-generation source dedicated to producing short pulses of x-rays of a tunable wavelength.

    Bright futures

    Topping the list of ideas for that fourth-generation source is an x-ray version of an FEL. FELs have been around for decades, but all of them built so far turn out longer wavelength photons, ranging from infrared down to ultraviolet. Getting an FEL to produce the most energetic “hard” x-rays long seemed out of reach. But in the 1980s researchers discovered that—at least in theory—adding a 100-meter-long wigglerlike device called an undulator to a linac could create a tunable source of 200-femtosecond x-ray pulses. By manipulating the energy levels of electrons in the linac, operators can produce x-rays of different wavelengths. The long undulator also allows x-ray photons emitted by electron bunches to build up, stimulating the emission of still more x-rays—an advantage that not only makes the x-rays coherent but also creates beams up to 10 billion times brighter than those that top-of-the-line synchrotrons produce today (Science, 10 May, p. 1008).

    X-ray experts are convinced that all that x-ray power would produce revolutionary science. Ultrabright x-rays, for example, might enable researchers to image the atomic structure of complex proteins without first coaxing millions of copies of the protein to line up in an orderly crystal, a requirement that's impossible to fulfill with many proteins today. “It's going to be qualitatively different in what you can do from a third-generation synchrotron,” says Hodgson.

    ps = picoseconds, or 10^−12 seconds; fs = femtoseconds, or 10^−15 seconds.

    That potential has already enticed funding agencies in the United States and Germany to push ahead with x-ray FEL plans. At the DESY particle physics laboratory in Hamburg, Germany, for example, researchers are weighing plans to build an x-ray FEL alongside a proposed particle collider dubbed TESLA. Depending on whether the facilities share equipment and technology, the x-ray FEL could cost between $470 million and $700 million and could open its doors to users in 2010. U.S. officials, meanwhile, just approved $6 million to draw up a detailed engineering design for the Linac Coherent Light Source (LCLS), an x-ray FEL designed to piggyback on SLAC. By using 1 kilometer of SLAC's existing 3-kilometer linac, LCLS could be built for as little as $250 million. If funded and built on schedule, LCLS will open its doors in 2008, according to project coordinator and SLAC assistant director John Galayda.

    Beyond FELs, one more shadowy range of x-ray peaks beckons researchers. A new class of machines called recirculating linacs is beginning to gain attention. “A recirculating linac is a hybrid between a linear accelerator and a storage ring,” says the Berkeley Lab's Schoenlein. The machines also accelerate beams of charged particles and use them to produce x-rays. But instead of sending the electrons out in a straight line, as in a linac, or in a loop, as in a synchrotron, the machine forces them to spiral outward in a path resembling a paper clip (see figure below). In a unique twist, electrons traveling through successive spirals all swoop back through one common section of the accelerator. The design saves money by using the same equipment to give electrons multiple energy kicks. And as with a linac, the machine isn't a ring, so operators have the flexibility of manipulating the electron pulses to create ultrashort bursts of x-rays. “Nobody has prototyped one of these in the x-ray range,” Hodgson says. “But it's really quite an exciting possibility.”

    Energy saver.

    Recirculating linacs, such as this schematic layout based on designs from various laboratories, aim to provide ultrashort x-ray pulses with a smaller budget and footprint than today's synchrotrons. Electrons spiral outward as they gain energy.


    Researchers in several groups have come up with recirculating linac schemes for compressing electron pulses as they travel, allowing them to turn out extremely short bursts of x-rays down to 100 femtoseconds and possibly even shorter. “That could potentially revolutionize some of the ultrafast x-ray work,” says Stephen Leone, a physicist with a joint appointment at UC Berkeley and the Berkeley Lab, who chaired DOE's most recent advisory committee report on x-ray facilities. Early-stage proposals for building x-ray recirculating linacs are now being drawn up at the Daresbury Laboratory in Cheshire, U.K., and at Berkeley Lab and Cornell University in the United States.

    Just how all these proposals will sort themselves out remains to be determined. DOE's Basic Energy Sciences Advisory Committee is gearing up to set priorities for future x-ray choices. Their decision, expected next year, is likely to shape the landscape of x-ray science—both in the U.S. and in other countries whose plans depend on the course it follows—for decades to come.

  17. High-Powered Short-Pulse X-ray Lasers: Coming Soon to a Tabletop Near You?

    1. Robert F. Service

    While researchers dream of turning energetic “hard” x-ray sources into ultrafast strobe lights, another class of machines—tabletop lasers—has been freezing fast action for years. Ahmed Zewail won the 1999 Nobel Prize in chemistry for using lasers with 200-femtosecond pulses to watch the breakup of iodine cyanide, among other chemical reactions. Femtosecond lasers have traditionally turned out photons of infrared light, a range of wavelengths that are far longer than those of x-rays and thus have a harder time spotting the motion of atoms. But researchers are continuing to make strides in developing femtosecond lasers that fire shorter wavelengths of light.


    Small, powerful short-pulse lasers could find a host of applications in science and industry.


    The first step was relatively easy: By simply shining infrared laser pulses through crystals capable of doubling the frequency of the light (thereby halving its wavelength), researchers produced laser light in the visible and even ultraviolet portions of the spectrum. But because such crystals absorb x-rays, other strategies have been needed to shorten the wavelengths even further.
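The arithmetic behind this first step is straightforward: each pass through a frequency-doubling crystal doubles the frequency and so halves the wavelength. A minimal sketch, assuming a typical 800-nanometer infrared starting pulse (the starting wavelength is an assumption for illustration):

```python
# Each frequency-doubling crystal halves the wavelength (doubles the frequency).
lam_ir = 800e-9            # assumed infrared laser wavelength, m (800 nm)
lam_visible = lam_ir / 2   # one doubling: 400 nm, violet/visible light
lam_uv = lam_ir / 4        # two doublings: 200 nm, ultraviolet
print(lam_visible, lam_uv)
```

Two doublings already reach the ultraviolet; the route stalls beyond that because, as the text notes, the crystals themselves absorb x-rays.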

    Four years ago, laser physicists led by Margaret Murnane and Henry Kapteyn, then at the University of Michigan, Ann Arbor, reported that by firing infrared laser pulses into argon gas, they had ripped electrons off the argon atoms. The laser's oscillating electromagnetic wave carried these free electrons away from their parent atoms and then smashed the electrons back into the atoms, causing the atoms to emit soft x-rays (Science, 29 May 1998, p. 1412). Murnane and Kapteyn, now at the University of Colorado and JILA, a joint research institute of UC Boulder and the U.S. National Institute of Standards and Technology, and their colleagues refined the technique. In the 19 July issue of Science (p. 376), they reported that they had tuned the device to produce coherent light, all the photons marching together in quantum-mechanical lockstep. That coherence enabled them to use the short-pulse lasers to record high-density holographic data. It should also allow such lasers to be used to test advanced chip-patterning components and perhaps even to build a microscopic version of a computerized tomography scanner capable of imaging cells in exquisite detail.

    Right now, Murnane and Kapteyn's technique is bumping against a wall. To turn out even more energetic hard x-rays, Kapteyn explains, researchers would need even higher powered lasers capable of carrying electrons away from ions and then smashing them back together with even more force. Unfortunately, such powerful beams scatter the ionized electrons like buckshot. “So you end up getting incoherent and dim emission,” Kapteyn says. “On the other hand, to the extent that we can develop techniques to fix that problem, it's really not known what the potential of these techniques is to generate hard x-rays.” In view of how quickly short-pulse lasers have developed so far, this dark-horse technique for generating hard x-rays is still very much in the race.