News this Week

Science  14 Sep 2001:
Vol. 293, Issue 5537, pp. 306
  1. STEM CELLS

    HHS Inks Cell Deal; NAS Calls for More Lines

    1. Constance Holden

    A month after President Bush came out with his policy on embryonic stem cells, the National Academy of Sciences has weighed in with a report stating that far more cell lines need to be available for research. It has also endorsed the ethically controversial practice called “therapeutic cloning” for the purpose of producing tissue that is genetically matched to patients.

    The report, by a committee headed by Johns Hopkins University biologist Bert Vogelstein, stresses that existing lines will have increasingly limited use as they age and accumulate mutations. The group repeated concerns that have been voiced since Bush's 9 August announcement: that many of the 64 cell lines that qualify for federal funding are not yet, and may never become, viable for research. At hearings last week, Health and Human Services Secretary Tommy G. Thompson admitted that only 2 dozen lines are currently ripe for use. The panel also noted that all the cells that qualify for federal funding are cultured with the aid of mouse cells, which could mean that their products would be unsuitable for human therapies.

    The panel says that public funding is the most efficient way to further stem cell research. It also recommends that a national advisory group of scientists and ethicists be established at the National Institutes of Health to oversee the research.

    Meanwhile, true to its promise to move rapidly on the stem cell front, the Public Health Service (PHS) signed an agreement the day after Labor Day designed to make existing cell lines more readily available. The government's “groundbreaking” agreement with the U.S. group that holds the patents on human embryonic stem cells, announced by Thompson at a 5 September Senate hearing, clears away some legal underbrush, enabling government researchers to obtain cell lines, do basic research, and publish unfettered by intellectual-property restrictions. WiCell, the provider, says it is committed to making similar agreements with other research institutions that receive federal funds.

    Ready to roll.

    Trays of embryonic stem cells in James Thomson's lab.

    CREDIT: JEFF MILLER FOR THE UNIVERSITY OF WISCONSIN, MADISON

    Now, for $5000, a would-be stem cell researcher can obtain two vials of cells and technical assistance in cultivating them, says Carl Gulbrandsen, president of WiCell, the nonprofit stem cell research institute run by the University of Wisconsin, Madison. Gulbrandsen says WiCell has enough cells to supply all comers. “We could supply hundreds,” he said. As of 5 September, he said, WiCell had had more than 100 inquiries.

    Under the memorandum of understanding between WiCell and PHS, scientists at the National Institutes of Health (NIH) will have a free hand as long as they stick to basic research. “This provides a template” that universities can use to negotiate their own agreements, says Maria Freire, who until last week headed the Office of Technology Transfer at the NIH. She says WiCell's earlier cell-transfer agreements have not been “nearly as user-friendly as this one.” The new accord was promptly reached, she says: “Wisconsin understood it was critical to get these [cells] in the hands of researchers.”

    Like WiCell's earlier agreements, this one prohibits using the cell samples to try to create whole embryos or for any therapeutic or diagnostic purposes. And in a change of policy, WiCell is forgoing “reach-through” rights—that is, it won't claim patent rights to any new discoveries, such as a useful new molecule, that researchers make using its cell lines in basic research. Freire says if researchers come up with a discovery with commercial potential, the developer might have to negotiate a new agreement with WiCell. At that point the developer could potentially run up against claims by Geron, the California company that is licensed to develop six types of human tissue from WiCell lines. (Geron's attempt to expand its stem cell territory last month elicited a lawsuit, yet to be decided, by the Wisconsin Alumni Research Foundation, or WARF; see Science, 17 August, p. 1237).

    Theoretically, WARF, which owns the WiCell patents, could try to prevent people from buying similar cells from, say, Sweden or India, because the Wisconsin patents cover both the substance of the cells and the method for deriving them. But WiCell says it will not object to the use of other embryonic stem cell lines as long as the other providers' conditions are generous, too.

    Thompson has promised that by next week, NIH will post on the Web a detailed registry describing the 64 stem cell lines that qualify for federally supported research.

  2. STEM CELLS

    $2.2 Million for Cells to Fight Parkinson's

    1. Constance Holden

    While the government has declined to fund the derivation of new stem cell lines, private foundations have been busy. On 7 September, the Michael J. Fox Foundation for Parkinson's Research announced a $2.2 million research initiative to establish a line of dopamine-producing neurons from stem cells; Parkinson's researchers everywhere would have access to this line.

    Foundation spokesperson Michael Claeys says the initiative grew out of a 6 August scientific conference to assess cell-based therapies for Parkinson's. “This initiative has been driven by scientific opportunities, not by any public policy,” he insists. Applications will be accepted from people deriving or already working with adult, fetal, or embryonic stem cells. “The skill required to do this pretty much eliminates people who aren't in the game already,” notes Claeys. Speed is also required: The deadline for letters of intent is 5 October, and the application deadline is 16 November.

  3. NATIONAL CANCER INSTITUTE

    Klausner Quits NCI to Head New Institute

    1. Eliot Marshall*
    1. With reporting by Jocelyn Kaiser.

    Richard Klausner, director of the National Cancer Institute (NCI), announced this week that he has resigned, effective at the end of the month. He will become the first director of a new philanthropic outfit in Washington, D.C., the Case Institute of Health Science and Technology, established with $100 million in support from America Online founder Steve Case and his wife, Jean Case. “One of the great things” about the new job, Klausner said, is that he will remain close to NCI and continue to run an intramural lab there. The Case Institute, according to Klausner, will invest in a spectrum of health projects ranging from developing tools for molecular biology to bioinformatics and even methods of improving water quality in the developing world.

    New foundation.

    After 22 years at NIH, Klausner is moving on.

    CREDIT: SAM KITTNER

    Klausner's departure had been rumored for months, although he denied as recently as 3 weeks ago that he was leaving (Science, 31 August, p. 1569). In an interview the day before he announced his departure at a meeting of the National Cancer Advisory Board (NCAB), Klausner denied any connection between his move and a clampdown on NCI management by the Department of Health and Human Services, including revocation of large salary increases he had approved for NCI's top administrative officer and others. Reports suggesting he is leaving as a result, Klausner said, are “absolutely false” and “made up of whole cloth.” Far from welcoming his departure, Klausner said, the administration recently urged him to stay and head the National Institutes of Health (NIH).

    Klausner, who has been at NIH for 22 years, took charge of NCI in 1995. He made policy changes designed to make the administration more flexible and promote a molecular understanding of cancer.

    Biologist Phillip Sharp of the Massachusetts Institute of Technology, a member of NCAB, said Klausner made NCI into “an open and forward-looking organization.” At the NCAB meeting, Sharp praised Klausner for his leadership and “putting cancer research at the cutting edge of science and technology.” The administration has not yet named an acting NCI director.

  4. ASTRONOMY

    Report Finds Fault With NSF Oversight

    1. Andrew Lawler

    A mixture of relief, praise, and criticism greeted the publication last week of a much-anticipated report* on support for astronomy in the United States. As Science reported 2 weeks ago (31 August, p. 1566), a panel of the National Academy of Sciences argued strongly against merging the astronomy programs of NASA and the National Science Foundation (NSF)—a possibility the White House had asked the academy to consider. But the panel has stirred up debate with recommendations to improve coordination of federal astronomy programs, while highlighting flaws in NSF support for the ground-based portion of the discipline.

    The relief came from the panel's rejection of the idea of wholesale restructuring, on the grounds that multiple funding sources strengthen the field. But the panel noted that the growing influence of NASA, the interdependence between space- and ground-based telescopes, and the increasing role of state and private funds and facilities require “systematic, comprehensive, and coordinated planning.” According to the panel, chaired by former aerospace executive Norm Augustine, the planning should be carried out by a board representing several federal agencies and led by someone of the White House's choosing. The report also urges NSF to set up its own astronomy advisory panel and to build closer ties to nonfederal players.

    Clearer vision.

    Report says that greater cooperation will help private facilities such as the UC Observatories/Lick Observatory.

    CREDIT: ROGER RESSMEYER/CORBIS

    No one disputes the need for greater coordination of the field. But another advisory body at NSF isn't practical, says Robert Eisenstein, chief of NSF's math and physical sciences directorate. And, he adds, “if we do it for astronomy, there are 40 other directorates that will say, ‘What about us?’” Joseph Miller, director of the University of California Observatories/Lick Observatory in Santa Cruz, likes the idea of more community input at NSF. But he's troubled by the prospect of an interagency body setting priorities for the bulk of the country's astronomy portfolio. “We fear this could turn into some top-down monolithic program” that leaves little room for independent voices, says Miller, whose facility is funded by the state and by private foundations.

    Apart from better coordination, most of the recommendations focus on the need to improve NSF's management of U.S. astronomy. The agency has lagged in supporting new instruments and allocating research grants as ground-based optical and infrared astronomy facilities have proliferated, the report notes. The Augustine panel suggests that NSF come up with its own strategic plan, including timelines and objectives, an open bidding process for all new facilities, and a more comprehensive accounting system for each project. It also suggests that NSF could learn from media-savvy NASA about how to publicize its scientific discoveries.

    Eisenstein acknowledges that tight funding and a focus on large facilities have resulted in “a big squeeze on grants.” But he says that accepting unsolicited proposals from academics for new facilities, rather than holding open competitions, has served astronomy well by encouraging creative ideas.

    However, both Eisenstein and Miller agree that the academy report could be a boon to a long-discussed proposal for NSF to pay for additional instrumentation at private observatories in exchange for blocks of time on those telescopes, which NSF would then dole out to researchers. “We need to start with practical things, and I have high hopes for this,” says Miller. Eisenstein says he hopes to find enough money in NSF's 2002 budget, now under review by Congress, to begin funding the exchange program, assuming that both sides can agree on how to structure the arrangement. “The burden of proof is on us—with the full cooperation of the community—to figure out a way to implement this [program],” says Eisenstein.

    Miller and a group of directors of private observatories say that such an agreement would be a welcome sign that NSF is listening to them. And they hope that the Augustine report will foster a new era of greater cooperation. “At least this gives us a mandate to make the best use of funds in a coordinated way,” says Paul Goldsmith, director of the National Astronomy and Ionosphere Center in Arecibo, Puerto Rico.

  5. GENOMICS

    Painting a Picture of Genome Evolution

    1. Jennifer Couzin*
    1. Jennifer Couzin is a writer in San Francisco.

    Normally, we associate evolution with organisms growing more complex as they acquire new genes over time. But as a new analysis of the genome sequences of two bacteria shows, genes can be lost as well as gained during evolution. Even more intriguingly, the work provides snapshots capturing gene decay in the act and thus illuminates the actual genomic changes that occurred over tens of millions of years of evolution.

    The research, which is described on page 2093 by microbiologist Didier Raoult of the Marseilles School of Medicine in southern France and his colleagues, focuses on two pathogenic bacteria: Rickettsia conorii, the culprit in Mediterranean spotted fever, and R. prowazekii, which causes typhus. These organisms diverged from a common ancestor 40 million to 80 million years ago, and evidence of accumulated mutations in a gene shared by the two indicates that R. prowazekii is evolving more rapidly. To explore how the two grew apart, the Raoult team sequenced the complete 1.3-million-base-pair genome of R. conorii and then compared it to R. prowazekii's genome sequence, which was determined 3 years ago by Charles Kurland of the University of Uppsala in Sweden and his colleagues.

    Evolution clue.

    The newly sequenced genome of Rickettsia conorii, shown here inside a host cell, is providing insights into evolution.

    CREDIT: COURTESY VSEVOLOD POPOV/UNIVERSITY OF TEXAS MEDICAL BRANCH IN GALVESTON

    The two Rickettsia are good subjects for this analysis partly because both are obligate intracellular parasites, which means they can survive only in the cells of their insect vectors or in the cells of animals they infect, such as humans. Thus, they rarely encounter other species with which they can exchange genetic material, making it easier to trace how their individual genomes change over time.

    Scientists have long predicted that, for a minute bacterium trapped in an animal's cell, shrinking the genome can conserve energy and improve efficiency. The new analysis by the Raoult team gives a stamp of approval to this theory. It shows that R. prowazekii's genome is smaller overall—1.1 million bases compared to its cousin's 1.3 million. It also has one-tenth as much repeated DNA and far fewer active genes; whereas R. conorii has 1374 such genes, R. prowazekii has only 834.

    What's more, remnants of nearly half the genes that no longer function in R. prowazekii remain in its genome. The arrangement of this “junk” DNA even mirrors the configuration of the active genes in R. conorii. “It was like having one of the two being the ancestor of the other one and then seeing what has happened during all these years,” says Raoult.

    “This [sequence] is telling us something about evolution that maybe we already should have known,” says David Walker, a pathologist at the University of Texas Medical Branch in Galveston, referring to the fact that bacterial genes decay. Because remnants of many of the genes lost by R. prowazekii stay behind in the pathogen's genome, he adds, the new sequence could shed light on why genes degrade and how their functions change as they do.

    Indeed, the work already offers a remarkably clear view of the stages of gene decay. As the R. prowazekii genome sequence shows, genes are first interrupted by a premature stop codon, a sequence of three nucleotides that tells the protein-synthesis machinery that it has reached the end of the gene and should stop. Occasionally, these interrupted genes continue to make incomplete proteins of their own. But as degradation progresses, genes lose the ability to produce proteins and eventually stop being copied into messenger RNA altogether, although they remain identifiable.
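
    The stop-codon step lends itself to a small illustration. The sketch below is not from the paper; the toy sequences, the function name, and the restriction to the three standard stop codons (TAA, TAG, TGA) are illustrative assumptions. It simply shows how a single point mutation that introduces a premature stop codon truncates the readable portion of a gene.

    STOP_CODONS = {"TAA", "TAG", "TGA"}   # the three standard stop codons

    def translate_until_stop(dna):
        """Return the codons read before the first stop codon, and whether a stop was hit."""
        codons = []
        for i in range(0, len(dna) - 2, 3):   # step through the sequence codon by codon
            codon = dna[i:i + 3]
            if codon in STOP_CODONS:
                return codons, True           # translation halts here
            codons.append(codon)
        return codons, False                  # ran off the end without hitting a stop

    intact = "ATGGCTTGGGCTGCTTAA"    # toy gene: the only stop codon (TAA) sits at the very end
    decayed = "ATGGCTTGAGCTGCTTAA"   # same gene after a single G-to-A change turns TGG into the stop TGA

    print(translate_until_stop(intact))    # (['ATG', 'GCT', 'TGG', 'GCT', 'GCT'], True)
    print(translate_until_stop(decayed))   # (['ATG', 'GCT'], True): only a fragment can be read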

    The decay is likely the combined result of random mutations and adaptation. Struck by a mutation that disables a gene, individual Rickettsia microbes either die or pass the altered genes on to their offspring. Raoult points out that some of the genes lost make enzymes needed to produce amino acids also generated by the host—meaning that the bug can abandon these genes without losing access to the amino acids. “If you don't have some positive benefit from that gene, you lose it,” says Nancy Moran, an evolutionary biologist at the University of Arizona in Tucson.

    Having painted the outline of Rickettsia evolution with broad brush strokes, scientists now hope to focus on the details of how gene inactivation occurs. Moran notes that many of the genes lost perform basic functions such as DNA repair. Thus, it's possible that the loss of, say, one specific DNA repair gene instead of another affects which mutations stick. By clarifying how genes lost may guide the bacterium's evolution, scientists can perhaps grasp how its existing design came to be.

  6. ASTROPHYSICS

    Orbiting Observatories Tally Dark Matter

    1. Charles Seife*
    1. *“Two Years of Science With Chandra,” 5–7 September.

    WASHINGTON, D.C.— As galaxy clusters belch x-rays in all directions, they reveal the hidden mass in the universe. At a meeting here celebrating 2 years of observations with the Chandra X-ray Observatory,* astronomers claimed that Chandra observations, along with pictures from the Hubble Space Telescope, enabled them to calculate the amount of dark matter in the cosmos—and seriously damage one theory about its nature.

    One of the biggest puzzles in astrophysics is the nature of dark matter, the invisible substance whose gravitational pull holds galaxies together. “It's been about 25 years since we appreciated that dark matter is the dominant form of matter in the universe,” says Joel Bregman, an astrophysicist at the University of Michigan, Ann Arbor.

    In the past few months, astronomers have measured the amount of dark matter by looking at wiggles in the cosmic background radiation (Science, 4 May, p. 823) and by analyzing the distribution of galaxies in space (Science, 13 April, p. 188). They've concluded that ordinary matter makes up only about 5% of the mass needed to give space the shape that cosmologists prefer, while dark matter makes up another 25% or so. (The mysterious “dark energy” or “quintessence” seems to make up the remainder.) At the symposium, Steven Allen, an astronomer at the Institute of Astronomy in Cambridge, U.K., presented new evidence that those figures are correct.
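
    For readers keeping track of the arithmetic, those fractions are conventionally written as density parameters that sum to roughly one for the flat geometry cosmologists prefer; the 0.70 assigned to dark energy below is simply the implied remainder, not a figure quoted at the meeting:

    \Omega_{\text{ordinary}} + \Omega_{\text{dark matter}} + \Omega_{\text{dark energy}} \approx 0.05 + 0.25 + 0.70 \approx 1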

    Weighty matters.

    Warped images of galaxies in cluster Abell 2390 helped reveal the mass in intervening space.

    CREDIT: HUBBLE SPACE TELESCOPE

    With the Chandra satellite, Allen and his colleagues observed the x-rays emitted by gas inside massive galaxy clusters. “For the very first time, we're able to accurately measure the temperature of this gas,” Allen says. From the temperature profile and density of the gas, the team figured out how much mass is holding the cluster together. “It's very straightforward,” he says.
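
    Allen's description skips the algebra, but cluster-mass estimates of this kind conventionally rest on the hydrostatic-equilibrium relation, in which the x-ray-measured gas temperature T(r) and the logarithmic slopes of the gas density and temperature profiles fix the total mass inside radius r (the notation below is the textbook one, not anything quoted at the meeting):

    M(<r) \approx -\frac{k_B T(r)\, r}{G \mu m_p} \left( \frac{d\ln \rho_{\text{gas}}}{d\ln r} + \frac{d\ln T}{d\ln r} \right)

    Here k_B is Boltzmann's constant, G is Newton's constant, and \mu m_p is the mean particle mass of the gas.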

    Meanwhile, pictures from the Hubble Space Telescope and ground-based observatories gave an independent measurement, based on how much the extreme mass of the cluster bends light, a phenomenon called gravitational lensing. The more lensing, the more mass is concentrated in the cluster. Although the two methods are very different, their results agree. “With the optical data and the x-ray data, you get the same answer,” says Allen. The values for the amounts of matter and dark matter in the universe match what the cosmic background and galaxy-distribution data imply. “It's the most accurate determination to date of the amount of dark matter in galaxy clusters,” he says.

    John Arabadjis of the Massachusetts Institute of Technology has used Chandra x-ray data to draw an even stronger conclusion about dark matter. Some theorists postulate that dark matter is self-interacting—that is, particles of it are fairly likely to collide with one another. In that case, the collisions should force the dark matter to spread out more than it would otherwise. This hypothesis seemed to explain the distribution of matter in the centers of dwarf galaxies, but according to Arabadjis, Chandra's x-ray measurements show that dark matter in galaxy clusters doesn't spread out as one would expect if the particles collided easily. Thus, the model that succeeds in dwarf galaxies seems to fail in larger structures. “We can more or less say that self-interacting dark matter is dead now,” Bregman says.

    Paul Steinhardt of Princeton University is less sure. “The model's been declared dead many times,” he says. Steinhardt thinks the study's assumptions are too crude to give definitive answers yet. And even if Arabadjis is right, he says, “there's plenty of room left in the self-interacting picture. But the simplest version might be in trouble.”

  7. GEOLOGY

    Swiss Scientists Trace 645-Year-Old Quake

    1. Giselle Weiss*
    1. Giselle Weiss is a writer based in Allschwil, Switzerland.

    At dinnertime on 18 October 1356, residents of Basel, Switzerland, felt the jolt of an earthquake that toppled churches and castles 200 kilometers away and triggered weeklong fires. The ground seemed to slumber after that. Although a few obscure accounts tell of periodic tremors in the area up to 1721, the nature of the 1356 earthquake—the largest historical seismic event in central Europe—remained a mystery. Now, on page 2070, researchers report that they have identified an active fault that may explain not only the 1356 earthquake but two earlier ones as well. The finding provides the first indication of how frequently such events shake the upper Rhine graben, the rift valley system to which Basel and its environs belong.

    Unlike the San Andreas fault in California, European faults responsible for earthquakes are hard to identify, says Domenico Giardini, director of the Swiss Seismological Service and a co-author of the report. That's because small earthquakes leave little trace on the surface, and major earthquakes are too rare to have left much of a historical record.

    With a magnitude estimated at between 6 and 6.5, however, the 1356 quake should have been big enough to leave a visible mark, says Mustapha Meghraoui, a geologist at the University of Strasbourg, France. Meghraoui and colleagues Bertrand Delouis and Matthieu Ferry of the Swiss Federal Institute of Technology in Zürich set out to find it. After poring over aerial and satellite photographs and topographic maps, they zeroed in on a 50-meter-high escarpment that runs for 8 kilometers along the western side of the Birs valley in Reinach, south of the city of Basel. Something had obviously happened there—but was the feature really due to an earthquake, or just to erosion or landslides?

    Ravaged.

    A 1544 woodcut shows the earthquake that leveled Basel 2 centuries earlier.

    CREDIT: NISEE/EERC

    Searching for clues, the researchers visited an archaeological dig in the area. There, in the wall of a trench, they spotted signs of movement along a fault: a sharp contact between a very young sediment and an old sediment. The team crossed the road and started trenching at the base of the scarp, using geophysical evidence such as differences in the electrical resistivity of the ground to pinpoint the most promising sites.

    “Before you open the trench, you cannot be 100% certain that you will find the earthquake,” says Delouis, a seismologist, who says he followed close behind the digging machine. To date, eight trenches have been opened. Painstaking examination of the trench walls has revealed blocks of sand and clay clearly separated on a steep diagonal—the trace of so-called normal faulting, in which extensional (or pull-apart) forces cause blocks of crust to slide up and down relative to each other along a rupture. In the fourth trench, carbon-14 dating confirmed that three earthquakes had nudged the earth upward a total of 1.8 meters over the past 8500 years.

    The new findings suggest that the fault unleashes a 1356-type earthquake every 1500 to 2500 years on average. That may not seem like much to worry about. But averages say little about when the next quake will strike, Meghraoui points out. Besides, he says, the Rhine graben probably harbors other faults capable of rattling the area: “The challenge is to find them and build a realistic seismic hazard assessment.”

    Donat Fäh, a geophysicist with the Swiss Seismological Service, says that data from this and future studies will go into regional earthquake catalogs to help develop building codes, especially for critical facilities such as chemical and nuclear power plants and long-lived features such as artificial lakes.

  8. COGNITIVE NEUROSCIENCE

    Moral Reasoning Relies on Emotion

    1. Laura Helmuth

    Suppose, in a classical moral dilemma, you see a trolley with five frightened people in it headed for certain disaster. They can be saved from plunging off a cliff if you hit a switch and send the trolley onto another track where, tragically, another person is standing who would be killed by the trolley. What to do? Most people say that it's worth sacrificing one life to save five others.

    But suppose the doomed trolley can only be saved if you push a bulky person onto the tracks, where his body would stop the trolley but, alas, he would be killed. Although faced with the same trade-off of five lives for one, most people say it would be wrong to stop the trolley this way. Paradoxes such as this mean job security for philosophers. They've been debating them for decades but have been unable to come up with a logical reason why sometimes it's OK to kill one person to save five, whereas other times it's not.

    Now, an interdisciplinary team has offered its philosopher colleagues a helping hand. According to a brain imaging study presented on page 2105, even if an ethical problem is posed in strictly rational terms, people's emotional responses guide their solutions. The study, says cognitive neuroscientist Martha Farah of the University of Pennsylvania in Philadelphia, “pushes outward on the boundaries” of cognitive neuroscience. Rather than studying how people perform relatively simple tasks such as movements, the team is exploring “something quintessentially a form of higher human thought.”

    Right or wrong?

    Sometimes saving a net four lives just feels wrong.

    ILLUSTRATION: JOE SUTLIFF

    Intrigued by the dilemma of the moral dilemmas, a team led by Joshua Greene, a philosophy grad student at Princeton University in New Jersey, used functional magnetic resonance imaging to spy on people's brains while they read and reasoned their way through a number of scenarios. Some resembled the “switch tracks” dilemma, others the “push body,” and some had no apparent moral component, such as deciding whether to take a bus or train to some destination.

    While the people were deliberating the body-pushing set of moral dilemmas—but not the other scenarios—emotion areas of their brains lit up, the team found. These areas, the medial frontal gyrus, posterior cingulate gyrus, and angular gyrus, have been shown to be active when someone is sad, frightened, or otherwise upset. The team's scan didn't register parts of the frontal lobes that are strongly associated with emotions and judgment, so “it's not the prettiest picture,” says Farah. Even so, she says it's still clear that some dilemmas activate emotion areas of the brain and others don't.

    “From a utilitarian point of view, these situations are identical,” says psychologist Jon Haidt of the University of Virginia in Charlottesville; “they differ only in that one of them feels wrong.” Greene points out that the study doesn't resolve whether it's right or wrong to push someone into the path of a runaway trolley, but it does begin to answer a related question: how people decide what's right and wrong.

    The findings are bad news for the majority of moral philosophers and ethicists, who maintain that moral decisions must be based on pure reason, says philosopher Stephen Stich of Rutgers University in New Brunswick, New Jersey. After all, he says, people in the scanner are “thinking of abstract, hypothetical problems, of the sort philosophers have been reflecting on for decades.” Instead of discounting emotion, Stich says, his colleagues should treat it as an important part of people's moral reasoning.

  9. NEW FACILITIES

    Congress Grills NSF on Selection Process

    1. Jeffrey Mervis

    Michael Marx wants to understand why there's so much more matter than antimatter in the universe, making possible the world as we know it. Before probing this mystery, however, the particle physicist must struggle with another, more earthly puzzle—understanding how the U.S. National Science Foundation (NSF) ranks competing big-ticket projects like Marx's.

    Marx thought he had the NSF part of the equation solved last October. That's when the National Science Board (NSB), which oversees the agency, approved a $120 million accelerator experiment at Brookhaven National Laboratory in Upton, New York, that would allow him and a team of scientists from around the world to measure a phenomenon, called charge-parity violation, that provides a glimpse into the first few moments after the big bang. However, Marx's excitement cooled in April when he looked at NSF's 2002 budget request and couldn't find a $25 million down payment for the two detectors that make up the Rare Symmetry Violating Processes (RSVP) experiment. “I was shocked,” he recalls. “They told us that we were on the very fastest track.” Two months later, his disappointment turned to anger when he learned that an influential member of Congress was planning to put money into NSF's budget for another facility—a neutrino detector dubbed Ice Cube at the South Pole—also approved by the science board but not requested by NSF (Science, 27 July, p. 586).

    En route.

    This Gulfstream V will become a research plane, one of several new facilities funded by NSF.

    Testifying last week before the House Science Committee's research subcommittee, NSF director Rita Colwell and NSB vice president Anita Jones offered a glimpse into how the agency selects such projects as RSVP and Ice Cube from a pool of contenders. The hearing, prompted by a report from NSF's inspector general that faulted the agency's management of large facilities under construction, also featured the first public listing of projects approved by the science board (see table).


    One revelation was that the science board does not prioritize its choices after screening for scientific merit. “Our job is to [whittle them down] from a huge list to a small number of projects,” explained Jones, a computer scientist at the University of Virginia, Charlottesville. “The board expects them all to go forward, budget permitting.” Representative Nick Smith (R-MI), who chaired the hearing, expressed dismay that the board doesn't rank them. “Do we really want OMB [the Office of Management and Budget] to make that decision and then leave it to politicians to decide what to fund?” he asked.

    Jones defended the board's neutrality, saying it provided NSF with greater flexibility. Colwell added that her top priority is completing projects that have already received some funding, after accounting for balance across disciplines and the readiness of individual projects. Each fall NSF hashes out the list with OMB, which this year created a logjam by ordering no new starts.

    That explanation wasn't much solace for RSVP's supporters, however. At the hearing, Representative Felix Grucci (R-NY), whose district includes Brookhaven, pressed Colwell for information about the status of the project. She dodged his question, saying that he'd have to wait until the Bush Administration's 2003 budget is unveiled in February.

    However, Colwell was more forthcoming on how NSF plans to handle future big projects. She announced the formation of an office for large facilities to try to ensure that every project is built on time and on budget. “We want to bring in some expertise that hasn't been resident here,” says Tom Cooley, NSF's chief financial officer, about a new deputy who will oversee a half-dozen staffers and work cooperatively with science program managers. NSF hopes to fill the top slot by this winter, at a salary of about $130,000.

    Marx, a professor at the State University of New York, Stony Brook, now working full-time as an RSVP project manager, is eager to work with the new office as an NSF-funded project. In the meantime, he'd welcome more “transparency” in how selections are made. “I think it's wonderful that NSF has more good ideas than money to build them,” he says. “It just would be nice to know what's going on.”

  10. AIDS RESEARCH

    Debate Begins Over New Vaccine Trials

    1. Jon Cohen

    PHILADELPHIA, PENNSYLVANIA— Government health officials are wrestling with a tough decision: Should they approve the most ambitious clinical trials to date of an AIDS vaccine, even if the two candidates have clear shortcomings? At a meeting* here last week, the U.S. military and the U.S. National Institutes of Health (NIH) unveiled detailed plans to launch phase III “efficacy trials” next year of nearly identical vaccines. The separate trials would cost a total of at least $95 million and involve nearly 27,000 participants from the United States, Thailand, and several countries in the Caribbean and South America (see table). But, as a vigorous debate here indicated, some researchers have deep reservations about whether these tests should go ahead.

    David Baltimore, the Nobel laureate who heads NIH's AIDS Vaccine Research Committee (AVRC) and runs the California Institute of Technology in Pasadena, summed up the dilemma: “We have no other materials that are worth considering for phase III trials. It will take at least four more years for that. And four more years will be demoralizing for the entire vaccine enterprise.” But then again, Baltimore and other researchers acknowledged that these two candidate vaccines have serious weaknesses, because in smaller human tests they have not triggered powerful immune responses against HIV.

    The proposed trials would test vaccines used in a one-two punch called a “prime-boost.” The first vaccine, the “prime,” consists of HIV genes stitched into canarypox, a bird virus that does not harm humans. Made by the Franco-German pharmaceutical company Aventis Pasteur, the vaccine aims to teach the immune system to produce “killer cells” that would home in on and destroy HIV-infected cells. The “boost” would come from a genetically engineered version of HIV's surface protein gp120. This second shot, made by VaxGen of Brisbane, California, stimulates production of antibodies that, theoretically, can prevent HIV from infecting cells in the first place.

    The debate surrounding these efficacy trials echoes a dispute that rocked the field in 1994 over plans to test gp120 vaccines singly (Science, 24 June 1994, p. 1839). At the time, NIH decided not to fund efficacy trials of gp120 vaccines made by two California biotechs, Genentech and Chiron, because phase II data suggested that antibodies triggered by the vaccines could only stop wimpy strains of HIV.

    Researchers from both the U.S. military and NIH's HIV Vaccine Trials Network (HVTN)—a collection of academics who design and conduct the tests—said that they will stage efficacy trials of prime-boost vaccines only if phase II studies now being completed show that at least 30% of vaccinated people developed killer cell responses at some point during the trial. Larry Corey, who heads the HVTN's Core Operations Center at the Fred Hutchinson Cancer Research Center in Seattle, Washington, says the 30% benchmark will provide enough statistical information to determine whether levels of killer cells in vaccinated people correlate with protection from HIV infection. But data from phase II trials of these vaccines suggest that meeting this 30% goal is far from a given, as Mark de Souza of the Armed Forces Research Institute of the Medical Sciences in Bangkok, Thailand, described. Early results from a U.S. military study of canarypox in that country indicate that only about 22% of vaccinated people developed killer cells, de Souza reported. “It's going to be close,” acknowledges the lead AIDS vaccine researcher for the U.S. military, John McNeil of the Walter Reed Army Institute of Research in Rockville, Maryland. “But I think [the vaccine is] good enough to go forward.”

    Other investigators are skeptical about the 30% target. “That's not good enough for me,” says Douglas Richman, a virologist at the University of California, San Diego. Richman, who sits on the AVRC, worries that if only 30% of people develop killer cells, the vaccine might fail too often to be of practical use. Mark Feinberg of Emory University in Atlanta further questions whether such a low response would truly allow researchers to determine whether the killer-cell responses correlate with immunity. “We have a hard time figuring out correlates of immunity in AIDS vaccine monkey experiments where we study the animals much more intensively,” he notes.

    Both Feinberg and Richman, like many of their colleagues, reserved judgment about whether the efficacy trials should proceed, saying they first want to review the phase II data. But Brigitte Autran, an immunologist at Hôpital Pitié-Salpêtrière in Paris who has evaluated killer-cell responses in recipients of the canarypox vaccine, says that “there's no good scientific basis for these trials.” She is especially dubious about conducting two similar trials. Susan Buchbinder of the San Francisco, California, Department of Public Health, who described the HVTN trial at the meeting, counters that the two trials complement each other and may pool data.

    The military hopes to review its phase II trial data over the next few weeks and make a decision before the end of the year. HVTN will not complete its phase II study until December and will probably need at least a month to collate the data—just in time for January's meeting of NIH's AVRC. NIH may also sponsor another public meeting to weigh the risks and benefits of proceeding with this costly, complex trial.

    • *AIDS Vaccine 2001, sponsored by the Foundation for AIDS Vaccine Research and Development, 5–8 September.

  11. MICROBIOLOGY

    Do Chronic Diseases Have an Infectious Root?

    1. Carl Zimmer*
    1. Carl Zimmer is the author of Evolution: Triumph of an Idea.

    A wealth of evidence suggests that pathogens may play a role—perhaps even a causal one—in chronic diseases like Alzheimer's or MS. Proving that theory, however, is another matter

    In the 1970s, epidemiologists documented remarkably high rates of multiple sclerosis (MS) on the isolated Faeroe Islands in the North Atlantic. MS is nothing new to medicine, but it was new to the Faeroe Islands: There was no sign of it there before the 1940s. The epidemiologists found that the disease got its start during an outbreak coinciding with the arrival of British soldiers during World War II.

    That an influx of visitors can trigger an outbreak is not unusual. But for it to trigger this specific disease was, because MS was generally thought to be a chronic condition brought on by genetic factors and perhaps a defective immune system.

    But if MS is caused by a germ, what germ is it? That's a question that scientists have been asking about a growing list of chronic diseases that were once thought to be mainly a matter of genes or lifestyle. Since the 1970s, epidemiological clues have emerged for other diseases, and in some cases, scientists have been able to make a persuasive case for specific bugs.

    In the 1980s, for example, researchers discovered that bacteria cause ulcers and that certain viruses trigger cancer. And in the past 2 decades, dozens of pathogens have been implicated in a range of diseases, from Alzheimer's to arthritis. Recently, some biologists have argued that evolutionary theory predicts that all but the rarest chronic diseases must be caused by infections.

    But despite many exciting hints, researchers are a long way from clinching the argument. Alzheimer's disease, MS, and schizophrenia offer three cautionary lessons. Every step of the research is fraught with controversy, from isolating the pathogens to determining how they might cause the disease to sorting out how a host's genetic profile influences the course of disease. And because these illnesses are chronic, scientists have to confront frustrating questions about cause and effect that don't come up with acute illnesses like Ebola or the mumps. Is the pathogen the cause of a particular disease or just a late-coming bystander? And what if two or more pathogens are implicated in the same chronic disease? Are both the cause, or neither? “There is a real chicken-and-egg problem here,” says Stephen Reingold, vice president of research at the National Multiple Sclerosis Society.

    A humbling lesson

    For researchers who suspect that pathogens lie behind many chronic diseases, ulcers are the great inspiration. In 1981, a young Australian gastroenterologist named Barry Marshall learned of a mysterious bacterium lurking in the stomachs of patients. Over the next few years, he discovered that people suffering from ulcers often carried the microbe, which came to be known as Helicobacter pylori. Defying decades of conventional wisdom, Marshall speculated that the bacterium—and not acid or stress—might actually cause ulcers.

    Inspirational.

    The discovery that H. pylori—not acid—causes ulcers gave credence to the idea that pathogens lie behind many chronic diseases.

    CREDIT: A. B. DOWSETT/SPL/PHOTO RESEARCHERS

    To test his idea, Marshall swallowed a broth full of H. pylori, and sure enough, he soon developed gastritis, the prelude to ulcers. Marshall cured himself with antibiotics, and subsequently, he and his co-workers successfully treated a number of people suffering from ulcers, clearly pinning the bacterium as the culprit. Other researchers have since shown that H. pylori infects perhaps one-third of all people, causing not only ulcers but gastric cancers as well.

    “All of us have been humbled by the Helicobacter story,” says Subramaniam Sriram of Vanderbilt University in Nashville, Tennessee. “It's made us look at infectious agents once again.” Robert Yolken of Johns Hopkins University agrees: “The Helicobacter model is the big success story.” But it was not the only one. At about the same time that Marshall was swigging H. pylori, other researchers were finding some of the first compelling evidence that cancers could also be triggered by viruses. Hepatitis B was associated with liver cancer, for example, while human papillomaviruses were linked to cervical cancer.

    For other chronic diseases, however, the evidence is little more than circumstantial. Multiple sclerosis, for example, sometimes strikes its victims more like an epidemic than a genetic disorder, as it did in the Faeroes. Similarly, schizophrenia has signs of being triggered by infections during pregnancy. It is more likely to strike people born in cities than on farms and to strike people born in winter (when the flu and other diseases are common) than other times of the year.

    Even without decisive evidence, some biologists argue that pathogens must cause most common chronic diseases. Foremost among these advocates is Paul Ewald, an evolutionary biologist at Amherst College in Massachusetts. Ewald suggests that people with chronic diseases ought to leave fewer children and grandchildren behind to propagate their genes than do healthy individuals. And so genetic disorders should gradually reduce themselves to minuscule levels. (The only exceptions would be disorders that are balanced by some benefit provided by the same genes, as in the case of sickle cell anemia, which is linked to protection from malaria.)

    Purely genetic disorders, Ewald contends, can't cause more than about 1 death in 10,000. “That's the point at which you'd be able to barely maintain a genetic disease,” he says. “When you get above that, you know that something must be maintaining it.”
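
    One way to see where a ceiling of roughly 1 in 10,000 can come from is the textbook mutation-selection balance calculation. The sketch below assumes a fully recessive disease allele; the values used for the mutation rate \mu and the fitness cost s are illustrative assumptions, not Ewald's figures:

    \hat{q} \approx \sqrt{\mu / s}, \qquad \text{frequency of affected individuals} \approx \hat{q}^{2} \approx \mu / s

    With \mu on the order of 10^{-5} per generation and s between 0.1 and 1, the affected fraction stays between about 10^{-5} and 10^{-4} (at most roughly 1 in 10,000) unless, as with sickle cell anemia, some compensating benefit props the allele up.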

    To Ewald, that something is most likely a pathogen, because it can cause a chronic disease without paying this evolutionary penalty. As long as the pathogen can escape to new hosts before its current host dies, it can continue to create sickness. Humans may evolve better defenses against a parasite, but the parasite can respond in kind, evolving new tricks for getting around them.

    Ewald has captured public attention, with features on his work appearing in magazines like Newsweek and Atlantic Monthly. But he has had a mixed reception among specialists in chronic diseases. “Infection may be important at a broad level, but so is starvation,” says Paul Ridker of Brigham and Women's Hospital in Boston. Although he finds the theory intellectually stimulating, Ridker argues that it is no substitute for detailed research. Perhaps not surprisingly, scientists who are investigating possible infectious causes tend to be positive. “Generally, I think Ewald's right,” says Alan Hudson of Wayne State University in Detroit, Michigan. But even champions of pathogens like Barry Marshall, now at the University of Western Australia in Crawley, concede that “so far nothing looks as good as H. pylori—and most of the leads have been rather weak.”

    Mysterious MS

    Take MS, a disease in which immune system cells attack the insulating sheath of myelin that surrounds neurons in the brain and spine. As the disease progresses, its victims typically lose their muscle coordination, speech control, and eyesight. More than 300,000 people in the United States alone suffer from the disease—many more than evolutionary theory would predict if it were strictly a genetic disorder. Epidemiological studies also hint at a pathogen. People who migrate before age 15 from MS hot spots (such as Australia and Ukraine) to places with low rates are less likely to contract the disease than are those who stay behind. That decline is consistent with a scenario in which MS is caused by an infection that strikes in adolescence.

    Experimental studies likewise hint that a pathogen could cause MS if its own proteins resembled myelin. Once the immune system became primed to attack the invader, it might inadvertently ravage the myelin as well. Indeed, in the July 2001 issue of the Journal of Clinical Investigation, a team of immunologists at Northwestern University Medical School in Chicago described creating symptoms resembling MS in a mouse by using an infectious agent. They injected the mouse with a normally harmless virus to which they had added a gene from a bacterium, Haemophilus influenzae, that makes a myelinlike protein. The mouse's immune system quickly became primed to attack the engineered virus; within 2 weeks its myelin was under assault as well.

    Ubiquitous.

    Herpes simplex 1 and 2 are widespread viruses that have been implicated in Alzheimer's and schizophrenia, respectively.

    CREDIT: GOPAL MURTI/SPL/PHOTO RESEARCHERS

    But this sort of research enables scientists to create animal models of MS—not to identify the actual culprit in humans. Over the years, scientists have prowled for pathogens that could cause this sort of mimicry in association with MS, and they've found no shortage of candidates, 17 microorganisms in all. But today researchers are focusing the hunt for an MS agent on two ubiquitous pathogens: a virus and a bacterium that were both discovered just 15 years ago.

    Human herpesvirus 6 (HHV-6) usually infects people when they are just a few months old, causing a sizable fraction of the fevers experienced by babies. After causing a brief illness in its host, the virus goes into hiding and may lie dormant for the rest of the host's life. But researchers have found that it sometimes becomes active again. Patients who receive bone marrow or organ transplants are often plagued by reactivated HHV-6, possibly because their immune systems are compromised.

    Researchers studied this connection in 1995 at the Pathogenesis Corp. in Seattle, Washington (now part of Chiron), finding evidence of HHV-6 in the brains of several dozen people with MS. In these patients, the viruses were producing proteins, and they lurked close to the myelin of their hosts. In people who did not suffer from MS, the virus was there, but researchers found little evidence that it was active.

    Since then, several groups have pursued this lead with distinctly mixed results. Donald Carrigan and Konstance Knox of the Institute for Viral Pathogenesis in Milwaukee, Wisconsin, published results last year showing that 56% of patients with MS had active HHV-6 in their brains, whereas healthy subjects had none. But several other teams have failed to find the virus in people with MS, while a paper this January in the Journal of Medical Virology found HHV-6 in one-third of healthy people's brains.

    Carrigan and Knox dispute those negative findings, arguing that other researchers did not check carefully enough to see whether the virus was active or dormant in the brains. That hasn't been enough to sway many herpes experts, however. “I am frankly rather skeptical about the possible link between HHV-6 and MS,” says Steven Dewhurst of the University of Rochester in New York. Just this May, he had more reason for doubt: At the annual meeting of the American Academy of Neurology, a Swedish team reported treating people with MS with valacyclovir, an antiviral medication. It produced no change in the symptoms.

    Another suspect in MS is the bacterial scourge Chlamydia pneumoniae (see table). Its cousin, C. trachomatis, is notorious as a sexually transmitted disease and this year was implicated in cervical cancer. C. pneumoniae invades the lungs, where it sometimes causes respiratory diseases. It can then settle into a host's body for decades, living quietly in white blood cells. Like HHV-6, C. pneumoniae is practically universal. Just about everyone becomes its victim at some point.


    C. pneumoniae was first suspected to play a role in heart disease. Shortly after its discovery in 1986, researchers encountered it lurking in the coronary blood vessels. Several teams also found that people with heart disease were more likely to have antibodies to C. pneumoniae than were healthy individuals. Later research has shown that the bacteria actually live in the lesions associated with atherosclerosis. In 1999, scientists reported that a protein made by C. pneumoniae closely resembles one found in heart muscle. As it tries to attack the bacteria, the immune system may attack the heart as well, creating the inflammation that may cause atherosclerosis (Science, 26 February 1999, p. 1335). Two clinical trials are now under way to see whether antibiotics can lessen further damage in people with heart disease. But many researchers still doubt that the connection is real. “The data are actually weaker than people think,” says Ridker.

    In 1998, Vanderbilt's Sriram reported that he had found C. pneumoniae in yet another part of the body: in the cerebrospinal fluid of a man with MS. Sriram subsequently looked at other people with MS and reported that genetic profiling revealed that 97% of them had the DNA of C. pneumoniae in the fluid, while only 18% of the controls did.

    Sriram's results raised the possibility that C. pneumoniae could trick the immune system into attacking myelin just as it attacked heart tissue, or at least make a bad situation worse by aggravating the inflammation. Animal experiments suggest there might be something to this idea. In the August 2001 Journal of Immunology, Hudson of Wayne State and his colleagues reported that when they injected a protein from C. pneumoniae into the brains of rats, it produced remarkably MS-like symptoms. “His is a very seminal paper,” says Sriram.

    Multifaceted.

    Some studies suggest that C. pneumoniae is involved in both MS and heart disease; others refute that.

    CREDIT: KARI LOUNATMAA/SPL/PHOTO RESEARCHERS

    But when other researchers tried to confirm Sriram's results, many of them failed. Skepticism has been running high, and in April 2001, the title of a review in Trends in Microbiology bluntly summed up the feeling of many researchers: “Chlamydia pneumoniae and multiple sclerosis; no significant association.”

    Sriram disputes this finding, noting that other labs used different methods than his and might have missed the bacteria. To resolve the issue, he and the other researchers agreed to conduct a blind test of cerebrospinal fluid from the same set of MS patients and controls. Sriram found Chlamydia in 73% of the people with MS and 23% of those without. The other three labs found no evidence of Chlamydia at all.

    Ewald is among Sriram's defenders, arguing that his methods are more sensitive than those of other labs. “A scientific response to the test would be, ‘Well, it looks like the Vanderbilt group was right after all!’” he claims. But Sriram concedes that the debate is open: “I hope that physicians will view this as a debate that's ongoing and not an observation that's being finalized.” If the association is real, he notes, it's possible that the bacteria only arrive in the brain after MS has already begun. “Having this infection on top of the preexisting damage may be harmful,” he suggests. “The ultimate answer would be a successful clinical trial showing that when you eliminate the agent, you eliminate the disease.” As a small step in that direction, Sriram is running a trial on MS patients with antibiotics.

    Analyzing Alzheimer's

    C. pneumoniae plays an equally controversial role in the debate over Alzheimer's disease. Theoretically, Chlamydia is a compelling candidate for a causal agent. In Alzheimer's patients, protein clumps appear in the brain and neurons become tangled; researchers suspect that inflammation is a key ingredient in this recipe. “One of the things Chlamydia does better than anything is elicit inflammation,” says Hudson. “If they are in the brain, they are causing inflammation.”

    In 1998, Hudson and his colleagues reported genetic evidence of C. pneumoniae in the brains of 17 out of 19 Alzheimer's patients. Meanwhile, 18 out of 19 healthy people tested negative. When the researchers examined the diseased brain tissue, they found evidence of the bacteria in the very regions of the brain that had been damaged.

    Mind-boggling.

    Controversial evidence links a common herpesvirus, HHV-6, with schizophrenia.

    CREDIT: A. B. DOWSETT/SPL/PHOTO RESEARCHERS

    Once again, other labs tried to confirm Hudson's results. Two failed to find any bacteria, and a third obtained results that were ambiguous at best. “The interest in pursuing associations between Chlamydia and Alzheimer's has lost a great deal of steam,” claims Robert Ring of Wyeth-Ayerst Neurosciences in Princeton, New Jersey, one of the researchers who failed to find a link.

    Hudson, however, is suspicious of the methods used in the studies. In two out of three cases, the scientists tried to find Chlamydia in brains preserved in paraffin instead of fresh tissue. “It's pretty erratic getting stuff from paraffin-fixed samples,” says Hudson. And he also maintains that the bacteria exist at low levels that can be missed if researchers don't run enough tests. “If you come up negative, how do you know you didn't just miss the DNA?” he asks.

    New research bolsters his case. At an international Chlamydia meeting last August in Helsinki, two teams reported finding Chlamydia in fresh tissue of numerous Alzheimer brains and in almost none of the healthy brains.

    One of the complicating factors in the search for chronic bugs is that many of the best candidates, like Chlamydia and HHV-6, are widespread. Far more people carry them than develop the diseases in question, which suggests that other factors, such as the genes of their hosts, must play a role. Ruth Itzhaki of the University of Manchester Institute of Science and Technology in the U.K. is exploring the pathogen-gene relationship in her work on Alzheimer's.

    Itzhaki has found preliminary evidence linking another herpesvirus, herpes simplex virus type 1 (HSV1), to Alzheimer's. As with HHV-6 and C. pneumoniae, most people become infected with HSV1 at some point. The virus lurks primarily in the nerves surrounding the mouth, and in 20% to 40% of its hosts, it causes occasional cold sores. Among young people, HSV1 is entirely absent from the brain, Itzhaki's team has found. But it is often present in the brains of elderly people. Itzhaki suspects that the virus sneaks into the brain as the immune system declines with age. Itzhaki's team has found that 63% of elderly people carry the virus, while 74% of elderly people with Alzheimer's do.

    This small difference between the two groups might suggest that HSV1 is a minor risk factor for Alzheimer's. But the genetic evidence suggests otherwise, says Itzhaki. Those of her subjects who carried a gene variant called ApoE4, a known risk factor for Alzheimer's, as well as the herpesvirus, were much more likely to have Alzheimer's than were people with either the gene or the virus alone (Science, 15 May 1998, p. 1002). She concluded that the combined risk accounts for the disease in 60% of the 61 cases her team has examined.

    Itzhaki speculates that people with ApoE4 may not be able to repair cell damage caused as the virus triggers inflammation in the brain. Stopping the virus from getting into the brain might be one way to fight the disease. “Vaccines against HSV1 might prevent Alzheimer's, at least in some cases,” says Itzhaki. But so far, no one has tested that proposition. Still, Itzhaki has earned some admirers. “I think there's significant merit in her work,” says Keith Crutcher of the University of Cincinnati. “This type of work is notoriously difficult, and I think her studies have been carefully conducted.”

    Genetic interplay

    In some diseases, the line between pathogens and the genes of their hosts may be nearly indistinguishable. Certain kinds of viruses, known as endogenous retroviruses, paste their DNA into the genomes of their host cells. If one of them infects a cell destined to become a sperm or egg, its DNA will be carried in every cell of the person that cell gives rise to. It will also be handed down to subsequent generations. Endogenous retroviruses make up an estimated 1% of the human genome, but most of their sequences have mutated into harmless nonsense. Still, some endogenous retroviruses may be able to come back to life (often during fetal development), harnessing their host's genes to make new viruses that can invade new cells.

    Hopkins's Yolken and his colleagues have been exploring whether reawakened retroviruses might somehow be involved in schizophrenia, as some are known to cause brain damage. They searched for retroviral DNA in the cerebrospinal fluid of 35 people who had recently developed schizophrenia. As they reported in the 10 April Proceedings of the National Academy of Sciences, genetic material from one type of retrovirus, HERV-W, turned up in 29% of schizophrenics, whereas none was found in the cerebrospinal fluid of healthy people or even people with other neurological disorders. One hypothesis that Yolken and his colleagues are now pursuing is that these retroviruses are unleashed in certain individuals before they are born, altering the development of their brains in ways that don't become clear until adulthood.

    In a report to appear in the November Archives of General Psychiatry, Yolken and his colleagues describe a potential trigger for these retroviruses. The researchers sifted through the records of a study known as the Collaborative Perinatal Project, in which thousands of pregnancies were monitored between 1959 and 1966. During the project, blood samples were taken from the mothers, and the health of their children was followed for 7 years. The group tracked down 27 subjects who had developed schizophrenia and other psychotic illnesses as adults. They revisited their mothers' blood samples, measuring levels of antibodies to various pathogens. They then measured the same antibodies in the mothers of healthy subjects, using two controls for each psychotic subject, matched for season of birth, race, and gender.

    For five out of six pathogens, the researchers found no significant association with psychosis. But one did pass the test: HSV2—the sexually transmitted form of HSV, which causes genital sores. Women with signs of infection with the virus when they were pregnant were more likely to give birth to children who would later develop schizophrenia and other forms of psychosis.

    Yolken points out that HSV2 is a compelling candidate for triggering schizophrenia—and a treatable one at that: “We know they're capable of activating retroviruses, we know they're capable of replicating in the brain, and we know that there are treatments that are available.”

    Yolken concedes that even with a study that spans 40 years in molecular detail, he is far from proving that a particular pathogen causes schizophrenia. As with other chronic diseases, it defies the classic standards for recognizing infectious diseases articulated by Robert Koch in 1882: showing that the pathogen is present in all victims suffering from a specific disease but not in healthy people, for example, and that an isolated pathogen can cause the same disease in a new host. “We've solved the easy problems”—identifying the agents that cause many acute infectious diseases—says Ewald. He argues that for uncovering the pathogens that may lie at the root of chronic diseases, Koch's postulates should not be the guiding factor. “We're just not going to get the kind of evidence for causation as we do for acute infections. There's just no way. If you're dealing with a disease where the symptoms take 5 decades to develop, how are you going to get an animal model of that?”

    But Yolken and other researchers trying to show a link generally believe that they have to come as close to the classical methods as possible if they are to convince their medical colleagues. Says Yolken: “In this day and age, when we have good treatments, you have to show that when you remove the agent, you get a change in the disease to have people believe it.”

  12. EVOLUTIONARY BIOLOGY

    Preparing the Ground for a Modern 'Tree of Life'

    1. Elizabeth Pennisi

    Next week, biologists meeting in New York City will discuss an ambitious project to map the origins of and relationships among Earth's species

    Without question, the human genome project has been a technical tour de force. It has also stimulated the DNA sequencing of many other organisms, from microbes to mammals. But biology needs to look beyond genomes now, says an expanding group of evolutionary biologists who are pushing for a new initiative. They want to build a “tree of life” that would map the evolution of Earth's species and show how they split off from one another over time. The project, which would make use of genetic and morphological data, might be more expensive than the human genome project and take longer to complete. But it would be well worth the effort, they say.

    Because the tree of life reflects evolutionary relationships, says David Hillis, an evolutionary biologist at the University of Texas, Austin, it “is not just a list of all the fundamental units but an organizing principle” that would enable researchers to make predictions. For example, it might help them estimate the similarity of two organisms' proteins or genes, depending on their placement on the tree.

    Last year, the National Science Foundation (NSF) gave initial backing to this idea, sponsoring three exploratory workshops over several years. Their purpose is to lay out the goals for a series of phylogenetic studies for a modern tree of life. Hoping to jump-start this work, a group of biologists is meeting on 20 to 22 September at the American Museum of Natural History in New York City to take stock of what they have learned so far and discuss what to do next.

    The agenda for this gathering, “Assembling the Tree of Life,” reads like an inventory of Noah's Ark, with talks covering the history and diversity of plants, animals, and microbes. “It's the most comprehensive single meeting on the tree of life” ever planned, says co-organizer Joel Cracraft, an ornithologist at the museum. He and co-organizer Michael Donoghue, a botanist at Yale University, want to develop a consensus among the thousands of biologists who study organisms and evolution. Over 20 years, “we've gotten quite a ways down the road,” Donoghue notes. “It's high time” to try to put together a single modern tree of life.

    Due for an update?

    Ernst Haeckel's late 19th century illustration offers a metaphor for modern attempts to sort out the species.

    The concept of a tree of life, derived from earlier studies of evolution, is straightforward. Close to the tree trunk are the most ancient and simplest life-forms, the bacteria and the archaea. Branches that split off later include the eukaryotes, out of which plants and animals have sprouted. Farther out on the branches are mammals and other animals, and humans, goats, and pocket mice sit at the tips of twiglets. Although researchers have created many small trees that include 100 to 1000 species, few have been combined. The objective is to merge as many as possible, while resolving conflicts over the placement of species. “It's an ambitious project,” concedes James Rodman, an NSF program director. “Our ballpark figure is that it could take 10 to 15 years.”

    The September workshop comes at a time when “society is using tree-of-life stuff more and more to solve problems,” Cracraft says. Increasingly, biomedical researchers are using phylogenetic principles to understand how pathogens become resistant to treatment and how emerging diseases make their way into humans. Genome researchers are finding that they, too, need help from evolutionary biologists to compare proteins from different species. “We have to know how a genome has changed over time,” notes Charles Delwiche, a plant molecular systematist at the University of Maryland, College Park. Efforts to sequence the mouse, rat, and other organisms, he points out, are attempts to add a phylogenetic perspective to the human genome.

    Other fields can benefit as well, says Donoghue. Conservationists, for example, might be able to predict the course of invasions of alien species if they have information about the natural history of indigenous relatives. Moreover, “the tree of life can give us the big picture of biodiversity and help us make wise decisions about what to conserve,” Delwiche says. And basic researchers, such as developmental biologists, have a better chance of interpreting the organisms they study with a clearer evolutionary perspective. “Once you have a tree, you can start asking questions in a way that has a historical framework,” Cracraft explains. “Everybody is getting into it, because it's a powerful way of doing biology.”

    But the task of building the tree of life is huge. Of the 1.75 million species now cataloged, only about 50,000 have been placed in minitrees, and little effort has been made to merge them. Nevertheless, the project's advocates are undaunted, citing their access to better software and powerful new analytical methods. “The branches are slowly but steadily taking shape,” insists Michael Lee, who studies turtles and other reptiles at the University of Queensland in Australia. According to reports from earlier workshops, the number of phylogenetic analyses is doubling every 5 years. “For many, many years to come, there will be legitimate hand-wringing and unresolved issues,” Donoghue explains. “But I think the data speak loud enough to say” that a consensus will eventually develop.

    Progress has been slow so far. Since the days of Darwin, taxonomists have grouped organisms based on morphological characters such as the number of legs and body shape. But in the past 2 decades, phylogenetic biologists have introduced other classification methods. For example, most now assess relatedness of species according to the degree of similarity in equivalent stretches of DNA. Sometimes the morphological and molecular data clash, although Lee says that, “on the whole, morphological and molecular data have been in broad agreement.”

    Researchers have found that the more data they collect, the more confident they can be about a species's lines of descent, and this has encouraged many to incorporate several kinds of data in their analyses. Those who rely mainly on molecular data also take stock of morphological analyses and the fossil record. Fitting all the pieces together will remain a challenge, though. The sequencing of microbial genomes has flooded systematists with new data, but it also has revealed extensive gene transfer among microbial species. Thus microbial trees no longer consist simply of bifurcating branches; they look more like tangled brambles.

    Plant and bacterial species can be very tricky to sort out. For example, there are about 1000 proteins in Arabidopsis that are clearly cyanobacterial in origin, most of them expressed in photosynthesizing components of plant cells called chloroplasts but some expressed elsewhere in the cell. And the origins of some algae can be hard to pin down, according to Delwiche. They appear to have two chloroplasts, at least one of which was acquired when the algae's ancestor ate another photosynthetic eukaryote. What resulted “is like a Russian doll, with a eukaryote inside a eukaryote inside a eukaryote,” he explains. Who is to say which is the true ancestor?

    Resolving these problems and building a tree of life will require a big-science approach involving “a lot of money, a lot of people, and a lot of effort,” says Terry Yates, a systematic biologist at the University of New Mexico in Albuquerque. Indeed, “a human genome-scale effort would be marvelous,” adds Tim Littlewood, a systematist at the Natural History Museum in London. The first task, argue Hillis and Donoghue, should be to develop the computational tools needed to collect lots of data rapidly and to compute ever-larger trees. They call this new field “phyloinformatics.” “If we can dramatically increase the rate of discovery about the tree of life, it will pay off enormously in the long run,” Hillis insists.

    Although a tree-of-life project might sound expensive, advocates say it would make systematics more efficient by encouraging greater coordination. “Currently,” says Littlewood, the field “is rather like a cottage industry with key groups around the world working in isolation.” Evolutionary biologists aren't the only ones who might gain: DNA sequencers might use the tree to set priorities. “We don't need to sequence entire genomes for every bit of life on Earth,” Yates notes. A tree of life can clarify which organisms would yield the most insights. Given the huge costs of sequencing, he argues, building a tree of life might be well worth the money.

  13. PALEOANTHROPOLOGY

    What--or Who--Did In the Neandertals?

    1. Michael Balter

    Was it a changing climate, competition with modern humans, or both? Experts who debated the topic at a high-level meeting couldn't agree

    GIBRALTAR— About 100 experts in human evolution paused atop the Rock of Gibraltar to admire the view: Stretched out below were the golden shores of southern Spain, and on a clear day the mountainous coast of Morocco is visible some 30 kilometers away across the blue straits. A moment later, the group began a dizzying descent down 300 stone steps cut into the sheer limestone cliffs to the rocky beach below. Their destination: two sandy caves that were occupied by Neandertals at least 90,000 years ago. Recent excavations in these caves have turned up important new evidence that Neandertals butchered marine mammals, including seals and possibly dolphins, and exploited a much wider range of animal resources than they are often given credit for.

    This field trip capped a high-level gathering* at which researchers sought answers to some pivotal questions about the relationship between Neandertals and modern humans, who coexisted in Eurasia for several thousand years before the Neandertals finally went extinct about 25,000 years ago. How much interaction was there between the two groups? Was competition with modern humans responsible for Neandertal extinction?

    In recent years, Neandertals, once viewed as subhuman brutes, have increasingly earned respect, even if most experts today relegate them to a different species from our own. There is a growing consensus among researchers that the Neandertals were not easily shoved aside when Homo sapiens ventured into their territory and that they may even have continued to advance culturally and technologically (Science, 2 March, p. 1725). But whereas some researchers argued at the meeting that there may have been no competition at all between the two groups, others saw the appearance of modern humans as the ultimate death knell for the Neandertals. The participants also got their first detailed look at the evidence behind a controversial claim that the skeleton of a 4-year-old child—first reported in early 1999 from Portugal—was the result of interbreeding between Neandertals and modern humans.

    No contest?

    The meeting kicked off with biologist Clive Finlayson, director of the Gibraltar Museum, who threw down the gauntlet: He stated flatly that there was no basis for believing that modern humans caused the extinction of the Neandertals. Indeed, he argued, during their coexistence the population density of each group was too low to trigger serious competition between them. Finlayson proposed that Neandertals went extinct because they were less well adapted to the rapid climate fluctuations that dominated Europe and Western Asia between 60,000 and 25,000 years ago.

    At home in the Rock.

    Neandertals butchered marine mammals at Gorham's Cave (left) and Vanguard Cave.

    CREDIT: CLIVE FINLAYSON

    This might seem counterintuitive, because the Neandertals—who were braving the European ice ages many thousands of years before modern humans arrived on the continent—were more robust and cold adapted than their more slender cousins, whom most researchers consider to be relatively recent arrivals from Africa. But the key difference between the two species, Finlayson suggested, was that modern humans had more “complex social networks” and thus were better at dispersing across the landscape. Neandertals, on the other hand, tended to stay in one locale; the raw materials for their stone tools, for example, appear to come from sources much closer than those used by modern humans.

    As a result, Finlayson suggested, Neandertals were more vulnerable to alternating episodes of population growth and population decline—and sometimes local extinction—as temperatures rose and fell. The cumulative effect of such local extinctions led to the demise of the species, despite its sometimes superior local adaptability. Ian Tattersall of the American Museum of Natural History in New York City agreed that too much local adaptation can mean death for a species. “Adaptation to specific conditions is a passport to extinction,” Tattersall said. The ones who do best, he concluded, are those most able to adopt new strategies.

    Some researchers thought they saw evidence in the fossil record for such severe population fluctuations among the Neandertals. Paleoanthropologist Erik Trinkaus of Washington University in St. Louis cited recent studies of Neandertal fossils showing a significant underrepresentation of adult skeletons over 40 years of age. “If you model populations that build up and crash repeatedly, you get a population profile with a dearth of older individuals,” Trinkaus said.

    But other scientists argued that, whatever the other pressures on Neandertals, competition with modern humans couldn't be ruled out. Once modern humans arrived on the scene, noted New York University archaeologist Randall White, “you no longer had the same ecological or cultural landscape.” Chris Stringer of the Natural History Museum in London agrees: “If it was just climatic changes, we have to ask why Neandertals did not go extinct sooner.”

    Mary Stiner, a zooarchaeologist at the University of Arizona in Tucson, argued that the archaeological record contains hints that modern humans made more efficient use of food resources, making them better overall competitors. For example, she said, during the Upper Paleolithic period—which corresponds to the arrival of modern humans—archaeological sites begin to show signs that humans were boiling animal bones in water to extract fats rather than simply breaking them apart. “This method can probably double the amount of fat you can get out of the bones,” Stiner said. This greater efficiency at gaining nutrition could have led to faster population growth by modern humans that “could swamp other populations.” Other scientists noted, however, that the findings from the Gibraltar caves—many of which are unpublished—indicate that the Neandertals were also using more varied food sources, including marine mammals, at least on a local level.

    A divisive discovery

    Whether or not the advent of modern humans led to the Neandertals' demise, most researchers assume that the two groups did not interbreed. But that assumption was rudely challenged by the discovery in late 1998 of the skeleton of a 4-year-old at Lagar Velho in Portugal. The team studying the skeleton, which includes Trinkaus and João Zilhão, director of the Portuguese Institute of Archaeology in Lisbon, claimed that the 24,500-year-old skeleton was a hybrid or “admixture” resulting from interbreeding between Neandertals and modern humans, but other researchers concluded that the child was actually a modern human (Science, 30 April 1999, p. 737).

    The debate continued in Gibraltar, where Trinkaus and Zilhão gave their most detailed presentations yet of the skeleton. Showing a long series of unpublished slides of the fossils, Trinkaus pointed out that the leg bones show much greater “robusticity” than that seen in modern humans and that the backs of the child's incisors show an indented “shoveling” pattern typical of Neandertals. Another Neandertal-like feature is the claimed existence of a suprainiac fossa, a depression at the back of the skull often used to distinguish the species. Other features of the skeleton, however, resemble modern humans. “This is not just a funny-looking early human,” Trinkaus concluded.

    Yet many researchers at the meeting remained skeptical. Yoel Rak, a paleoanthropologist at Tel Aviv University in Israel, argued that if the child was really a hybrid, the Neandertal and modern human features should be more blended. “If you look at a mule, you don't have the front end looking like a donkey and the back end looking like a horse,” Rak said. And Tattersall, who was one of the first to challenge the hybrid theory, told Science that he saw nothing in this more detailed view to change his mind. “The skull is typically modern human in most of its characteristics,” Tattersall says, adding that the features found in its burial—including a pierced shell and red ochre—were “typical” of early modern human funeral practices.

    A less dismissive view was offered by Stringer, who had earlier argued that the child's robustness might be an example of short-term adaptation by modern humans to cold conditions in Iberia. “I would take the suprainiac fossa very seriously if it is there,” Stringer said, “because that is considered diagnostic of Neandertals.”

    Thus the debate over just how up close and personal the relations between Neandertals and modern humans really were shows no indications of ending soon. By the time researchers return to Gibraltar for the next meeting, 3 years from now, there may be more answers—perhaps even from those sandy caves at the bottom of the Rock.

    • *Neandertals and Modern Humans in Late Pleistocene Eurasia, Gibraltar, 16–19 August.

  14. INTERVIEW

    Setting Priorities Puts New Minister in the Hot Seat

    A longtime advocate, Koji Omi helped write Japan's first basic plan for science. Now he has to defend controversial new spending priorities

    WASHINGTON, D.C.— When Japan's ministries unveiled their budget requests on 31 August, the numbers revealed a surprising shift in support for science and technology (Science, 7 September, p. 1743). Huge jumps in four fields, led by the life and materials sciences, accompany cuts in space and marine science severe enough to cause delays in building major facilities. This prioritization, new for Japan, has opened cracks in its previously unified scientific community.

    The man behind that new alignment is Koji Omi, 68, since April the minister for science and technology policy. As head of the Council for Science and Technology Policy, chaired by Prime Minister Junichiro Koizumi, Omi serves as Koizumi's de facto science adviser. A member of the Diet (legislature) since 1983, Omi led the ruling Liberal Democratic Party's science policy subcommittee, which in 1996 pushed through a law that led to the country's first-ever Science and Technology Basic Plan. A second plan, which took effect this year, proposes an increase in government spending on science to 1% of gross domestic product, up from the current 0.7%.

    A commerce graduate of Hitotsubashi University and a former Ministry of International Trade and Industry bureaucrat, Omi admits that his lack of scientific background sometimes puts him at a disadvantage. “I do not clearly understand the substance of scientific discussions,” he says. “However, with my passion and enthusiasm, I believe that I contribute to creating a very favorable environment for researchers.”

    His efforts haven't gone unnoticed. Ken-Ichi Arai, a molecular biologist and director of the University of Tokyo's Institute of Medical Science, says “scientists are very grateful to him” for his work on the science and technology plans. And Yoshiki Hotta, director-general of the National Institute of Genetics in Mishima and a critic of the administration's four priorities, calls Omi “one of the few members of the Liberal Democratic Party who speak up for the importance of science and technology.”

    Omi came here last week to formally ask the United States to reconsider its 1999 decision to withdraw from the International Thermonuclear Experimental Reactor (ITER) project. “It would be beneficial not only for the world but for the United States itself if the United States rejoined,” he says. Omi met with Energy Secretary Spencer Abraham, who said he would review the matter, and with members of Congress, who offered lukewarm support.

    During his visit, Omi met with Science's Jeffrey Mervis and spoke with Science's Japan correspondent Dennis Normile by phone to discuss Japanese science policy. An edited transcript follows.

    Straight talk.

    Japan's Koji Omi has clashed with some lab heads over their budgets.

    CREDIT: MARK F. SYPHER PHOTOGRAPHY

    Science: What led to this new approach of prioritizing research?

    Omi: In recent years there has been criticism that our science and technology policy lacks any strategy. With the second Science and Technology Basic Plan, we decided to determine areas of interest. After extensive discussions we decided to allocate more funding and human resources in four areas, namely life sciences, information technology, the environment, and nanotechnology and materials science.

    Science: Why those four fields?

    Omi: These areas were chosen [based] on what is going to be important for science and technology in Japan. We listened to the opinions of business and economic leaders and many other people.

    Science: A group of laboratory heads criticized the four-field strategy in a letter to the prime minister. They are still not happy about the way next year's science budget has been divided up.

    Omi: I had the opportunity to meet with them and discuss this issue. But no matter how many times I told them that we are not neglecting basic research, I couldn't gain their understanding. However, if you really look at the [budget], you will understand that there is enough funding for basic research.

    Science: The Koizumi administration would like to privatize or abolish a class of public corporations that includes research organizations such as the Institute of Physical and Chemical Research (RIKEN) and the Japan Marine Science and Technology Center. What will happen to them?

    Omi: The general plans regarding [the 163] public corporations may not apply to the [10 or so] scientific organizations. We continue to think that those organizations are very important for the development of science and technology.

    Science: Does that mean you hope they will remain intact as they are now?

    Omi: Yes. But in some way we may have to modernize them.

    Science: Is there a point after which U.S. participation in ITER would not be feasible because plans would be too far along?

    Omi: Even without U.S. participation, the three partners will proceed with this program. We are going to [decide] where the facility will be constructed and what roles the partners will play, and then proceed to implementation.

    Science: What problems are hindering research at Japan's national universities, and how will the proposed denationalization address them?

    Omi: Those working at national universities are public servants. With that status they are not allowed to receive any money from private companies [as compensation] for research. The rules have been revised somewhat, but it is still difficult. If [the universities] are made independent agencies, [the faculty members will] no longer be public servants. That is something that many private companies urge us to do.

    Science: What can be done to improve the situation for women faculty members?

    Omi: I do not believe women are in a disadvantageous position when it comes to promotion. Of course, if a woman decides to take several years off for child care, she has a lot of catching up to do later. However, as long as they work under the same conditions [as men], I do not believe there is any inequality. Actually, because there are fewer women in the workplace, women have a better chance of getting promoted.

  15. PROFILE

    Frozen Species, Deep Time, and Marauding Black Holes

    1. Robert Irion

    It all goes with the territory for Gregory Benford, a mild-mannered professor who doubles as the working physicist's science-fiction writer

    IRVINE, CALIFORNIA— Few physicists have adjectives devoted to them, but Gregory Benford does. It's “Benfordesque,” as in this review of his latest novel: “Eater is Benford's most Benfordesque book in quite a while.” Yes, Eater has it all—bickering astrophysicists, useless bureaucrats, a love triangle, a smart but vindictive black hole, and a dying astronaut who downloads her brain into a space probe. Welcome to what one friend calls the “weird but stimulating mind” of Greg Benford.

    Benford's mind isn't easy to summarize. Its contents include straight physics, such as his studies of relativistic electron beams here at the University of California (UC). His theoretical work extends to pulsars, the cores of active galaxies, and other lairs of turbulent jets. Currently, he and his colleagues are exploring whether microwave beams could propel sails of ultralight carbon fibers in space.

    Then there are Benford's papers in journals typically read by biologists and climatologists. For instance, he once urged conservationists to collect random species from threatened habitats and freeze them until future biologists figure out what to do. He recently advocated dumping corn stalks and other crop wastes into the sea to remove carbon dioxide from the atmosphere. He also works on communicating with aliens—both out there and here on Earth, where civilizations of the far future will struggle to interpret anything we've left behind.

    Most people, though, know Benford's name from the big block letters on the covers of his science-fiction (SF) novels. He's written 20 so far, including the million-selling Timescape, his classic tale of messages across time that won the prestigious Nebula Award in 1980. Another hit novel, Cosm, told of baby universes trapped at UC Irvine's physics department and the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory on Long Island.

    Most recently there's Eater, in which the aforementioned astronaut-spaceship and her brilliant husband, back on Earth, zap a nasty black hole into oblivion before it devours the world's best minds. In the hands of a hack, such a story would come off as downright silly. But Benford is recognized as one of the top writers of “hard SF,” which he describes as “fiction with a scrupulous regard for the process, world view, and results of scientific inquiry.” Thus, Eater mixes astrophysics seminars in jealousy-riddled departments and diagrams of the warped magnetic fields around black holes with death-ray shoot-'em-up cosmic action sequences.

    So much fun is Eater, in fact, that the Fox television network has optioned it as a 4-hour miniseries. But don't expect to see space-time manifolds or electromagnetic field equations; Benford doubts much science will survive the transition to the tube. (The book's sole sex scene, however, undoubtedly will.)

    Benford's fans and his professorial colleagues think that he deserves his reputation as the SF voice of the working physicist. “There are very few others who can put as much science into their fiction,” says artificial-intelligence guru Marvin Minsky of the Massachusetts Institute of Technology (MIT) in Cambridge. “And he is certainly unsurpassed in depicting the academic scenes.” Adds Benford's close friend at UC Irvine, evolutionary biologist Michael Rose, “The big challenge in fiction is how to make academics interesting, because we aren't. Greg does that.”

    Chiral twins

    Conversations with Benford whirl as unpredictably as his SF plots. He's genial and confident at age 60, delighted to tackle any imaginable topic. He's equally likely to refer to books by William Faulkner and to covers of MAD magazine. Names drop constantly: Dinner with Arthur C. Clarke leads to introducing Kurt Vonnegut at a speech leads to Isaac Asimov's agoraphobia. He loves to interject the word “duh,” usually to make a dramatic point about the cluelessness of the powers that be. It's no surprise that his office is an eruption of books and papers, calmly centered only by a framed portrait of the Milky Way by his friend, space artist Jon Lomberg.

    Wired.

    Gregory Benford's electron beams once drew power from these capacitors, but his writing taps a deeper source.

    CREDIT: DAVIS BARBER PHOTOGRAPHY

    Through it all, key facts emerge about his life. For one, he and his brother Jim—president of Microwave Sciences Inc. in Lafayette, California—aren't just identical twins, they're “chiral twins, the rarest kind,” Benford says. Greg is right-handed whereas Jim is left-handed, they have birthmarks on opposite cheeks, and their peppered gray hair spirals in opposite directions. They didn't part ways until Greg earned his Ph.D. in physics at UC San Diego, 2 years before Jim earned his, but they still collaborate.

    The twins grew up in rural Alabama near Mobile, close to E. O. Wilson's boyhood home of a decade earlier. But whereas Wilson became a famed Harvard biologist, the Benfords turned to physics. Three things did the trick, Greg Benford recalls. Their father's Army assignments in Japan, Germany, and Atlanta exposed the boys to “completely alien environments” far from the verdure of home. They grew to adore SF, launching a mimeographed publication called Void at age 13 that grew into one of the great fanzines of the 1950s and 1960s. Most critically, Greg read Atoms in the Family, Laura Fermi's ode to her husband, physicist Enrico Fermi. “This vision of a life spent pursuing deep aspects of reality entranced me,” Benford recalls.

    The reality of Benford's first job, at Lawrence Radiation Laboratory (now Lawrence Livermore National Laboratory) in California, was less entrancing. He joined a fusion research team, but few aspects of the work excited him. “I left after 4 years at Livermore because I didn't think the conventional fusion program had a prayer of working,” Benford says. He landed a faculty position at UC Irvine in 1971 and earned tenure 2 years later.

    Benford's work on relativistic electron beams was solid but not groundbreaking. He's the first to admit this. “It's clear that I'm a journeyman physicist,” he says. “I'm good enough to be a professor at UC, but I'm not and never will be a major figure. My work will have faded utterly 20 years after it's done. … If you want to make a lasting contribution, you'd better worry about what persists. And you realize that astonishingly little does.”

    Although Benford doesn't put himself in the rarefied company of Asimov, Clarke, Robert Heinlein, and Ray Bradbury, he is gratified that his writing may outlast him. It started with a short-story contest in 1965, in which he won second prize and a lifetime subscription to Fantasy and Science Fiction. More short stories followed, then a stream of novels, including his “galactic center” series of six books. They feature complex machine societies at the Milky Way's core, shaped by Benford's conversations with MIT's Minsky.

    Scientists are among Benford's most avid readers. “The science is usually great, but he has a darker streak,” says UC Berkeley astronomer Gibor Basri, who teaches Timescape in an undergraduate seminar on SF. “Some of his books made me feel a little claustrophobic and depressed.” Basri's reaction comes as no surprise to Benford. “I've never enjoyed trotting around in the head of a bright-eyed, perpetual optimist,” he wrote in a short autobiography published in 1997. “This may reveal more about me than I wish, but there it is.”

    A scientist fan base also has hazards, because setting a hard SF novel in a real place can blur the line between fiction and truth. Physicist John Cramer of the University of Washington, Seattle, who has written two SF books, notes that researchers at RHIC weren't too pleased with Cosm. Administrators come across as goons, and some of the physics pushes the boundaries of plausibility—including the guts of the experiment that spawns the new universes.

    No pulp.

    Benford's books include an SF thriller (above) and a treatise on communication.

    CREDITS: COVER IMAGES COURTESY OF HARPERCOLLINS PUBLISHERS (EATER) AND AVON BOOKS (DEEP TIME)

    Library of life

    Those criticisms pale next to the slings and arrows that fly when Benford invades other fields.

    Consider his “library of life” idea, published in the Proceedings of the National Academy of Sciences in 1992. Benford felt that a radical plan was needed to spur people to preserve as many endangered species as possible. Randomized freezing was his solution—a plan that reporters immediately dubbed “Noah's freezer.” His abstract urged vigorous debate, and that's exactly what ensued. Benford organized a National Academy of Sciences workshop at Irvine 2 years later—a physicist calling ecologists to arms.

    Microbial geneticist Mark Martin of Occidental College in Los Angeles recalls the mood there. “If you live every day with a terrible problem, like the loss of species and biodiversity, and someone who has never worked on the issue says, ‘I have a solution,’ your first response is ‘Yeah, yeah, right,’” Martin says. “Conservation biologists were tolerant and amused by him, but he got everyone talking.”

    Benford hasn't pursued the idea further. “I realized this issue could turn into a career, and I already had one,” he says. Still, he's dismayed that scientists have not yet succeeded in making people aware of what we're doing to life on this planet. “Biologists can't shout loudly enough to penetrate to the public that in a mere human lifetime, we might eliminate one-third of the species,” Benford says. “There's an astonishing silence, an unacknowledged fatalism. People don't have much hope beyond 30 years.”

    The challenge of looking centuries or millennia down the road runs throughout Benford's only nonfiction book, Deep Time, published in 1999. The book is about communicating words, images, and ideas across eons. “Without noticing it, we have come to act not only over immense scales of space, but through time” as well, he says. “We're not yet wise enough to appreciate our newfound powers.”

    Earth as a message

    The closing act of Deep Time is called “Stewards of the Earth.” Our ultimate message to our descendants will be the condition of our planet hundreds of generations hence, Benford says. Given civilization's pace of change, he thinks geoengineering is inevitable. The simplest step, he believes, is to raise the planet's albedo—the proportion of light it reflects into space—by a percent or so. “We know this works locally,” Benford says. “I mean, duh. There's a reason why houses in the Mediterranean are all white. It's no big mystery.” And yet no agency has taken any measures to make rooftops and roads lighter or, more dramatically, to explore increasing cloud cover over the oceans.

    Another tractable option is storing carbon anywhere but in the air. Benford dove into this topic in the April 2001 issue of the journal Climatic Change. He and engineer Robert Metzger of the Georgia Institute of Technology in Atlanta proposed that laborers collect discarded parts of crops, ship them to the Gulf of Mexico, and dump them in deep water. That could capture as much as 12% of the annual carbon emissions in the United States, they estimated, as most of the residues are now left to rot.

    Benford's top boss at UC Irvine encourages such efforts. “On the face of it, one would say maybe this is just another physicist trying to come in and show how the rest of us were stupid,” says chancellor Ralph Cicerone, an atmospheric scientist. “But Greg is paying his dues. He has been trying to think through carbon sequestration carefully and quantitatively, far beyond the order-of-magnitude arguments some other physicists have made about climate change.”

    Cicerone acknowledges that Benford may not receive due credit within UC's reward system for his nonphysics research. His prolific SF career doesn't help, either. “There's a supposition that to be concerned about the larger social place of physics is marginal,” Benford says. “To do so by writing fiction—people automatically think it's suspect.” Benford did attain the university's demanding academic rank of Professor VI this year, but only after a campuswide committee overturned his department's rejection. “In a fairer analysis, he might have achieved that level some time ago,” says physicist Gaurang Yodh of UC Irvine.

    For his part, Benford doesn't care about campus politics. “What I wanted was a life in the sciences, and I got it,” he says. And much more, as millions of readers can attest.

  16. A Semiconductor Giant Ramps Up Its R&D

    1. Robert F. Service

    As computer chips get harder to shrink down and speed up, Intel is embracing research and a new culture of openness to show the way forward

    HILLSBORO, OREGON— In the Oz-like realm of computer chip giant Intel, Gerald Marcyk is the man behind the curtain. Marcyk heads Intel's research on silicon transistors and other gizmos that make up computer chips. It's his job to ensure that these chips—and hence our desktops, laptops, and personal digital assistants—continue to get faster, smaller, and more powerful, trends that have held steady for 35 years and that customers have come to expect. In June, Marcyk reported a key step toward that goal. Intel researchers, he announced, had created the world's smallest and fastest silicon transistors, each capable of winking on and off 1.5 trillion times per second. If all goes as planned, Marcyk's blindingly fast devices will be powering computers in just 6 years.

    Good news, to be sure. But the announcement was spectacular for another reason—that it took place at all. Marcyk's job doesn't normally involve much fanfare, and for years Intel brass has worked hard to keep it that way. The semiconductor giant has a reputation for eschewing long-term research in favor of product-oriented development work. And where research has been needed, Intel has preferred to keep the results quiet, unlike other corporate research powerhouses such as IBM and Lucent Technologies' Bell Labs. Now that is changing. Looking to establish a new reputation as a research leader, Intel officials have played up the tiny transistors and other recent research achievements to media and industry analysts.

    Futuristic.

    At Intel's new D1C plant (above), a sealed monorail (below) whisks chips through the facility.

    CREDITS: INTEL

    The shift reflects how the complex science of semiconductors is forcing major changes on the industry giant. It's getting harder and more expensive to shrink transistors and pack ever more of them onto chips. That's compelled Intel to multiply its R&D funding nearly 10-fold since 1990, to an estimated $4.1 billion this year. In recent years the company has launched new internal labs devoted to improving the layout of devices on computer chips and developing novel software applications in an effort to sell more computers powered with Intel chips. And this spring, here in the western outskirts of Portland, the company opened its first facility dedicated to silicon research.

    Intel isn't only spending more; it's talking more as well. To better compete for everything from research talent to the confidence of customers, Intel is beginning to lift the curtain on its research. It is publishing more (see graph below) and touting the results. “Intel is trying to make its advances more public,” says Chris Murray, a chemist at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York. “That was not true before. Intel tended to keep its cards very close to the vest.”

    Many computer science researchers applaud Intel's new approach. “This is vital to the industry,” says Randy Isaac, who heads science and technology at IBM's Watson labs. Sandip Tiwari, who heads the Cornell Nanofabrication Facility in Ithaca, New York, agrees: “It's critical for the future. Coming to the limits [of shrinking chips] means we need to be open about research to be open to new ideas.”

    Growing research

    Intel was anything but a research powerhouse when Gordon Moore and Bob Noyce founded the company in 1968. Before launching Intel, Moore led research at Fairchild Semiconductor, where he quickly soured on company-sponsored long-term research when he saw that many technologies invented in Fairchild's research labs failed to make it into products. At Intel, Moore and Noyce opted against building a central research lab. Instead, needed research would take place alongside teams developing better and faster computer chips, a strategy the company still follows today.

    The result was a far cry from the open-ended studies of the cosmic microwave background and superconductors that won fame and Nobel prizes for Bell Labs and IBM. Intel researchers haven't netted any Nobels, but it's hard to argue that the company's strategy hasn't been a success. Today, the chipmaking giant employs 70,000 people and commands 85% of the worldwide market for microprocessors, the chips that serve as the brains of computers. Last year, the company's revenue topped $33 billion.

    Ironically, in an era that has seen research at central corporate labs reined in and focused more on helping a company's bottom line, Intel is slowly moving in the other direction. The company still favors decentralized research linked to developing new products. But according to numerous company officials, long-term research is increasingly gaining favor.

    Just 5 years ago, getting research money “was a struggle,” says Marcyk. To a large degree that's because the path to improving chips was clear. Semiconductor companies use a technique called lithography—shining light through stencils to activate chemical etchants—to pattern devices on chips. And for decades they have managed to increase the density of these patterns by using shorter and shorter wavelengths of light, a process akin to drawing thinner and thinner lines with an ever sharper pencil. But current lithographic methods are fast approaching their endpoint, as engineers will soon run out of materials transparent to ultrashort wavelengths of light (Science, 3 August, p. 785).

    “The industry is getting to a point where the future directions are less clear,” Isaac says. That's forcing Intel to stoke its research engine and be more open about its work. “We realized a few years ago that we had to do research ourselves on new materials and processes,” says Marcyk. “This is a relatively new behavior for us.”

    The trouble was that as chipmaking grew increasingly complex and sophisticated, Intel found itself developing a reputation for playing it safe and staying off technology's leading edge, says Manny Vara, who handles public relations concerning research for the company. Four years ago, for example, IBM announced that it was beginning to make chips with transistors connected by copper wires instead of the traditional aluminum variety. Copper conducts electrons faster than aluminum, so it has the potential to speed chips. But it can also kill semiconductor devices if not handled with extreme care. News organizations around the globe hailed IBM's effort, saying it would pave the way for chips with record-breaking speed.

    Intel researchers had already evaluated the technology and concluded that it wasn't needed for the time being. Copper wires allow electrons to course through the chips like high-speed sports cars, explains Justin Rattner, who heads Intel's microprocessor research in Hillsboro. Three years ago transistors were still relatively slow. Even if electrons sped between them through copper wires, they'd still stall out at the transistors—the chip equivalent of stoplights—and reach the finish line at the same time. “We concluded the wiring delays were not the speed limitations,” says Rattner. Intel officials decided that, rather than reengineer their manufacturing plants to work with copper, they would hold the technology for use in chips made with smaller and faster transistors.

    “The astonishing thing was that Intel continued to deliver by far the fastest processors despite the fact we weren't using copper,” says Vara. But reporters and industry analysts questioned over and over whether Intel's leadership was slipping. Now, Rattner says, the same thing is happening with silicon on insulator, a souped-up substrate on which chips sit, which is being backed by IBM, Sun Microsystems, Advanced Micro Devices, and Texas Instruments. Again, Intel researchers view the gains as meager and not worth higher manufacturing costs. “It looks like Intel is going against the grain or is antiresearch. But as is the case with copper, we are making very educated decisions,” says Rattner.

    To counteract this reputation, Vara and others have encouraged Intel's business leaders to unveil the work being carried out by the company's 6000 researchers in 80 labs around the globe. Vara says that they agreed reluctantly and have started slowly revealing selected results, often fuzzing the details of how they were accomplished. IBM's Isaac argues that the shift is important. As the path to improving semiconductors grows increasingly foggy, it's critical for Intel to let other companies know what its strategy is beyond the next quarter. “To get others to follow, you have to tell people what you're doing,” says Isaac. “And if you can't get them to follow, you're not a leader, you're a loner.”

    Equally important, IBM's Murray and others say, is publicity's role in attracting new research talent and ideas to the company. It's increasingly crucial for electronics companies to forge tight relationships with university researchers, says Murray. Although Intel spends about $50 million a year to support some 300 academic projects such as quantum computing, talking up the corporation's research drives home the message that this is a place where scientists' hard work will be valued. Whether it is for enticing researchers, customers, or other industry players to follow the company's lead, “research is becoming more of a strategic tool than it has been in the past,” says Murray.

    Coming soon?

    Intel's current projects include software designed to recognize human faces.

    CREDIT: INTEL

    Lifting the curtain

    This year, Intel has allotted $4.1 billion to R&D—equivalent to roughly one-fifth of the budget for all biomedical research supported by the National Institutes of Health. Company officials won't specify how they split their spending between research and development. But most of the money, they say, is slated for places like D1C, the company's latest chip-development plant in Hillsboro, which is preparing next-generation chip technologies for manufacturing. Chips at this plant will bear features as small as 130 nanometers across—about half the size of components on current chips. They will be the first from Intel laced with copper wiring and will be forged from 300-millimeter-diameter wafers instead of today's 200-millimeter ones—another industry first that should make them cheaper to produce.

    Paper chase.

    Intel researchers' small but growing record of publications bucks the trend of its more visible rivals.

    The futuristic environment of D1C is as impressive as the chips it will turn out. Like all chip foundries, its core is a clean room dotted with stations for patterning and testing the chips. But compared with most chip clean-room facilities—which require engineers to wear special suits and to breathe into scubalike respirators to prevent even the tiniest contaminants from entering the air—D1C is relatively lax. When the chips aren't inside patterning and testing machines, they're ferried about in sealed cartridges and whisked through the plant on a robot-controlled monorail. In 2002, Intel officials expect to convert the development plant to full-time manufacturing.

    Another sizable chunk of cash goes next door to RP1, the new silicon research plant. Here, Marcyk and his crew spend their days inventing and perfecting schemes for making the generations of chips beyond those that D1C will manufacture. In addition to making tiny transistors, the RP1 crew is working to develop more potent insulators, which help confine electrons to regions where they are supposed to be on the chips. Current chips are already approaching the limits of today's insulator, silicon dioxide. So researchers are evaluating promising new compounds such as zirconium dioxide.

    Small wonders.

    Silicon chips from 1971, 1982, and 2000 (the Intel 4004, 286, and Pentium 4, respectively) show the progress that researchers have made in miniaturizing electronic circuitry.

    CREDITS: INTEL

    Marcyk's team is also looking to make high-quality lenses from calcium fluoride, a temperamental material that's transparent to photons at 157 nanometers, which makes it the leading candidate to replace current quartz optics in future lithography machines. And RP1 researchers are starting early work on replacing some of a chip's internal wires with “optical interconnects” that would use photons to speed data transfer onto and off chips. All these efforts, Marcyk says, are geared to keeping increases in computing power marching along. “It's our job to make Gordon Moore look like a genius,” he says, referring to Moore's oft-quoted 1965 prediction that the number of transistors on chips would double every 2 years.

    Maintaining that pace also involves plenty of work beyond the confines of RP1. In a nondescript office complex just down the road, Rattner and his colleagues in the microprocessor research group are working on designs for the high-density chips that Marcyk's team is learning to build. Topping Rattner's list are microprocessors hardwired for vision and hearing. Such chips, says Rattner, will one day allow your computer to recognize you when you walk in the room, turn itself on, and fetch your latest e-mail, access the Web, or launch other programs you tell it to launch. Literally tell, Rattner says, because new hardware designs are also expected to vastly improve today's rudimentary voice recognition systems. “On these foundations you can build all sorts of unique and wonderful tools,” such as computers that control all the appliances in your house with simple voice commands, says Rattner.

    Many of those tools will likely come from the Intel Architecture Lab (IAL), also centered in the Hillsboro complex. Here, researchers are pushing the limits in an area that few realize has long been an Intel hotbed: software. According to IAL researcher Steven Spina, Intel researchers actually did the lion's share of developing components of now-standard programs, such as RealPlayer—a music and video player—and an animation program called Shockwave that has been downloaded by more than 200 million users worldwide. Spina and Vara say that although Intel spent millions on the products, it essentially gave away the licensing rights. The idea, Vara explains, is to create must-have applications that will drive demand for new computers, most of which will presumably be powered by Intel chips. “We want to continue to grow the entire computing pie,” says Vara.

    Spina and his IAL colleagues have several software packages on the horizon. One converts video coverage of sporting events into three-dimensional animation that can be viewed from any angle on the field, including the perspective of specific players, referees, coaches, and even the ball. Another is a set of video games that track a player's real-life motions by camera and use them to control the action on screen. More mundanely, the researchers envisage computers that start up in seconds rather than the 1 to 2 minutes that we've come to tolerate first thing in the morning. Inventing new computer applications is a job Spina and his colleagues clearly enjoy. “This is a place where we get to be around a lot of cool stuff and get to play,” Spina says.

    But with the semiconductor industry in the midst of a wrenching downturn, it's not clear how long the toys—and the money to buy them—will keep coming. Also unclear is how long the corporate glasnost will continue to offer outside researchers insight into the possible road ahead and a chance for Intel's own legions of trained talent to talk in detail about their latest work.

    “I don't know if this is a fluctuation or a sustained trend,” says Isaac. “[Intel] still is not as open as IBM or Bell Labs would be,” says Cornell's Tiwari, but for a company better known as a consumer of research than as a provider, the trend is in the right direction.

  17. Better Searching Through Science

    1. David Voss

    Next-generation search tools now under development will let scientists drill ever deeper into the billion-page Web

    In the beginning, the Web was without form, and void. Vast heaps of information grew upon the deep, and it was good for one's desktop. But users across the land were befuddled and could not find their way. There arose the tribes of the Yahoos, the HotBots, and the AltaVistas to bring order out of chaos. Google and CiteSeer prospered and lent guidance. But researchers and scientists, learned ones who had built the Web in their own image, yearned for something more. …

    As myths go, this one may lack staying power, but there is no doubt that in some sense scientists have been victims of their own success. The real creation story is that the World Wide Web began as an information-sharing and -retrieval project at the European particle physics lab CERN, and many scientists in all fields now depend vitally on the Internet to do their jobs. Only recently has the Web evolved into a convenient way to buy stuff. And although this commercial proliferation has been good for the Web's growth, it has frustrated researchers seeking quality content and pinpoint results among the noise and spam.

    Now, a handful of companies and academic researchers are working on a new breed of search engines to undo this second curse of Babel. “I think the real action is in focused and specialized search engines,” says Web researcher Lee Giles of Pennsylvania State University, University Park. “This is where we're going to see the most interesting work.”

    The first generation of search engines was based on what computer scientists such as Andrei Broder of AltaVista like to call classic information retrieval. Stick in a key word or phrase, and the software scurries around looking for matching words in documents. The more times a word pops up, the higher the document ranks in the output results.
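
    A minimal sketch of that frequency-based ranking, using made-up documents and a naive word count (real engines of the era used inverted indexes and more elaborate weighting), might look like this in Python:

        # Toy first-generation ranking: score each page by how often the
        # query words appear in it, then sort by that score.
        documents = {
            "page_a": "stem cells research stem cells lines",
            "page_b": "quantum computing hardware",
            "page_c": "stem cells",
        }

        def keyword_score(text, query):
            words = text.lower().split()
            return sum(words.count(term) for term in query.lower().split())

        query = "stem cells"
        ranked = sorted(documents,
                        key=lambda page: keyword_score(documents[page], query),
                        reverse=True)
        print(ranked)  # pages with more matching words rank higher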

    But ranking by hits did not say how important or authoritative or useful the pages might be. “The original idea was that people would patiently look through 10 pages of results to find what they wanted,” says Monika Henzinger, director of research at Google. “But we soon learned that people only look at the first set of results, so ranking becomes very important.” Some early services, most famously Yahoo, tried to work around this problem by using human analysts to construct Web directories that retained only the most useful or authoritative Web sites. Metaengines—Web sites that shot a query off to dozens of search engines—gave coverage another boost.

    ILLUSTRATION: TERRY SMITH

    Then, starting about 3 years ago, a second generation of tools appeared whose software performs link analysis: not only digesting the content of pages but also scoping out what the pages point to and what pages point at them. Google is considered the commercial pioneer in this field, but other companies such as NEC have funded development of sophisticated Web structure analyzers such as CiteSeer and Inquirus. And it's still a topic of intense basic research at academic incubators like the alma mater of Google's founders, Stanford University.

    Nowadays, Broder says, virtually every large search engine does some form of link crunching and has ranking functions that order the results. These ranking algorithms are closely guarded secrets. “They are the magic sauce in the recipe,” Broder explains. At Google, a system called PageRank measures the importance of Web pages by “solving an equation of 500 million variables and more than 2 billion terms,” according to Google's Web site. Says Henzinger, “The idea is that every link is a vote for a Web page, but the votes are weighted by the importance of the linking page.”
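
    Google publishes only the idea behind PageRank, not its implementation, but the "votes weighted by importance" scheme can be sketched as a power iteration over a hypothetical four-page link graph (the damping factor of 0.85 comes from the published description; the graph itself is invented):

        # Toy PageRank: repeatedly redistribute each page's score along its
        # outgoing links until the ranking settles.
        links = {              # page -> pages it links to
            "A": ["B", "C"],
            "B": ["C"],
            "C": ["A"],
            "D": ["C"],
        }
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        d = 0.85               # damping factor

        for _ in range(50):
            new_rank = {p: (1 - d) / len(pages) for p in pages}
            for p, outgoing in links.items():
                for q in outgoing:
                    new_rank[q] += d * rank[p] / len(outgoing)
            rank = new_rank

        print(sorted(rank.items(), key=lambda kv: -kv[1]))
        # "C" ranks highest: three of the four pages link to it.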

    According to Broder, the goal of a third generation of search engines now on the drawing boards “is to figure out the intent behind the query.” By looking at patterns of searches and incorporating machine intelligence, software may anticipate what an engine user really wants. That knowledge should help it narrow and focus the search.

    Future search technology will also begin to track its human users much more closely—for example, divining that a query about “Mustang” refers to the car, not the animal. In its Inquirus-2 project, for instance, NEC has been looking at ways to reformulate a query based on the user's information needs before zipping it off to other search engines. Other search engines are starting to present the results in a file cabinet stack of categorized folders.

    Privacy issues aside, the ramp-up in search engine power is bound to benefit scientists. An example of a specialized search engine for scientists is Scirus, a joint venture launched in April between FAST, a Norwegian search engine company, and the Elsevier Science publishing group. Scirus is a search interface that taps into Elsevier's proprietary journal content while simultaneously searching the Web for the same key words. “We found that scientists were searching proprietary databases as well as the Web,” says Femke Markus, Elsevier's project manager for Scirus. “Wouldn't it be ideal to have one search engine to do both? We [also] would like to let people know that we have journals that might be useful to them.”

    FAST's chief technology officer, John Lervik, says that Scirus was designed to filter search results to present matches only from Web pages with scientific content. “For the Web content, we filter on the basis of some attributes like domain. A Web site ending in ‘.edu’ is more likely to have scientific content, for instance.” More important, he says, “we can do automatic categorization to estimate whether something is scientific content or not.” And like Google, Scirus also searches content in PDF files, a document format widely used in scientific research.
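
    Scirus's filters are proprietary, but the domain heuristic Lervik describes can be illustrated with a short sketch (the suffix list, keyword list, and threshold here are invented for the example):

        # Crude stand-in for Scirus-style filtering: favor academic domains
        # and pages whose text contains science-flavored vocabulary.
        from urllib.parse import urlparse

        SCIENCE_SUFFIXES = (".edu", ".ac.uk", ".gov")
        SCIENCE_WORDS = {"abstract", "doi", "journal", "hypothesis", "dataset"}

        def looks_scientific(url, text):
            host = urlparse(url).hostname or ""
            domain_hint = host.endswith(SCIENCE_SUFFIXES)
            word_hits = sum(1 for w in text.lower().split() if w in SCIENCE_WORDS)
            return domain_hint or word_hits >= 2

        print(looks_scientific("http://physics.example.edu/paper.pdf",
                               "we test the hypothesis against the dataset"))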

    Such searching power does raise perils. Some users fear that tailored search engines might promote the Web content of one publisher under the guise of an omniscient search engine. Queries to Scirus, for example, yield not only free Web content at universities and research labs but also links to subscriber-only content in the Elsevier journals and MEDLINE.

    Markus says Scirus does not stack the deck. “We've joked about it: Can't we raise the ranking and make sure the top 20 is always Elsevier?” she says. “But that would be very bad for us. Everyone would say, ‘Hey, you're only pretending to launch an independent platform.’” Markus says her team is inviting other publishers, including the Los Alamos physics preprint server, to have their content indexed on Scirus.

    John Lervik at FAST also denies any bias. “We use the same relevance algorithms for everything, and we don't emphasize ScienceDirect [Elsevier's online journal gateway] over anything else.” Lervik also wants users to speak up if they see anything fishy.

    Another challenge for both the specialized and general-purpose search engines is the “hidden Web”—databases that search engines do not index, either because their content has a short shelf life (such as daily weather reports) or because they are available to subscribers only. The publishers of Science, Nature, and other journals charge fees for online access to the full text of research papers. Although abstracts may be available, and the citations can readily be discovered by search engines such as Google, the data and full text may never be seen by search engines. This is partly why Elsevier cranked up the Scirus Project. “Because of our firewalls and subscriptions, engines like Google cannot get in and index us,” says Markus.

    These barriers pose a dilemma for researchers who want the stamp of peer-review approval and publication in a high-profile journal but who also want the world to know about their work. It has also led to a continuing debate about whether scientific research publications should be free and available without restriction on the Web (Science, 14 July 2000, p. 223). At the moment, Science and Nature both allow authors to post copies of papers on their Web pages after a period of time. By then, however, it may no longer be the breaking news that researchers are looking for.

    Other researchers believe that the highest quality search tools will come not from rejiggering the search engines but from a whole new way of creating Web content. One initiative, called the “semantic Web,” is being promoted by a team that includes Tim Berners-Lee, the father of the World Wide Web, who is now at the Massachusetts Institute of Technology. The goal is to incorporate “metadata”—a description of what a document is about—into every Web page, in a form that computers can easily digest and understand. To scientists wrestling with information overload, that might mark the first big step toward paradise regained.

  18. The Quandary of Quantum Information

    1. Charles Seife

    Scientists are excited by the potential of quantum computing but increasingly confused about how it works

    If even the newest, speediest personal computers don't thrill you, consider what's in store if quantum computing lives up to its promise. By using the strange properties of quantum objects to store and manipulate information, quantum computers, if they can ever be built, would crack the codes that safeguard the Internet, search databases with incredible speed, and breeze through hosts of other tasks beyond the ken of ordinary computing.

    Useful quantum computers are still at least years away; right now, the most advanced working model can barely factor the number 15. Nonetheless, the past few years have seen a flurry of advances, as physicists figure out how to use quantum information to perform feats that are impossible in the classical world. Yet even as theorists crank out quantum software, they have been astonished to discover that a phenomenon long considered essential for quantum computing appears to be dispensable after all. That leaves them wondering just which exotic properties of the quantum realm combine to give quantum computers their incredible potential. “People are looking for where the power of quantum computing is coming from,” says Raymond Laflamme, a physicist at the University of Waterloo in Ontario. And the deeper they peer beneath the surface, the more paradoxes they discover.

    At first glance, a quantum computer shouldn't be more inscrutable than the computer on your desktop; both are essentially machines that process information. In 1948, Bell Labs scientist Claude Shannon laid the groundwork for modern computing by founding information theory, a new discipline that did for information what the laws of thermodynamics did for heat. A PC, true to Shannon's vision, processes information by manipulating “bits,” binary digits that can have a value of either 0 or 1. A 1 can be a high voltage, a closed switch, or a bright light, whereas a 0 can be a low voltage, an open switch, or a dim light: The medium is certainly not the message. But however the bits are represented, the computer uses an algorithm to make those ons and offs dance a jig, and out pops the desired answer.

    What makes quantum information much more intricate than classical Shannon information is that quantum computers, unlike their classical counterparts, can exploit the laws of the subatomic realm. Instead of manipulating bits, quantum computers store information on quantum-mechanical objects such as atomic nuclei, photons, or superconductors. A “qubit” might be a 1, for instance, if a photon is polarized vertically rather than diagonally, if an atom's spin is pointing up rather than down, or if current in a loop of superconductor is moving clockwise rather than counterclockwise.

    But the laws of quantum mechanics make qubits quite different from bits. Instead of having to choose between being a 0 or a 1, a qubit can be both at once—an idea that physicist Erwin Schrödinger mocked with his famous half-alive, half-dead cat. But this “superposition” of different quantum states is quite real; last year, for instance, teams in Delft, the Netherlands, and in New York state showed that superconducting loops can carry currents that run both clockwise and counterclockwise at the same time (Science, 31 March 2000, p. 2395). Under the right circumstances, manipulating a single qubit in superposition is equivalent to running a classical computer twice—once with the bit set to 0 and once with it set to 1—potentially giving a quantum computer a speedup over a classical one.
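
    In standard textbook notation (nothing here is specific to any one experiment), a qubit in superposition and the effect of a single logic gate \(U\) can be written

    \[
    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \qquad
    U|\psi\rangle = \alpha\,U|0\rangle + \beta\,U|1\rangle .
    \]

    One gate operation acts on the 0 branch and the 1 branch at once, which is the sense in which a single quantum step can stand in for two classical runs.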

    A second quirk of qubits that makes the quantum computer incredibly powerful is entanglement. When two quantum objects are entangled, their fates are linked. The most famous incarnation of entanglement is Einstein's “spooky action at a distance,” in which, if one entangled atom is poked, its entangled twin feels the prod, even if it's halfway across the universe. In theory, any number of particles can be entangled. Mathematically, such clusters are yoked together to form, in effect, a single object—you can't manipulate one member without considering the effect on the others. In principle, this more-than-the-sum-of-their-parts effect allows qubits to be linked into larger and larger quantum systems capable of storing staggering amounts of information. Two entangled qubits can be equivalent to four sets of two bits—(0, 0), (0, 1), (1, 0), and (1, 1)—all at once. Three entangled qubits are equivalent to the eight different combinations of three bits all at once, and so on and so on exponentially.
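
    In the same notation, the joint state of \(n\) qubits is a superposition over all \(2^n\) bit strings,

    \[
    |\psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle ,
    \]

    so two qubits carry \(2^2 = 4\) amplitudes, three carry 8, and ten carry 1024, whereas \(n\) classical bits hold just one of those \(2^n\) strings at a time.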

    When quantum computing began to blossom in the early 1990s, most experts thought this exponential effect would form the heart of a quantum computer. “It was fairly well accepted in the community that you need entanglement to do the power of quantum computation,” says John Smolin, a physicist at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York. “Without entanglement, you lose the ability to get exponential compression in quantum representation.” Where a 10-bit classical computer might take 1024 separate calculations to perform a task, a quantum computer could do the same task by means of a single calculation with 10 entangled qubits instead.

    Such exponential compression has some drastic consequences. In the mid-1990s, mathematician Peter Shor of Lucent Technologies' Bell Labs in Murray Hill, New Jersey, proved that a quantum computer would be able to factor large numbers much more quickly than an ordinary computer can. Because public-key cryptography—the technique that protects transactions on the Internet—relies upon the difficulty of factoring large numbers, a quantum computer would crack the Internet's encryption schemes.
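
    RSA, the standard public-key example (the article does not name a specific scheme), makes the connection concrete: the public key includes a modulus

    \[
    n = p \times q ,
    \]

    where the primes \(p\) and \(q\) are kept secret, and anyone who factors \(n\) can reconstruct the private key. The best known classical factoring algorithms take time that grows faster than any polynomial in the number of digits of \(n\), while Shor's quantum algorithm needs only polynomially many steps.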

    A quantum computer could do many other things that classical computers can't. For example, it could query a database in a way that no classical computer could ever do. In essence, tracking down an element in a database is equivalent to picking a combination lock. Imagine a lock with 25 possible combinations. An ordinary computer would try each combination, one by one, until it found the one that opened the lock. On average, it would take 12 or 13 attempts to find the correct combination; in the worst case, it could take all 25. In 1997, Bell Labs computer scientist Lov Grover showed that a quantum computer could solve that same database problem in no more than five tries. That is, instead of requiring about N/2 attempts to search N combinations, it takes roughly the square root of N—a significant speedup that would be impossible in the world of classical computing. But quantum computers aren't merely more efficient than ordinary ones. This year, Nicolas Gisin of the University of Geneva and his colleagues at the Swiss Federal Institute of Technology found a quantum-mechanical procedure for solving an information-theory conundrum (colorfully known as the Byzantine generals problem) that classical algorithms cannot solve in any amount of time.
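
    Plugging in the numbers quoted above for the 25-combination lock,

    \[
    \text{classical: } \tfrac{1}{2}(N+1) = 13 \text{ tries on average (25 at worst)}, \qquad
    \text{Grover: } \sim \sqrt{N} = \sqrt{25} = 5 ,
    \]

    and for a million-entry database the gap widens to roughly 500,000 classical probes against about a thousand quantum queries.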

    For years, scientists assumed that such spectacular results showed the power of entangled particles. Recently, though, they have been shocked to discover that the limits of classical computing can be exceeded without even using entanglement. In fact, some experimenters have been doing it all along.

    The most sophisticated quantum computers to date, developed by physicists such as Neil Gershenfeld of the Massachusetts Institute of Technology (MIT) and Isaac Chuang of IBM's Almaden Research Center in San Jose, California, perform quantum-type computations with atomic spins as qubits. By nudging molecules such as chloroform with magnetic fields, Chuang and colleagues force the atomic spins to reverse their orientations (or dance more intricate dances) to carry out quantum logic operations. These nuclear magnetic resonance (NMR) quantum computers, which include the one that can factor the number 15, are still quite rudimentary. But by executing some basic quantum algorithms—error correction, Grover's algorithm, and others—they prove that quantum-computing theorists are on the right track.

    Quantum edge.

    Whereas a classical database search (left) tries to match every possible “key,” Lov Grover's quantum technique saves steps by making mismatches fade into improbability.

    Or so it seemed. In 1999, Carlton Caves of the University of New Mexico, Albuquerque, showed that under the room-temperature conditions of the NMR experiments, large-scale entanglement of atoms is impossible. Bewilderingly, the NMR quantum computers had executed Grover's algorithm without having access to the entanglement that the algorithm required. In another unsettling twist, last year, MIT physicist Seth Lloyd showed how to mimic Grover's quantum database-searching speedup without using entanglement at all. By exploiting interference effects made possible by the wavelike nature of particles, Lloyd's algorithm also gets a square-root-of-N improvement over classical computers. The penalty for jettisoning entanglement is that any quantum computer running Lloyd's algorithm would need exponentially growing resources. As the problem gets bigger and bigger, the computer requires many, many more beamsplitters, detectors, and other necessary equipment, making it impossible to solve any but the tiniest problems. So even though Lloyd's algorithm outpaces any classical computer, it is inherently limited in a way that quantum computing with entanglement is not. “There's something funny that happens,” says Smolin. “[Lloyd's algorithm] really does sit in between a quantum algorithm and a classical algorithm.”

    Clearly, Laflamme says, “entanglement is not the whole answer to where the power of quantum computing comes from.” What gives quantum computers their power, then? “What it is, we're not 100% sure,” he says. “It's not something we always want to say to our sponsoring agency, but to a researcher, it's absolutely great.”

    Exploring the limits of unentangled quantum computing, Laflamme and colleagues at Los Alamos National Laboratory recently figured out a way to create a quantum computer by using simple lenses, mirrors, and other optics to manipulate a beam of unentangled light. “It's an idea I like: a way to manipulate quantum information in a totally unexpected way,” he says. The drawback is that the computer needs a source of light that spits out a single photon at a time and a detector sensitive enough to detect that photon—equipment that is easy to sketch on paper but difficult and expensive to build.

    Such theoretical insights won't hasten the day quantum computers appear in your local mall. “We understand [quantum information] more deeply,” Smolin says, “but it doesn't get you any closer to quantum computers.” But by puzzling through the seeming paradoxes of quantum information, theorists think that they will understand the strange realm of quantum theory in unprecedented detail. Says Laflamme: “We're just on the border of the territory, and we're just making excursions into it.”
