News this Week

Science  19 Nov 1999:
Vol. 286, Issue 5444, pp. 1450

    New York's Lethal Virus Came From Middle East, DNA Suggests

    Martin Enserink

    Fort Collins, Colorado—Scientists have finally agreed on the identity of a virus that caused an epidemic of brain inflammation in and around New York City this summer, sickening 60 mostly older people and killing seven. A group of public health researchers announced at a workshop held here last week that the virus's DNA sequence conclusively shows that it is the West Nile virus—an identification that was under debate until recently. They also reported that the virus's genome is almost identical to that of a West Nile strain found in Israel last year, suggesting that it originated in the Middle East. Many now agree it has a good chance of establishing itself and spreading in the Western Hemisphere.

    West Nile is endemic in Africa and has made occasional forays into Europe and Asia. This summer, for example, there were over 800 cases and 40 deaths in the southern Russian city of Volgograd, said Alexandre Platonov of the Central Institute of Epidemiology in Moscow. But its appearance in the United States, where it had never been detected before, “was truly a watershed event,” said James LeDuc, associate director for global health at the Centers for Disease Control and Prevention (CDC) in Atlanta.

    Although the human epidemic in New York has died down, the West Nile virus is still lurking in birds, where it replicates, and in Culex mosquitoes, which can transfer it to humans. Researchers at the meeting said New York City's subway system could be a winter haven for infected mosquitoes. Migratory birds, meanwhile, may spread the virus south. Already, officials have found a sinister omen: A dead crow carrying the virus turned up in Baltimore, Maryland, last month, 300 kilometers southwest of New York.

    “The worst case scenario is that it's going to spread all over the eastern U.S. and we face outbreaks every year,” said David Morens of the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland. But it's also possible the virus could subside into the background, like its cousin, St. Louis encephalitis (SLE) virus, which is endemic in the southern United States and causes occasional outbreaks.

    The genetic analysis, carried out by a team from CDC's lab for arboviral diseases in Fort Collins, was welcomed in part because it put an end to several months of confusion. Shortly after the first encephalitis cases emerged, CDC concluded from antibody tests that SLE was the culprit. Alarmed by some odd observations—most notably a die-off of crows and zoo birds in New York—CDC changed its mind on 24 September. Genetic data had shown that another virus was involved, which the agency—very cautiously—referred to as “West Nile-like.” Then a team led by molecular biologist Ian Lipkin at the University of California, Irvine, challenged the classification, saying the new virus most closely resembled another agent, the Australian Kunjin virus. In a Lancet paper, Lipkin dubbed it “Kunjin/West Nile-like” (Science, 8 October, p. 206).

    The conflicting analyses, however, were based on small snippets of the genome. The full sequence resolves the issue, says the CDC team, which has submitted its results for publication. Indeed, the viral DNA is 99% identical to a West Nile strain found in a dead goose in Israel in 1998, which was partly sequenced by virologist Vincent Deubel of the Pasteur Institute in Paris. “It's essentially the same virus,” says Robert Lanciotti, who presented the work. Lipkin now agrees, adding that the sequence also closely resembles a West Nile strain found earlier in Egypt.

    The findings give researchers the first clear clues about where the virus came from, but they do not explain how it crossed the Atlantic. It probably rode in a natural host, researchers said—in an infected bird or mosquito, or a human traveler. “There's a lot of traffic between the Middle East and the New York area,” says John Roehrig, who led the CDC study. “We may never know exactly what happened.”

    Data presented at the meeting show that the virus has already spread well beyond what researchers now call the “hot zone”—the area in northern Queens where most of the patients live. Infected mosquitoes, for instance, were found in several counties north of New York City, on Long Island, and in New Jersey. Crows were the hardest hit bird species: According to one estimate, as many as two-thirds of them may have perished in and around New York City. But researchers have now found the virus in 20 other bird species, ranging from American robins and rock doves to broad-winged hawks. In the Bronx Zoo, one bald eagle died from a West Nile infection, and New York state wildlife pathologist Ward Stone warned that wild specimens of this American icon may succumb as well. Researchers don't know whether the virus will survive in birds until next summer, but if it does, migratory species may spread it to the southeastern United States, the Caribbean, and even as far as South America, said John Rappole of the Smithsonian Institution's Conservation and Research Center in Front Royal, Virginia.

    Health officials and researchers spent a lot of time at the meeting drawing up disease-prevention recommendations to be issued later this year. Participants concluded that mosquitoes and birds should be tested widely and that eastern U.S. states should be on the lookout for dead crows. Many also supported a more sophisticated way to track the virus: keeping flocks of chickens and testing their blood regularly. Two decades ago, most states kept such “sentinel flocks” to monitor impending outbreaks of SLE and several other types of mosquito-borne encephalitis. But most of them have disappeared in budget cutbacks, along with the expertise needed to maintain the flocks. It would be difficult to reinstate them on short notice. “The state infrastructures have crumbled to all-time lows,” says Yale University medical entomologist Durland Fish. And even if flocks could be restored, they might not be welcome where they're most needed. As one participant noted, it would be hard to install a hen house on Wall Street.

    Looking back on the summer crisis, participants agreed that—at a minimum—communication between New York City and other government health agencies needs to be improved. “Perhaps that's the benefit of this outbreak,” says virologist Harvey Artsob of the Canadian Science Centre for Human and Animal Health in Winnipeg. “It's tragic for the patients, but it is a reminder that we have to be prepared.”


    Shadow of an Exoplanet Detected

    Govert Schilling*
    *Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Few doubted that they were out there, but astronomers are delighted to have confirmation: For the first time, a planet in another star system has been seen to cross the face of its parent star. The transit, as it is called, is the most direct glimpse yet of an extrasolar planet. It has also allowed astronomers to pin down the object's mass, size, and density—characteristics that could only be guessed before—which leave little room for doubt that it really is a planet.

    “This is what we've been waiting for,” says exoplanet hunter Geoffrey Marcy of the University of California, Berkeley. But with just one transit observed, some astronomers think it's too soon to rejoice. “It's an exciting observation,” says David Black, director of the Lunar and Planetary Institute in Houston. “But I'd like to see a couple of full transits before drawing any conclusions.”

    The new planet, closely orbiting a dim star in Pegasus, turns out to be less massive than Jupiter but larger, bloated by the heat of its parent star. According to Greg Henry of Tennessee State University in Nashville, who announced the discovery last week in an electronic circular of the International Astronomical Union, the low density suggests that it and the many other massive exoplanets orbiting close to their parent stars—so-called hot Jupiters—are gaseous. They must have formed farther out in gas-rich regions of a young solar system, then migrated inward.

    Over the past 4 years, several teams, most notably Marcy's, have discovered more than two dozen exoplanets orbiting sunlike stars without actually seeing a single one. The astronomers inferred their existence from slight, periodic movements of the parent stars back and forth along the line of sight, presumably because of the tug of an orbiting planet. From the extent and timing of the wobbles, the size, shape, and period of the planet's orbit can be deduced, as well as a lower limit on its mass. The actual mass can't be pinned down as long as the inclination of the orbit is unknown.
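
    The inferred quantity is usually called the planet's "minimum mass." A sketch of the standard relation, in modern notation rather than anything quoted in the article: for a circular orbit and a planet much lighter than its star, the measured velocity semi-amplitude K and period P constrain only the product of the mass and the sine of the inclination.

```latex
% Radial-velocity mass constraint (circular orbit, planet mass << stellar mass).
%   K   : stellar velocity semi-amplitude along the line of sight
%   P   : orbital period,  i : orbital inclination,  M_* : stellar mass
K \;=\; \left(\frac{2\pi G}{P}\right)^{1/3} \frac{M_p \sin i}{M_*^{2/3}}
\quad\Longrightarrow\quad
M_p \sin i \;=\; K\, M_*^{2/3} \left(\frac{P}{2\pi G}\right)^{1/3}
% Measuring K and P therefore yields only a lower limit on M_p;
% the true mass requires knowing the inclination i.
```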

    On 5 November, Marcy, Paul Butler of the Carnegie Institution of Washington, D.C., and Steve Vogt of the University of California, Santa Cruz, reported that they had detected a new set of telltale wobbles. The team, which regularly uses the 10-meter Keck telescopes on Mauna Kea, Hawaii, to scrutinize likely sunlike stars, announced six new planets, with orbital periods ranging from 3.5 days to almost 3 years.

    As always, Marcy passed the new data on to Henry, who uses small, automated telescopes at Fairborn Observatory in the Patagonia Mountains in Arizona to study sunlike stars for slight brightness variations that could be due to pulsations or star spots. He also looks for planetary transits. For a transit to occur, the planet's orbit must be seen more or less edge-on. Just by chance, about 10% of the hot Jupiters should produce transits, but no one had seen one. “I was becoming very worried,” says Marcy.
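
    The 10% figure is a matter of geometry: for a randomly oriented circular orbit, the chance that the planet crosses the stellar disk is roughly the stellar radius divided by the orbital distance. A minimal check of that estimate, assuming a star of one solar mass and one solar radius and a 3.5-day period (the stellar values are assumptions of this sketch, not figures from the article):

```python
# Geometric transit probability for a randomly oriented circular orbit: p ~ R_star / a.
# Assumed: a 1-solar-mass, 1-solar-radius star and a 3.5-day orbital period.
P_days = 3.5
P_years = P_days / 365.25
a_au = P_years ** (2.0 / 3.0)     # Kepler's third law with M_star = 1 solar mass
R_SUN_AU = 0.00465                # solar radius in astronomical units

p_transit = R_SUN_AU / a_au
print(f"semimajor axis ~ {a_au:.3f} AU, transit probability ~ {p_transit:.0%}")
# -> roughly 0.045 AU and about 10%, consistent with the figure quoted in the text
```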

    Henry realized that one planet in the latest batch would be an especially good target for his transit search because its orbital period was just 3.5 days. “When I saw Marcy's data for the star HD 209458,” he says, “I realized that a transit might occur in the night of November 7” if the orientation of the planet's orbit was favorable. “I quickly reprogrammed one of the automated telescopes before I went home and got to bed.” The next day, when Henry looked at the brightness measurements recorded by the 0.8-meter Fairborn telescope, he could hardly believe what he saw. Exactly at the predicted moment, the star showed a brightness drop of 1.7%.

    “It took me a few more days to exclude the possibility that the brightness drop could be the result of star spot activity,” he says. “But in the next 3 or 4 days, the star remained absolutely constant. Then I knew we had it.” Black isn't so sure. “Henry only saw the beginning of the presumed transit,” he says—the star had set before its brightness increased again. The announcement, Black says, “is a little premature.”

    But if Henry is right about the transit, the planet's orbit must be edge-on. Combining that orbital inclination with the wobbles in its parent star, the team could calculate its exact mass: 0.63 Jupiter masses, or 200 Earth masses. From the observed brightness drop, they estimate the diameter of the planet at 225,000 kilometers—60% larger than Jupiter. That puts the bloated planet's density at 0.21 grams per cubic centimeter, far less than that of water. “It has to be gaseous,” says Marcy.
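
    The quoted mass, size, and density are mutually consistent, which is easy to verify from the figures above; the value used below for Jupiter's mass is the standard one and is not taken from the article.

```python
import math

# Density check from the quoted values: 0.63 Jupiter masses and a 225,000 km diameter.
M_JUP_KG = 1.898e27                 # Jupiter's mass, standard value (assumption of this sketch)
mass_kg = 0.63 * M_JUP_KG
radius_m = 225_000e3 / 2            # half of the quoted diameter, in meters

volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
density_g_cm3 = (mass_kg / volume_m3) / 1000.0
print(f"density ~ {density_g_cm3:.2f} g/cm^3")
# -> about 0.20 g/cm^3, in line with the ~0.21 quoted in the article
```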

    According to Marcy and Henry, the discovery puts to rest the nagging possibility that stellar wobbles aren't due to planets at all, but to rhythmic pulsations of the entire star or some other intrinsic cause. But with so much hanging on a single observation, Henry would like to dispel any doubts by repeating it. A second transit should have occurred on 11 November, but it took place during daylight in the United States and could not be observed. The third was predicted for last Sunday night, 14 November. Henry, Marcy, Butler, and Vogt had announced their discovery on 12 November, so that other astronomers could watch for the dimming. But both the Fairborn Observatory and Lick Observatory in California, where another team tried to observe the event, were clouded out that night.

    Because upcoming transits will happen when the star is below the horizon of Fairborn Observatory, confirmation will have to come from other teams. Marcy says he isn't worried. “We already believe it,” he says. “The first brightness dip happened exactly at the predicted moment. If this was due to something else, Mother Nature would have played a horrible trick upon us.”

  NIH

    Protests Win Changes to Peer-Review Proposal

    Bruce Agnew*
    *Bruce Agnew is a science writer in Bethesda, Maryland.

    Sometimes, it pays to fight City Hall. Biomedical researchers who protested that their fields were slighted in a proposed reorganization of the National Institutes of Health's (NIH's) peer-review system are winning at least some concessions. Responding to the complaints, NIH's Panel on Scientific Boundaries for Review last week penciled in changes to its blueprint that will give heightened prominence to AIDS, urological, and development research. It also made clear that further fine-tuning is likely before it issues its final “Phase 1” report on the overall structure of the peer-review system in January.

    The panel, headed by National Academy of Sciences president Bruce Alberts, originally proposed organizing the more than 100 study sections run by NIH's Center for Scientific Review (CSR) under 21 supercommittees known as integrated review groups (IRGs). Sixteen of these were to be centered on disease or organ systems and five on basic research whose relevance to specific diseases cannot yet be predicted. But in more than 800 e-mail and conventional comments on the draft proposal, many scientists argued that their fields were overlooked or downgraded. AIDS and urological researchers mounted what appeared to be organized letter-writing campaigns (Science, 5 November, p. 1074). So, at its 8 to 9 November meeting, the panel:

    • Proposed creation of three additional IRGs—AIDS and AIDS-Related Research, Renal and Urological Sciences, and Biology of Development and Aging—bringing to 24, rather than 21, the number of IRGs in its proposed peer-review structure;

    • Made clear that it is leaving intact—at least for the time being—the four new IRGs that were created in 1998 and earlier this year for neuroscience and behavioral research, completing the merger of the National Institute of Mental Health, the National Institute on Drug Abuse, and the National Institute on Alcohol Abuse and Alcoholism into NIH; and

    • Promised a series of conference calls with experts in other fields to “further refine” its proposed IRG structure.

    According to CSR director Ellie Ehrenfeld and molecular biologist Keith Yamamoto of the University of California, San Francisco, chair of the CSR Advisory Committee, the first targets of those phone calls will be leaders in fields whose practitioners felt scorned by the panel's initial draft. These areas include toxicology, nutrition, pediatrics, gerontology, dental and craniofacial sciences, radiation oncology, and surgical research.

    On some issues, the panel simply has to explain itself better. For example, says Yamamoto, some chemists worry that the panel wants to force basic chemistry research into a physiology mode, whereas “our actual goal was to ensure a venue at NIH for fundamental chemistry.” In addition, “some basic scientists are reading the draft report and saying, ‘Oh, they're just going to make everything disease-based,’” Yamamoto says. “And some clinician-scientists are saying that the basic scientists are going to take over the whole review system.”

    For biomedical researchers, the nail-biting will continue for a while. Only after the Boundaries Panel completes its final Phase 1 report, which will propose the lineup of IRGs, will officials begin the hard part—drawing the boundaries of individual study sections within those IRGs and testing the system to see how it would work. That is expected to take at least another 2 years, and additional changes to the blueprint seem inevitable. “We're feeling our way,” Alberts cautions. “We're scientists who are doing experiments.”

  INDIA

    Cyclone Wrecks Rice, Botanical Centers

    Pallava Bagla

    New Delhi—Two major Indian laboratories are struggling to recover from a powerful cyclone that swept across parts of eastern India late last month. The storm, which packed winds of up to 280 kilometers an hour, caused extensive damage to the Central Rice Research Institute (CRRI) in Cuttack and its germ-plasm stocks as well as destroying much of the collection of rare and exotic plants at the Regional Plant Resource Center (RPRC) in Bhubaneshwar.

    More than 8000 people have died and 15 million have been left homeless in the eastern state of Orissa, which was cut off from the rest of the country for 3 days after the storm struck on 29 to 30 October. The storm surge drove seawater as much as 15 kilometers inland in what has been described as the worst cyclone of the century.

    The cyclone has “ravaged the entire campus” of India's premier rice research center, CRRI director Shanti Bhushan Lodh reported last week. All windows facing north were shattered, and the biotechnology, biochemistry, and engineering departments were filled knee-deep with water. Indian Council of Agricultural Research officials this week announced a $500,000 emergency grant to help in the rebuilding of the 53-year-old institute, which remains without electricity and water.

    The 70 hectares of rice in experimental plots at the center have also been devastated. Lodh estimates that only a third of the 10,000 rice varieties being grown survived the gale-force winds and subsequent flooding that swept across the region. “The green fields of rice have now turned gray,” he says. Gurdev Khush, chief of rice breeding at the International Rice Research Institute in Los Baños, Philippines, says the devastation will be “a major setback for India's rice research program” and a “tremendous loss” for scientists around the world.

    Probably the worst affected will be the rice germ-plasm collection, whose building lost its roof and whose refrigeration units were flooded. The collection of 22,000 varieties, one of the world's largest, is held in a medium-term storage facility used by researchers around the world. Fortunately, most of the material is duplicated at the National Gene Bank in New Delhi, a long-term repository for the seeds. In addition, a quick-thinking scientist reportedly salvaged much of the lab's supply of temperature-sensitive enzymes and reagents by taking the materials with him on a flight to Chennai a few days after the cyclone.

    The damage was even heavier at the RPRC, one of the largest botanical gardens in the world. Created 20 years ago, the center is spread over 197 hectares on the outskirts of the state capital and bore the brunt of the cyclone's fury. Most of its valuable collection of rare trees, palms, bamboos, and medicinal and aromatic plants appears to have been destroyed. Director P. Das estimates overall damage at more than $2 million; in addition to the destruction of labs, stores, and other structures, many roads and paths have been washed away. “It's the only center of its kind in India,” says H. Y. Mohan Ram, an economic botanist at the Department of Environmental Biology of the University of Delhi.

    Lodh is thankful that none of his 140 scientists lost their lives in the storm, but notes that “morale is very low.” A visit last week from a government team resulted in the emergency grant and a backup generator, but Lodh says that the center needs “maximum help” to recover from the devastation.


    Chimps and Lethal Strain a Bad Mix

    Jon Cohen

    Bethesda, Maryland—For the first time in the history of the AIDS epidemic, the National Institutes of Health (NIH) convened a public meeting to discuss a proposed HIV vaccine experiment in chimpanzees. The reason for the extra scrutiny: The test involves giving the animals a strain of the virus that quickly destroys their immune systems and possibly even causes disease.

    For 2 years, researchers have debated the science and ethics of injecting chimps with a potentially lethal HIV strain to test whether the immune response triggered by experimental AIDS vaccines can block infection or prevent disease (Science, 19 February, p. 1090). But what was a simmering academic dispute has now become a real-world dilemma. The National Cancer Institute's Marjorie Robert-Guroff has proposed just such a test of a vaccine her lab has been developing with the drug company Wyeth-Lederle. A successful experiment, she argued, would help convince colleagues—and Wyeth—that her vaccine approach deserves more support than it's been receiving.

    Billed as a “consultation” to help NIH decide whether it should back Robert-Guroff's trial, the 5 November meeting triggered impassioned debate over the role that animal “models” should play in the search for a vaccine. It also revealed that anyone who wants to use a lethal HIV strain in chimps first must build a compelling case—something Robert-Guroff failed to do, as the assembled researchers were unenthusiastic about her proposal, apparently leaving it dead in the water. “We recognize this is a complex issue,” said Peggy Johnston, who convened the meeting and heads the AIDS vaccine program at NIH's National Institute of Allergy and Infectious Diseases. “This consultation is the first step, not the only step.”

    Robert-Guroff's vaccine is now in small-scale human trials to test its safety and ability to trigger an immune response. As her group described in the June 1997 issue of Nature Medicine, they had first tested the vaccine—consisting of HIV genes stitched into a harmless adenovirus—in chimps. In the most impressive study, they vaccinated four chimps and then “challenged” them with an injection of an HIV strain that doesn't appear to harm the animals. More than 10 months later, none had detectable HIV in the blood, while the lone control was infected. “We were pretty encouraged by this study,” Robert-Guroff said. However, she admits that the results drew a tepid reaction from colleagues. “They said we really didn't show anything, as the [challenge] virus was too wimpy.”

    The concerns underscore the serious limitations of any animal model used to test a potential AIDS vaccine. Monkeys, the most commonly used animal for such tests, usually can only be infected with SIV (a cousin of HIV) or a hybrid virus known as SHIV; they also cannot be infected easily with the adenovirus used in Robert-Guroff's vaccine. Although chimps exposed to HIV can become infected, they generally do not become sick. It was not until 1996 that a disease-causing strain hit the press, when a group led by Frank Novembre at the Yerkes Regional Primate Research Center in Atlanta, Georgia, reported that an HIV-infected chimp named Jerom had developed an AIDS-like disease.

    Robert-Guroff and others hoped that this apparently lethal strain would make the chimp challenge model more persuasive. But as the meeting revealed, researchers are divided over how useful this strain might prove to be. For one, the virus is not as devastating as some had expected. Novembre described here how his team has used derivatives of Jerom's virus to infect seven more chimps, some of which now have fewer than 200 CD4 cells—the white blood cells that HIV destroys—per cubic millimeter of blood; one animal's count is down to zero. (For humans, a CD4 count below 200 defines AIDS.) Unlike Jerom, however, none of these animals has yet developed AIDS-like symptoms. This virus “isn't as virulent as I thought it was when I first read about it,” said the New York Blood Center's Alfred Prince, who opposes using lethal HIV in chimps. Still, he said, the strain's diminished reputation doesn't rule out the possibility that it might kill chimps; using it in a challenge experiment would entail unnecessary risks to the animals, he argued, as the goal of a vaccine should be to prevent chronic infection. “Disease is quite irrelevant,” he said.

    Others disagreed. Patricia Fultz of the University of Alabama, Birmingham, dismissed Prince's argument, contending that all chronic infections will, eventually, cause disease. She said the goal is to find a strain that more closely matches a human HIV infection. Although Fultz has tested several strains in chimps—many of which now have low CD4 counts—none duplicates a human infection, in which the virus replicates furiously for prolonged periods while steadily eroding CD4 levels. Ones derived from Jerom have the same shortcoming, she said, depleting CD4 cells more rapidly than is seen in a typical human infection. Finding a strain that behaves in chimps similarly to HIV in humans, Fultz said, is “important for evaluating those vaccine candidates that go into [large-scale efficacy] trials.”

    A few scientists question whether the chimp model holds any promise at all. When HIV first infects people, it enters cells using a surface protein called CCR-5. Over time, the virus develops a preference for another receptor, CXCR-4. The Jerom-derived strains, by contrast, rely on CXCR-4 from the outset; thus they do not mimic the initial infection in humans, said Jonathan Allan of the Southwest Foundation for Biomedical Research in San Antonio, Texas. Indeed, he knew of no CCR-5 dependent HIV that could reliably infect chimps. “I don't think vaccine studies are going to go forward based on the chimpanzee model,” said Allan.

    Scientific qualms aside, Robert-Guroff's own corporate partner has expressed surprisingly little interest in conducting another chimp challenge. Instead, said Wyeth's Zimra Israel, the company will decide how to proceed based on the immune responses seen in the ongoing clinical trials. Robert-Guroff maintained that positive results from a chimp challenge might build excitement at Wyeth. “If they were convinced by an experiment that this approach was really worthwhile in following, perhaps they would come back into the arena in a forceful way and help out,” she says.

    Summing up the feelings of many participants, Norman Letvin of Harvard's Beth Israel Deaconess Medical Center in Boston said he did not think Robert-Guroff's proposal was “a crucial experiment.” Yet he stressed that a pathogenic HIV challenge would be “ethically defensible” for the right experiment, urging his colleagues “not to consign a possible very powerful model … to the trash heap of history.”


    Ocean Project Drills for Methane Hydrates

    Dennis Normile

    Tokyo—Next week a Japanese drilling ship will begin taking a bite out of an underwater layer of frozen methane and water to assess its potential as a major source of energy for the world. The exploration of these gas hydrates—one of many hydrate deposits thought to occur near continental margins worldwide—is also expected to shed light on their role in past and future changes in global climate (see p. 1465) and their contribution to the stability of continental slopes.

    Gas hydrates are a mixture of water and methane from the decay of organic material, frozen into the pore spaces of marine sediments and under permafrost on land. By some estimates, the energy trapped in the hydrates beneath the sea and in polar regions is more than double all of Earth's known oil, gas, and coal deposits combined. The cruise to the Nankai Trough, 60 kilometers off Japan's Pacific coast, is the culmination of a 5-year Japanese effort to understand whether the energy is commercially recoverable. Half a dozen countries around the world, including the United States, are also studying hydrates, but “this is the first legitimate [program] looking at their resource potential,” says Timothy Collett, a geologist with the U.S. Geological Survey (USGS) in Denver.

    What little is known about marine deposits is largely based on seismic surveys, which trace a layer of hydrates beneath many continental slope sediments. But there's no substitute for drilling into the deposits to learn how much gas they contain and how easy they might be to exploit. Hydrate researchers have tried before, most notably in 1995 at the Blake Ridge, 320 kilometers off the coast of South Carolina. But previous drilling has been limited both in scope and in technical capabilities, and the methane often escapes before the core sample reaches the surface. “Because we know so little about the real condition of gas hydrates in the ground, we can't yet formulate [methane recovery] as an engineering problem,” says Charles Paull, a geologist at the Monterey Bay Aquarium Research Institute in Moss Landing, California, and co-chief scientist on the Blake Ridge project.

    The latest effort is led by the Japan National Oil Corporation (JNOC), supported by the Ministry of International Trade and Industry and a consortium of 10 Japanese companies. The team has developed or extended technologies to capture samples in their natural state and to measure the permeability of the hydrate-bearing strata, a key to determining if the methane can be induced to flow to a well. JNOC has not disclosed the program's budget, but scientists estimate it at $60 million.

    Floating in 950 meters of water, the M.G. Hulme Jr. expects to drill through an unknown thickness of soft silt before reaching a layer of gas hydrates and retrieving samples. It will also seal off parts of the well, pump in various fluids, and measure how quickly they disperse to determine flows in and out of the deposit.

    The Japanese researchers tested the techniques on land, in a 1150-meter-deep test well drilled into permafrost in early 1998 at a site about 200 kilometers north of Inuvik, in Canada's Northwest Territories, in collaboration with the Geological Survey of Canada and the USGS. The drilling suggested that the Canadian field likely contains “a few trillion cubic feet” of methane, says Tatsuya Sameshima, technical project director for JNOC, although it is not clear how much of it could be recovered and whether the technology would be economical.

    The Canadian project is already being used to develop a computer model of a gas hydrate deposit in Alaska for which only limited data are available. The Nankai Trough test well will provide a similar benchmark for marine deposits, which are particularly important for temperate-zone countries such as Japan. The data will allow scientists to calibrate seismic data and sketch a more accurate picture of the location and extent of other deposits.

    At stake is more than future energy supplies. Methane is a greenhouse gas, and hydrate deposits are thought to be very sensitive to changes in temperature and pressure. University of Tokyo sedimentologist Ryo Matsumoto says that any warming of the oceans could result in massive releases of methane and “cause a catastrophic acceleration of global warming.” Hydrates are also suspected of playing a role in previous global warming cycles, he says, yet “they haven't been taken into account in [climate change] models.” Increasing evidence also suggests that undersea hydrate deposits may help to cement in place the soft sediments on continental slopes, says Monterey's Paull. As a result, large-scale drilling or ocean warming could conceivably trigger landslides that would generate tsunami.

    The drilling is expected to continue through the rest of the year, followed by laboratory tests and data analysis that will run into next spring. And researchers emphasize that this is just the beginning. Last month both houses of Congress approved plans for a 5-year, $40 million effort by the Department of Energy to expand a small program now under way, and Russia, India, Norway, and Canada all have active hydrate research efforts. The Nankai Trough well “is a logical step in a progression [of research] that will go on for a long time,” says Paull.


    Lumpy Infrared Points to Earliest Galaxies

    Alexander Hellemans*
    *Alexander Hellemans is a writer in Naples, Italy.

    Astronomy hit the front pages in 1992 when NASA's Cosmic Background Explorer (COBE) satellite discovered “lumps”—the seeds of later giant structures in the universe—in the even glow of radiation left over from the big bang. Less well reported was another discovery that emerged later from COBE data: a glow like that of a city that is just over the horizon, produced by the universe's very first generation of stars and galaxies. COBE's small infrared telescopes did not have the resolution to pick out any lumps in the cosmic infrared background that might indicate the structure of the primordial galaxies, but two French astronomers announced last week at a meeting* near Munich that data from the European Space Agency's Infrared Space Observatory (ISO) do show lumps in the IR background, presumably caused by large seas of galaxies emitting strongly in the infrared.

    “What they presented is very convincing. I have no doubts,” says Dietrich Lemke of the Max Planck Institute for Astronomy in Heidelberg, Germany, principal investigator for ISOPHOT, the ISO instrument involved. “Potentially this [discovery] is quite important, because we are trying to understand what the sources are of this background radiation,” says Michael Hauser, deputy director of the Space Telescope Science Institute in Baltimore, Maryland.

    Astronomers want to know more about the infrared background because the signal detected by COBE was much stronger than expected. The infrared glow is thought to come from primordial galaxies, full of young stars blazing within heavy shrouds of dust, which would have reradiated the starlight as heat. Hauser says observations from ISO and the SCUBA instrument on the James Clerk Maxwell Telescope in Hawaii suggest that this bright infrared background may have been caused by an early generation of “ultraluminous galaxies.” Because of their cloak of dust, our only view of these early beacons may be in the infrared. “The results show that there has to have been a very strong presence of the infrared sources in the past, much more than now,” says Jean-Loup Puget of the Institute for Space Astrophysics in Orsay, near Paris.

    The discovery is actually a posthumous achievement for ISO, which ceased observing in May 1998 when the liquid helium coolant for its telescope ran out. Puget and his Orsay colleague Guilaine Lagache analyzed data collected during ISO's 3-year life by ISOPHOT's four infrared photometers. The photometers measured the brightness of the infrared sky as seen by ISO's 60-centimeter telescope, which has a resolving power more than 25 times that of COBE's best effort.

    Puget and Lagache focused on data from small patches of sky—an area totaling 4 square degrees—that were specifically selected to avoid infrared radiation coming from dust in our own galaxy. Even so, the astronomers had to do much work to remove “foreground” radiation from within our galaxy and from point sources of infrared light elsewhere in the universe. The brightness variations that remain “are galaxies that we cannot resolve with the instrument,” says Lagache.

    Astronomers are now keen to learn more about these early galaxies and when star formation began. Puget says that the observations show that star formation was already intense during the first billion years of the universe's 12-billion-year life, while previous optical observations seemed to indicate that star formation started much later. For more detailed data, astronomers will have to wait until NASA launches its Space Infrared Telescope Facility (SIRTF) in 2002. SIRTF will have a slightly higher resolving power than ISO, and its detector will be much more sensitive, says Michael Rowan-Robinson of London's Imperial College. Puget adds that the Atacama Large Millimeter Array, an array of 64 infrared dishes to be built as an international project in Chile, “will allow real progress.”

    • *ISO Surveys of a Dusty Universe, a Max Planck Institute of Astronomy Workshop, Ringberg Castle, Germany, 8 to 12 November.


    A System Fails at Mars, a Spacecraft Is Lost

    Richard A. Kerr

    Just in time to protect the Mars Polar Lander from risking a similar fate when it reaches the Red Planet next month, NASA investigators have wrapped up their inquiry into the loss of the Mars Climate Orbiter (MCO) spacecraft in September. Confusion over units of measurement used during mission navigation set the craft on its fatal course into the martian atmosphere, investigators announced last week, but the failure of mission team members to fully follow existing checks and balances turned a correctable snafu into a disaster. Why spacecraft operators broke the rules remains unclear, but NASA's new “faster, better, cheaper” approach to planetary missions is taking some of the blame.

    The investigation board confirmed that the root cause of the loss was a misuse of English units, as previously reported: The MCO's operator, Lockheed Martin Astronautics of Denver, supplied data about the firing of the spacecraft thrusters in pound-seconds. The recipients of the data, spacecraft navigators at the Jet Propulsion Laboratory (JPL) in Pasadena, California, assumed the units were the newton-seconds required by mission specifications.
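
    The scale of that mismatch is easy to see: one pound-force-second equals about 4.45 newton-seconds, so impulse data reported in pound-seconds but read as newton-seconds understate the thrusters' effect by a factor of roughly 4.45. A minimal illustration with made-up numbers (these are not mission data):

```python
# Illustration of the pound-second vs. newton-second mixup (values are invented, not MCO data).
LBF_S_TO_N_S = 4.448222   # one pound-force-second expressed in newton-seconds

reported_impulse_lbf_s = 10.0                   # value supplied in pound-seconds
actual_impulse_n_s = reported_impulse_lbf_s * LBF_S_TO_N_S
misread_impulse_n_s = reported_impulse_lbf_s    # same number, wrongly taken as newton-seconds

print(f"actual impulse: {actual_impulse_n_s:.1f} N*s; "
      f"value used as supplied: {misread_impulse_n_s:.1f} N*s "
      f"(understated by a factor of {actual_impulse_n_s / misread_impulse_n_s:.2f})")
```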

    Although that error was not found until later, the navigators knew the spacecraft was off course. “The navigation team realized as the spacecraft approached Mars that it was coming in lower than intended,” says Arthur Stephenson, director of NASA's Marshall Space Flight Center in Huntsville, Alabama, and the head of the NASA investigation board. He adds that they did not realize the gravity of the situation. “They never saw that the spacecraft was in jeopardy. They were more worried about tuning the orbit than a catastrophic loss.”

    Even so, Patrick Esposito, who supervises navigation of all three current Mars missions, strongly recommended almost a week before arrival at Mars that a modest trajectory correction be made, according to an internal investigation at JPL. The suggested correction possibly could have saved the mission, but it “was denied for good reason,” says Stephenson. “The [operations] team was not ready to perform it.” A fifth and final tweaking of the trajectory was in the mission plan, but the operations team had done none of the planning needed to ensure that a last-minute course correction could be executed if needed.

    That was just one of eight contributing factors that turned an oversight into a catastrophe, according to the NASA report. Other problems included the navigators' unfamiliarity with the peculiarities of this spacecraft's behavior, poor communication between spacecraft teams, and understaffing (the navigation “team” consisted of just two people: the MCO navigator and Esposito). With these failures in mind, the board has recommended changes in the operation of the Mars Polar Lander, due to reach Mars on 3 December. They include beefing up navigation staffing and using a second, independent means of determining the spacecraft's trajectory.

    NASA officials denied that their recent strategy of flying more, smaller missions at lower overall cost affected MCO. “We have to remember faster, better, cheaper includes following the rules,” says Edward Weiler, NASA associate administrator for space science. “They weren't followed this time.” Noel Hinners, vice president for flight systems at Lockheed Martin, begs to differ. “There is a faster, better, cheaper effect here,” he told Science. It's a matter of sufficient staffing—perhaps 10% more—to make sure the checks and balances work, he says. “It's nothing big, but it takes time and money.”


    Journals Launch Private Reference Network

    Eliot Marshall

    Most of the world's biggest scientific publishers have so far shown little interest in participating in a U.S. government plan to provide free access to scientific articles through a Web site called PubMed Central. Now they've responded with a plan of their own. On 16 November, 12 private and nonprofit organizations unveiled a scheme designed to cross-link journal articles through their reference lists, making it easy for researchers to locate and obtain the text of a referenced article through the Internet. The plan will allow publishers to retain full-text material on their own Web sites and control access to it; PubMed Central, in contrast, would turn archived texts into public property.

    The lead organizers of the new project are Academic Press, a Harcourt Science and Technology company based in San Diego, California, and John Wiley & Sons Inc. of New York City. They worked with the International Digital Object Identifier (DOI) Foundation near Oxford, U.K.—established in 1998 with the help of major publishers—to devise “tags” that can be used to find and track journal articles. Others in the group supporting this venture are the American Association for the Advancement of Science (publisher of Science), Nature, the American Institute of Physics, the Association for Computing Machinery, Blackwell Science, Elsevier Science, the Institute of Electrical and Electronics Engineers Inc., Kluwer Academic Publishers, Oxford University Press, and Springer-Verlag. At press time, the sponsors had not settled on a name.

    This high-profile group plans to spend an undisclosed sum to create a new, not-for-profit digital information service. Publishers who participate will send articles to the new service to be tagged with universal identifiers. The goal, according to Charles Ellis—chair of the DOI Foundation and Wiley's former CEO and president—is to enable readers to use a mouse-click to leap from a footnote in an article on one publisher's Web site to the text of the article being cited, even if it's on a different site. The mechanism will be largely invisible to readers, but access to the full text may require a password or a fee. “The beauty of the system,” Ellis says, “is that it permits a kind of one-stop shopping that gives access to all the journals” participating in the scheme.

    The publishers have consulted their legal advisers and concluded that they will not run afoul of antitrust laws, which prohibit collusion among competitors, as long as “we don't try to exclude anyone,” says Ellis. Indeed, the founding members are eager to have many other publishers join. However, those who do so must agree to use a standard data format devised by the DOI Foundation. It requires publishers to provide summary information on every article, such as the author's name, a short description of the work, and the name of the journal. But each publisher can decide whether to make abstracts or full-text articles available for free.
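
    In spirit, each entry pairs a persistent identifier with the minimal bibliographic summary described above, and resolving the identifier leads to whatever page the publisher chooses to expose. The sketch below is purely hypothetical; the field names, identifier, and URL are invented for illustration and are not the DOI Foundation's actual format.

```python
# Hypothetical sketch of DOI-style cross-linking (identifier, fields, and URL are invented).
registry = {
    "10.9999/example.1999.001": {
        "author": "A. Researcher",
        "title": "An example article",
        "journal": "Journal of Examples",
        # The full text stays on the publisher's own site, behind whatever access rules it sets.
        "url": "https://publisher.example.com/articles/1999/001",
    },
}

def resolve(doi):
    """Return the publisher-controlled landing page for a cited article."""
    return registry[doi]["url"]

print(resolve("10.9999/example.1999.001"))
```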

    Susan Spilka, a spokesperson for Wiley, says the plan is to label more than 3 million current journal articles with DOI tags immediately and to have them up and available on the system early next year. After that, more than half a million new articles will be added to the collection every year.

    David Lipman, director of the National Center for Biotechnology Information, who has chief responsibility for developing PubMed Central, says this private network is very different from PubMed Central: “It in no way provides barrier-free access to the primary research literature.” But, he adds, it's “great” that publishers are trying to improve access to online information, and “I hope they move forward with this.”


    Gyroscope Failure Closes Down Hubble

    Andrew Lawler

    The stream of science data from the Hubble Space Telescope stopped last weekend after the fourth of the instrument's six gyroscopes failed. The $2 billion telescope will remain in a “safe mode” until astronauts arrive next month for a scheduled service mission to the orbiting spacecraft. Even if all goes well, however, agency officials say it will take another month to get the telescope back on line.

    The gyroscopes, which keep the telescope pointed properly, have bedeviled NASA engineers since the Hubble's launch in 1990. Four have been replaced on previous space-shuttle missions. But the devices have continued to fail, and because Hubble needs three working gyros to make observations, the latest failure, on 13 November, caused NASA to suspend all scientific operations. The shuttle was slated to rendezvous with Hubble in October to replace all six gyros and conduct other maintenance tasks, but problems with the Discovery shuttle have delayed the mission. Space agency officials are eager to meet the 6 December launch date because some of the software for the Hubble servicing mission is not Y2K compliant.

    Space science chief Ed Weiler says that the delay poses no danger to Hubble. And because a servicing mission is imminent, “the timing [of the gyro failure] is not so bad,” Weiler notes. “We have to just sit and wait.” The shutdown interrupted researchers' plans to examine the turbulent upper atmosphere of Jupiter and Saturn's rings and put on hold the search for binary brown dwarfs and a survey of galaxies with high redshifts, according to the Space Telescope Science Institute in Baltimore, which oversees the Hubble's science program. The research will be rescheduled.

    On the servicing mission, the shuttle crew will install replacement gyros modified to make them more reliable. The crew will make a total of four spacewalks—replacing a host of other equipment besides the gyros—before returning to Earth. The mission should keep Hubble operating through 2003.


    Bayes Offers a 'New' Way to Make Sense of Numbers

    David Malakoff

    A 236-year-old approach to statistics is making a comeback, as its ability to factor in hunches as well as hard data finds applications from pharmaceuticals to fisheries

    After 15 years, environmental researcher Kenneth Reckhow can still feel the sting of rejection. As a young scientist appearing before an Environmental Protection Agency review panel, Reckhow was eager to discuss his idea for using an unorthodox statistical approach in a water-quality study. But before he could say a word, an influential member of the panel unleashed a rhetorical attack that stopped him cold. “As far as he was concerned, I was a Bayesian, and Bayesian statistics were worthless,” recalls Reckhow, now at Duke University in Durham, North Carolina. “The idea was dead before I even got to speak.”

    Reckhow is no longer an academic outcast. And the statistical approach he favors, named after an 18th century Presbyterian minister, Thomas Bayes, now receives a much warmer reception from the scientific establishment. Indeed, Bayesian statistics, which allows researchers to use everything from hunches to hard data to compute the probability that a hypothesis is correct, is experiencing a renaissance in fields of science ranging from astrophysics to genomics and in real-world applications such as testing new drugs and setting catch limits for fish. The long-dead minister is also weighing in on lawsuits and public policy decisions (see p. 1462), and is even making an appearance in consumer products. It is his ghost, for instance, that animates the perky paperclip that pops up on the screens of computers running Microsoft Office software, making Bayesian guesses about what advice the user might need. “We're in the midst of a Bayesian boom,” says statistician John Geweke of the University of Iowa, Iowa City.

    Advances in computers and the limitations of traditional statistical methods are part of the reason for the new popularity of this old approach. But researchers say the Bayesian approach is also appealing because it allows them to factor expertise and prior knowledge into their computations—something that traditional methods frown upon. In addition, advocates say it produces answers that are easier to understand and forces users to be explicit about biases obscured by reigning “frequentist” approaches.

    To be sure, Bayesian proponents say the approach is no panacea—and the technique has detractors. Some researchers fear that because Bayesian analysis can take into account prior opinion, it could spawn less objective evaluations of experimental results. “The problem is that prior beliefs can be just plain wrong” or difficult to quantify properly, says statistician Lloyd Fisher of the University of Washington, Seattle. Physicians enthusiastic about a particular treatment, for instance, could subtly sway trial results in their favor. Even some advocates worry that increased use may lead to increased abuse. “There is a lot of garbage masquerading under the Bayesian banner,” warns statistician Don Berry of the University of Texas M. D. Anderson Cancer Center in Houston, a leading advocate of Bayesian approaches.

    Still, the renewed interest in Bayesian methods represents a major upturn in a 2-century roller-coaster ride for this approach to data analysis. Two years after Bayes's death in 1761, a friend, Richard Price, arranged for the British Royal Society to publish his notes on “a problem in the doctrine of chances.” The 48-page essay tackled a question that was as much philosophy as mathematics: How should a person update an existing belief when presented with new evidence, such as the results from an experiment? Bayes's answer was a theorem that quantified how the new evidence changed the probability that the existing belief was correct (see p. 1461). The theorem “is mathematics on top of common sense,” says statistician Kathryn Blackmond Laskey of George Mason University in Fairfax, Virginia.
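
    In modern notation, the theorem that carries Bayes's name is usually stated as follows (the notation is today's standard shorthand, not Bayes's own):

```latex
% Bayes's theorem: updating the probability of a hypothesis H after seeing evidence E
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}
% P(H)       : prior probability of the hypothesis before the evidence
% P(E | H)   : likelihood of the evidence if the hypothesis is true
% P(H | E)   : posterior probability once the evidence is taken into account
```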

    Using the theorem for any but the simplest problems, however, was beyond the skills of most mathematicians of that era. As a result, few scientists were aware of Bayes's ideas until the 1790s, when the French mathematician Pierre-Simon de Laplace showed researchers how they could more easily apply them. Laplace's work came to dominate applied statistics over the next century. Still, some scientists became increasingly uneasy about one characteristic of the Bayesian-Laplacian mathematical framework: Two people analyzing the same evidence can arrive at dramatically different answers if they start with different beliefs and experiences.

    Researchers eager to avoid that problem got their wish in the 1930s, after statisticians Ronald A. Fisher and Egon Pearson of the United Kingdom and Jerzy Neyman of Poland offered new methods of evaluating data and comparing competing hypotheses. The intertwined methods became known as frequentist statistics because they indicate how frequently a researcher could expect to obtain a given result if an experiment were repeated and analyzed the same way many times. Frequentist methods were viewed as more objective because different researchers could apply them to the same data and usually emerge with similar answers, no matter what beliefs they started with.

    Frequentist techniques had another advantage: They proved relatively easy to apply to real-world problems—unlike Bayesian methods. “It might take just half an hour to write down the equation” needed to answer a problem with Bayesian statistics, explains Brian Junker, a statistician at Carnegie Mellon University in Pittsburgh, Pennsylvania—“but forever to do the computation.” As a result, adds Greg Wilson, a doctoral student in rhetoric at New Mexico State University in Las Cruces, “frequentists used to say to Bayesians, ‘You're wrong—but even if you weren't wrong, you still can't do the computation.’”

    That argument, however, began to dissolve earlier this decade with the growing power of desktop computers and the development of new algorithms, the mathematical recipes that guide users through problems. Bayesian statisticians, including Alan Gelfand of the University of Connecticut, Storrs, Adrian Smith of Imperial College, London, and Luke Tierney of the University of Minnesota, Minneapolis, helped popularize the use of simulation techniques now known as Markov Chain Monte Carlo—or “MCMC” to insiders. The new tools made the Bayesian approach accessible to a wide range of users, who say it has significant advantages. One is that it allows researchers to plug in prior knowledge, whereas frequentist approaches require users to blind themselves to existing information because it might bias the results.
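
    The flavor of MCMC can be conveyed with a toy example: the sampler wanders through parameter space, accepting or rejecting proposed moves so that, in the long run, the time spent at each value is proportional to its posterior probability. The random-walk Metropolis sketch below targets a simple standard normal "posterior"; it is an illustration only, not code from any of the groups mentioned.

```python
import math
import random

# Toy posterior: a standard normal density, known only up to a normalizing constant.
def unnormalized_posterior(theta):
    return math.exp(-0.5 * theta * theta)

def metropolis(n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose a nearby value, accept it with
    probability min(1, posterior ratio), otherwise keep the current value."""
    rng = random.Random(seed)
    samples, theta = [], 0.0
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)
        accept_prob = min(1.0, unnormalized_posterior(proposal) / unnormalized_posterior(theta))
        if rng.random() < accept_prob:
            theta = proposal
        samples.append(theta)
    return samples

draws = metropolis(50_000)
print(f"posterior mean estimate ~ {sum(draws) / len(draws):.2f} (true value 0 for this toy case)")
```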

    Such prior information can be very helpful to researchers trying to discern patterns in massive data sets or in problems where many variables may be influencing an observed result. Larry Bretthorst of Washington University in St. Louis, Missouri, for instance, developed Bayesian software that improved the resolution of nuclear magnetic resonance (NMR) spectrum data—used by chemists to figure out the structure of molecules—by several orders of magnitude. It uses prior knowledge about existing NMR spectra to clarify confusing data, yielding resolution improvements that were “so startling that other researchers had a hard time believing he hadn't made a mistake,” says Kevin Van Horn, an independent computer scientist in American Fork, Utah.

    Genomics researchers have also become converts. “You just say ‘Bayesian,’ and people think you are some kind of genius,” says statistician Gary Churchill of The Jackson Laboratory in Bar Harbor, Maine, who is working on ways to analyze the flood of data produced by DNA sequencing and gene expression research. Some researchers, for instance, are using what they already know about a DNA sequence to identify other sequences that have a high probability of coding for proteins that have similar functions or structures, notes Jun Liu, a statistician at Stanford University in Palo Alto, California. “No easy frequentist method can achieve this,” he says. Similarly, “Bayesian has become the method of choice” in many astrophysics studies, says astrophysicist Tom Loredo of Cornell University in Ithaca, New York. The approach has allowed users to discern weak stellar signal patterns amid cosmic background noise and take a crack at estimating the locations and strengths of mysterious gamma ray bursts.

    Lifesaving statistics?

    In other fields, such as drug and medical device trials, Bayesian methods could have practical advantages, say advocates. Indeed, at a 2-day conference last year, the Food and Drug Administration (FDA) office that approves new devices strongly urged manufacturers to adopt Bayesian approaches, arguing that they can speed decisions and reduce costs by making trials smaller and faster.

    Telba Irony, one of two Bayesian statisticians recently hired by the division, says the savings flow from two advantages of the Bayesian approach—the ability to use findings from prior trials and flexibility in reviewing results while the trial is still running. Whereas frequentist methods require trials to reach a prespecified sample size before stopping, Bayesian techniques allow statisticians to pause and review a trial to determine—based on prior experience—the probability that adding more patients will appreciably change the outcome. “You should be able to stop some trials early,” she says. So far, just a handful of the 27,000 device firms regulated by FDA have taken advantage of the approach. But FDA biostatistician Larry Kessler hopes that up to 5% of device trials will be at least partly Bayesian within a few years. “We're not going to change the statistical paradigm overnight,” he says. “There is still a healthy degree of skepticism out there.”
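
    A stripped-down version of such an interim look can be written with a conjugate Beta-Binomial model, in which the posterior for a success rate is available in closed form. All of the numbers below (the prior, the interim results, and the target rate) are invented for illustration; this is not any agency's or company's actual design.

```python
from scipy.stats import beta

# Hypothetical interim look at a single-arm device trial (all numbers invented).
prior_a, prior_b = 2, 2          # mildly informative Beta prior on the success rate
successes, failures = 27, 9      # results observed so far
target_rate = 0.60               # rate the device should plausibly exceed

# Conjugate update: Beta prior + binomial data -> Beta posterior.
posterior = beta(prior_a + successes, prior_b + failures)
prob_exceeds_target = posterior.sf(target_rate)

print(f"P(success rate > {target_rate:.0%} given data so far) = {prob_exceeds_target:.2f}")
# If this probability is already very high (or very low), monitors can argue that
# enrolling many more patients is unlikely to change the conclusion.
```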

    Such skepticism has also limited the use of Bayesian approaches in advanced drug trials, a potentially much bigger arena. But a team led by M. D. Anderson's Berry and researchers at Pfizer Inc.'s central research center in Sandwich, England, is about to challenge that taboo. Next June, using a heavily Bayesian study design, the company plans to begin human trials aimed at finding the safest effective dose of an experimental stroke drug designed to limit damage to the brain. The trial—called a phase II dose ranging trial—will help the company decide whether to move the drug into final testing trials. “There are huge economic consequences on the line,” says Pfizer statistician Andy Grieve.

    The team believes that Bayesian methods will allow the company to reach conclusions using 30% fewer patients than in traditional designs. Just as important, however, Berry and his colleagues believe that Bayesian flexibility could reduce an ethical problem that they say plagues frequentist-oriented trials: the need to deny beneficial treatments to some patients in the name of statistical rigor. Traditional dose ranging trials, for instance, randomly assign patients to one of several drug doses until enough patients have been treated to produce statistically significant results. Under classic frequentist designs, researchers aren't supposed to look at the data until the study is done, for fear of biasing the analysis. Although many statisticians bend the rules to reduce ethical concerns, many patients in traditional trials may still receive a less beneficial treatment even if, unbeknownst to the investigators, the evidence is mounting that another dose is better. “The frequentist approach inadvertently gives rise to an attitude that we have to sacrifice patients to learn,” says Berry, who believes the practice has unnecessarily cost lives in some trials.

    The Pfizer study, in contrast, should allow researchers to analyze the accumulating data and more quickly eliminate ineffective or potentially harmful doses. Skeptics worry that the approach could allow the drugmaker's enthusiasm for its product to color the results. But Grieve says the team will also perform more traditional frequentist analyses to satisfy FDA officials and company executives that Bayesian methods are “robust and ready.”
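
    The article does not spell out the trial's algorithm, so the following is only a generic sketch, with invented dose labels, response counts, and a 5% cutoff, of how accumulating Beta posteriors can flag doses worth dropping as the data arrive.

```python
import random

# Hypothetical accumulating data: responders / patients at each dose.
doses = {"10 mg": (2, 20), "20 mg": (5, 20), "40 mg": (11, 20), "80 mg": (12, 20)}

def prob_best(doses, draws=50_000, prior_a=1.0, prior_b=1.0):
    """Monte Carlo P(each dose has the highest response rate), using
    independent Beta posteriors on the dose-specific response rates."""
    wins = {d: 0 for d in doses}
    for _ in range(draws):
        sample = {d: random.betavariate(prior_a + r, prior_b + n - r)
                  for d, (r, n) in doses.items()}
        wins[max(sample, key=sample.get)] += 1
    return {d: w / draws for d, w in wins.items()}

for dose, p in prob_best(doses).items():
    flag = "  <- candidate to drop" if p < 0.05 else ""
    print(f"{dose:6s} P(best) = {p:.2f}{flag}")
```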

    Another advantage of the Bayesian approach, say statisticians from both camps, is that it produces answers that are easier to understand than those produced by frequentist computations. Frequentist methods generate a measure of uncertainty called the “P value,” a yardstick researchers find handy. In simple terms, a P value is supposed to tell a researcher whether experimental results are statistically “significant” or merely the product of chance. For decades, many journals would publish only results with a P value of less than 0.05. In a trial comparing a new drug to no treatment, for instance, P = 0.05 means that if the “null hypothesis”—no treatment—were true and the experiment were repeated many times, an effect at least as large as the one observed would turn up only about 1 time in 20, suggesting that the alternative hypothesis—the new drug—is creating the effect.

    But there are at least two problems with P values, physician and biostatistician Steven Goodman of The Johns Hopkins University in Baltimore, Maryland, noted earlier this year in a plea for greater use of Bayesian methods published in the Annals of Internal Medicine (15 June, p. 995). One is that many people, even those with some statistical training, incorrectly interpret P = 0.05 to mean that there is a 95% chance that the null hypothesis is wrong and the alternative hypothesis correct. This misinterpretation exacerbates a second problem: that P values tend to overstate the strength of the evidence for a difference between two hypotheses. Indeed, studies with small P values, implying a highly significant finding, have sometimes paled in a Bayesian reanalysis. For instance, in a widely cited 1995 Bayesian restudy of findings that one heart attack drug worked better than another, Canadian researchers Lawrence Joseph and James Brophy concluded that the use of P values overstated the superiority of one of the drugs. Similarly, Duke's Reckhow found that a statistically significant trend in acid rain pollutants detected in some lakes by frequentist analyses disappeared upon a Bayesian reexamination.
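
    A hedged numerical sketch of that overstatement uses the “minimum Bayes factor,” a calibration favored by many Bayesian critics of P values: for a two-sided z-test, the data can favor an alternative placed at the observed effect by at most a factor of exp(z*z/2) over the null. Even granting the null and the alternative even prior odds, P = 0.05 then leaves the null with roughly a 1-in-8 chance of being true rather than 1-in-20.

```python
from math import exp
from statistics import NormalDist  # Python 3.8+ standard library

def min_bayes_factor(p_value):
    """Smallest possible Bayes factor for the null hypothesis versus an
    alternative centered on the observed effect (two-sided z-test)."""
    z = NormalDist().inv_cdf(1 - p_value / 2)
    return exp(-z * z / 2)

for p in (0.05, 0.01, 0.001):
    bf = min_bayes_factor(p)
    # With 50:50 prior odds, the posterior probability that the null is
    # true can be no lower than this:
    floor_null = bf / (1 + bf)
    print(f"p = {p:<6} min Bayes factor = {bf:.3f}  P(null | data) >= {floor_null:.1%}")
```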

    Goodman says that even he is sometimes at a loss to explain the proper interpretation of P values to his students, but that he has no problem getting them to understand Bayesian probabilities. “Bayesian computations give you a straightforward answer you can understand and use,” he says. “It says there is an X% probability that your hypothesis is true—not that there is some convoluted chance that if you assume the null hypothesis is true, you'll get a similar or more extreme result if you repeated your experiment thousands of times. How does one interpret that?”

    Uncertain decisions

    Statisticians say that knowing just how seriously to take the evidence can be particularly important in politics and business, where decisions have to be made in spite of uncertainty. Fisheries managers, for instance, are on the spot to decide how many fish people should be allowed to catch from a population of indeterminate size. Last year, in a bid to improve quota-setting, a National Academy of Sciences panel recommended that fisheries scientists “aggressively” pursue the use of Bayesian methods to predict fish populations and “evaluate alternative management policies.” The idea, says panel member Ray Hilborn of the University of Washington, Seattle, is that fisheries researchers should spell out the amount of uncertainty that goes into their predictions. Policy-makers might be more cautious in setting catch quotas, he and others reason, if they knew there was a significant risk that their actions might destroy a stock.

    Although the new approach has yet to make “a real difference” in fisheries management, says Hilborn, Bayesian techniques are already influencing policy and business decisions in other fields. The model that U.S. officials use to estimate the population of endangered bowhead whales—and the number that Alaskan natives are allowed to kill—is Bayesian. A Bayesian model is also at the heart of the controversy over whether to use statistical adjustments to avoid undercounts of urban minorities in the next U.S. census; it uses assumptions drawn from intensively surveyed neighborhoods to estimate how many people census-takers may have missed in other areas. Oil, power, and banking companies also regularly call on Bayesian statisticians to help them predict where to drill, how much electricity demand to expect, or where to move their investments, and salvage companies and the Coast Guard use Bayesian search models to decide where to look for shipwrecks and lost mariners.

    The most ubiquitous Bayesian application, however, may be Microsoft's animated paperclip, which offers help to users of its Office software. The application, says computer scientist Eric Horvitz of Microsoft Research in Redmond, Washington, grew out of software he and colleagues designed that tries to predict what users will ask next by keeping track of prior questions. He can even explain why the paperclip pops up when it isn't wanted. Company officials “didn't go all the way with Bayes—they could have avoided that problem if they had,” he claims.
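
    For flavor only, and emphatically not Microsoft's actual model: the simplest Bayesian way to “keep track of prior questions” is a Dirichlet-multinomial update that turns a user's question history into probabilities for the next help topic. The topics and counts below are hypothetical.

```python
from collections import Counter

TOPICS = ["printing", "formatting", "mail merge", "charts"]

def next_topic_probs(history, alpha=1.0):
    """P(next question is about each topic | history), using a symmetric
    Dirichlet(alpha) prior that acts as pseudo-counts for unseen topics."""
    counts = Counter(history)
    total = len(history) + alpha * len(TOPICS)
    return {t: (counts[t] + alpha) / total for t in TOPICS}

history = ["printing", "printing", "mail merge", "printing", "charts"]
for topic, p in sorted(next_topic_probs(history).items(), key=lambda kv: -kv[1]):
    print(f"{topic:12s} {p:.2f}")
```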

    Bayesian barriers

    Corporate cautiousness isn't the only factor limiting the spread of Bayesian statistics, however. Another is its absence from the undergraduate curriculum, advocates say. Two years ago, the American Statistician, a journal of the American Statistical Association, published a heated debate about whether it should be included—a debate that continues in e-mail groups run by statisticians. Opponents argue that teaching Bayes would not prepare students for the kinds of statistical applications they are likely to encounter in their professional lives, says David Moore of Purdue University in West Lafayette, Indiana. Supporters respond that the world is changing, and that Bayesian techniques will soon become essential tools.

    Ecologists, for instance, are increasingly being drawn into policy debates that push them to state the probable outcomes of different environmental policies, notes Aaron Ellison of Mount Holyoke College in South Hadley, Massachusetts. To prepare his students, he teaches Bayesian methods in his introductory courses and last year edited a special issue of the journal Ecological Applications encouraging colleagues to become versed in the approach.

    Also limiting the use of Bayesian tools is the absence of “plug and play” software packages of the kind that have made frequentist approaches so easy to apply. Although several companies are designing products to fill the niche, not all statisticians believe they are a good idea. Bayesian methods are complicated enough, says statistician Brad Carlin of the University of Minnesota, Minneapolis, that giving researchers user-friendly software could be “like handing a loaded gun to a toddler; if the data is crap, you won't get anything out of it regardless of your philosophical bent.”

    The growing demand for Bayesian aids, however, reflects a profound change in the acceptance of Bayesian methods—and an end to the old debates, says Rob Kass, head of Carnegie Mellon's statistics department. “In my view, Bayesian and frequentist methods will live side by side for the foreseeable future.”

    For some Bayesians, that is a thrilling notion. George Mason's Laskey and others are even planning a London celebration in 2001 to kick off a “century of Bayes.” Meanwhile, others are savoring the Reverend's return to respectability. “Twenty-five years ago, we used to sit around and wonder, ‘When will our time come?’” says mathematician Persi Diaconis, a Bayesian at Stanford. “Now we can say: ‘Our time is now.’”


    A Brief Guide to Bayes Theorem

    1. David Malakoff

    In making sense of new data, traditional statistical methods ignore the past. Bayesian techniques, in contrast, allow you to start with what you already believe and then see how new information changes your confidence in that belief. Putting Bayes theorem to work requires heavy calculus—now aided by computer models—but the fundamental concept is straightforward.

    Thomas Bayes's original 1763 paper, which focused on predicting the behavior of billiard balls, included his now-famous theorem (see center box) and an illustrative example. Imagine, it says, that a newborn arrives on Earth and watches the sun set. The precocious child scientist wants to know the probability that the sun will rise again. Toiling in the darkness with Bayes's theorem, the clueless researcher first must make a guess about what will happen and assign it a probability. So she hedges her bets—essentially admitting that she doesn't know the answer—and decides that there's an even chance, 1 in 2, that the sun will return. This is called the “prior probability.”
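
    In its modern textbook form (not necessarily the wording of the original box), the theorem says that the probability of a hypothesis H after seeing evidence E is its prior probability reweighted by how well H predicts E:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}
                       {P(E \mid H)\,P(H) + P(E \mid \bar{H})\,P(\bar{H})}
```

    The denominator is simply the total probability of seeing the evidence under both hypotheses, which guarantees that the posterior probabilities add up to one.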


    To analyze her data, the skywatcher constructs a simple computer using a bag and a collection of marbles, either white or black. She drops one white marble into the bag to represent the likelihood that “the sun will return,” and one black marble for the opposite outcome. After each rosy dawn, she adds another white marble to the bag. The changing contents of the bag demonstrate the improving odds in favor of another sunrise, from 2:1, to 20:1, to 100:1. As the experiment continues, the black marble becomes little more than a statistical speck amid a sea of white.

    Looking into her bag, the researcher concludes that the “posterior probability” of the sun rising is very high—nearly 100%. Indeed, the new evidence is so strong that it leads her to select one of her equally weighted prior guesses—the sun will rise—over the other.
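
    The marble bookkeeping is Laplace's classic rule of succession: starting from one white and one black marble (an even-odds prior), the probability assigned to another sunrise after n consecutive dawns works out to (n + 1)/(n + 2). A short sketch, with bag sizes matching the odds in the story:

```python
def sunrise_posterior(n_dawns):
    """Posterior probability of another sunrise after n_dawns consecutive
    sunrises, starting from one white and one black marble (even odds).
    Equivalent to Laplace's rule of succession: (n + 1) / (n + 2)."""
    whites, blacks = 1 + n_dawns, 1
    return whites / (whites + blacks)

for n in (1, 19, 99, 10_000):
    print(f"after {n:>6} dawns: odds {n + 1}:1, P(sunrise) = {sunrise_posterior(n):.4f}")
```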

    In theory, Bayesians say enough evidence will lead people who start with dramatically different priors to essentially the same posterior answer. When data are scarce, however, the choice of a prior probability can heavily influence the posterior. That process has led to charges that the method is too subjective.

    But even if the final answers differ, Bayesians say, the method can still probe the strength of the evidence itself, by computing how far it alters each prior belief. Modern Bayesians have also tried to reduce the influence of priors by using “flat priors” that avoid playing favorites by giving every possible hypothesis equal probability. Another technique for guarding against bias is “sensitivity testing,” in which a wide range of priors is tested to see which one holds up best.
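
    A small illustration of that sensitivity testing in the simplest setting; the priors and data below are invented. With sparse data the three priors pull the answer far apart, while with ample data the posteriors converge on nearly the same value.

```python
# The same binomial data analyzed under several Beta priors, from a flat
# prior to strongly opinionated ones. Posterior mean = (a + successes) /
# (a + b + trials) for a Beta(a, b) prior.
priors = {"flat Beta(1,1)": (1, 1),
          "skeptical Beta(2,18)": (2, 18),
          "enthusiastic Beta(18,2)": (18, 2)}
datasets = {"sparse data (4/6)": (4, 6), "ample data (400/600)": (400, 600)}

for label, (successes, trials) in datasets.items():
    print(label)
    for name, (a, b) in priors.items():
        post_mean = (a + successes) / (a + b + trials)
        print(f"  {name:24s} posterior mean = {post_mean:.2f}")
```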


    The Reverend Bayes Goes to Court

    1. David Malakoff

    A statistical approach developed by an 18th century clergyman helped the New Jersey Public Defender's Office deal with a thoroughly modern challenge: proving allegations that state troopers were engaged in “racial profiling”—discriminating against African-American drivers by singling them out for traffic stops. The 1994 case, settled this year, was just one of several recent high-profile court appearances for Bayesian statistics, which allows data to be analyzed in the light of prior knowledge—and gives answers that judges, jurors, and other nonstatisticians can more easily interpret.

    In New Jersey, the approach enabled the legal team to overcome a huge gap in police records to make their case. The problem was that in 70% of 892 analyzed stops along the southern end of the New Jersey Turnpike, police records did not identify the race of the driver. The lawyers worried that the missing data would doom their case, which sought to suppress evidence against 17 African-American suspects, many of whom were charged with transporting drugs.

    But statisticians Joseph Kadane of Carnegie Mellon University in Pittsburgh and Norma Terrin of the New England Medical Center in Boston used a Bayesian model and prior information to make sense of the missing data.* They knew, for instance, that based on the 30% of stops where the driver's race was known, blacks appeared to be 4.86 times more likely to be pulled over than drivers of other races. The question was whether troopers were also more likely to record the race of black drivers. If so, the 70% of stopped drivers of unknown race might have been overwhelmingly white, and the total numbers might not support the profiling charge.

    Using prior knowledge, Kadane and Terrin were able to test this possibility. Earlier studies on the turnpike showed that about 15% of speeding drivers were black. For their analysis, the statisticians assumed that the police were as much as three times more likely to record the race of a black driver. Even under such extreme circumstances, they concluded that black drivers were still far more likely to be stopped than white drivers. Last summer, after losing the case and then withdrawing an appeal, state officials began dismissing the 17 cases.
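
    Kadane and Terrin's published model is more elaborate than anything that fits here, so the following is only a back-of-the-envelope reconstruction from the figures quoted in this story: the 15% benchmark of speeders who were black, the apparent 4.86-fold stop rate in the recorded data, and a recording rate for black drivers assumed to be up to three times that for other drivers. The way those numbers are combined is a simplification for illustration, not the authors' calculation.

```python
# Illustrative only: a crude sensitivity analysis, not Kadane and
# Terrin's actual missing-data model.
BENCHMARK = 0.15       # share of speeding drivers who were black
APPARENT_RATIO = 4.86  # apparent relative stop rate in recorded stops

# Implied share of black drivers among stops whose race was recorded.
rec_black = (BENCHMARK * APPARENT_RATIO) / (
    BENCHMARK * APPARENT_RATIO + (1 - BENCHMARK))

for r in (1.0, 2.0, 3.0):  # assumed recording-rate ratio, black vs. other
    # Deflate the recorded share by the assumed recording bias to get an
    # implied share of black drivers among all stops, recorded or not.
    adj = (rec_black / r) / (rec_black / r + (1 - rec_black))
    odds_ratio = (adj / (1 - adj)) / (BENCHMARK / (1 - BENCHMARK))
    print(f"recording bias {r:.0f}x: implied black share of stops = {adj:.2f}, "
          f"odds vs. benchmark = {odds_ratio:.1f}x")
```

    Even this crude version keeps black drivers overrepresented, relative to the benchmark, across the entire range of recording-bias assumptions; the published analysis, which models the missing records directly, supported the stronger conclusion described above.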

    Although a clever statistician might have arrived at a similarly convincing answer with conventional techniques, says Kadane, it would have taken more work. Also, the logic of a Bayesian answer—which gives the probability that the true value lies within a certain range of possible answers—is “more accessible to the court” than hard-to-follow frequentist answers, Kadane and Terrin say.

    Bayesian clarity was also critical to undermining a doping accusation against U.S. long-distance runner and former record holder Mary Decker Slaney. Slaney was barred from the sport after failing a drug test for performance-enhancing steroids at the 1996 Olympic games. But the test, which measures the ratio of the steroids testosterone to epitestosterone (T/E) in urine, is statistically impossible to interpret as evidence of guilt or innocence, says statistician Don Berry of the University of Texas M. D. Anderson Cancer Center in Houston, who helped Slaney's defense team win a reversal from U.S. track officials.

    Berry used a Bayesian approach to show that prior knowledge is essential to conclude anything from the test. For one, he says, testers need to know “the probability that you'd get a T/E greater than 6 [the prohibited level] if she used a banned substance, and the same probability if she didn't use it.” Doctors who rely on tests to diagnose everything from AIDS to paternity face similar problems, he notes, as do prosecutors trying to pin convictions on other kinds of drug tests. In each case, other statisticians say, the Bayesian framework exposes the range of prior information needed to interpret test results properly.
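
    Berry's point can be made concrete with Bayes theorem and some deliberately invented numbers; the real test characteristics are exactly the quantities he says are unknown. Even a test that rarely exceeds the threshold in clean athletes yields only a modest probability of doping when the prior probability of doping is low.

```python
def prob_doping(prior, p_exceed_if_doping, p_exceed_if_clean):
    """P(doping | T/E > 6) via Bayes theorem. All three inputs are the
    quantities Berry says must be known; the values used below are
    invented for illustration, not properties of the real test."""
    joint_doping = prior * p_exceed_if_doping
    joint_clean = (1 - prior) * p_exceed_if_clean
    return joint_doping / (joint_doping + joint_clean)

for prior in (0.01, 0.10, 0.50):
    post = prob_doping(prior, p_exceed_if_doping=0.80, p_exceed_if_clean=0.01)
    print(f"prior P(doping) = {prior:.2f} -> P(doping | T/E > 6) = {post:.2f}")
```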

    Whatever the final outcome of Slaney's case—an international federation has so far upheld the ban, and her lawyers are still suing to clear her name—Berry's analysis and other criticism have severely damaged the credibility of the T/E test. It's “bad science, bad policy,” physician Gary Wadler of the New York University School of Medicine last month told a congressional committee that is pushing for changes in drug testing among athletes. International Olympic officials still disagree, but the Reverend Bayes may force them to make amends.

    • *“Missing Data in the Forensic Context,” Journal of the Royal Statistical Society 160(2), 351 (1997).


    An Improbable Statistician

    1. David Malakoff

    In 1969, statisticians around the world sent funds to restore the deteriorating London tomb of Thomas Bayes, “in recognition of [his] important work in probability.” But beyond his theorem, which opens a place for preconceptions and prior knowledge in statistical analysis, historians know very little about this Presbyterian minister and how he became interested in his “problem in the doctrine of chances.”

    Born into an affluent family in 1701 or 1702, Bayes followed his father into the pulpit and became a minister in the Nonconformist sect, which had broken from the Church of England. Barred from Oxford or Cambridge because of his religious affiliation, he studied logic and theology for a few years at the University of Edinburgh in Scotland, says David Bellhouse, a Canadian statistician and Bayes historian at the University of Western Ontario. He retired from his ministerial duties in 1752 and died in 1761.

    Bayes's notebooks—recently found at an insurance company he helped fund—suggest that he “was obviously well read and kept up on the mathematical literature,” Bellhouse says. Indeed, a 1736 book defending Isaac Newton's ideas may have been what won him election to the Royal Society in 1742.

    Bayes's two scholarly papers, however, weren't published until after his death and attracted little interest at the time. Indeed, one historian of the Royal Society later dismissed the probability paper, saying “the solution is much too long and intricate to be of much practical utility.” His public profile was so low, in fact, that a widely circulated portrait of the great man is in all likelihood not Bayes at all. “The clothes and hairstyle are all wrong,” Bellhouse says.

    But more than 2 centuries later, his ideas live on. And what would Bayes have thought about the extended debate over his methodology? “We would like to think he is a subjectivist fellow-traveler,” wrote statisticians José Bernardo and Adrian Smith in 1994. Luckily, they note, “he is in no position to complain at the liberties we take with his name.”


    A Smoking Gun for an Ancient Methane Discharge

    1. Richard A. Kerr

    Off Florida, oceanographers have drilled into what appears to be the remains of a globe-altering event 55 million years ago

    It's not often that researchers probing pivotal events in the ancient history of life can put their finger on causes. One striking exception is the meteor that hit Earth 65 million years ago, doing in the dinosaurs. Now paleoceanographers are closing in on a possible cause for another evolutionary watershed 55 million years ago. This watershed brought modern mammals—the ancestors of horses, cows, deer, apes, and us—into global dominance. And its trigger may have been a vast belch of climate-changing methane from under the sea floor.

    Proposed less than 5 years ago, the “methane burp” hypothesis gets its most direct support yet on page 1531 of this issue of Science. Paleoceanographer Miriam Katz of Rutgers University in Piscataway, New Jersey, and her colleagues report the discovery of a sequence of sediment layers buried half a kilometer below the sea floor off Florida, which records the exact sequence of deep-sea changes—including vast submarine landslides—predicted by the scenario. “It's real good, consistent evidence” for the hypothesis, says paleoceanographer Timothy Bralower of the University of North Carolina, Chapel Hill. “It's not 100% proof, but it's compelling.”

    If not for the evolutionary turning point the methane burp might explain, we might be eating egg-laying mammals for Thanksgiving—if we had evolved at all. Ten million years after the demise of the dinosaurs, mammals in the fossil record were still “archaic,” a mix of odd, unfamiliar animals that have no direct descendants today. But 55 million years ago, an array of modern forms burst on the scene in North America. In recent years, researchers have tied this transition to an extraordinary surge of global warming recorded in fossils and sediments. The warming would have allowed mammals that had already evolved into modern forms in a presumed Asian enclave to march across otherwise frigid polar lands and usurp North American archaics (Science, 18 September 1992, p. 1622).

    Several researchers proposed that the methane from the sea floor could have driven the warming (Science, 28 February 1997, p. 1267), and paleoceanographer Gerald Dickens of James Cook University in Townsville, Australia, used clues in sea-floor sediments to develop a detailed scenario for what happened. His hypothesis assumes that 55 million years ago, as today, something like 15 trillion tons of methane hydrate—a combination of ice and methane—had formed beneath the sea floor, where microbes digesting organic matter release methane. Then, for whatever reason, ocean bottom waters warmed enough to decompose a small fraction of the methane hydrate into water and methane gas—roughly a trillion tons of it. This methane disrupted the sediment, triggering landslides down the continental slope that let the methane burst into the ocean. Upon its release, theorized Dickens, the methane probably oxidized to form carbon dioxide, which eventually reached the atmosphere, driving greenhouse warming.

    Katz and her colleagues found signs of most of what Dickens called for, 512 meters down a sediment core that the Ocean Drilling Program (ODP) retrieved 350 kilometers off the northeast Florida coast. The deep-ocean warming killed off bottom-dwelling microscopic animals called foraminifera, they saw. Then the isotopically distinctive carbon of the methane left its mark in carbonate sediments. Finally, carbonic acid from the dissolved carbon dioxide partially dissolved some of the sediment.

    Other researchers had made similar findings in various cores. But Katz also turned up a never-before-seen 20-centimeter-thick layer of soft mud containing chunks, or clasts, of the same mud. That's just the sort of debris deposit you'd expect to see downslope of where methane burst from the bottom, she notes. Indeed, the mud-clast layer was laid down just where debris from a hydrate-triggered slide should be—in the layer that settled while the foraminifera were going extinct and just before the isotopic shift that recorded the surge of methane. In addition, seismic probing beneath the sea floor reveals a “chaotic zone” of disturbed sediment 15 kilometers upslope from the drill site. This zone, like the mud-clast layer, just predates the isotopic signal of methane in the sediment, says Katz, making it the likely source.

    “You can't do much better than that” at getting the predicted events in the predicted order, says paleoceanographer James Zachos of the University of California, Santa Cruz. “This is all consistent with Dickens's original proposal.” Paleoceanographer Ellen Thomas of Wesleyan University in Middletown, Connecticut, notes that “this is the first place we have evidence [that methane] came out anywhere; it's a really neat development. But it's not completely solved.”

    Charles Paull of the Monterey Bay Aquarium Research Institute in Moss Landing, California, agrees. A specialist in methane hydrates, he finds the sea-floor methane hypothesis “absolutely fascinating, but I wonder about whether or not it's correct.” Among his doubts: whether the hydrate deposits 55 million years ago contained enough methane to drive the climate shift and whether a deepwater warming could have penetrated far enough into the sediments to decompose the deep hydrates that could cause landslides.

    Still, things do seem to keep falling into place for the methane burp. In the 21 October issue of Nature, for example, Richard Norris and Ursula Röhl of the University of Bremen in Germany reported that the same core Katz studied contains a time marker in the form of rapid, periodic variations in iron content. Because the iron cycles are about 21,000 years long, equal to the time for one wobble of Earth's axis to alter climate and thus sediment composition, Norris and Röhl assume that the iron cycles provide a reliable clock for geologic time. Applying this clock to the isotopic change recorded in the sediments, they find that two-thirds of the trillion tons of methane was released in a few thousand years or less, about as fast as humans are releasing carbon dioxide by burning fossil fuel. No one has figured out how that much carbon could have found its way into the environment so quickly 55 million years before cars and power plants, except by destabilizing methane hydrates. “Things just sort of line up,” says Norris. “The other hypotheses just don't explain the data as well.”


    Were Spaniards Among the First Americans?

    1. Constance Holden

    A bold proposal, based on similarities in stone tools, revives the discredited notion that ancient Europeans crossed the Atlantic to settle the New World

    Back in the ’30s and ’40s, archaeologist Frank Hibben argued that stone points he had found in Sandia Cave in New Mexico looked a lot like those made by people in northern Spain, called Solutreans, some 20,000 years ago. Hibben suggested that people from Europe might have been among the first Americans.

    But most researchers rejected the idea, because there seemed to be no easy routes from Spain to New Mexico—and because the Solutreans thrived thousands of years before the Clovis people, believed to have been the first to settle in North America some 11,500 years ago. Instead, almost all anthropologists became convinced that the first Americans arrived via the Bering land bridge from northern Asia. Nonetheless, at a meeting called “Clovis and Beyond,” held last month in Santa Fe, New Mexico, Hibben's heretical idea was formally revived in a presentation by archaeologist Dennis Stanford of the Smithsonian Institution in Washington, D.C. Some archaeologists dismiss the notion—“preposterous,” says Lawrence Straus of the University of New Mexico, Albuquerque—but others are intrigued. “I really do think Dennis deserves a hearing,” says anthropologist Leslie Freeman of the University of Chicago, who specializes in the prehistory of northern Spain. “I don't understand these convergences, and I think they deserve much further investigation.”

    Anthropologists once thought that the Americas were settled when ancestors of the Clovis hunters chased mammoths across the Bering land bridge, down an “ice-free corridor” that opened about 12,000 years ago in what is now Canada, and into North America; eventually their descendants reached South America. But a controversial Chilean site called Monte Verde, dated at more than 12,500 years ago—even older than the first Clovis sites in North America—has caused many archaeologists to reassess when, how, how often, and from where people migrated from the Old World to the New. “The door's been kicked fairly wide open to anything,” says Tom Dillehay of the University of Kentucky, Lexington, lead investigator at Monte Verde.

    Some researchers have suggested that the first Americans boated along the Pacific coast from Asia northward, then south all the way to South America; such a marine route opens up the timing, as it does not require an ice-free corridor, and it fits with evidence for 10,000-plus-year-old sites near the Pacific coast of South America (Science, 18 September 1998, pp. 1775, 1830, 1833). Now Stanford proposes that early maritime visitors might have come from the east as well, in the person of the Solutreans, who lived in southern France, Spain, and Portugal from about 23,000 to 18,000 years ago during the height of European glaciation.

    Stanford came to this notion after spending most of his career looking for antecedents of the Clovis people in Alaska and Siberia, along the supposed migration route—and not finding them. “I never saw anything technologically related to Clovis,” he says. He and his colleague Bruce Bradley, an independent archaeologist in Cortez, Colorado, say that Asian, Siberian, and Alaskan tool assemblages feature thick-bodied points very different from the slender Clovis blades found in the lower 48 states. Europe, he says, is where the similar points turn up.

    His case—presented at the Santa Fe meeting under the gleeful slogan of “Iberia, not Siberia”—is based on what he believes is a wealth of technological parallels. Both the Solutreans and the Clovis hunters used bifacial pressure-flaked technology—a technique for shaping two-sided stone points by banging on their edges with a tool such as an antler. And Clovis and Solutrean hunters were the only ones to work a stone point by flaking a single large piece from the entire width of the blade, called outre passé, or overshot flaking. This kind of flake is created by first shaping a bump on the blade and then knocking it off, and in other cultures it is “a mistake,” says Stanford. Other parallels include “spurred” end scrapers (hide-scraping stones with sharp little points rather than rounded edges), identical-looking bone spear foreshafts (a piece connecting the shaft to the point's socket), and limestone tablets with geometric or animal-shaped designs scratched into the surface. Both peoples also left caches of partly worked stone points, buried in red ochre, stored around the countryside.

    Stanford admits that there are a couple of rather large apparent obstacles to the theory, namely the Atlantic Ocean and the at least 5000-year gap between the last Solutreans and the Clovis culture. But he argues that the Solutreans were being pushed toward the coast for survival as glaciers covered most of Europe. Long hunting trips along the southern fringe of the ice could have taken them farther and farther west—all the way to what is now New York, says Stanford. The seas would have been cold and thus relatively calm, so the trip could have been rapid—perhaps only 2 weeks, he says—and seabirds, fish, and marine mammals would have been plentiful on the edge of the ice.

    As for the timing, Stanford points out several eastern Clovis sites that, although controversial, may be as ancient as the Solutreans. At Meadowcroft Rock Shelter in Pennsylvania, for example, deposits dated at 16,000 years ago contain unfluted projectile points that Stanford says “could represent a transitional technology between Solutrean and Clovis.”

    Even if the Solutreans crossed the Atlantic, Stanford agrees that Asians also must have discovered America, as DNA data show that today's Native Americans have Asian roots. But he notes that some Native Americans carry a mitochondrial DNA marker known as haplotype X, which has been traced back to Europe (Science, 24 April 1998, p. 520).

    Reactions to the Solutrean solution often seem to depend on researchers' openness to alternatives to the Bering land bridge scenario. “I love it,” says biological anthropologist Richard Jantz of the University of Tennessee, Knoxville, who has noted a few skeletal similarities between paleo-Indians and Europeans. Dillehay thinks the Solutrean link is at least as plausible as the idea of skirting ice sheets in boats along the Pacific coast to America. So does archaeologist Reid Ferring of the University of North Texas in Denton. “If we just pretended there were no oceans, we would immediately be drawn to Western Europe” as a likely point of origin for the Clovis people, he says.

    But other anthropologists, including those skeptical of pre-Clovis ideas, tend to be wary of the Solutreans. The time gap alone makes it implausible, says archaeologist Stuart Fiedel of John Milner Associates in Arlington, Virginia, who authored a scathing critique of the Monte Verde work (Science, 22 October, p. 657) and also doubts the Meadowcroft dates: “It's an old idea that has been rejected at least twice.” And some experts in European archaeology completely reject the idea. New Mexico's Straus says the Solutreans had a “vast diversity” of tools, most of which “don't bear any resemblance to Clovis.” And they left “no evidence” of boating or deep-sea fishing, much less the ability to take 5000-kilometer cruises. He considers the similar tools a simple case of technological convergence.

    Stanford admits that the Solutreans created many kinds of artifacts that aren't found in Clovis sites, but “there's very little in Clovis that is not found in Solutrean,” he insists.

    Despite the strong critiques, other researchers say that the lively interest in the idea is a sign of changing times, as anthropologists begin to consider alternative routes to the Americas. “If [Stanford] had gotten up 10 years ago and started talking about wandering Spaniards,” says James Adovasio of Mercyhurst Archaeological Institute in Erie, Pennsylvania, lead excavator at Meadowcroft, “he would have been laughed out of the room.”


    Australasian Roots Proposed For 'Luzia'

    1. Constance Holden

    While archaeologists ponder the startling suggestion that ancient Americans might have arrived from Europe (see main text), Brazilian anthropologist Walter Neves offers another surprising proposal: that some early colonizers came from the same stock as did the aboriginals of Australia.

    Neves, of the University of São Paulo, has been analyzing about 30 skulls dug up over the past century in central and southeastern Brazil; charcoal in the associated sediments has been dated to 9000 or more years ago. He compared features such as brow ridges and the size of nasal apertures with those of existing populations, and concluded that the ancient Brazilians look more like Australians and Melanesians than like people in northeast Asia, which is traditionally considered the original homeland of the first Americans. The oldest skull in Neves's sample has been reconstructed as the African-looking “Luzia”—named after the African australopithecine “Lucy.”

    Anthropologists agree that one group of Africans reached Melanesia and had boated to Australia by 50,000 years ago; now Neves suggests that some of these Australasians were among the early South American settlers, probably boating north and then down the west coast of the Americas. But many archaeologists are skeptical. Archaeologist Tom Dillehay of the University of Kentucky, Lexington, an expert in early South American sites, warns that such thinking is “very preliminary.” The archaeological evidence associated with the African-looking skulls, he says, “is no different from what you see at sites with nonanomalous skeletons.” University of Chicago anthropologist Leslie Freeman adds that because variations within racial groups are so great, it is “impossible” to identify an individual's roots based on sparse skeletal evidence.


    Rockefeller to End Network After 15 Years of Success

    1. Dennis Normile

    The Rockefeller Foundation is closing the books on an ambitious $100 million effort to develop and disseminate new molecular tools to improve rice

    Phuket, Thailand—Nobody threw rice. After all, it was the end, not the beginning, of a relationship. But tossing a few grains would not have been out of place as rice scientists from around the world gathered*—perhaps for the last time—to celebrate a 15-year effort by The Rockefeller Foundation linking the revolution in molecular biology to Asia's most important food crop and to mark the next phase of the venture. Most of the nearly 400 who met here recently were trained with Rockefeller money. But now they're on their own: The foundation has decided to shift its agricultural resources to problems facing subsistence farmers, with an emphasis on traits rather than specific crops and a focus on sub-Saharan Africa.

    The International Program on Rice Biotechnology has disbursed $100 million since 1984 to foster cutting-edge genetics research aimed at helping rice farmers in the developing world. Its legacy: a community of rice researchers that has created more prolific, robust, and nutritious strains. “The Rockefeller Foundation rice biotech program is an outstanding example of a well-planned and -executed funding program,” says Gurdev Khush, principal plant breeder at the International Rice Research Institute (IRRI) in Los Baños, the Philippines. “Other funding agencies could learn a lesson from [its] success.”

    Rockefeller already had a track record in agriculture before it began the rice initiative, beginning with work in China in the 1930s and extending to IRRI's creation in 1960. But in the early 1980s, foundation officials began to worry that the increased yields and improved nutrition promised by the genetic engineering of plants were not going to be applied to rice. “At that time, there was essentially no research being conducted in rice molecular biology outside of Japan,” says Gary Toenniessen, deputy director of the foundation's Agricultural Science Division. The foundation hoped to reverse that pattern by enticing leading plant science laboratories in advanced countries to work with rice, while building up the capacity of developing countries to carry out biotechnology research and integrate those efforts into national rice-breeding programs.

    The two-pronged strategy was spectacularly successful in attracting the interest of topflight scientists. The foundation lured 46 labs in the industrialized world into the program, and by 1987 it was spending nearly $5 million a year on these efforts (see graph). Investment in developing countries rose almost as fast. Rockefeller began sending promising Asian scientists to the labs it was supporting in the industrial countries, as well as putting money selectively into Asian facilities that could support their work once they returned home. From almost nothing in 1984, the foundation's investment in both training and capacity-building grew to nearly $4 million a year by 1989; it has remained at roughly that level ever since.

    Robert Herdt, Rockefeller's director of agricultural sciences, ticks off an impressive list of successes, including rice lines that are tolerant of high-aluminum soils and the identification and transfer of a gene associated with resistance to bacterial blight. The program's greatest achievement, however, has been support for work that incorporated the synthesis pathway for β-carotene, the precursor to vitamin A, into rice. Herdt says an estimated 400 million people dependent on rice suffer vitamin A deficiency, with its associated vision impairment and disease susceptibility.

    No one had ever transferred several genes into rice at the same time. The feat was finally accomplished this year by Ingo Potrykus, a plant molecular biologist at the Institute for Plant Sciences of the Swiss Federal Institute of Technology in Zurich, and his collaborator Peter Beyer of the Center for Applied Biosciences at the University of Freiburg in Germany (Science, 13 August, p. 994). “It's a tour de force,” says James Peacock, a plant scientist from Black Mountain Laboratories in Canberra, Australia, and vice chair of The Rockefeller Foundation's Scientific Advisory Committee. “It has huge potential to reduce one of the [world's] most prevalent and serious nutritional deficiencies.” Potrykus says the continued support from the rice biotech program allowed him to persist despite deep skepticism from colleagues and that its demise will make it harder to pursue such high-risk research: “Most funding agencies are very conservative.”

    The second leg of Rockefeller's support for rice biotechnology focused on graduate students and postdoctoral fellows, as well as training courses for scientists to learn such skills as transferring genes for specific traits. “The training is very, very important,” says Qifa Zhang, a plant molecular biologist at Huazhong Agricultural University in Wuhan, China. Zhang notes, for example, that virtually all the techniques his team is using to understand what accounts for the increased vigor of hybrid rice plants “were developed within the framework of the [Rockefeller] program.”

    Over 400 researchers from developing countries have received some sort of training under the program, and at some institutes they make up a critical mass of talent. Of eight senior researchers on the biotech team at the Philippine Rice Research Institute in Maligaya, for example, five earned their doctorates with Rockefeller grants and one received a postdoctoral career development grant for collaborative research with an advanced lab. “Without the support of The Rockefeller Foundation, it would have been almost impossible for us to build this capability,” says Leocadio Sebastian, the institute's deputy director for research.

    As the skills of their scientists improve, developing countries have received an increasing share of program funding. Herdt says that the proportion going to advanced countries has fallen from 70% in the early years to 10% currently. International centers and labs in developing countries now get 60% of the funds, with the rest going for education.

    Along with that increased funding came increased expectations, which Asian scientists appear largely to have met. Herdt says today's proposals have to satisfy the same standards as those from the United States and Europe. The results were obvious at Phuket, Peacock says: “This meeting has been as good as any scientific meeting we go to.”

    And success is not confined to the laboratory. Herdt says that Chinese farmers are already using a disease-resistant rice variety produced using tissue culture, and other varieties genetically engineered for greater disease and pest resistance are being tested elsewhere. Several countries have improved local varieties by augmenting traditional breeding efforts with such biotech tools as molecular markers, which function as tags for traits and allow researchers to determine quickly if a target trait has been picked up by a crossbred plant. On a broader scale, he notes that China, India, and Korea have integrated biotechnology into their national rice research programs, and that the Philippines, Thailand, and Vietnam are moving in that direction. At the same time, Toenniessen admits that countries like Bangladesh have been unable to profit fully from the program because of the limited scope of their own research efforts.

    The next step for Rockefeller, says Herdt, is “to focus on some of the most difficult traits and problems that face agriculture in the developing world.” That change, part of an overhaul of the foundation's portfolio triggered by the arrival last year of agricultural ecologist Gordon Conway as president, puts a premium on “getting those [new] varieties down into farmers' fields,” Herdt says. Instead of concentrating on one crop, the program will support work on traits, such as drought tolerance, that could be engineered into a variety of plants. A final component, he says, will be “doing something about building capacity in sub-Saharan Africa.”

    Educational programs will continue in some form, and much of the work now going on under the rice program may be eligible for support under the new guidelines. Strategic work on rice as a crop, however, will be phased out. Support for the biennial meetings might end, too.

    That's an unhappy prospect for many of the rice researchers. IRRI's Khush says the linkages among rice researchers nurtured by the biennial meeting are “one of the payoffs of the program.” But Rockefeller believes that it's time for the rice biotech community to stand on its own. Herdt hinted that Rockefeller might be willing to bankroll a future meeting, but he was adamant that the planning be done by others. Sounding like a proud but stern parent, Herdt left participants with this message: “If you want to have a meeting, organize yourselves.”

    • *General Meeting of the International Program on Rice Biotechnology, 20 to 24 September, Phuket, Thailand.

    AIDS

    Can Immune Systems Be Trained to Fight HIV?

    1. Michael Balter

    Current antiviral therapies don't seem able to eliminate HIV from the body, so researchers are trying to coax the immune system to finish the job

    Marnes-la-Coquette, France—Every HIV-infected person struggling to keep up with a daily regimen of antiviral drugs must have the same recurrent dream: One day, he or she will toss away the plastic box crammed with pills that once made the difference between life and death and walk away, free from HIV and free from the specter of AIDS. Last month, at the Cent Gardes international AIDS meeting in this town outside Paris,* several researchers described bold new experiments that explore what it might take to make this dream a reality.

    Among them are studies of patients who had been taken off drug therapy for short periods of time to see how effectively their immune systems could control HIV. Although for most of these subjects the answer was not very well, a minority—particularly those who began receiving treatment very early in their infections—did show signs that their immune systems could keep the virus at bay. But more important, these experiments are providing valuable new information about what kinds of immune responses are needed to control HIV. And other work is pointing to ways of boosting the immune system—for example, with a postinfection or “therapeutic” anti-HIV vaccine—to mount a stronger attack against the virus and perhaps one day eliminate the need for daily drugs.

    For some AIDS researchers, these results represent the most encouraging news they have heard for a long time. “There are a lot of reasons to be excited” by the new work, says Ronald Desrosiers, an AIDS vaccine researcher at the New England Regional Primate Research Center in Southborough, Massachusetts. But others, recalling how many times hopes have been raised and then dashed during the nearly 2 decades of the AIDS epidemic, are urging caution. “No one has presented compelling evidence that the immune system will be successful by itself in the absence of therapy,” says Norman Letvin, an immunologist at Harvard Medical School in Boston, although he adds that AIDS researchers are now doing “the right experiments” to determine if such approaches could actually work.

    Until quite recently, many researchers had hoped that drugs alone might eliminate the virus. The powerful new anti-HIV therapies that came on the market in the mid-1990s seemed capable of suppressing the virus to undetectable levels in many patients who had been facing a likely death sentence from AIDS. Before long, the magic word “eradication” began appearing on the lips of patients and AIDS researchers alike. Alas, such hopes soon proved premature. Evidence quickly mounted that although the new drugs could drive HIV from the bloodstream, the virus continued to lurk in hidden “reservoirs” made up of so-called T helper lymphocytes, its primary target in the immune system (Science, 14 November 1997, pp. 1227, 1291, 1295). Moreover, calculations of how long it would take these latently infected cells to die out were disheartening: One 1999 paper in Nature Medicine concluded that it could take as long as 60 years for the reservoir cells to perish.

    Thus more and more AIDS researchers are now convinced that so-called highly active antiretroviral therapy, or HAART, is not powerful enough to entirely eradicate the virus from infected patients. So just what would it take to do the job? Some scientists have advocated the development of more powerful antiviral drugs, while others have been looking for ways to “flush out” HIV from its reservoir hiding places (Science, 20 March 1998, p. 1854). At the Cent Gardes meeting, a number of researchers reported new data that might support another frequently discussed alternative: using the patient's own immune system to control the virus, either unaided or with the help of a therapeutic vaccine that would be administered to people already infected with HIV.

    The most encouraging news was reported by immunologist Bruce Walker of Massachusetts General Hospital in Boston. Over the past few years Walker's team has been working with a cohort of patients who began treatment shortly after being infected with HIV, often before they developed antibodies to the virus. Patients who begin treatment later in their infections generally lose the ability to mount effective anti-HIV immune responses, which appear to be mediated by both T helpers and another type of white blood cell called a cytotoxic T lymphocyte (CTL) (Science, 21 November 1997, pp. 1399 and 1447).

    But Walker saw a different picture in seven patients who had been receiving HAART for 1 year and whose therapy was then deliberately stopped and started again in what are called “structured interruptions.” Most of these subjects had little or no anti-HIV CTL activity while on therapy, presumably because the virus had been controlled so quickly after infection that they had not mounted a strong immune response. As expected, HIV quickly reappeared in their bloodstreams, sometimes at very high levels, when they were taken off HAART. But for the first time, the patients also showed strong CTL responses, which persisted even after therapy was started up again.

    Most dramatically, in some patients these CTLs seemed able to control the virus. For example, in one subject whose therapy was stopped twice, the HIV load rocketed to about 1 million copies of the virus per milliliter of blood after the first interruption, a level typical of many untreated patients. But after the second interruption, the viral load plateaued at only 6000 copies and was still at that level after 4 months without HAART. “The immune responses to HIV [in these patients] can clearly be manipulated by exposure to their own virus,” Walker told the meeting. “Bruce is almost doing a vaccine study,” says Jonathan Heeney, an AIDS vaccine researcher at the Biomedical Primate Research Center in Rijswijk, the Netherlands. “First he takes HIV away [with HAART] and then boosts the immune system with the virus, and then he does it again.”

    But Walker and others aren't sure how relevant his results are to HIV-positive patients who did not start treatment at a very early stage in their infections—so-called chronically infected patients. “These experiments involve quite a unique group of patients,” says Giuseppe Pantaleo, an immunologist at the Vaudois Hospital Center in Lausanne, Switzerland. He points out that chronically infected patients also produce anti-HIV CTLs, but that these are usually not effective in controlling the virus. The reason, Walker thinks, is that whereas patients treated early retain T helpers that recognize HIV and work with CTLs to attack the virus, chronically infected people have lost these crucial cells.

    Indeed, Brigitte Autran, an immunologist at the Pitié-Salpêtrière Hospital in Paris, and her colleagues had shown that fewer than 20% of chronically infected patients treated with HAART demonstrate anti-HIV T helper responses. That may explain the lackluster results she saw in chronically infected patients briefly taken off therapy. At the Cent Gardes meeting, Autran reported preliminary results from new experiments with 10 patients whose therapy was stopped and then restarted when viral load rose, a process that was repeated up to four times in some subjects. Most of these patients showed an increase in CTL activity with the rise in viral levels, but they did not mount a vigorous immune response or control the infection, as Walker's patients had. Still, HIV-targeted T helpers did appear during treatment interruption, although they disappeared again before the virus was brought back under control with HAART.

    Autran's conclusion was hopeful, however. To her, the brief appearance of T helpers implies that restoration of these critical anti-HIV cells might be “feasible” if the immune system could be properly stimulated to produce them. The stimulus might be a therapeutic vaccine given before trying to take patients off therapy. But researchers say developing an effective therapeutic vaccine will not be easy. Didier Trono, a molecular virologist at the University of Geneva, questions whether a therapeutic vaccine would be able to stimulate immune responses that were more effective than what the body was already producing. “The virus is providing the body with all the antigens you can think of; why doesn't that work?”

    But some encouraging signs are coming in from trials of the only therapeutic vaccine currently undergoing extensive testing: Remune, which is marketed by the U.S.-based companies Immune Response Corp. and Agouron Pharmaceuticals. Remune is based on a vaccine developed by the late Jonas Salk and consists of a whole HIV particle, inactivated so that it is no longer infectious. So far, large-scale Remune trials taking place in the United States and Europe have not produced conclusive results about its effectiveness. But at the Cent Gardes meeting, immunologist Fred Valentine of the New York University Medical Center in New York City presented data from an ongoing Remune trial with several dozen subjects who had just begun HAART. Half of these patients were inoculated with Remune 4, 12, and 24 weeks after starting HAART, while the other group received placebo injections. T lymphocytes were then taken from these patients and tested for their ability to mount an immune response to whole HIV as well as a variety of HIV proteins.

    Although the placebo group's T cells failed this test, the Remune-vaccinated group's T cells reacted strongly against HIV. “Fred is showing some pretty impressive responses,” says Desrosiers. But Pantaleo, who has been participating in a large European trial of Remune, says that although the vaccine can clearly induce anti-HIV responses, “there is no evidence that Remune can slow down the progress of the disease. Perhaps it can play a role, but it is not the solution.”

    Other investigators working with Remune are more optimistic. Frances Gotch—an immunologist at the Chelsea and Westminster Hospital in London who is co-leader of a 60-patient Remune trial at the hospital—told Science that preliminary results from her study show that subjects receiving the vaccine have developed anti-HIV T helper responses as strong as or stronger than those in so-called long-term nonprogressors, a small minority of patients who are able to control their virus without drugs. Moreover, Gotch says, the number of latently infected reservoir cells in Remune-treated patients appears to be dropping at a faster rate than in nontreated patients.

    Gotch is now talking with some of the participants in this trial to see if they would be willing to interrupt their drug therapy to find out if Remune vaccination can help control HIV without HAART—a study that is the logical next step in the view of many AIDS researchers. But this kind of trial, like all treatment interruption experiments, is fraught with medical and ethical risks. Autran, for instance, notes that a rise in viral load when treatment is stopped could allow HIV to infect and destroy the very T helpers the immune system had sent to fight it. And Anthony Fauci, director of the U.S. National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, warns, “If at each interruption of therapy more [latently infected] reservoirs are replenished, this could be counterproductive.”

    Letvin agrees that the jury is still out on whether the immune system could ultimately be induced to control the virus: “It is a seductive idea, but just because it is intuitively comfortable does not necessarily mean it will be borne out in fact.” On the other hand, Letvin adds, “I hope to God it works.”

    • *12th Colloquium of the Cent Gardes, Marnes-la-Coquette, France, 25 to 27 October.


    Cages for Light Go From Concept to Reality

    1. Dennis Normile

    Once a theorist's dream, intricate latticelike structures can serve as the optical equivalent of semiconductors, trapping and channeling light

    In 1987, when physicist Eli Yablonovitch published what is now recognized as the seminal paper that launched research into photonic crystals, he predicted it would be a blockbuster. The paper, in Physical Review Letters, described how specially designed materials, intricately structured to resemble microscopic honeycombs, could manipulate light—trapping, guiding, and filtering it much as semiconductors manipulate electrons. Such materials could be the key to ultrafast optical computing and communications. Yablonovitch, a professor at the University of California, Los Angeles, recalls that the day before publication, “I told my wife that people were going to read my article and say, ‘Oh, that's so obvious; why didn't I think of it!’” But in fact, the paper was hardly noticed. “There were no citations to speak of in the first couple of years,” he says. “I guess it wasn't obvious to anybody.”

    Work on photonic crystals may have been slow getting started, but the field has been making up for lost time. Yablonovitch says that since about 1993, by one rough count, the number of papers using the phrase “photonic bandgap” has been growing at the rate of 70% each year. At first the work was mostly theory and simulation. “If you went to the conferences 6 years ago, people were saying, ‘Yeah, that's a great theory, but how are we going to actually make these devices?’” says John Pendry, a materials science theorist at Imperial College, London. Since then, experimentalists with the right know-how—expertise in thin films, silicon, and nanofabrication—have taken up the challenge.

    The result, says John Joannopoulos, a theorist at the Massachusetts Institute of Technology (MIT), has been a recent series of firsts—the first three-dimensional (3D) photonic crystal that operates at optical wavelengths, the first waveguide that steers light waves around sharp corners, and the first photonic crystal laser. “All of these are breakthroughs that pave the way to be able to finally start putting things together and making real devices,” Joannopoulos says. The end result, he adds, could be integrated photonic devices—the optical counterparts of integrated circuits—that might supplant electronic devices in communications and, eventually, in computers, multiplying their speed and efficiency.

    What Yablonovitch realized back in 1987 was that photons in properly constructed photonic crystals should obey many of the same principles as do electrons in semiconductors. Semiconductors have a crystalline structure that forbids electrons in an energy range known as the electronic bandgap from flowing through the material. Similarly, a lattice of materials with contrasting optical properties can trap light of a particular wavelength if the spacing of the lattice is right—about half the wavelength of the light. Light waves entering the lattice will reflect off the boundaries between materials, like ocean waves lapping against pilings, and cancel out incoming waves. As a result, the light in this optical bandgap won't be able to propagate, except where a defect disrupts the regular lattice, changing its properties the way dopant ions change the properties of a semiconductor.

    With the right architecture and the right pattern of defects, photonic crystals should trap and channel photons much as semiconductor devices do electrons. And photonic crystals could reduce the scale and increase the efficiency of optical circuits by orders of magnitude by supplanting bulky lenses, mirrors, and optical fibers, just as semiconductors proved far tinier and more efficient than the vacuum tubes they replaced.

    The challenge, however, has been in building regular lattices minute enough to snare light, a feat that, for infrared wavelengths, requires a lattice spacing of about 0.5 micrometer—1/100 the thickness of a human hair. Pioneers in the field proved the principle of photonic crystals by devising coarser lattices that could control longer wavelength radiation, such as microwaves, which have wavelengths of a centimeter or so. To work with light, says Pendry, photonic-crystal enthusiasts had to attract the interest of people familiar with “all sorts of really weird [fabrication techniques] you didn't know about because your field was optics.”
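
    As a rough check on those figures, the half-wavelength rule of thumb can be run through directly. The short Python sketch below is purely illustrative; the infrared wavelength and hair thickness are assumed round numbers, not values supplied by the researchers, but they reproduce the 0.5-micrometer spacing and the hair comparison quoted above.

        # Rough check of the lattice-spacing numbers in the text, using the
        # half-wavelength rule of thumb. The wavelength and hair thickness
        # below are illustrative assumptions, not figures from the article.

        infrared_wavelength_um = 1.0   # assumed near-infrared wavelength, micrometers
        hair_thickness_um = 50.0       # assumed human-hair thickness, micrometers

        lattice_spacing_um = infrared_wavelength_um / 2   # "about half the wavelength"

        print(f"required lattice spacing: about {lattice_spacing_um:.1f} micrometers")
        print(f"fraction of a hair's width: about 1/{hair_thickness_um / lattice_spacing_um:.0f}")
        # Roughly 0.5 micrometer, about 1/100 the thickness of a hair.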

    Little Lincoln Logs

    One such person is Shawn-Yu Lin, an applied physicist at Sandia National Laboratories in Albuquerque, New Mexico. A specialist in silicon fabrication techniques, Lin realized about 2 years ago, he says, that “you can utilize the very advanced silicon technology to make photonic crystals.”

    Working in collaboration with Joannopoulos and other theorists at MIT, his group first set out to prove that a photonic crystal could get electromagnetic waves to swerve around a 90-degree corner. They built a square array of 0.5-mm aluminum oxide columns spaced 1.27 mm apart, an arrangement tuned to block millimeter waves, which lie between infrared and microwave radiation. By omitting a series of aluminum oxide columns, they created a channel that turned a 90-degree corner within the array—a defect through which millimeter waves confined by the rest of the lattice should propagate freely. As theory had predicted, the waves made the turn effortlessly, in a distance roughly equal to their wavelength—a feat no conventional optical fiber or waveguide could match (Science, 9 October 1998, p. 274).

    Lin's group is now trying to duplicate the feat at visible and near-infrared wavelengths, which is where his silicon fabrication expertise comes in, because the structures required are 1/100th the size of the millimeter-wave array. What's more, whereas a 2D photonic crystal adequately confined the millimeter waves, Lin believes that it will take 3D crystals to control light.

    To craft this 3D structure, he and his colleagues build it up layer by layer. They first put down a thin film of silicon and then selectively etch it away to leave a row of what look like uniformly spaced squarish silicon logs. In a process called chemical-mechanical polishing, akin to fine-sanding woodwork, they smooth the logs to uniform dimensions. Then they convert a second silicon layer into logs and lay it down at right angles to the first. Stacking up five or six layers creates a honeycomb structure resembling a neat, intersecting stack of uniform Lincoln Logs.

    So far, they have demonstrated that this 3D crystal can exclude some wavelengths of visible light. Now they are working to get the light to turn a corner. Building a lattice structured precisely enough to manipulate such short wavelengths has proven more challenging than the researchers originally expected. “But we are very close,” Lin says. “We are a few months away from being able to bend [visible light].”

    Lin's channels could one day be put to use steering light within integrated optical circuits. But other groups are applying photonic bandgap principles to steering light at larger scales, along optical fibers and waveguides. A group of materials scientists at MIT led by Edwin Thomas, who also works in collaboration with theorist Joannopoulos, last year reported the development of what they call an omnidirectional mirror, a structure that completely reflects light of particular wavelengths, no matter which direction it comes from or how it is polarized (Science, 27 November 1998, p. 1667). Their mirror is a stack of nine alternating micrometer-thick layers of polystyrene and tellurium. Infrared light approaching the surface from any angle sees a photonic bandgap, which excludes wavelengths of 10 to 15 micrometers and causes them to be reflected.
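
    The one-dimensional version of such a bandgap, a stack of alternating high- and low-index layers, is simple enough to compute directly. The sketch below uses a standard transfer-matrix calculation at normal incidence only, so it does not capture the omnidirectional behavior; its refractive indices, thicknesses, and design wavelength are assumed illustrative values rather than parameters of the MIT mirror.

        # Minimal normal-incidence transfer-matrix sketch of a one-dimensional
        # photonic bandgap: a stack of alternating high- and low-index layers.
        # Indices, design wavelength, and layer count are illustrative
        # assumptions (roughly tellurium-like and polystyrene-like), not the
        # parameters of the published design.
        import numpy as np

        n_hi, n_lo = 4.6, 1.6          # assumed refractive indices
        lam0 = 12.0                    # assumed design wavelength, micrometers
        d_hi, d_lo = lam0 / (4 * n_hi), lam0 / (4 * n_lo)   # quarter-wave thicknesses
        layers = [(n_hi, d_hi), (n_lo, d_lo)] * 4 + [(n_hi, d_hi)]   # nine layers
        n_in = n_out = 1.0             # air on both sides

        def reflectance(lam):
            """Normal-incidence reflectance of the stack at wavelength lam (micrometers)."""
            m = np.eye(2, dtype=complex)
            for n, d in layers:
                delta = 2 * np.pi * n * d / lam
                m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
            b, c = m @ np.array([1.0, n_out])
            r = (n_in * b - c) / (n_in * b + c)
            return abs(r) ** 2

        for lam in (6, 8, 10, 12, 14, 16, 20):
            print(f"{lam:>3} micrometers: R = {reflectance(lam):.3f}")
        # A band of near-total reflection appears around the design wavelength;
        # outside it the reflectance oscillates and can fall to zero.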

    Other groups have claimed similar achievements, but Thomas and his colleagues are now looking at putting these mirrors to use in new ways. For example, graduate student Yoel Fink came up with the idea of rolling an omnidirectional mirror into a spaghetti-like flexible tube that would act as a waveguide. “If it doesn't matter what the angle [of the incident light] is, the mirror doesn't have to be flat,” Thomas says. As the group describes in the November issue of the IEEE Journal of Lightwave Technology, the rolled mirror can steer infrared light from a carbon dioxide laser through a 90-degree bend with a 1-cm radius of curvature.

    Thomas says they designed the “omniguide,” as they call it, to handle carbon dioxide laser light because it is used in industry for welding, as well as for laser surgery, and no existing flexible waveguide can contain light of that wavelength at high powers. “On an assembly line, [a worker] is going to pick up this flexible cable and put the energy wherever he wants it,” Thomas says. The group also thinks that, in smaller diameters, omniguides could carry light signals for communication.

    A different way to steer light with photonic crystals comes from a collaboration of the Optoelectronics Group at the University of Bath in the U.K.; the United Kingdom's Defence Evaluation and Research Agency in Malvern; and Corning Inc. in Corning, New York. Instead of rolling up a photonic crystal into a tube, they bundle hundreds of millimeter-thick hollow silica glass tubes together, omitting seven from the center. The stack is then fused and drawn into a hexagonal fiber whose hollow core measures about 15 micrometers across. The arrangement and size of the holes create a 2D photonic bandgap that traps light in the central core. Unlike the MIT waveguide, however, this bandgap only works when the incident light is nearly parallel to the fiber, which means it would not be able to guide light around sharp corners.

    But it should be a champion at carrying high-intensity light. Group leader Philip Russell, a physicist at the University of Bath, explains that when ordinary fibers carry more than a few hundred watts, the light itself alters their optical properties, interfering with the flow of light. “The hollow-core fiber avoids those effects because the light is in the empty region,” he says. “Potentially you might be able to guide incredibly high powers, maybe hundreds of kilowatts.” Russell foresees industrial applications ranging from delivering ultrahigh-power laser light for machining applications, to transmitting ultraviolet light, which tends to get absorbed in conventional optical fibers. He also thinks laser beams channeled through the fibers could push cold atoms through the hollow core, like a stream of water driving a marble. “It would allow you to do nice types of lithography, building structures up atom by atom,” he says.

    Light sources, light switches

    Besides steering light, photonic crystals can help generate it, as a group at the California Institute of Technology in Pasadena led by Axel Scherer has shown. Scherer had been working on vertical cavity lasers, tiny laser devices in which the light resonates between two reflective semiconductor layers, building up to high intensity. Vertical cavity lasers could serve as the light sources in integrated optical circuits, but they “are still relatively large,” Scherer says. “What attracted me to photonic crystals was the prospect of confining light to even smaller volumes than what we could do with a vertical cavity.” A single defect in a photonic crystal could trap and amplify light in a volume just a few wavelengths across.

    Rather than trying to build a 3D photonic crystal, Scherer took a shortcut: creating a photonic bandgap that would prevent photons from traveling along a thin layer of light-emitting material, then relying on other optical effects, such as internal reflection, to trap the photons in the third direction and keep them from escaping. Working with colleagues at the University of Southern California in Los Angeles, Scherer's group created a slab of indium gallium arsenide phosphide. Like an elaborate club sandwich, the slab included four thin horizontal regions having slightly different electronic properties—so-called quantum wells, where photons would be generated. The group also pierced the slab with a pattern of vertical air holes, creating a 2D photonic crystal that would prevent light from traveling in the plane of the slab. But they omitted one hole to create a defect.

    When they pumped energy into the slab with another laser, photons emitted in the quantum wells were trapped at the site of the missing hole, confined horizontally by the 2D photonic crystal and vertically by the reflective air-film interface at the top and bottom of the slab. The result was the world's smallest laser, which trapped and amplified light within a volume of 0.03 cubic micrometer, two orders of magnitude smaller than in vertical cavity lasers. “This allows you to integrate [the structures] in large numbers,” says Scherer.

    Scherer envisions arrays of these light emitters fabricated within a single photonic crystal and interconnected by waveguides that would carry light signals from one cavity to another. Because the light wouldn't leave the photonic crystal, the scheme would avoid the diffraction losses that mount up when light is sent from one device to another, he says.

    Such devices could form the heart of optical routers for communications networks and even serve as logic gates in optical computers—if researchers can develop one final element: a photonic-crystal switch that would control the flow of photons, as a transistor controls the flow of electrons. Scherer envisions two light sources (which could also be lasers) adjacent to one of his lasers. The lasing threshold would be set so that a stream of photons coming from just one of the light sources would not be enough to initiate lasing—the switch would be “off.” But photons coming from both sources would trigger lasing, turning the switch “on” and sending an optical signal to the next device in the circuit. “That gives you the equivalent of a [logic] gate,” Scherer says. “But this is very far away from where we are now.”
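
    The logic Scherer describes can be captured in a toy numerical model: lasing, and with it the output signal, switches on only when the combined input power crosses the cavity's threshold. The sketch below is purely illustrative; the threshold and source powers are invented numbers, not measurements from any device.

        # Toy model of the threshold logic gate described above: lasing starts
        # only when the combined input power crosses a threshold, so one input
        # alone leaves the switch "off" while two together turn it "on".
        # All numbers are illustrative assumptions.

        THRESHOLD = 1.5   # assumed lasing threshold (arbitrary units)
        P_SOURCE = 1.0    # assumed power delivered by each input light source

        def gate_output(input_a: bool, input_b: bool) -> bool:
            """Return True if the cavity lases, i.e. combined pump power reaches threshold."""
            pump = P_SOURCE * int(input_a) + P_SOURCE * int(input_b)
            return pump >= THRESHOLD

        for a in (False, True):
            for b in (False, True):
                print(a, b, "->", "lasing" if gate_output(a, b) else "dark")
        # Behaves like an AND gate: only both inputs together push the cavity past threshold.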

    Lin is also working on switching but is taking a different approach that harnesses both electrical and optical effects. Photonic crystals are designed to block a given wavelength or range of wavelengths. A defect in the crystal allows the forbidden wavelength to pass through. But an electric field can change the defect's electromagnetic properties, shifting the transmitted wavelength. The effect turns the defect into a shutter, opening or closing for a specific wavelength of light in response to an electric field.

    And there's much more to come, as these tiny honeycombs inspire a buzz of experimental activity. Some groups are now coaxing photonic crystals to grow themselves, using polymers that naturally self-organize into complex structures while in solution. Others are making photonic crystals from colloids, natural or engineered particles suspended in a liquid that pack themselves into a regular lattice like marbles in a jar as the liquid is removed.

    Yablonovitch's 1987 paper has proved a blockbuster after all. “It just took a lot longer to catch on than I expected,” he says.


    Holograms Can Store Terabytes, But Where?

    1. Alexander Hellemans*
    1. Alexander Hellemans is a science writer in Naples, Italy.

    Finding the right material to store these optical inscriptions is the key to making this optical data storage technology work

    Five years ago, a group at Stanford University demonstrated a pioneering data storage system based on holograms, patterns written in a material by the play of lasers (Science, 5 August 1994, pp. 736 and 749). To the optimists, the prototype heralded full-fledged systems that might store hundreds of gigabytes of data—the contents of tens of hard drives—in a cubic centimeter of material, while also reading and writing the data almost instantly. New companies sprang up, and IBM, Lucent Technologies, and others stepped into the field. Some optimists predicted that such systems might hit the market within 2 or 3 years.

    After 5 years, the world is still waiting. The promise of holographic storage remains bright: By storing data in a three-dimensional (3D) volume of material rather than writing it on the surface of a disk, the scheme should achieve vastly greater storage densities than current magnetic and optical disk technologies offer. And because it transfers data to and from the storage medium in entire pages rather than bit by bit, the scheme promises readout speeds of up to a gigabit per second. But holographic storage is still a method without an ideal medium. “Materials have always been the Achilles' heel of holographic storage,” says Glenn Sincerbox of the University of Arizona, Tucson. “We have always lacked one or two very important properties.”
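
    The arithmetic behind a figure like a gigabit per second is straightforward: throughput is bits per page times pages per second. In the sketch below, both numbers are assumptions chosen only to show the scale involved, not specifications of any actual system.

        # Illustrative page-based readout arithmetic: holographic systems read
        # whole data pages at once, so throughput = bits per page x pages per
        # second. Both numbers below are assumptions, not system specifications.

        bits_per_page = 1024 * 1024     # assumed one-megabit data page
        pages_per_second = 1000         # assumed page readout rate

        throughput_bits_per_s = bits_per_page * pages_per_second
        print(f"about {throughput_bits_per_s / 1e9:.2f} gigabits per second")
        # Reading page-at-a-time is what makes gigabit-per-second rates plausible
        # even though each page takes a full millisecond to capture.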

    Most parts of the technology have made good progress since that early demonstration, by Stanford's Lambertus Hesselink and his colleagues. In 1995, Stanford, IBM's Almaden Research Center, and several universities and industries formed the Holographic Data Storage System (HDSS) consortium, with a 5-year budget of $32 million, half of it from the participants and half from the U.S. Defense Department's Advanced Research Projects Agency (DARPA). The consortium has developed most of the electronics, optics, and lasers needed for working systems, says Sincerbox: “We have been able to pull together the enabling technologies that you need for a holographic data system to work.” Indeed, several companies may soon introduce read-only holographic archiving systems, which would store vast amounts of data and serve it up in a flash.

    But turning holographic storage into the equivalent of a super-disk drive, able to speedily write as well as retrieve vast amounts of data, will take optical materials with an elusive combination of properties. They will have to record holograms quickly, preserve them faithfully, and erase old data to make room for new. For now, materials that can preserve holograms for long periods are often slow to record them or can't be erased; materials that record data quickly often lose the optical traces over time.

    The dilemma has spurred materials researchers at companies and in a consortium called PRISM, formed by DARPA, Stanford, and several companies in 1994 to develop optical recording media. They are working with existing materials, looking for ways to preserve holograms longer or capture them faster, and they are also searching for completely new materials with ideal combinations of properties. Hans Coufal, who heads IBM's research in holographic data systems and whose lab is testing potential materials, is optimistic. “Several materials look very encouraging,” he says.

    The light touch

    Materials problems may be slowing developments now, but, ironically, it was a materials problem that kicked off the field of holographic storage in the first place. More than 3 decades ago, researchers found that bright light changes the optical properties of lithium niobate, a material they were studying as a possible optical switch because its refractive index changes in response to an electric field. The discovery that light itself has the same effect came as an unpleasant surprise. “When it was discovered at Bell Labs in 1966, people thought of it as a nuisance,” says Stephen Zilker of Bayreuth University in Germany. But he adds: “Only 2 or 3 years later, people realized that you can use it for writing holograms.”

    A hologram is generated when one laser beam—the “reference” beam—intersects a second beam carrying an image or data. The result is an interference pattern, a pattern of dark and bright spots that can be recorded on film or in some other medium. The original data or image can be resurrected later by shining a third laser beam on the hologram.
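
    That recording step is easy to illustrate numerically. In the sketch below, two plane waves stand in for the reference and data beams; the wavelength and beam angles are arbitrary illustrative choices, and the point is only to show the fringe pattern that the recording medium must store.

        # Sketch of hologram formation: the intensity pattern produced when a
        # reference plane wave interferes with a second (object or data) beam.
        # Wavelength and beam angles are arbitrary illustrative choices.
        import numpy as np

        lam = 0.633            # wavelength in micrometers (red laser, assumed)
        k = 2 * np.pi / lam
        theta_ref, theta_obj = np.deg2rad(0.0), np.deg2rad(10.0)   # assumed beam angles

        x = np.linspace(0, 20, 2000)                    # positions across the medium, micrometers
        E_ref = np.exp(1j * k * np.sin(theta_ref) * x)  # reference beam
        E_obj = np.exp(1j * k * np.sin(theta_obj) * x)  # beam carrying the data or image

        intensity = np.abs(E_ref + E_obj) ** 2          # the recorded interference pattern

        fringe_spacing = lam / abs(np.sin(theta_obj) - np.sin(theta_ref))
        print(f"bright and dark fringes repeat every {fringe_spacing:.2f} micrometers")
        print(f"intensity ranges from {intensity.min():.2f} to {intensity.max():.2f}")
        # This pattern of bright and dark fringes is what the medium stores;
        # re-illuminating the stored pattern reconstructs the original beam.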

    The pesky refractive index change—the so-called photorefractive effect—looked like just the thing for recording holograms, because it is long-lasting. In regions where the light is intense, electrons are excited to higher energy levels, enabling them to migrate through the material. The displaced electrons generate local electric fields that distort the crystal lattice, in effect creating a pattern of minute optical flaws in the material.

    Lithium niobate went on to become the mainstay of holographic data storage efforts—it was the material Hesselink's group relied on initially, for example. But it has big shortcomings. Niobate crystals are expensive and have to be grown individually. And the light that reads out holograms from lithium niobate and other photorefractive materials also erases them.

    One way to record holograms more permanently in lithium niobate is to “fix” them, as a photographer in a darkroom fixes a print. Heating the material or exposing it to an electric field can make the lattice distortions persist, but those methods can be cumbersome. A more promising process for fixing holograms in niobate crystals has recently been demonstrated by Karsten Buse of Osnabrück University in Germany and Demetri Psaltis and Ali Adibi at the California Institute of Technology in Pasadena.

    Their technique relies on crystals doped with two elements—iron and manganese—and exposed to two wavelengths of light: ultraviolet (UV) to prepare the material and red to actually write the hologram. “The UV sensitizes the material for red recording by transferring electrons from manganese to the red-sensitive iron,” explains Buse. The red light then excites the iron ions to dump these extra electrons, which migrate through the lattice and get trapped on manganese ions, preserving the hologram. The hologram can later be read with red light, which has too little energy to wrest the displaced electrons from manganese, keeping the hologram intact.

    These fixing processes don't address another of lithium niobate's failings: its lack of sensitivity. “You cannot write very fast on it, and you might have to use a strong laser or put up with a slow writing rate,” says Psaltis. It can take a few minutes to store a hologram, although he adds that for long-term data archiving that may not be a concern.

    Given niobate's shortcomings, some researchers are studying a different set of photorefractive materials for larger scale applications: photorefractive polymers. These organic materials can capture a hologram much faster—in a matter of milliseconds—and at lower light intensities than lithium niobate can. But the images last only as long as the polymers are exposed to an electric field. Once the field is switched off, says Zilker, “the electron-hole pattern will erase itself by thermal re-excitation” within a few minutes. “Organic photorefractive materials will not become a long-term storage system,” he says, although they could work well for temporarily storing large amounts of data during image processing.

    Lasting inscriptions

    Although changes in a photorefractive material's charge distribution can be fleeting, other materials offer more permanence, because light triggers drastic changes in their properties. Zilker, for example, is exploring photochromic materials, polymers that darken when exposed to light. When two polarized lasers interfere to create a bright spot in a photochromic material, the light's electric field orients some of the polymer's side chains. At each spot where these side chains line up, they darken the polymer and also change its refractive index.

    The process is slow; like lithium niobate, photochromic materials can take minutes to store data. But Zilker notes that “this reorientation is extremely stable.” His team roasted a material containing stored data at 160°C for 4 weeks, and the stored information did not degrade. “For permanent long-term storage, these are my materials of choice,” says Zilker. He adds that they might also be viable for read-write memories. They can be erased with circularly polarized light, which scrambles the side-chain orientations. And he sees a way around the slow recording times: Because the refractive index changes that these materials undergo as the side chains align are so large, he says, systems could be designed to work with the smaller changes that result from a writing time of a few milliseconds.

    Another class of materials that promises durable holographic storage is photopolymers. In these materials, the laser light triggers molecules to link up in chains, or polymerize. The polymerization is most extensive at the bright spots in the hologram, and it also changes the refractive index locally. Photopolymers are the equivalent of fast film, says Hesselink. “If you absorb a photon, a chemical reaction takes place and maybe 100 events could take place, and this makes the materials two to three orders more sensitive than the photochromic materials or photorefractive materials.” The transformation, however, is permanent, making photopolymers a promising basis for read-only memories but not for read-write systems.

    Another drawback of these materials is that they shrink during the writing process, as the molecules polymerize. The shrinkage shifts the angle of each hologram and alters the distance between its features, making it hard for the system to find stored holograms when it reads data. But researchers at Lucent are addressing the problem with a so-called “two chemistry” system, which is still under wraps. By separating the chemical events that record the hologram from the chemistry of the material as a whole, they say, the approach reduces shrinkage drastically.

    In spite of these hurdles, companies are pushing ahead. “The most crucial aspect now is to find the right niche, the right market,” says Psaltis. And several companies are betting that read-only systems for archiving and quickly retrieving large amounts of data will turn out to be the right niche. Because holographic systems handle information as entire “data pages,” they can search stored information rapidly by looking for telltale patterns, rather than examining it bit by bit. “Holographic storage stands to be a real winner as a search engine for associative retrieval of information,” says Sincerbox.

    Optostor AG, a company near Düsseldorf, Germany, expects to have a holographic read-only memory system for long-term archiving on the market in about 2 years. Its heart will be a photorefractive crystal measuring 50 millimeters by 50 millimeters by 4 millimeters, containing a terabyte of data stored in holograms that have been “fixed” by heating. “You can put it in a normal PC, and the dimensions are only a little bigger than a typical disk,” says Theo Woike of the University of Cologne in Germany, whose research group has a contract with the company.

    Others are also at work on read-only holographic storage based on photorefractive crystals and photopolymers, although some researchers speak coyly of having read-write systems in the works, while refusing to divulge proprietary information. One is Hesselink, who was a founder of Siros Technologies, formerly Optitek. “We have spent a lot of effort on [storage in] photopolymers—a read-only material. But in the Siros implementation, there is also a possibility of making that a read-write system,” he says.

    The wait has been longer than he and others expected 5 years ago. But their optimism hasn't dimmed. When will the first commercial holographic storage system appear on the market? “Within a year, I think,” says Sincerbox.


    Technique for Unblurring the Stars Comes of Age

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Adaptive optics technology, which can undo the effects of atmospheric turbulence, is expanding its reach to fainter stars and larger swaths of sky

    For millions or billions of years, the light from distant stars rushes toward Earth relatively undisturbed, but in the last few microseconds it gets scrambled. Turbulence in Earth's atmosphere distorts wavefronts and blurs details, placing what once looked like an ironclad limit on the resolution of even the largest ground-based telescopes. Although the 10-meter Keck telescopes on Mauna Kea, Hawaii, can detect light from vanishingly faint objects in the distant universe, they can't see these objects—or anything else—in much more detail than a large amateur backyard instrument can capture.

    For the past few years, however, technologically minded astronomers have been working on a solution, called adaptive optics (AO). The concept is daring: Measure the changing distortions in the light waves and compensate for them hundreds of times per second by flexing a deformable mirror in the light path, using tens or even hundreds of tiny piezoelectric actuators. But what at first looked like technological hubris gradually became a working technology, albeit limited to a few telescopes and able to make observations only near bright stars, used as probes of atmospheric distortion. “We had the concepts available back in the 1980s,” says Laird Close of the European Southern Observatory (ESO), “but it took longer than expected” to make AO a practical tool for astronomers.

    Now astronomers are fitting AO systems to many more telescopes; soon, most of the giant, 4- to 10-meter telescopes that have been sprouting on dry mountaintops from Chile to Hawaii to the Canary Islands will have working AO systems. The results are already starting to make headlines in the scientific literature. And there's more to come, says telescope designer Roger Angel of the University of Arizona, Tucson: “The scientific importance of such corrections will be greatly enhanced when the current restriction to bright objects is removed.” Indeed, new technologies are beginning to free AO from dependence on bright “guide stars”; ultimately, AO may allow any celestial object, even the faintest, to be unscrambled.

    The field could get an injection of new ideas from a research outfit opened just this month: the Center for Adaptive Optics (CfAO), funded by the National Science Foundation, at the University of California, Santa Cruz. Directed by UCSC astronomer Jerry Nelson, the CfAO will receive $20 million over 5 years to bring AO to maturity, not just for astronomy but also for studying and treating the eye, because the same tactics that can sharpen an image of a star can also deliver a clear picture of the retina through the imperfections of the eye's lens.

    Until AO came along, the only way astronomers could escape atmospheric distortion was to put a telescope into space. Unhampered by the atmosphere, the Hubble Space Telescope can deliver images showing details much smaller than ground-based telescopes can resolve, even when blessed with superlative optics and ideal seeing conditions. But Hubble's main mirror measures a mere 2.4 meters in diameter. A 10-meter ground-based telescope collects 17 times as much light, and if a scope of that size could eliminate atmospheric distortion, it would be able to see details four times finer than Hubble can.
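
    The arithmetic behind those comparisons uses nothing more than the two mirror diameters quoted in the text: light grasp scales with mirror area, and the finest resolvable detail scales inversely with diameter.

        # Quick check of the Hubble-versus-10-meter comparison in the text:
        # light-gathering power scales with mirror area, and diffraction-limited
        # resolution scales with 1/diameter.

        d_hubble = 2.4    # meters, from the text
        d_ground = 10.0   # meters, from the text

        light_ratio = (d_ground / d_hubble) ** 2      # ratio of collecting areas
        resolution_ratio = d_ground / d_hubble        # how much finer the detail could be

        print(f"light-gathering advantage: about {light_ratio:.0f}x")         # ~17x
        print(f"potential gain in resolution: about {resolution_ratio:.1f}x") # ~4x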

    Adaptive optics may ultimately allow ground-based telescopes to disregard the atmosphere. Developed largely by the U.S. military to sharpen laser beams and get better images of satellites and missiles, the technology was declassified in the early 1990s. Astronomers, who had already built a few AO systems of their own, quickly began refining it for their purposes. In 1990, for example, ESO installed an astronomical AO system on its 3.6-meter telescope at La Silla, Chile.

    It and many other AO systems, including one on the 10-meter Keck II telescope, in operation since last August, detect atmospheric distortion with a costly technology known as a Shack-Hartmann wavefront sensor. “Basically, it's a bunch of small lenses and a CCD [charge-coupled device] detector,” says ESO's Close. The lenses are arranged in a matrix. Each lens creates a tiny image of the observed star. Small displacements of these images, caused by atmospheric turbulence, are registered by the CCD detector, and a fast computer calculates the necessary distortions of the deformable mirror to compensate for the atmospheric blur.
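
    In outline, the computation is simple: each spot's displacement, divided by the lenslet focal length, gives the local tilt of the wavefront over that lenslet, and the resulting slope map is what drives the deformable mirror. The sketch below uses invented focal-length and displacement numbers purely for illustration.

        # Schematic of the Shack-Hartmann measurement: the displacement of each
        # lenslet's spot on the CCD gives the local slope (tilt) of the wavefront
        # over that lenslet. Focal length and spot shifts are illustrative only.
        import numpy as np

        focal_length_mm = 5.0    # assumed lenslet focal length
        # Measured spot displacements (micrometers) for a 3x3 patch of lenslets,
        # in x and y; these would come from centroiding the CCD image.
        dx_um = np.array([[ 1.0,  0.8,  0.5],
                          [ 0.6,  0.2, -0.1],
                          [ 0.1, -0.3, -0.6]])
        dy_um = np.array([[-0.4, -0.2,  0.0],
                          [-0.1,  0.1,  0.3],
                          [ 0.2,  0.4,  0.6]])

        # Local wavefront slope = spot displacement / lenslet focal length (radians).
        slope_x = dx_um * 1e-6 / (focal_length_mm * 1e-3)
        slope_y = dy_um * 1e-6 / (focal_length_mm * 1e-3)

        # The average slope is the overall image motion (tip/tilt); the residual
        # slope map is what the deformable mirror's actuators must flatten.
        print("mean tilt x, y (microradians):",
              round(1e6 * slope_x.mean(), 1), round(1e6 * slope_y.mean(), 1))
        residual = max(np.abs(slope_x - slope_x.mean()).max(),
                       np.abs(slope_y - slope_y.mean()).max())
        print("largest residual slope (microradians):", round(1e6 * residual, 1))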

    Under the best conditions, the system can sharpen the vision of a telescope the size of Keck by a factor of about 10. But Close says the technique has disadvantages. “Each lens collects just a tiny portion of the incoming light, and you have to collect an image every 2 milliseconds, so you run out of photons very quickly.” As a result, the Shack-Hartmann technique can only be used with relatively bright guide stars, which are used to sharpen all the objects in the immediate vicinity. That puts objects that are not close to a suitable guide star off limits to the system. And then there's cost: The Keck system has a price tag of $7.5 million, mainly because of the large number of lens elements and actuators and the enormous computer power needed to manage them.

    Fire on the mountain

    To increase their sky coverage—at even greater expense—astronomers working with Shack- Hartmann systems are now trying to create their own bright guide stars at places in the sky where none is naturally present. They use a powerful laser to kindle a spot of fluorescence in the thin haze of sodium atoms—the debris of meteors—found 90 kilometers up in the atmosphere. A 20-kilowatt laser beam, for instance, excites a glow equivalent to a 20-watt light bulb switched on in the upper atmosphere. This orange artificial “star” is too faint to be seen by the unaided eye, but it is bright enough for the system to measure the atmospheric blur and calculate corrections that can then be applied to fainter objects nearby.

    “It's just amazing technology,” says Close, “but it's really complicated stuff.” The only laser system in routine use, he says, is ALFA (Adaptive optics with a Laser For Astronomy), built by the Max Planck Institute at the 3.5-meter telescope of Calar Alto Observatory in southern Spain. A second system, based on technology developed at Lawrence Livermore National Laboratory in California, has been set up at the 3-meter Shane reflector of Lick Observatory at Mount Hamilton, California, but it is still being refined, says Close. Nevertheless, laser experiments are under way at the Keck Observatory, and ESO also plans to install a laser system on one of the four 8-meter components of the Very Large Telescope (VLT) in Chile.

    According to François Roddier of the University of Hawaii, the main problem with laser guide stars is that the two-way trip of the light—from the laser up to the sodium layer and back—cancels out some effects of atmospheric turbulence that an ordinary guide star would betray, in particular the tiny image displacements called jitter. As a result, although the laser guide star provides a probe of the wavefront distortions that blur an image, “you still need a natural guide star to correct for these tiny image motions,” says Roddier. Fortunately, he adds, “this star can be rather faint,” because jitter requires less frequent corrections than wavefront distortions. Powerful lasers also produce a lot of excess heat, causing microturbulence in the telescope dome, which further degrades the image quality. At the Keck Observatory, the experimental laser system is housed in a huge refrigerated box to minimize the problem.

    Roddier has developed a technique that he says works just as well as Shack-Hartmann systems with laser guide stars and is much cheaper. Called curvature AO, it relies on just two images of the guide star, one captured just in front of the telescope's focus and one just behind it. Both images are blurred, but in the absence of atmospheric turbulence, every part of the image would have the same brightness. The wavefront distortions, however, create tiny brightness gradients across the images. By comparing the brightness distributions, Roddier's system can pick up the distortions and pass this information on to the deformable mirror a few hundred times per second, so that it can repair the wavefront. Whereas a Shack-Hartmann system makes separate corrections for each small patch of turbulence above the telescope, which requires dozens of actuators to bend and dimple the deformable mirror, a curvature system calculates more global corrections and thus can work with fewer actuators. “It's simpler,” says Roddier, “and it can be built for half a million dollars.”
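
    The curvature signal itself amounts to a normalized difference between the two defocused images. The sketch below shows how that signal is formed; the intensity values are invented for illustration, and a real system would convert the resulting map into actuator commands a few hundred times per second.

        # Sketch of curvature wavefront sensing: compare an image taken just
        # inside focus with one taken just outside focus; the normalized
        # difference map tracks the local wavefront curvature.
        # Intensities below are invented illustrative values.
        import numpy as np

        inside_focus = np.array([[1.00, 1.05, 1.10],
                                 [0.95, 1.00, 1.08],
                                 [0.90, 0.96, 1.02]])
        outside_focus = np.array([[1.02, 1.00, 0.98],
                                  [1.06, 1.00, 0.94],
                                  [1.10, 1.04, 0.96]])

        # Without turbulence the two images would match and this signal would be
        # zero; turbulence produces brightness gradients, and the normalized
        # difference is, to first order, proportional to the wavefront curvature.
        curvature_signal = (inside_focus - outside_focus) / (inside_focus + outside_focus)

        print(np.round(curvature_signal, 3))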

    Moreover, because the system divides the incoming light among fewer lens elements than a comparable Shack-Hartmann system, its guide stars can be only 2% as bright, increasing the sky coverage of the system without a laser. Arizona's Angel notes, however, that for the very faintest objects, “laser guide stars are [still] the only way to measure AO corrections.”

    A curvature system called Pueo (which is the name of a keen-eyed Hawaiian owl but also stands for Probing the Universe with Enhanced Optics) was installed at the 3.6-meter Canada-France-Hawaii Telescope (CFHT) on Mauna Kea in January 1996. Roddier says that because the system can operate over much of the sky, it is turned on for 20% to 40% of the CFHT's observing time. Indeed, CFHT observations accounted for some 80% of last year's publications on AO results. Recently, for example, a team led by Bill Merline of the Southwest Research Institute in Boulder, Colorado, used the razor-sharp vision of Pueo to spy a 15-kilometer moon orbiting the asteroid Eugenia (Science, 14 May, p. 1099).

    A larger curvature system built by Roddier's team, called Hokupa'a (the Hawaiian name for the Pole star, literally translated as “immovable star”), was first used at the CFHT and has now been set up at the 8.1-meter Gemini North telescope, also on Mauna Kea. “Many people now use this technique,” says Roddier. “Very soon a Japanese curvature system will be installed at the [8.2-meter] Subaru telescope,” also on Mauna Kea. And ESO is planning five large curvature systems for the VLT—one for each of the four telescopes and one for the ultralarge, virtual telescope that will result when all four are combined through a technique called interferometry.

    Exponential challenges

    In spite of this progress, plenty of problems remain for the new CfAO to tackle. Astronomers, never ones to think small, are now envisioning a new generation of ground-based telescopes with mirrors between 25 and 100 meters in diameter (Science, 18 June, p. 1913). There's little point in building telescopes of that size without AO systems to take advantage of their potential for seeing extraordinarily fine detail in faint objects. But according to UCSC's Nelson, who discussed the topic at a workshop on Extremely Large Telescopes last spring in Bäckaskog, Sweden, the challenge of correcting atmospheric distortion rises exponentially as mirrors grow larger. The number of actuators needed for the deformable mirror scales with the square of the telescope diameter, and the necessary computer power scales with the fourth power of the telescope size, to take into account all possible combinations of actuator movements. At present, even the fastest supercomputer couldn't do the job.
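
    Put in numbers, and normalizing to a 10-meter telescope for comparison (an illustrative baseline, not a figure from the workshop), the scaling looks like this:

        # Rough illustration of the scaling argument above: actuator count grows
        # with the square of the telescope diameter, and the wavefront computation
        # with the fourth power. Normalized to a 10-meter telescope as a baseline.

        baseline_m = 10.0
        for diameter_m in (25.0, 100.0):       # the range of sizes mentioned in the text
            ratio = diameter_m / baseline_m
            print(f"{diameter_m:5.0f} m mirror: about {ratio**2:4.0f}x the actuators, "
                  f"{ratio**4:6.0f}x the computing power")
        # A 100-meter telescope would need roughly 100 times the actuators and
        # 10,000 times the computing power of a 10-meter instrument.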

    The new center will develop small and cheap actuators, faster computer code, and other technologies that might make AO practical for these optical giants. It will also apply the technology to the inner world of the eye. Using AO to correct for lens distortions should yield sharper images of receptors and blood vessels in the retina and turn lasers into more precise scalpels for treating retinal disorders.

    The center, which has enlisted nearly 30 university and industry partners, will also look for ways to open the entire sky to adaptive optics. It will pursue a concept called atmospheric tomography, in which measurements of five or more guide stars are combined in a computer to create a full three-dimensional view of the major turbulence layers in the atmosphere right above the telescope. That way the system could calculate the corrections needed for every point in the field of view, not just for the neighborhood of a single guide star. If AO experts succeed again in turning an ambitious concept into a working technology, even the biggest Earth-bound telescope could take a quick step into space wherever it was pointed.
