News this Week

Science  22 Jun 2001:
Vol. 292, Issue 5525, pp. 2226

    Texas Medical Center Staggered by Deadly Tropical Storm

    1. Mark Sincell*
    1. Mark Sincell writes from Houston.

    HOUSTON—Building on a Houston floodplain is a dicey proposition, and last week the dice came up snake eyes for the Texas Medical Center (TMC). Years of scientific work were destroyed, millions of dollars' worth of equipment crippled, and thousands of lab animals drowned when the region was hit with a second burst from Tropical Storm Allison. All research at TMC was shut down as scientists set to the grisly task of clearing out carcasses before they rotted in the sweltering Houston heat.

    The TMC is the largest collection of medical research facilities in the world, including the University of Texas M. D. Anderson Cancer Center (MDACC), the University of Texas Medical School at Houston (UTMSH), and Baylor College of Medicine. It also sits at the bottom of an 8-kilometer-square bowl-like depression in the Houston landscape bordered to the south by the Braes Bayou.

    Rains from Allison's first assault on 7 June had already filled the bayou to the brim. So when Allison unexpectedly returned from the Gulf of Mexico on 9 to 10 June to drop another 36 centimeters over the research complex—an event that occurs on average less than once per century—the water had nowhere to go but into the TMC buildings. “There is no way any [flood prevention] system could survive that amount of rain,” says Rice University flood expert Philip Bedient.

    A full accounting of the disaster, which claimed up to 24 lives and cost the region billions of dollars, may take months, but the flood's toll on TMC research is already staggering. “We had 2500 animals, from mice to large animals, and we believe we have lost all of them,” says UTMSH dean Maximilian Buja, adding that the floods and resulting power outages probably also destroyed the school's new experimental nuclear magnetic resonance facility, ruined a cyclotron used for positron emission tomography scans, and shut down freezers that preserve valuable tissue samples and antibodies.

    Flooding at Baylor damaged at least three multimillion-dollar electron microscopes, possibly irrevocably, and killed animals in the older of its two vivaria. “We got everything from the cows down to the rabbits out in time,” says Claire Bassett, vice president for public affairs at Baylor. But 30,000 mice and rats may have drowned, she says. The MDACC miraculously escaped serious damage, however, and its researchers worked through the weekend to help others save perishable materials, says MDACC Chief Academic Officer Margaret Kripke.

    Insurance and federal aid money should replace most of the lost material, but researchers can never get back the time they have invested. For example, some of the drowned transgenic mouse colonies at UTMSH took a decade to build, and only a minority can be regenerated from breeding pairs sent to other universities. “This is a devastating loss,” says George Stancel, dean of the graduate school of Biomedical Science at UTMSH.

    But it could have been even worse. Only four of the nine stacked racks of mouse cages in the Baylor vivarium were underwater, says Baylor molecular biologist Joe Bryan. “We lost critical experiments,” says Bryan. “But we still have breeding stock for most of the lines.” Bryan estimates that it will take 6 to 8 months to regenerate his mouse herd.

    A higher priority is the cleanup. Last week Bryan and other scientists donned hazard suits, rubber boots, and gas masks before descending into the basements to gather dead animals and dump them into biohazard bags destined for the incinerator. Dehydration from the intense Houston summer heat forced most people back to the surface within 45 minutes. And the smell. “I'll leave that to your imagination,” says Bryan.

    Emergency power and communications systems have now been restored to almost all of the buildings, but the air conditioning was expected to take a few more days. All research remains on hold as faculty members join forces with relief workers to dig out—and dry out—after the disaster.


    Volunteer's Death Prompts Review

    1. Eliot Marshall

    Clinical researchers at the Johns Hopkins University Bayview Medical Center in Baltimore braced for a round of public investigation last week after reporting the death of a volunteer in a study of lung function. The study, funded by the National Institutes of Health (NIH), was directed by Hopkins asthma researcher Alkis Togias, with Solbert Permutt as co-investigator. Healthy volunteers in the experiment inhaled two chemicals that have contrasting effects on the lungs' airways—one constricting and the other relaxing them—yielding data that may shed light on the causes of asthma.

    Hopkins made public a short summary of the case on 13 June, saying that an investigation was still under way, but that details would be withheld to protect the volunteer's family. Hopkins later released a copy of the research protocol and consent form. A university official then confirmed press reports that the volunteer, who died on 2 June after several weeks in the hospital, was a 24-year-old Hopkins lab technician named Ellen Roche.

    Neither Permutt nor Togias was available to comment. But in a protocol submitted to Hopkins's Institutional Review Board (IRB) last year, Togias explained that the experiment was designed to examine two distinct aspects of normal lung physiology called “bronchoprotection” and “bronchodilation.” Togias intended to ask up to 10 healthy subjects to inhale chemicals or saline (as placebo) and breathe into instruments that measure lung capacity. All volunteers were to inhale methacholine, which causes a temporary constriction of the airways, mimicking asthma. And some were to be given hexamethonium, a ganglion-blocking drug that affects the nervous system, lowering blood pressure and relaxing the airways.

    Hexamethonium formerly was prescribed to lower blood pressure, but that use was withdrawn by the manufacturer. It has not been used in many recent studies, researchers say, although Togias's protocol cites four human studies in the 1980s that used the drug. The protocol suggests that its main risk is that it can induce an excessive drop in blood pressure, and for this reason the protocol calls for a physician to be on hand to oversee its staged administration.

    Togias suspended the research in May after Roche became ill. He notified the IRB in a 9 May letter of “a serious adverse event,” explaining that a volunteer had reported dry cough, shortness of breath, and flulike symptoms 24 hours after participating in the hexamethonium part of the experiment. The volunteer was hospitalized after an x-ray showed signs of “early pneumonitis.” On 17 May, Hopkins vice dean for research Chi Van Dang alerted the U.S. Office for Human Research Protections (OHRP) of the “serious, unexpected” adverse event. Three weeks later, Dang sent notice that the woman had died. He added that the clinicians had performed an autopsy and were investigating many aspects of the study, including the supplier's claim that the hexamethonium was 99.6% pure. OHRP has now begun its own investigation. “It's proper to characterize this as a mysterious death,” says university spokesperson Joann Rodgers.

    Claude Lenfant, director of NIH's National Heart, Lung, and Blood Institute, which funded the research, says clinicians seem to have followed the right procedures, and that Hopkins is known to be “ferocious” in getting ethical issues correct. Clinical experiments always contain an element of risk, adds Richard Boucher, an expert on cystic fibrosis at the University of North Carolina, Chapel Hill. He cited the death 5 years ago of a volunteer at the University of Rochester in New York after a reaction to an anesthetic. Since then, he says, “everyone has redoubled their efforts to make this research as safe as possible.”


    Polymorphous Particles Solve Solar Mystery

    1. Charles Seife

    For particles with almost no mass, neutrinos are making quite a splash. On Monday, scientists from three countries announced that they had spotted neutrinos that had been missing for 3 decades.

    In the late 1960s, physicists calculated the number of relatively energetic neutrinos that should be streaming from the sun—due to the decay of boron-8 cooked up in the solar furnace—but experiments came up short. There were too few neutrinos. This is the mystery that Canada's Sudbury Neutrino Observatory (SNO) has now cleared up. “I'm thrilled by the precision of the result; I'm thrilled it agrees with the solar model calculations; I'm thrilled we have an answer to the problem,” says John Bahcall, a physicist at the Institute for Advanced Study in Princeton, New Jersey.

    In fact, SNO has confirmed what several experiments, notably Super-Kamiokande in Japan, had already indicated: The missing neutrinos had simply changed flavor. Neutrinos come in three flavors, named after the particles they are linked with. Electron neutrinos are the type produced by the sun; muon and tau neutrinos, which result from various particle interactions, are harder to detect. In the late 1990s, experiments provided fairly strong evidence that electron neutrinos turn into muon and tau neutrinos as they stream away from the sun—something that can happen only if the particles have mass (Science, 4 July 1997, p. 30). The “missing” neutrinos from the sun had merely changed into muon and tau neutrinos and escaped detection.

    Neutrino detector.

    At Sudbury, 10,000 photomultipliers on an 18-meter-wide sphere watch for elusive particles.


    Buried 2 kilometers underground in a nickel mine in Ontario, SNO has just given a resounding confirmation to this picture. The detector measures the neutrinos coming from the sun in two ways. The first method spots the recoil of a neutrino off an electron. Any of the three flavors of neutrino could potentially cause such a recoil and be detected. The second method detects when an electron neutrino strikes a neutron within a 1000-ton sphere of heavy water. Only an electron neutrino can make the neutron spit out an electron, triggering the detector. The two methods, combined with results from Super-K, reveal just how many neutrinos are coming from the sun and what proportion of them are muon or tau neutrinos.
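The logic of combining the two reaction rates can be sketched numerically. The snippet below is an illustrative back-of-the-envelope calculation, not SNO's actual analysis: the flux values are hypothetical stand-ins, and only the roughly 0.15 cross-section ratio between muon/tau and electron neutrinos in elastic scattering reflects standard physics.

```python
# Toy illustration of the two-rate flavor decomposition (hypothetical numbers).

# Charged-current reaction on heavy water responds only to electron
# neutrinos, so it measures the electron-neutrino flux directly.
phi_cc = 1.75  # hypothetical flux, in units of 10^6 neutrinos/cm^2/s

# Elastic scattering off electrons responds to all three flavors, but
# muon/tau neutrinos contribute with only ~0.15x the cross section.
phi_es = 2.39        # hypothetical elastic-scattering-inferred flux
sigma_ratio = 0.154  # approximate (mu,tau)/(e) cross-section ratio

# phi_es = phi_e + sigma_ratio * phi_mutau  =>  solve for phi_mutau
phi_e = phi_cc
phi_mutau = (phi_es - phi_e) / sigma_ratio

total = phi_e + phi_mutau
print(f"inferred mu/tau flux: {phi_mutau:.2f}")
print(f"total solar neutrino flux: {total:.2f}")
print(f"fraction that changed flavor: {phi_mutau / total:.0%}")
```

Because the elastic-scattering rate exceeds what the electron-neutrino flux alone can explain, the excess must come from muon and tau neutrinos, which is exactly the "appearance" McDonald describes below.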

    “What we find is that there is an appearance of muon and tau neutrinos en route from the sun to the Earth,” says SNO project director Art McDonald of Queen's University in Kingston, Ontario. “The electron neutrinos transform into another type.” The transformation confirms earlier observations that neutrinos have mass. Better yet, the measurements agree with first-principles calculations of the number of neutrinos created by the sun. “It is in very good agreement,” says SNO team member Kevin Lesko, a physicist at Lawrence Berkeley National Laboratory in California. “Basically, we've resolved the solar neutrino problem with a 99% confidence level. It's oscillations.”

    “This is an absolutely direct measurement,” Bahcall says. “Previous results were not so direct.” SNO scientists have already added salt to the heavy-water sphere, which will increase the instrument's sensitivity to muon and tau neutrinos and add another level of precision. “That will be a thrill,” says Bahcall.


    Picture Brightens a Bit as First Bills Advance

    1. David Malakoff*
    1. With reporting by Erik Stokstad and Jeffrey Mervis.

    U.S. scientists anxious about next year's federal research budget got some good news last week. Several congressional panels approved preliminary 2002 spending bills that restore research programs targeted for cuts by the Bush Administration, while others are considering channeling part of larger budget allocations to science. Pentagon officials also signaled that they may request a significant boost for defense R&D. Congress, however, still has a long way to go before any numbers become final for the fiscal year that begins 1 October.

    The first bit of good news came from a panel that oversees the U.S. Geological Survey (USGS). Its members recommended that the agency get an $18 million boost to $901 million, some $87 million above the president's request. The biggest winner was USGS's $203 million Water Resources Division, which would get a 1% increase rather than a 22% cut (Science, 13 April, p. 182). “It's essentially a restoration budget, [and] that's a good thing,” says lobbyist David Applegate of the American Geological Institute in Alexandria, Virginia, which had lobbied hard against the cuts. He is optimistic that the full House will approve the numbers later this month, and that the Senate will eventually follow suit.

    The USGS funding was part of an $18.9 billion bill approved 13 June that funds the Department of Interior and a flock of smaller agencies. One, the Smithsonian Institution, was singled out in the wake of a controversial effort to reshape the museum complex's science programs. The panel ordered Secretary Lawrence Small to tread water until a new external advisory panel makes its report later this year, in effect backing complaints by Smithsonian scientists that Small has ignored their advice (Science, 11 May, p. 1034).

    Many biomedical scientists were also pleased with language in a $74 billion agriculture spending bill approved by a House panel on 13 June. It would postpone for another year the development of new federal rules for the care of millions of laboratory mice, rats, and birds. Biomedical groups claim the rules would be duplicative and expensive.

    Prospects for the Department of Defense's (DOD's) science budget—a mainstay for university math, engineering, and computer science departments—are also looking up. New DOD R&D chief Edward Aldridge told a Senate subcommittee on 5 June that he hoped to spend between 2.5% and 3% of the Pentagon's total budget on basic and applied science, endorsing a goal set by an advisory panel several years ago. That target, if incorporated in a long-delayed DOD budget request later this summer, could generate more than $10 billion for research, a 10% increase over current levels.

    National Science Foundation (NSF) officials, meanwhile, are hoping to get a portion of an extra $1 billion allocated to the House and Senate panels that handle its budget, along with those of the Veterans Administration, the Department of Housing and Urban Development, NASA, the Environmental Protection Agency, and several other agencies. Lawmakers from both parties have deplored the Administration's 1.3% increase for NSF, a paltry $56 million, including a cut in its $3.3 billion research account. The House subcommittee will vote on a bill next week.


    Memo to Congress: Get Better Advice

    1. David Malakoff

    Add science policy wonks to the list of those hoping to bring extinct species back from the dead. Academics, science lobbyists, and government officials gathered in Washington last week to hash out ideas for reviving Congress's Office of Technology Assessment (OTA), a science advice agency that lawmakers killed in 1995. By the end of the daylong workshop, however, there was no consensus on what might convince Congress to change its mind.

    Created in 1972, OTA was known for organizing diverse panels that churned out well-regarded reports on hot policy topics such as genetic engineering. It's also been the inspiration for similar science advisory agencies established in other countries. But some lawmakers felt that OTA had become a bastion of Democratic bias that took too long to complete expensive studies. When Republicans won control of Congress in 1994, one of their first moves was to eliminate the $22 million office. Ever since, science community leaders have complained that lawmakers lack a trustworthy, neutral source of expertise on emerging issues such as stem cell research and nanotechnology.

    To fill the gap, workshop organizer M. Granger Morgan of Carnegie Mellon University in Pittsburgh, Pennsylvania, asked 10 academics and science advice veterans to explore five potential models. The ideas ranged from a small body that would contract out studies to a neo-OTA housed at an existing nonprofit or university. However, none of the plans escaped criticism from workshop participants, who included a number of former OTA staffers.

    Packaged advice.

    OTA reports covered the world of science.


    One plan, which would require Congress to decide ahead of time on a list of “well-established” nonprofits approved to conduct studies, is “the most hopelessly impractical thing I've ever seen,” said Bruce Smith of Columbia University in New York City. Others pronounced a bill to resurrect OTA in its original form, H.R. 2148, recently introduced by Representative Rush Holt (D-NJ), as dead on arrival. “Congress at this point does not seem ready to invest in a new staff-heavy organization,” said Bill Bonvillian, a senior aide to Senator Joe Lieberman (D-CT). Any such proposal also faces a steep learning curve: Holt confessed that some of his colleagues “didn't even know that OTA had been abolished.”

    Despite the darts, Morgan said that the workshop achieved its intended goal of “getting a national conversation started.” For skeptics, however, the meeting demonstrated that convincing Congress it needs a new OTA will be about as easy as cloning a dinosaur.


    Math Trick May Cause Tension Headache

    1. Charles Seife

    Albert Einstein's rubber sheets may be due for a dose of starch. The reason, says Christos Tsagas, a physicist at Portsmouth University in the United Kingdom, is magnetism. By reanalyzing the basic equation of general relativity—which treats space and time as a stretchy membrane—Tsagas discovered that magnetic fields tend to flatten and stiffen the fabric of space-time. The discovery might force cosmologists and astronomers to reexamine how magnetic fields have shaped the evolution of the cosmos.

    “The normal assumption is to neglect magnetic fields in the early universe, mostly for simplicity,” says Bernard Carr, a physicist at Queen Mary's College in London. “But magnetic fields could have an interesting cosmological effect. It might not be satisfactory to neglect it.”

    Einstein's general theory of relativity is essentially a description of the geometry of space and time. According to Einstein, a hunk of matter such as a star bends space-time like a bowling ball perched upon a rubber sheet. The result, described in relativistic terms, is gravity. That much has been known for the better part of a century. But Tsagas looked at the equation in an unusual way, switching the roles of space and time—a swap that makes no mathematical difference but changes the form of the equation. “You see effects that are a bit difficult to see in a more traditional form,” Tsagas says. As a result, Tsagas spotted something no one had seen before: A term in the equation showed that magnetic fields transfer their properties to the very fabric of space-time itself.

    Starched sheets.

    A new look at Einstein's theory shows that the fabric of space-time may be stiffer than physicists thought.


    Like very elastic rubber bands under tension, magnetic field lines try to remain as straight as possible. Magnetic fields transmit that tension to space-time, Tsagas realized, making nearby space like a rubber sheet that has been stretched a little bit tighter. According to Tsagas, such a region becomes stiffer and flattens out somewhat. “This effect can be crucial,” he says.

    If the big bang created a primordial magnetic field, then the extra stiffness of space-time would have resisted the rapid inflation that many physicists think occurred in the first split second of the universe. It also would have ironed out the entire universe. “It tends to make the background cosmology more like a flat cosmological model,” Carr says. That might help explain why the cosmos doesn't appear to have any curvature, a role physicists have traditionally assigned to inflation. Stiffer space-time might also damp gravitational waves, Carr says, making them harder to detect than physicists at observatories such as LIGO and TAMA have been counting on.

    Magnetic stiffening of the early universe probably won't win instant acceptance; inflation is so useful to physicists that any challenger is going to be tested very sternly. But if it pans out, cosmologists will have to rethink the role of magnetic fields in shaping the cosmos. And black hole theorists—who deal with sharply curved space near strong magnetic fields—might need to revise some pet notions as well. Astrophysicists in general, it's safe to say, could lose a little sleep over stiff sheets.


    Cluster Watchers View a Hot, Violent Birth

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    In a dwarf galaxy 12 million light-years from Earth, astronomers may be witnessing the birth of a globular cluster. The cluster, the youngest so far detected, could shed light on how similar balls of stars formed in our own galaxy billions of years earlier.

    According to Jean Turner of the University of California, Los Angeles, the new cluster may contain up to a million stars in a region only 10 light-years across. “It's the best, closest, and probably youngest example of a super-star cluster,” says Turner, who presented her team's findings at the 198th meeting of the American Astronomical Society in Pasadena.*

    The huge burst of star formation first caught astronomers' attention last year, when the Very Large Array radio telescope in New Mexico detected a glowing bubble in a dwarf galaxy known as NGC 5253 in the constellation Centaurus. “Back then, I was nervous about calling this a young globular cluster,” Turner says. But infrared observations and spectra obtained with the 10-meter Keck telescopes at Mauna Kea, Hawaii, confirmed her original suspicion. By measuring Doppler changes in infrared light from hydrogen in the bubble, Turner and colleagues calculated that the bubble was being blown by stellar winds moving at 5000 kilometers per hour—far stronger than winds astronomers had seen in other bubbles. Massive, powerful young stars, they concluded, must be churning out light and gases vigorously enough to produce 25% of the energy output of the dwarf galaxy. The infrared data also enabled the team to estimate the size of the star-forming region.


    One star-filled bubble (false-color red blip) in NGC 5253 radiates 25% of the dwarf galaxy's energy.


    The 12-million-year-old newborn could help resolve enigmas in our own galaxy. The 150-odd globular clusters in the Milky Way are billions of years old, so little is known about their origins. According to Turner, similar clusters-in-the-making probably exist in other galaxies, but most are much farther away and harder to study than the one her team found. “I won't call this a Rosetta Stone,” she says, “but if astronomers are to understand the birth of these clusters, they will keep getting back to this one.”

    Small galaxies like NGC 5253 are proving fertile breeding grounds for new stars. At the same meeting, Armando Gil de Paz of the Infrared Processing and Analysis Center in Pasadena, California, reported evidence of another huge (though nonglobular) starburst in a dwarf galaxy known as Markarian 86. According to Gil de Paz, the 30-million-year-old burst has triggered the formation of new stars across the galaxy.

    Why do dwarf galaxies undergo super-starbursts? Turner says no one knows yet, but in the case of NGC 5253, interaction with a neighboring spiral galaxy may be pumping star-forming material into the dwarf system. “[The luminous bubble] is a short-lived phase in the life of the cluster,” she says. “We are lucky that NGC 5253 is at the right place and the right time for us to detect this extraordinary windblown bubble.”

    • *3-7 June, Pasadena, California.


    Why Some Leukemia Cells Resist STI-571

    1. Jean Marx

    The antileukemia drug known as Gleevec or STI-571 has been heralded as the vanguard of a new generation of cancer chemotherapy agents. Most current cancer drugs were discovered by randomly screening thousands of chemicals to see if any kill cancer cells. But STI-571—which is remarkably effective in treating chronic myeloid leukemia (CML)—was deliberately designed to counteract a specific biochemical change that makes cells cancerous. Yet STI-571 shares an unfortunate characteristic with conventional cancer drugs: Patients with advanced disease often relapse; their tumor cells become resistant and eventually grow out of control. Results published online by Science on 21 June now explain why, and perhaps point the way to improved therapies.

    STI-571 works by inhibiting an enzyme that fuels cancer cell growth in CML—a kinase enzyme produced by the BCR-ABL oncogene. Almost all patients treated in the early stages of CML respond, and some have been in remission for more than 2 years. But the drug has been less effective in patients who are in an advanced phase of the disease called “blast crisis.” These individuals sometimes go into remission on the drug—which almost never happens with older treatments—but 80% relapse in less than a year. “As soon as we saw that, it was obvious that the mechanism of relapse would be interesting,” recalls Charles Sawyers of the University of California School of Medicine in Los Angeles, a member of the team that performed the clinical trials of STI-571 in CML patients.


    Replacement of threonine with isoleucine (red) in the Bcr-Abl kinase active site hinders binding of STI-571 (orange).


    To pin down that mechanism, Sawyers and his colleagues first assayed the level of Bcr-Abl kinase activity in tumor cells from 11 patients who had relapsed. They found that it came back in every patient. This was a surprise, says Brian Druker of Oregon Health Sciences University in Portland, who's also involved in the clinical trials but is not a co-author on the resistance paper. Because cells in blast crisis are supposed to be highly genetically unstable, Druker says he expected that a mutation in some other gene might cause the relapse, but “[Sawyers] has shown that it's reactivation of BCR-ABL.” This is encouraging, Druker adds, because it means that targeting the Bcr-Abl kinase or the proteins it interacts with is still a promising strategy for combating the cancer.

    Other work by the Sawyers team revealed the causes of the kinase reactivation. The researchers did not have enough DNA from two of the patients to analyze, but they found that in three of the remaining nine, the BCR-ABL gene was amplified: The extra copies of the gene overwhelmed the drug by producing more enzyme than it could handle. In the other six, the cause was a point mutation that changed a single amino acid in the enzyme's active site: A threonine was replaced by an isoleucine. “To me what's amazing is that it's exactly the same mutation in everybody,” Sawyers says.

    An x-ray crystallography analysis of an STI-571 variant bound to the Bcr-Abl active site, which was performed last year by John Kuriyan's group at Rockefeller University in New York City, shows that the threonine that was replaced in these patients is important for binding the drug (Science, 15 September 2000, pp. 1857 and 1938). Apparently, the mutation hinders STI-571 binding to Bcr-Abl, but it doesn't knock out the kinase activity. In fact, Sawyers suggests, that may be why they found only one point mutation; the enzyme may not be able to tolerate other changes without losing the kinase activity that drives tumor cell growth.

    Now that the cause of STI-571 resistance is known in at least some patients, the next step is to try to get around it—a step that might also turn out to be necessary for patients with early CML. It's too early to tell whether the drug will eventually lose its effectiveness for them as well, but Druker worries that the fate of the advanced patients “could be a harbinger of relapse in our other patients.”

    Possibilities for future treatments include designing additional kinase inhibitors that would be given along with STI-571, because the enzyme should be less able to become resistant to all the drugs at once. Another is to look for alternative targets for drugs that could also be given in combination with STI-571. Like most successful designs, STI-571 is likely to spark a host of refinements.


    Synchronizing the Brain's Signals

    1. Laura Helmuth

    Sometimes neurons get so excited that they fire in harmony. This synchronized firing has long excited neuroscientists, but they aren't sure what it means. Some have suggested that it allows the brain to perform sophisticated computations, such as tying together various aspects of an experience that are distributed in many different areas of the brain. But “there are still a lot of holes” in such theories, says Barry Connors of Brown University in Providence. For starters: How do neurons pick up on the synchrony and pass along the precisely timed message?

    Now, Mario Galarreta and Shaul Hestrin of Stanford University may have provided a partial answer to these questions. On page 2295, they report results suggesting that networks of fast-spiking (FS) cells, a type of inhibitory neuron, could play a central role in detecting and fostering synchrony in the cortex, the large outer region of the brain that processes everything from complicated images to math problems. The study demonstrates that “if you are a neuron,” says Daniel Barth of the University of Colorado, Boulder, “it is not just what you have to say that counts, it is exactly when you say it.”

    Feel the beat.

    Interconnected FS cells (in red and green) favor synchronized signals.


    Galarreta and Hestrin set out to test just how precisely FS cells can register an incoming signal. They first teased pairs of cells, each consisting of one FS cell and one pyramidal cell, an excitatory neuron that commonly connects with FS cells, from slices of rat cortex. When they then stimulated the pyramidal cell and measured how long it took for the FS cell to respond, they found that it fired within 1 to 2 milliseconds—even faster than they expected.

    Because the FS cells fire so quickly, the researchers suspected they might be capable of noticing—and responding to—precisely timed signals from several other neurons at once. Evidence that that might happen came when the researchers tested two interconnected FS cells and found that the cells fired best if stimulated by two signals, one to each cell, that were separated by less than 1 millisecond. But if the two inputs were 5 milliseconds apart, the cells were even less likely to fire than if only one signal had arrived.

    In addition to their speedy response times, FS cells have other properties that might allow them to maintain and pass along a synchronized message. Not only do they exchange messages via the chemical neurotransmitter GABA, they're also more directly connected by what's called an electrical synapse, formed by the close juxtaposition of segments of the FS cell membranes. Thus, when one FS cell fires, a shadow of that burst quickly passes through the electrical synapse. This functionally immediate echo might allow a network of FS cells to pick up on a wide spatial distribution of synchronized signals.

    The new study “offers a system that's exquisitely sensitive to timing,” says Connors, and therefore it's “plausible” that networks of FS cells could detect and pass along synchronized signals. But, he cautions, many tantalizing questions remain. He'd like to know, for instance, whether the electrical synapses or the GABA-mediated connections make the network of two FS cells sensitive to the timing of incoming signals. And even more importantly, no one yet knows if the rest of the brain is paying attention to the precisely orchestrated performance.


    Precision Cosmology Takes Flight

    1. Robert Irion

    The Microwave Anisotropy Probe heralds a new era of sensing the remnant heat from the big bang

    When navigating the side streets of a big city, it helps to have an accurate map. And when it comes to charting the fine details of the big bang, there's nothing like an accurate MAP: the Microwave Anisotropy Probe. The $95 million satellite, scheduled for launch on 30 June from Cape Canaveral, Florida, has cosmologists aglow about measurements that should confirm—or refute—their dearest models of the first incandescent moments of the universe.

    MAP will pick up where its orbiting predecessor, the Cosmic Background Explorer (COBE), left off a decade ago. COBE made a big splash in 1992, when it exposed subtle ripples in the faint bath of microwave radiation that fills the sky. This heat, which registers a mere 2.725 kelvin, is literally the dying ember of the once-hot cosmos. Its fluctuations encode a wealth of information about the birth and evolution of the universe and the types of matter it contains (see p. 2236). However, COBE saw those ripples with a blurry eye: Its sharpest focus spanned a patch of sky nearly 15 times the width of the full moon.

    Since then, microwave-sensitive telescopes on high-altitude balloons and perched at high, dry spots on the ground have poked and prodded at the fabric of the cosmic background. The latest analyses from a few such experiments yielded strikingly similar numbers for the basic parameters of matter and energy in the cosmos (Science, 4 May, p. 823).

    Still, microwave observers face uncertainties that limit the accuracy of their data. For instance, it's hard to reconstruct precisely where a telescope is pointing when it's dangling from a balloon 37 kilometers high. Ground-based instruments have to peer through the atmosphere and sit on a planet hundreds of degrees warmer than the 0.0001-kelvin fluctuations they are trying to detect. Such limitations have made it hard for cosmologists to constrain all but the broadest details of their models.

    MAP should erase those doubts. “It's a pivotal moment in cosmology,” says astrophysicist David Spergel of Princeton University, a member of MAP's science team. “We'll find ourselves either converging on a standard model, or things aren't going to fit.” Adds Lyman Page, Spergel's colleague at Princeton who is also on the MAP team: “The jump from where we are now to MAP's map of the sky is a giant leap. It's going from indications to certainty.”

    Sky watcher.

    The MAP satellite sports egg-shaped reflectors (upper ovals) to steer microwaves into amplifiers beneath. A 5-meter shield will block radiation from the sun and Earth.


    MAP's main advantage will be its isolation from the heat and din of our planet. After MAP's launch on a Delta 2 rocket, a series of cigar-shaped loops around Earth will position the 800-kilogram satellite to whip past the moon. That gravitational assist will deliver MAP to prime real estate: the L2 Lagrange point, a spot in space 1.5 million kilometers from Earth in a straight line away from the sun. There, the combined gravitational pulls of Earth and the sun equal the centripetal force required to revolve in tandem with Earth. That allows the spacecraft to wander lazily around L2 like a tiny asteroid, with little need for fuel during its 2-year mission. MAP will be the first probe to occupy this choice spot. A solar shield will block Earth, the sun, and the moon. Further, the angle between the satellite and the sun—and therefore the amount of heat influencing the satellite—will never change. “It's hard to imagine a better environment” for MAP, says theorist Max Tegmark of the University of Pennsylvania in Philadelphia. “It will just stare into dark, pristine space.”
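    The balance described above can be checked with a back-of-the-envelope calculation (ours, not the MAP team's): at the L2 point, the combined gravity of the sun and Earth must supply exactly the centripetal acceleration needed to circle the sun once per year in step with Earth. Solving that balance numerically does land very close to the quoted 1.5 million kilometers.

```python
# Rough check that the L2 point sits about 1.5 million km beyond Earth:
# there, the sun's and Earth's combined gravity supplies exactly the
# centripetal acceleration needed to orbit the sun once per year.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
M_earth = 5.972e24     # kg
R = 1.496e11           # Earth-sun distance, m
omega = 2 * 3.141592653589793 / (365.25 * 86400)  # orbital rate, rad/s

def force_balance(d):
    """Net inward gravity minus required centripetal accel, at distance d beyond Earth."""
    r = R + d
    return G * M_sun / r**2 + G * M_earth / d**2 - omega**2 * r

# Bisect for the root between 100,000 km and 3 million km.
lo, hi = 1e8, 3e9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if force_balance(mid) > 0:   # gravity too strong -> L2 lies farther out
        lo = mid
    else:
        hi = mid

print(f"L2 is about {mid / 1e9:.2f} million km beyond Earth")
```

The root comes out near 1.5 million kilometers, matching the distance quoted for MAP's destination.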

    Equally critical to MAP's mission is its goal to chart the entire sky, not just the small patches probed from the ground and from balloons. COBE did the same thing, but with a coarse resolution of 7 angular degrees; MAP will resolve details less than one-quarter of an angular degree across. Millions of measurements will lead to a full-sky map of the microwave background during each 6-month orbit around L2. To create that atlas, MAP will sweep the sky as the satellite spins once every 2.2 minutes and precesses on its axis like a top once per hour. The resulting pattern covers about 30% of the sky each hour in an interlaced web, like a Spirograph drawing. Two pairs of reflectors will funnel microwaves into a series of amplifiers from two directions on the sky at the same time. The detectors will measure the difference in the temperature readings but not their absolute values. This “differential” approach, also employed by a COBE instrument, is like putting two kilometer-long rods alongside each other to ascertain that one rod is a few millimeters longer rather than trying to measure each one from end to end.
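    The virtue of the differential approach is that any slow drift in the instrument affects both sky directions at the same instant and cancels in the subtraction. A toy numerical sketch (our own illustration, with made-up temperatures, not MAP's actual pipeline) shows how a temperature difference of a few tens of microkelvin survives a baseline drift thousands of times larger:

```python
import random

# Toy illustration of differential radiometry: a slowly drifting
# instrument baseline swamps absolute readings but cancels when two
# sky directions are measured simultaneously and only the difference
# is recorded. Temperatures here are hypothetical.
random.seed(1)
T_a, T_b = 2.72551, 2.72548    # two sky temperatures, K
true_diff = T_a - T_b          # 30 microkelvin

diffs = []
for i in range(10_000):
    drift = 0.01 * (i / 10_000)          # common-mode drift, up to 10 mK
    # Both directions see the same drift at the same instant,
    # so it subtracts out; only per-sample noise remains.
    reading_a = T_a + drift + random.gauss(0.0, 1e-4)
    reading_b = T_b + drift + random.gauss(0.0, 1e-4)
    diffs.append(reading_a - reading_b)

mean_diff = sum(diffs) / len(diffs)
print(f"recovered: {mean_diff * 1e6:.1f} microkelvin "
      f"(true: {true_diff * 1e6:.1f})")
```

Averaged over many samples, the recovered difference converges on the true 30-microkelvin value even though the drift alone is 10 millikelvin, several hundred times larger.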

    Another challenge for MAP scientists is to account for microwaves from our Milky Way galaxy. Those emissions—streaming from electrons spiraling in magnetic fields, warm gas, and dust—make it harder to determine which microwaves truly journeyed from the early universe. To sort the various kinds, MAP's readings will span five frequencies of radiation, from 22 to 90 gigahertz (wavelengths between 3 and 13 millimeters). Subtle distinctions among the channels should expose the galaxy's contributions. A final strength of MAP will be its ability to yield what Page calls a “true map of the sky.” The observation design leads to an atlas in which each pixel has its own set of readings, uncorrelated with those around it or elsewhere on the sky. That's not the case with other cosmic background experiments, in which the statistical noise at any one pixel must be correlated with many thousands of other pixels. That linkage makes the data cumbersome to analyze.
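    The quoted band edges are easy to verify: converting the 22 and 90 gigahertz limits to wavelength via lambda = c / f reproduces the stated 3-to-13-millimeter range.

```python
# Check of the stated band edges: MAP's frequency range of
# 22 to 90 GHz corresponds to wavelengths of roughly 13 down to 3 mm.
c = 2.998e8  # speed of light, m/s

for f_ghz in (22, 90):
    wavelength_mm = c / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz} GHz -> {wavelength_mm:.1f} mm")
```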

    These careful plans should lead to the new benchmark in the field: microwave charts with uncertainties perhaps 100 times lower than those produced by any previous project. “MAP's map will be a joy to work with,” says principal investigator Charles Bennett of NASA's Goddard Space Flight Center in Greenbelt, Maryland. “It will really nail the cosmological models. … There won't be a lot of wiggle room left.” If all goes well, Bennett adds, MAP may gain an additional 2 years of observing time to clamp down on those wiggles even further.

    For all its prowess, MAP won't monopolize the microwave measurements. Balloon-borne and ground-based instruments will continue to play vital roles in two other areas.

    The first is to probe the microwave signature on ever-smaller angular scales. By focusing on narrow strips of sky, cosmologists intend to tease out the tiny heat fluctuations that reflect the minutiae of physical processes during the big bang and as the universe evolved. For example, the photons we see in the cosmic background were deflected slightly on their way to Earth as they passed by galaxy clusters and other masses. This gravitational lensing averages about 1/60th of an angular degree for each photon, says theorist Wayne Hu of the University of Chicago. Research teams are devising plans for large ground-based reflectors—at least 6 meters across—to home in on that scale, which is about 15 times finer than MAP can resolve. That could provide the most powerful probe of how matter was distributed throughout the history of the universe.
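    The "15 times finer" figure follows directly from the two angular scales in that paragraph: the typical lensing deflection of about 1/60 of a degree, versus MAP's resolution of roughly a quarter of a degree.

```python
# Arithmetic behind "about 15 times finer than MAP can resolve":
# the typical lensing deflection scale versus MAP's resolution.
map_resolution_deg = 0.25        # MAP resolves ~one-quarter degree
lensing_scale_deg = 1.0 / 60.0   # typical deflection per photon

ratio = map_resolution_deg / lensing_scale_deg
print(f"lensing scale is ~{ratio:.0f}x finer than MAP's resolution")
# The same scale in arcminutes, for comparison with telescope specs:
print(f"1/60 degree = {lensing_scale_deg * 60:.0f} arcminute")
```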

    A second enticing frontier is the suspected polarization of the microwave background. Models predict that about 5% to 10% of the microwaves became polarized, largely by scattering off electrons in the young universe. Some of this occurred when the first stars and galaxies lit up, bombarding the surrounding gas with radiation and stripping electrons from hydrogen atoms. By detecting polarized microwaves, researchers might be able to pin down when that happened and what those first luminous objects were like. Another target is the polarization from density ripples in the primordial plasma that produced the microwave background itself. That's likely to be the first type spotted.

    However, no one yet knows whether such studies are feasible. “Polarization is a new game,” says physicist John Ruhl of the University of California, Santa Barbara. For starters, the signals probably are only a tenth as strong as the already faint temperature signals. “It's that much harder to see it and to convince yourself that you're measuring the cosmos, not some instrumental effect,” Ruhl says. Further, polarized microwaves from the Milky Way may shroud the background patterns.

    Distant vision.

    MAP will follow a loopy 3-month path to the L2 Lagrange point, where the satellite will need little fuel.


    An answer is likely to come within a year. Although MAP will not detect the polarization of any one point on the sky, it should be able to draw a statistical correlation between the temperature map it sees and the expected patterns of polarization over the whole sky. “We'll use the temperature fluctuations as a template to ask where we should see more polarization, and where we should see less,” Bennett says. That would help tell ground- and balloon-based detectors where best to look.

    But other experiments may get there first. The Degree Angular Scale Interferometer, a network of 13 microwave collectors run by the University of Chicago at the South Pole, is now collecting data with polarization-sensitive detectors in an attempt to see polarized microwaves directly. In December, Ruhl and an international team will travel to Antarctica to launch BOOMERANG 2001, a follow-on to the most successful balloon-borne mapping mission to date. The new mission, which will stay aloft for at least 10 days, will include polarization detectors. “If the models are correct, we should see it with good signal to noise,” Ruhl says.

    Polarization of the microwave background will drive the field once MAP nails down the temperature fluctuations, says theorist Marc Kamionkowski of the California Institute of Technology in Pasadena. The real push, several years from now, will be to detect a faint but distinctive spiral pattern of polarization in the microwaves imprinted by gravitational waves in the early universe. Those waves, says Kamionkowski, were spawned by the violent expansion of the cosmos in its first fraction of a second—the scenario called inflation.

    The mission with the best chance to see such curlicues is Planck, a satellite slated for launch in 2007 by the European Space Agency (ESA). Planck, which also will orbit around the L2 point, will carry delicate polarization detectors that just might see the inflationary signature. Kamionkowski puts those odds at 50-50—and he hopes that Planck won't be the end of the line. “Before Planck flies, we'll begin serious development of a post-Planck polarization experiment,” he says. “Detecting the gravitational-wave polarization with confidence would be something spectacular. Physicists would leave their particle accelerators to go after that.”

    However, Planck's designers note that its primary purpose is to serve as a much more sophisticated temperature probe than MAP. It will have twice the angular resolution of MAP and about 10 times the sensitivity to temperature fluctuations. “As you go into the kind of precision that Planck will provide, you will achieve a much greater ability to discriminate between cosmological models,” says Planck project scientist Jan Tauber, an ESA radio astronomer in Noordwijk, the Netherlands.

    For now, though, all eyes are turning to see what MAP will reveal about the oldest light in the cosmos. “In some fields of physics, the style of work is one of ultimate precision craftsmanship,” says astrophysicist Craig Hogan of the University of Washington, Seattle. “We're seeing that in cosmology now, and it's very different from the raucous cavalry style of the last few decades.” Still, there's a chance that MAP or its Earth-bound brethren will find something out of kilter in the patterns of the microwaves—forcing cosmologists back on their horses to round up new explanations for our wild universe.


    Peering Backward to the Cosmos's Fiery Birth

    1. Charles Seife

    A generation after the cosmic microwave background sold physicists on the big bang, ever-sharper views of it are filling in the theoretical details

    The universe has walls of fire. No matter where astronomers point their telescopes, they see a distant sheet of light surrounding us. Beyond that enormous shell of radiation, astronomers can see nothing. We are caged in by this surface: the cosmic microwave background (CMB), the faint afterglow of the big bang. The patterns on this surface hold the secret to the birth of the cosmos—and they foretell its death.

    Last spring a balloon-borne experiment called BOOMERANG began to decipher the writing on those walls when it returned the first detailed map of the small-scale patterns in the CMB (Science, 28 April 2000, p. 595). Since then, the discoveries have been coming faster and faster. This April, three teams simultaneously released data providing the finest picture ever of sections of the universe's edge. Starting this summer, the Microwave Anisotropy Probe (MAP) promises to do the same for the entire sky. At the same time, new measurements will look at the CMB with polarized glasses, revealing how space-time creaked and groaned under its enormous load of matter and energy.

    As they wait for the floodgates to open, cosmologists agree that by the end of the decade, theorists may be well on the way to answering some of their deepest questions about what the universe is made of and how it evolved. “There are a lot of weird things in the standard model—dark matter and dark energy, for one thing,” says Matias Zaldarriaga, an astrophysicist at New York University (NYU), who hopes CMB measurements will soon clear up some of the mystery. “If the model hangs together, then we might be able to say interesting things about fundamental physics.”

    Echoes of the big bang

    The discovery of the cosmic microwave background in the mid-1960s gave scientists their first view of the origin of the cosmos. Until then, cosmologists were split between two vastly different models: the “big bang,” which proposed that the universe had exploded into being and then evolved into its present form, and “steady state” cosmology, which held that the universe is continuously under construction, new matter coalescing to fill the voids as galaxies fly apart. Starting in the 1940s, different groups of physicists realized that if the big bang had taken place, microwave radiation left over from the explosion should still be detectable. In 1965, two engineers at Bell Telephone Laboratories, Arno Penzias and Robert Wilson, concluded that stubborn static in their microwave antenna was the afterglow of the big bang. The discovery netted them the Nobel Prize and enshrined the big bang as the reigning model of cosmology.

    Modern big bang theory states that a massive explosion created all the mass and energy in the universe, as well as the fabric of space-time. This fabric inflated rapidly after the cataclysm, but within a tiny fraction of a second the inflation slowed and the universe cooled. Free-roaming quarks began to form protons and neutrons. Within minutes, the temperature dropped enough that some of the protons and neutrons coalesced into nuclei of simple elements. The universe was filled with a plasma of atomic nuclei and electrons—and with light. Whenever an electron tried to combine with a nucleus, a hurtling photon would strip it away; conversely, a photon couldn't get very far before it scattered off an atom trying to coalesce. As a result, light stayed trapped in a cage of plasma until about 300,000 years after the big bang. Then the expansion of space-time cooled everything to the point that the electrons combined with their nuclei. This “recombination” freed light from its confines; the entire universe glowed (see figure, p. 2238).


    MAP's maps of the CMB, simulated here (top), will be 1000 times as detailed as COBE's (bottom).


    The light from recombination still rattles around the cosmos. As the fabric of space-time expanded, the original ultrahigh-energy gamma rays stretched into x-rays, visible light, and now, 15 billion years after recombination, microwaves. The scream of light has become a mere whisper, a faint glow with a temperature 2.7 degrees above absolute zero. This is the CMB.
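    Wien's displacement law confirms that a 2.7-kelvin glow does indeed peak in the microwave band, as the name "cosmic microwave background" implies:

```python
# A 2.7-kelvin blackbody peaks at millimeter (microwave) wavelengths.
# Wien's displacement law: lambda_peak = b / T.
b = 2.898e-3   # Wien's displacement constant, m*K
T = 2.725      # CMB temperature, K

lambda_peak_mm = b / T * 1e3
print(f"peak wavelength: {lambda_peak_mm:.2f} mm")
```

The peak falls near 1 millimeter, squarely within the microwave range that instruments like MAP are built to measure.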

    Theoretical explosion

    Once physicists knew the CMB existed, they set to work figuring out its attributes. They concluded that the background radiation had to look like a “blackbody” spectrum—the sort of light that an object radiates because of its thermal energy. Furthermore, its spectrum must bear the stamp of the tiny mass fluctuations that eventually developed into galaxies and galaxy clusters.

    By the early 1970s, the Russian physicist Iakov Zel'dovich realized that these fluctuations would have a subtle signature. The light and matter in the pre-recombination universe would have been ringing like a bell. Alternately pulled together by gravity and blown apart by intense scattered light, lumps of matter oscillated, compressing and expanding, from the moments after the big bang until recombination finally set the light free. Zel'dovich and others, such as Princeton physicist Jim Peebles, showed that these “acoustic oscillations” should have stamped the CMB with a measurable imprint: a “fundamental” of hot spots each about 1 degree across, sprinkled with “overtones” of smaller hot and cold spots. The only problem was that nobody had any way of measuring such tiny features. In 1990, the Cosmic Background Explorer (COBE) satellite and an unnamed rocket-borne experiment confirmed that the cosmic background spectrum was indeed blackbody radiation. However, even COBE's vision was too blurry to see features smaller than several degrees. Astronomers had to wait another decade to see the acoustic oscillations.

    That wait just ended. Last year, the BOOMERANG balloon experiment saw the 1-degree hot spots, which showed up as a peak in a graph. From the size of the spots, theorists concluded that the universe is “flat” in a four-dimensional sense, rather than curved (see sidebar). More data from BOOMERANG and other experiments—the Cosmic Background Imager, MAXIMA, and the Degree Angular Scale Interferometer (DASI)—found evidence of the second peak and hints of a third (Science, 4 May, p. 823). These peaks not only confirmed the acoustic-oscillation model of the early universe but are also revealing the contents of the cosmos. For several decades, astrophysicists have postulated the existence of dark matter—objects that have mass but don't interact with light very well. By skewing the tug-of-war between light and gravity in the early universe, dark matter would have subtly altered the oscillations in the plasma in much the same way that tightening a guitar string alters its pitch. By measuring the relative sizes of the first and second peaks, scientists have calculated that baryonic, ordinary matter—the stuff of stars and of people—makes up just over 4% of the energy and matter in the universe.

    What about the rest? Combined with observations from galaxy surveys and other sources, the CMB measurements give some rather troubling numbers. They show that about 30% of the stuff in the universe is dark matter. The remaining two-thirds, theorists believe, is a mysterious “dark energy” or “quintessence”—a large-scale antigravity-like effect that is making the universe expand ever faster while keeping it from curling into a saddle shape. Ordinary matter is just a drop in the cosmic bucket. “We may be made of star stuff, but the universe ain't,” says University of Chicago cosmologist Mike Turner.

    Map of the future

    MAP promises to give cosmologists a much better fix on what it takes to make a universe. With its whole-sky coverage and high resolution, the satellite will survey temperature fluctuations in the CMB far more extensively than its predecessors, enabling scientists to take a more detailed inventory of the amount of dark matter and dark energy in the universe. “They will have precision measurements of the matter budget, which enables you to do all sorts of cool stuff,” says cosmologist Max Tegmark of the University of Pennsylvania in Philadelphia.

    Meanwhile, ground-based observers are now doing what even MAP cannot. DASI, PIQUE, and other detectors are searching for an even fainter signal nestled within the cosmic background radiation: polarization.

    When electrons recombined with nuclei, the photons stopped scattering and escaped from their cage of plasma. The cosmic background radiation is an image of the “last scattering surface”—the cloud of plasma that scattered the light for the last time before setting it free. The problem with studying temperature variations in the light coming from that surface is that the photons get stretched and kneaded by their trip across the universe. “The photons climb in and out of dimples in space-time” caused by massive bodies, says John Kovac, a physicist at the University of Chicago, and the passage through those “gravitational potential wells” masks some temperature fluctuations.

    Condensed history.

    Observations and theory reveal the interplay of light and matter in the cooling universe.


    But as the last scattering surface bounced photons into space, it also polarized them. By studying the angles of the polarization of the CMB, scientists can calculate where the light was bouncing from, information that reveals how matter was distributed in the early universe. And because the polarization of light is unaffected by gravitational fields, polarization measurements should be much more precise than temperature measurements, Kovac says: “Polarization is a lot cleaner. It directly probes the last scattering surface.”

    Polarization will also give astrophysicists a tool to probe another era in the early universe: reionization. A billion or so years after the big bang, stars, galaxies, and quasars formed, and the radiation they produced stripped the electrons from atoms once again. This effect should appear as a small peak in the CMB spectrum on large angular scales. Gravitational kneading erases the peak from temperature-variation data, but polarized light should make it visible, Tegmark says. The information will reveal how long ago reionization happened. “It's one of the most exciting numbers that we have no clue about right now, and there's no way we can do that with current measurements,” Tegmark says.

    But scientists are hot on the trail of CMB polarization. Initial results just released by the PIQUE experiment, whose hot-tub-sized detector sits atop the roof of the physics building at Princeton University, show that the instrument is slightly too insensitive to see the expected polarization. But construction of PIQUE's successor, CAPMAP, is beginning, BOOMERANG is being upgraded, and DASI is already taking data. “It's a very exciting race,” says Tegmark. “It's hard to say who the winner will be.”

    CMB polarization might even help cosmologists close in on a long-sought quarry, gravitational waves. The key is a component of polarized light with a spiral quality that mathematicians call curl. “It turns out that density fluctuations cannot produce any curl-type polarization,” says NYU's Zaldarriaga. But inflationary theory predicts that gravitational waves in the early universe would have produced a curl-type component. If scientists studying light from the CMB see a clean curl-containing component—one uncontaminated with false signals from, say, polarized light from galaxies—it will be “a smoking gun for gravitational waves,” Tegmark says. Unfortunately, the curl-type component is extremely weak, so astronomers don't expect to see it for the next few years. But observations, if they come, could make or break inflation theory. “Gravitational waves might tell whether inflation is right, or if something else, like the ‘big splat' [Science, 13 April, p. 189] is,” says Princeton's Peebles. “We can certainly hope so.”

    But even absent such theoretical jackpots, MAP and its CMB-probing cousins seem sure to yield rich payoffs. Merely giving a precise accounting of the mass and energy of the universe could shed light on the nature of dark matter and dark energy in ways that, with luck, will reveal the physics of the universe from before it was a second old. Physicists are ecstatic. “My colleagues at the University of Chicago accuse me of wandering around the halls saying ‘We're in a golden age of cosmology, we're in a golden age of cosmology,’” says Turner. “Well, I'm proudly saying that we're in a golden age of cosmology.”


    Shaping a Universe

    1. Barry Cipra

    What links the cosmic microwave background (CMB) to the grand structure of the universe is the fabric of space-time. But just what is that fabric, and what can CMB measurements tell us about it?

    In Einstein's general theory of relativity, space and time are knit together in a stretchy “manifold”—a mathematical object, every small patch of which looks roughly like a four-dimensional rubber sheet. Light rays follow contours of the manifold, called geodesics. On a flat plane, parallel rays from a distant object will stay the same distance apart as they approach an observer. But on a surface with “positive” curvature, like a sphere, approaching rays will move farther apart, making distant objects look bigger than normal. And on a surface with “negative” curvature, like a saddle, parallel beams will get closer together, making the object look smaller (see figures A).


    Because curved manifolds distort light differently from flat ones, they should also give rise to different sorts of CMB. The 1-degree-wide ripples that BOOMERANG observed were precisely what theory predicted for a flat universe—a conclusion that most physicists fully expect the Microwave Anisotropy Probe's (MAP's) maps to bear out.

    Some researchers hope that MAP will give more specific information about the size and shape of the universe. “When we look at the microwave background, we're basically looking out to the surface of a sphere,” explains David Spergel, an astrophysicist at Princeton University and a member of MAP's science team. If the universe is infinite, that “surface of last scattering” will give few clues about its shape. But if the universe is finite, then space-time—and the scattering surface nestled within it—must bend back on itself. A large enough sphere would then intersect itself in at least one circle, just as a disk wrapped around a dowel overlaps itself at the ends (see figures B).


    In fact, Spergel says, because light can take more than one path through curved space-time, astronomers would see each intersection not once but twice—as paired circles tracing out identical patterns of hot and cold spots in different parts of the sky. Spergel's group in the United States and a group headed by Jean-Pierre Luminet at the Paris Observatory are developing algorithms to look for such signatures in MAP's data.

    Meanwhile, mathematician Jeff Weeks, a freelance geometer based in Canton, New York, has written a computer algorithm that turns paired circles into model universes. Easiest to visualize, Weeks says, is a “toroidal” universe slightly smaller than the surface of last scattering. In a 2D universe wrapped around a torus, he points out, astronomers would seem to see identical points on opposite walls of an imaginary box of space (see figures C). Similarly, astronomers in a 3D toroidal universe would see three pairs of circles in opposite directions.


    Toroidality is just the simplest of 10 different topologies for a “flat” finite universe. If the universe turns out to be curved—which is currently thought not to be the case—then there will be infinitely many more possibilities for Weeks's algorithm to sort through. “We'll start taking a look as soon as any sort of data is available,” Weeks says. If the cosmos cooperates, they may not have long to wait, Spergel says: “In 2 years, we could know that we live in a finite universe.”


    U.K. Agency Spawns Private High-Tech Behemoth

    1. Andrew Watson*
    1. Andrew Watson writes from Norwich, U.K.

    The latest “product” of Britain's Defence Evaluation and Research Agency is actually a high-tech company that will be the country's largest

    NORWICH, U.K.—Mobile phones were the last thing on the minds of scientists at Britain's Defence Evaluation and Research Agency (DERA) as they developed flat-panel displays for tanks and other military hardware. But their seminal contributions over decades to the technology behind today's ubiquitous hand-held devices are exactly the sort of success that British officials are hoping to repeat—again and again—in the commercial world starting next month.

    On 1 July most of DERA will step out from under the umbrella of the Ministry of Defence (MOD) and become the country's largest independent science and technology company in a compromise intended to appease Britain's most important defense ally, the United States. But there are questions about whether the shift ultimately could weaken national security or undercut other U.K. defense R&D contractors.

    The new $1.05 billion entity, called QinetiQ (pronounced like “kinetic”), will be owned by the U.K. government but directed by an independent board. Its mandate will be to develop and market ideas and technology for customers around the world. Its 6500-strong scientific staff is headed by current DERA chief Sir John Chisholm, a former industry executive. The company comprises some three-fourths of DERA, and what's left, to be called the Defence Science and Technology Laboratory (DSTL), will remain in the clutches of the MOD and undertake work deemed too sensitive for a commercial outfit. DSTL will retain the Porton Down chemical and biological research facility, the naval research center at Portsdown, and some of the armaments research facility at Fort Halstead. QinetiQ will be located at Farnborough, south of London, and at Malvern in western England.

    The privatization is intended to unfetter DERA's scientific workforce from its military masters, who had cut research spending in half over the last decade. Its past achievements include novel technologies ranging from carbon fiber, Chobham tank armor, and ground-penetrating synthetic aperture radar to the world's biggest trimaran warship and software to block e-mail distribution of computer viruses. Last year a group of scientists formed DERA's first spin-off company, ZBD Displays, to commercialize a liquid crystal display for mobile phones. The technology, called zenithal bistable display, allows the display to retain its image even when the power is turned off, using less power and, thus, requiring smaller batteries.

    The new company is intended to foster more such ventures. “DERA in the past has been one of those smart-arse organizations that's an expert on how everybody ought to do everything,” says John Mabberley, one-time head of Britain's nuclear weapons program and past chief of DERA's commercialization arm. “But, of course, most people want solutions, not advice.” Mabberley is helping set up the company's U.S. office in space overlooking the Pentagon, building ties to its most important foreign customer.

    Despite being firmly rooted in military research, QinetiQ officials point to several civilian projects already under way as examples of the new outfit's diversity. These include a joint venture with Ford Motor Co. to exploit DERA's holographic imaging technology to create full-scale digital models of prototype cars, and links with British audio technology company NXT to exploit and market flat loudspeaker and speech recognition technologies.

    Catching a wave.

    The RV Triton, the world's largest trimaran warship, is based on technology developed by British military scientists.


    The government's original plan, advanced in 1998, was to privatize all of DERA, which was created in 1995 as an amalgam of several defense labs. That stance stood in stark contrast to that of the previous government, which had insisted that the defense sector should not join a wave of other privatizations. But the plan was scrapped after strong objections from U.S. officials and replaced with the current compromise. “We share some fairly sensitive information on a government-to-government basis, and we were concerned that the same mechanisms would not be in place with a private company” that may compete with U.S. firms, says a former Pentagon official involved in the negotiations. DERA's technical director, Adrian Mears, says he was “surprised” at how sensitive the U.S. has been to issues of information leakage. “But they are an important ally, and we have gone to great lengths to meet their needs.”

    Also no great fan of QinetiQ is the U.K. Parliament's bipartisan Select Committee on Defence, which opposes what it calls “the misguided change of status for DERA.” The committee worries that the new structure might weaken national security by failing to provide the government with adequate independent advice.

    For its part, British industry worries that QinetiQ's close government ties may give it an unfair advantage. “All that DERA has to offer is knowledge,” says Alan Sharman, head of the Defence Manufacturers Association, who points out that DERA is not allowed to make anything. “Is that fair competition for industry, since this knowledge was gained for free at the taxpayers' expense?” At the same time, he says that any U.S. hesitancy to share information with DSTL will make it harder for British companies receiving government contracts to remain leaders in advanced military technology. Indeed, the former Pentagon official expects the U.S. government to proceed cautiously. “If the new company looks like it's trying only to maximize shareholder value by selling its technology to the highest bidder, then the U.S. skepticism will increase,” the official says.

    DERA veterans are upbeat about the transition. They believe that the noncommercial arm will continue to provide top-notch scientific advice to the U.K. government. “It's going to be very much business as normal at DSTL,” predicts Martin Earwicker, DSTL's new chief. They also expect the commercial wing to thrive. “A lot of people within QinetiQ might be nervous about the change, but that's natural,” says physicist Cliff Jones. Jones is part of the five-member team that moved from DERA to ZBD Displays, which hopes its phone displays will become part of every teenager's electronic fashion wardrobe. From his vantage point, going from a government job to one in the commercial sector “is quite a pleasant experience,” he says.


    Canada Bids to Host International Reactor

    1. Dennis Normile,
    2. Wayne Kondro*
    1. Wayne Kondro writes from Ottawa.

    The three partners in the proposed International Thermonuclear Experimental Reactor are weighing a surprise offer from upstart Canada

    TOKYO AND OTTAWA—Canada has made a surprise bid to host the International Thermonuclear Experimental Reactor (ITER), a move that signals the start of serious political jockeying among the three partners over a site for the $5 billion fusion project. If nothing else, last week's offer demonstrates that plans to harness the nuclear fusion process that fuels the sun are moving forward despite a conspicuous lack of interest on the part of the United States.

    ITER began in 1986 as a joint project of the United States, European Union (E.U.), Japan, and the Soviet Union. Over the next decade, worries about the cost and technical feasibility of the project gradually eroded political support in the United States. Although the design phase was extended and the project scaled back, the United States formally dropped out in 1998.

    But the rest of the world is moving ahead. As the design of the scaled-down reactor nears completion, the next big hurdle will be selecting a site and dividing up responsibilities and contributions to build it. The formal proposals will include in-kind contributions, with the host country or region picking up at least 25% of the cost in return for an expected economic and scientific bonanza. The E.U. is expected to offer a site in southern France next to Cadarache, the main French nuclear power research facility, while Japan has three sites under consideration (see map). Russia's continuing economic woes since the collapse of the Soviet Union greatly diminish its chances.

    But Canada's sudden bid has opened up the race. Its offer, presented to ITER partners at their latest meeting on 7 June in Moscow, lifted their spirits. “From the beginning we wanted ITER to be a very broadly international project, so we're very happy to see Canada express such interest,” says Hidetoshi Nakamura, director of the Office of Fusion Energy in Japan's Ministry of Education, Science, Technology, Sports, and Culture.

    Site specific.

    Canada wants ITER on the shores of Lake Ontario (bottom), next to a tritium-making facility; Japan and Europe (above) have other ideas.


    The proposal comes from a coalition of Ontario-based businesses, in particular Ontario Hydro, that set up a nonprofit entity, ITER Canada, to lobby for the site. The national government has declined to put up any money, but it has endorsed the bid. “For Canada, it'll be something of a technological gold mine sitting in the middle of the country,” says Peter Barnard, head of ITER Canada.

    ITER Canada's novel financing mechanism values the site, formerly earmarked for a nuclear power reactor, at $650 million. The site is on the north shore of Lake Ontario, an hour east of Toronto and adjacent to the Darlington nuclear power station. The Ontario provincial government has pledged $200 million over 30 years for operations. And Ontario Hydro will supply the project's tritium needs, at a projected value of $450 million, in return for ITER buying all its electricity from the utility. The proposal also includes a $1 billion bridge loan to cover construction and infrastructure costs, to be repaid by Japan and Europe.
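As a rough illustration (a back-of-envelope tally, not the article's own accounting), the in-kind items listed above can be checked against the 25% host-share minimum mentioned earlier; figures are in millions of U.S. dollars:

```python
# Illustrative tally of Canada's proposed in-kind contributions to ITER,
# using the figures reported in the article (millions of U.S. dollars).
TOTAL_PROJECT_COST = 5_000   # ~$5 billion fusion project
HOST_SHARE_MIN = 0.25        # host must cover at least 25% of the cost

canadian_contributions = {
    "site (formerly earmarked for a nuclear reactor)": 650,
    "Ontario operations pledge (over 30 years)": 200,
    "tritium supply from Ontario Hydro": 450,
}

in_kind_total = sum(canadian_contributions.values())
threshold = TOTAL_PROJECT_COST * HOST_SHARE_MIN

print(f"In-kind total:  ${in_kind_total} million")
print(f"25% threshold:  ${threshold:.0f} million")
print("Meets host-share minimum:", in_kind_total >= threshold)
```

Note that the $1 billion bridge loan is excluded from the tally, since the article says it would be repaid by Japan and Europe rather than contributed outright.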

    Canada also hopes that a site near a major city, Toronto, roughly halfway between Europe and Japan, may help entice the U.S. government to reconsider its role in the project. “The assumption is that the U.S. is going to view ITER more sympathetically if it's in Canada than if it were in northern Japan or central Europe,” says Barnard.

    That's unlikely, however, according to Department of Energy officials, who say that the die has been cast on the United States' role. But Stephen Dean, head of the private Fusion Power Associates in Gaithersburg, Maryland, believes Canada's move could help prove to a skeptical Congress that “the partners are serious about building ITER.”

    ITER officials have their own concerns, starting with the assumption that the host country or region would make a strong long-term financial and political commitment. “In Canada's bid, the country itself is not proposing to take a central role in the project,” says Nakamura. But officials say they are taking the Canadian plan seriously. The proposals are due by the end of the year, says Jean-Pierre Rager, head of fusion activities at the European Commission in Brussels, “and then we will begin to talk.”

    Japan is hoping that its offer to pay at least 50% of the construction costs will make it an attractive suitor. The three sites vying for the prize are Tomakomai, on the northern island of Hokkaido; Rokkasho, in Aomori Prefecture at the northern tip of Honshu; and Naka, in Ibaragi Prefecture about 100 kilometers north of Tokyo. France offered the Cadarache site to the E.U. last year, and Rager says they are now working on the details of the E.U. proposal.

    A decision on the site and conditions for participation is expected in the spring of 2003. If all goes smoothly, construction could begin in 2005 and be completed by 2013.


    Of Ozone, Teapots, And Snowballs

    1. Richard A. Kerr

    BOSTON—Almost 3000 earth scientists of every stratum gathered here late last month for the spring meeting of the American Geophysical Union. Topics ranged from what's happening to the stratospheric ozone over Boston to how mysterious gullies formed on Mars and whether Earth really was one big snowball 600 million years ago.

    Less Ozone Without Pollutant Chemistry?

    Chemicals from refrigerators and air conditioners have long been the prime suspects in the erosion of the protective ozone shield over our heads, letting more and more ultraviolet radiation stream down. But one group presented evidence at the meeting implicating another perpetrator in the loss of stratospheric ozone over the midlatitudes: climate change, possibly the strengthening greenhouse. Researchers still see chlorofluorocarbons (CFCs) as the villains behind the destruction of ozone over the poles every spring, however.

    Based on an analysis of 15 years of satellite ozone measurements, “most of the ozone trends we're seeing [at midlatitudes] aren't due to chemistry,” says ozone researcher Robert Hudson of the University of Maryland, College Park. “I think this is pretty firm.” Others are intrigued but not yet convinced. “I take it very seriously,” says atmospheric chemist James Anderson of Harvard University, “but it needs to be tested.”

    Hudson and his Maryland colleague Alexander Frolov used the downward-looking TOMS ozone sensor on the Nimbus 7 satellite to map out the global pattern of ozone abundance between 1978 and 1992. Typically, there is less stratospheric ozone overhead in the tropics, more at midlatitudes, and the most in the polar region, thanks to a thickening of the ozone-laden stratosphere from one zone to the next. That thickening occurs at the boundaries between the three zones: the subtropical front between tropics and midlatitudes and the polar front between midlatitudes and polar regions.

    When researchers looked at ozone in the northern midlatitudes as they are usually defined—between 25° and 60°—they found that levels fell at a rate of 2% to 4% per decade through the period, the decline leveling off in the mid-1990s. But when Hudson and Frolov defined midlatitudes meteorologically, as the area between the two ever-shifting fronts, they found a very different picture: The amount of ozone overhead did not change at all through 1992. By their count, over time the geographical midlatitudes included more and more meteorologically tropical air, presumably because the subtropical front moved northward. “We think we're seeing a climate change here,” says Hudson. “If we're right, and most of the ozone trend in midlatitudes is due to frontal movement, then even if we cut back the [CFCs], we aren't necessarily going to see a change in ozone trends at midlatitudes.”

    Ozone palette.

    Shifting low (dark blue), moderate (blue-green), and high (yellow-red) ozone may be behind a midlatitude ozone decline.


    The redefinition of midlatitudes is getting a tentative reception. “It's a neat new way, a better way, of looking at ozone trends,” says atmospheric physicist Paul Newman of NASA's Goddard Space Flight Center in Greenbelt, Maryland. However, he cautions, “it's still a little bit early” to conclude that climate rather than chemistry is to blame.

    One thing researchers would still like to see is independent evidence of a northward shift of the fronts. By chance, that's what turned up in a June Journal of Climate paper by climatologist Gregory McCabe of the U.S. Geological Survey in Denver and his colleagues. They tallied the storms that tended to track along the Northern Hemisphere's polar front from 1959 to 1997 and found “a pretty large decrease in the frequency of cyclones in midlatitudes and an increase in high latitudes,” says McCabe. That implies a northward shift in the storm track and in the polar front the storms follow, he says. And that's just what meteorologists would expect to have happened as the atmospheric seesaw known as the Arctic Oscillation drifted to the positive side over the past 20 years (Science, 9 April 1999, p. 241), drawing the fronts northward. Even farther along the meteorological chain of causation, some climate models have the Arctic Oscillation under the sway of increasing greenhouse gases. In that case, human activity could still be behind the decline of midlatitude ozone, but through climate rather than chemistry.

    An Alternative to Snowball Earth

    When a proposed scenario for a chapter of Earth's history is so catastrophic that most researchers balk at the sheer audacity of it, a milder alternative won't be long in coming. At the meeting, three researchers put forward just such an alternative to snowball Earth, the 10-million-year-long cold spell that some earth scientists believe iced over land and sea 600 million years ago. The group proposed tossing the snowball and substituting a slush ball. According to this milder view, it was cold in the tropics— indeed, permafrost was pervasive there—but not so cold that the ocean froze over, and life would never have been in danger of extinction. “They're starting to make a case” for milder glacial episodes in Earth's Neoproterozoic era, says paleontologist Bruce Runnegar of the University of California, Los Angeles.

    In the prevailing picture of snowball Earth, ice averaging 1 kilometer thick sealed off the ocean from the atmosphere. It took 10 million years of volcanic outgassing to rescue the planet, building up enough carbon dioxide to create a supergreenhouse that eventually melted the ice. The rigors of the snowball, according to this view, may have triggered the evolution of multicellular organisms. But paleontologists have wondered how life could have withstood these rigors, and climate modelers have complained that the more realistic the models they used, the more difficult—even impossible—it was to freeze over the model globe (Science, 26 May 2000, p. 1316).

    Avoiding these perceived difficulties, as well as others, geochemist Martin Kennedy of the University of California, Riverside, and geologists Nicholas Christie-Blick and Linda Sohl of the Lamont-Doherty Earth Observatory in Palisades, New York, reinterpret two key observations that underlie the snowball Earth hypothesis. The first is a dramatic enrichment of carbon's lighter isotope at the time of tropical glacial deposits. Geologist Paul Hoffman and geochemist Daniel Schrag of Harvard University had linked this abrupt isotopic shift to the sudden mixing of atmospheric, isotopically light carbon dioxide from volcanoes into the ocean as its ice cover melted. But Kennedy and his colleagues attribute it to the release of microbially produced, isotopically light methane gas long frozen in permafrost. In their scenario, as glaciers melted, rising seas would have flooded the land, warmed the icy methane hydrates locked in the permafrost, and released their methane into the shallow seas. Kennedy notes that light carbon is produced in arctic permafrost in quantities that, if extrapolated to Neoproterozoic glacial times, would account for the carbon shift. And bursts of methane from subsea hydrates have been linked to isotopic shifts in younger sediments (Science, 18 August 2000, p. 1130).

    Their second reinterpretation concerns the odd carbonate sediments lying above the glacial deposits. Hoffman and Schrag argued that these “cap carbonate” sediments were precipitated from the supergreenhouse's carbon dioxide after it weathered continental rock and washed into the sea. But Kennedy and his colleagues believe they were produced by biological oxidation of the released methane. They think they can see traces where gases—presumably methane—rose through now-filled tubes in the cap carbonate from glacial deposits below. They also see disruption of sediments by gas flow and sediments once glued together by mucilaginous mats of microbes. In fact, it appears to them that cap carbonates most resemble deposits that form on the sea floor today where methane seeps up to be oxidized by microbes at so-called cold seeps.

    The methane scenario got a mixed reception. “I think it's a much more viable mechanism myself,” says geologist Maarten De Witt of the University of Cape Town in South Africa. “It's a very good way to produce a negative carbon [isotopic] spike.” But the particulars of cap carbonates came in for more criticism. “I've seen a lot of fossil seeps,” says paleontologist David Bottjer of the University of Southern California in Los Angeles. “From a distance, it's a reasonable guess these cap carbonate structures are seep-related, but on closer inspection it's likely not seeps at all.” Tiny layers across the tubes might have been formed by microbial mats blocking gas from seeping through, he and others suggest. Proving they were seeps will probably require finding the powerful isotopic signature of methane in carbonate cements laid down directly from escaping gas. So far, no one has seen that proof.

    Gullies and Lobes on Mars

    The gullies of Mars, which have been appearing in images sent back from Mars Global Surveyor since 1998, are proving difficult to explain. The initial suggestion that they formed by water gushing out of the rock from subterranean aquifers seems unlikely, given the current deep freeze on the planet's surface; dry ice deposits bursting out seem too exotic, melting snow too prosaic. Now comes the giant teapot hypothesis.

    A study of the gullies of Dao Vallis by planetary scientist Martha Gilmore and undergraduate Avi Stopper of Wesleyan University in Middletown, Connecticut, conjures up that analogy. Dao Vallis is a good area for gully spotting: Gullies turned up in 23 of 31 high-resolution images returned by Mars Global Surveyor, and the 600-kilometer-long valley's varied geology also allowed them to compare gully formation in different types of rock in the same background climate. Running down the valley, the exposed rock grades from the heavily layered lavas of the volcano Hadriaca Patera, through a mix of sediments, layered lavas, and impact debris, to nothing but jumbled impact debris near the great Hellas impact basin.

    Gullies, it appears, much prefer layered rock. Gilmore and Stopper reported that they are common in the layered volcanic rock, less abundant in the area of mixed geology, and absent in the unlayered impact debris. And even within the volcanic rock, gullies prefer one or two particularly solid-looking layers. Gully formation, Gilmore and Stopper conclude, requires an impermeable layer of the sort volcanic lavas can provide but impact rubble cannot.

    Gilmore and Stopper also see a relation between layering and another mysterious martian feature, the so-called “lobate structures,” which have been interpreted as ice or ice-rich material. Lobate structures lie in alcoves as gullies do and appear to flow down from the same layers as gullies, sometimes side by side with them. Lobate structures seem to prefer valley walls with the least exposure to the sun, while gullies are more common on the sunniest, and therefore warmest, exposures.

    Drips from a teapot?

    Water distilled from deep within Mars may form gullies like these in Dao Vallis.


    All this reminds one of a teapot, albeit a chilly one. Water kilometers below the martian surface is gently warmed by the planet's meager interior heat, Gilmore and Stopper note. What little vaporizes could then diffuse upward through pore space toward the top of the “teapot.” There it would condense and freeze near the surface. During times of climatic warmth, this ground ice could melt and percolate downward. If this water encounters an impermeable layer below, it could move laterally along it—as if out the teapot's spout—to a valley wall. If further warmed by the sun, the water might erode the wall to form an alcove and carry the debris downslope to form the gullies. If the sun weren't up to keeping the water liquid, it could freeze at the wall to form an icy lobate structure. The teapot mechanism could even work where aquifers are unlikely to feed gullies, note Gilmore and Stopper, such as on the central peaks of impact craters.

    The teapot mechanism of gully formation “is a viable alternative” to the aquifer mechanism (Science, 30 June 2000, p. 2295), says planetary scientist James Zimbelman of the Smithsonian Institution's Air and Space Museum in Washington, D.C. It is also an alternative to snow nestled on the valley wall simply melting in the spring and washing downward (Science, 6 April, p. 39). But it isn't everybody's cup of tea. It leaves open the question of how the climate ever gets warm enough for water to approach the martian surface, notes planetary scientist Stephen Clifford of the Lunar and Planetary Institute in Houston. Periodic extreme tilting of Mars to warm the summers is the most promising idea so far.

    Viable gullying alternatives aside, says planetary scientist John Mustard of Brown University in Providence, settling the origin, or origins, of martian gullies seems a distant prospect. Soon “we're going to have a half-dozen views equally supported by the observations,” he says. Then researchers will have to ask themselves, “What data do we need to resolve it?”


    Tracking Icebergs for Clues to Climate Change

    1. Kevin Krajick*
    1. Kevin Krajick is the author of Barren Lands: An Epic Search for Diamonds in the North American Arctic.

    To learn more about climate change at the poles, daredevil scientists are getting up close and personal with icebergs to follow their every move

    In January, University of Chicago glaciologist Douglas MacAyeal and two colleagues took off by helicopter from the icebreaker Polar Sea and sped toward a massive iceberg in Antarctica's Ross Sea. Like a wandering frozen mesa with a sheer 50-meter drop to the water, the perfectly white, flat berg, dubbed B15A, has its own weather system of high winds, blowing snow, and fog—a pilot's nightmare. When the researchers landed in the middle of this icy expanse, the ocean was nowhere to be seen. “You can't be someplace flatter or more alien,” says MacAyeal.

    This behemoth started as part of the even more monstrous B15, which calved in March 2000 off the Ross Ice Shelf and subsequently broke up. At 11,000 square kilometers, B15 was the largest iceberg ever reliably measured. The researchers hastily set out automatic weather stations and global-positioning trackers at three sites on B15A—the most complex array of instruments ever deployed on an antarctic berg—then fled.

    MacAyeal and his colleagues' touch-and-go trip to wire up B15A is part of an intensifying interest in icebergs, for the lives and deaths of these glacial frigates provide windows to potential changes in ocean circulation and climate. Researchers are closely watching an alarming increase in the calving of bergs in outlying parts of Antarctica —perhaps a result of climate change—for signs that the trend might presage a rapid breakup of the continent's largest ice shelves like the Ross, which lie in the deep south. Tidewater glaciers in the Northern Hemisphere, from Alaska to Greenland, also seem to be calving faster.

    Cold warriors

    Iceberg research is a venerable tradition. After one sank the Titanic in 1912, the International Ice Patrol formed and began to track as many as 2000 a year coming down “Iceberg Alley” from west Greenland, first by ship and now with airplane-mounted radar. It was Soviet researchers, however, who pioneered true iceborne science, setting up 31 Arctic Ocean ice camps from 1937 until the collapse of the Soviet Union in 1991. Among their accomplishments were invaluable long-term measurements of arctic weather, seminal studies of how sea ice forms, and maps of the arctic sea floor.

    Gyring around the North Pole on circular currents, the stations were occupied for up to 8 years; many were actually on sea-ice floes, not glacial bergs, but scientists used bulky bergs from the Canadian archipelago whenever possible, for flimsier sea ice often lasted just months, ending with ominous crackles and watery chasms opening amid camp. One of the longest-lived, station NP8, lasted from 1959 to 1962, and the occupants twice scrambled to drag buildings to new locations when giant cracks fractured their aircraft runway. Finally, when there was nowhere left to go, they were hastily evacuated from a makeshift airstrip, leaving most of their equipment behind.

    Fearing that the Soviets were using the ice for hostile purposes such as listening for U.S. submarines, the Americans in 1952 set up “Fletcher's Ice Island,” a drifting ice-shelf chunk used off and on until it was spit into the Atlantic in the 1970s. And they raided their adversaries' sites. According to the book Project COLDFEET by William Leary, after the Russians evacuated NP8, Navy geophysicist Leonard LeSchack secretly parachuted in to rifle their camp for clues to what they were up to. “What a horror!” wrote LeSchack. “There was dried blood all over, and animal carcasses, including dog[s].” On the other hand, he said, the Soviets had an “extremely complete” program of oceanography and meteorology and gear “superior to U.S. drift station[s].” In a scene straight out of a James Bond movie, LeSchack escaped by floating a balloon above the ice and harnessing himself to the bottom of its tether. A B-17 bomber swooped in, grabbed the tether with a hook in midair, and winched the triumphant if breathless geophysicist to safety.

    Riding the behemoth.

    Douglas MacAyeal (right) and Jonathan Thom install an instrument tower that enables the remote tracking of iceberg B15A.


    The drift stations are long gone, but their hundreds of thousands of daily measurements, including air and water temperatures, precipitation, and solar radiation fluxes spanning 6 decades, are now being declassified and collated by both countries. As the only such long-term data from the central Arctic, they offer a baseline for modern measurements aimed at studying changes in far northern climate and sea ice. “It's a huge treasure,” says Mark Serreze, a climatologist at the National Snow and Ice Data Center (NSIDC), who is plugging in old handwritten data on solar radiation and wind to his own models of climate change. In cooperation with Russian colleagues, the center is issuing CD-ROMs of the data, including a chronological atlas of arctic sea ice, available this year.

    Sophisticated new satellites and automated buoys have partly made up for the stations' demise. But “there is no adequate substitute for people on the ice,” says Vladimir Radionov, chief meteorologist of the Russian Arctic and Antarctic Research Institute. “They could carry out experiments everywhere from the sea bottom to the ionosphere.” This included sending up twice-daily weather balloons and lowering instruments into the deepest water—feats even modern technology cannot duplicate.

    Currents, surf, and sails

    Despite past experience, much about icebergs remains mysterious, including what combination of currents, tides, and winds impels them. Because of their deep keels and low profiles, it has long been assumed that subsurface currents do most of the steering; thus icebergs are considered useful proxies for studying such currents. Indeed, many of the estimated 1500 cubic kilometers of icebergs breaking off Antarctica yearly appear to end up swirling in a powerful counterclockwise coastal current.

    Big ones are relatively easy to follow. When one designated B9 broke off the Ross in 1987, satellites showed it moving some 2010 kilometers before stranding in 1991 in a vast shallow-water “iceberg graveyard” below western Australia where great ranks habitually stick fast; a decade later most of it is still there. Last year Neal Young of the Australian Antarctic Division tracked another piece of B9 to the continent's other side—a journey of some 10,000 kilometers. But Young says the picture is incomplete. Satellites show others spinning out of invisible “exits” from the coastal current to seas farther north, perhaps carried on regional gyres; and when some break up, fragments go chaotically in opposite directions, ending up hundreds of kilometers apart. In places like the enclosed inner Ross Sea, MacAyeal believes the main force may not be currents at all but long-length waves created by daily tides, which the flat bergs ride like lumbering surfboards.

    Cold warriors.

    The Soviet Union's NP8 drift station before it was abandoned in 1962 and later visited by a U.S. Navy geophysicist.


    Probably half the total mass of antarctic icebergs is in fragments a kilometer square or less, notes Rupert Gladstone, a geophysicist at the University of East Anglia in Norwich, U.K. Tracking these individuals is hard. To sample their movements, Eberhard Fahrbach, a senior scientist at Germany's Alfred Wegener Institute for Polar and Marine Research, has helicoptered to 30 in the past 3 years, planting global-positioning trackers on top—a daunting undertaking, as many small bergs are riddled with crevasses and prone to capsizing. As a result of their work, Fahrbach's co-worker Christoph Lichey will report in the Journal of Glaciology that some bergs embedded in sea ice appear to be steered directly by winds, previously thought to play a minor role. He says the sea ice acts like a horizontal sail, sending the bergs in unexpected directions.

    Where icebergs large and small go is a matter of more than academic interest. For one, if gigantic B15A ends up in the wrong place next January, it will close the brief shipping season to McMurdo Station, which cannot subsist without the fuel, heavy equipment, provisions, and other supplies brought by ship. And the berg is now blocking access to major nesting sites of Emperor and Adélie penguins, says David Ainley, a penguin expert with H. T. Harvey & Associates, a California ecological consulting firm. The penguins migrated to sea just before it arrived but will return between October and December. “It will be interesting to see what they do if it's still there,” says Ainley.

    Global meltdown?

    While some glaciologists are tagging and tracking bergs, others are keeping an eye on an alarming trend: sudden, explosive calving in parts of Antarctica. The fear is that if this continues, it may hasten the death of glaciers at an unanticipated rate.

    On the Antarctic Peninsula, which extends well north of the main landmass, average temperatures over the past 55 years have risen more than 2.5°C; apparently as a result, 1300 square kilometers of the relatively small Larsen A Ice Shelf simply fell apart into thousands of tiny bergs over the course of just a few days in 1995 without any particular warning (Science, 9 February 1996, p. 788). By contrast, the calving of B15, from the much colder Ross Ice Shelf, was preceded by years of preliminary cracking and is considered to be part of a longtime, regular process. David Vaughan of the British Antarctic Survey says the fast, piecemeal dissolution on the Larsen A has continued and moved rapidly south to the huge Larsen B and Wilkins shelves. “They are not calving big, healthy, periodic bergs like B15, and that is the whole point,” he says. “It is not business as usual.”

    It is not clear exactly what is triggering the explosive breakups. They are too rapid to be explained by temperature rises alone, says the NSIDC's Ted Scambos; the ice, hundreds of meters thick, cannot soften that fast. Instead, he surmises that summer temperatures are now high enough to form melt pools on the glacial surfaces, which percolate rapidly into small weaknesses to form crevasses. Once a complex of crevasses hits sea level, seawater rushes in, refreezes, and the mass blows apart.

    What will happen next is pure speculation. If the massive breakups continue, the speed of some glaciers' seaward slides could easily double. Vaughan believes it would take 200 years or more for the much bigger southerly ice shelves to be affected, but Scambos is not so sure; he asserts that the currently stable Ross could begin disintegrating within just 50 years. Worrying trends are already appearing in the Northern Hemisphere. Some of Alaska's famously scenic tidewater glaciers like the Muir are fast becoming alpine glaciers; rapid melting over the last 100 years and a quintupled calving rate since 1982 have pulled the leading edges of some glaciers back through fjords 80 kilometers or more, onto dry land. And in 1999 the International Ice Patrol saw only 22 bergs come down Iceberg Alley, raising dire warnings that global warming was about to do away with the phenomenon of icebergs altogether. However, in 2000 the denizens of Iceberg Alley came back in force—over 800 of them—showing that the warnings were off base. Ice Patrol oceanographer Don Murphy says shifting winds had probably pushed the icebergs out of shipping lanes the previous year. Unlike his colleagues working in the Southern Hemisphere, Murphy has no plans to land on any of these bergs. “Way too dangerous,” he says. “We're trying to keep people away from them, not get closer.”

  17. Finding New Ways to Fight Plant Diseases

    1. Anne Simon Moffat
    1. Anne Simon Moffat is a freelance writer in Chicago.

    With plant pathogens and the insects that spread them exploding worldwide, researchers must continually search for better ways to protect plants against infectious diseases

    In March of this year, the city of New York dedicated a memorial park to a scourge that starved hundreds of thousands of people and spurred a huge wave of immigration to the United States: the Irish potato famine of 1845 to 1847. A fitting reminder of the kind of devastation unlikely to occur in today's world of high-tech agriculture? Not exactly. “Worldwide, plant diseases with the potential to wipe out crops are exploding,” says Cornell University agricultural scientist David Pimentel.

    There are several reasons for this explosion. Many plant pathogens and the insects that often spread them have overcome the pesticides, agricultural practices, and biocontrols that once held them in check. At the same time, some effective chemicals, such as the fungicide methyl bromide, are being banned because of environmental concerns. And efficient global travel is spreading viral, bacterial, and fungal plant pathogens into new areas, while global warming is allowing insect vectors to expand their ranges.

    For example, Phytophthora infestans, the funguslike organism responsible for the potato famine, caused devastating losses when it was transported on fruits and vegetables from northwest Mexico to Canada and the northwestern United States in the early 1990s. And geminiviruses, once a major threat mainly in Africa and the Middle East, have wiped out vegetable crops in the Caribbean and Florida (Science, 3 December 1999, p. 1835). “Few people realize that our ‘scientific advantage' in agriculture is dwindling rapidly, and pest problems are as serious today as they were 100 years ago,” says University of California, Berkeley, plant pathologist Milton Schroth. And like mid-19th century Ireland, developing countries with no food surpluses are especially vulnerable, Schroth points out.

    Plant researchers are struggling to maintain their scientific advantage, often in the face of public opinion wary of genetically modified foods. On the scientific level, they are achieving success on several fronts. They are developing an armory of new weapons, from high-tech genetic engineering, to techniques that induce a plant to turn up its own defense mechanisms, to relatively low-tech cultivation strategies aimed at disrupting plant-pest interactions. “Disease resistance is turning out to be a lot more complicated than we thought,” says Cornell plant pathologist William Fry. Still, he says, there are “hundreds of opportunities for reducing disease” waiting to be explored.


    The papaya trees at right, which carry a gene from papaya ringspot virus, resist infection, while the unmodified trees at left are susceptible. The bottom image shows fruit from the modified papaya strain.


    The best defense

    The first line of defense against plant diseases is natural resistance, which can often be transferred between species by crossbreeding. But plant breeding is time-consuming, and microbial pathogens evolve quickly to overcome breeders' painstaking work. “Durable resistance is elusive,” says University of Florida, Gainesville, plant pathologist Ernest Hiebert. By allowing researchers to introduce genes directly into plants, genetic engineering has offered a new tool for staying a step ahead of rapidly evolving pests. But it has also sparked a huge international backlash and introduced a new term into the lexicon of public protest: GM (genetically modified) foods.

    Until this backlash gathered force in the late 1990s, GM crops made rapid progress. The most prominent development involved the transfer of a gene encoding an insect-killing protein from the bacterium Bacillus thuringiensis (Bt) into crop plants, including corn, cotton, and potato. Bt-transformed crops are now widely planted, particularly in the United States, resulting in reduced application of chemical insecticides for some crops, such as cotton.

    Among the specific concerns raised by Bt-modified crops is that insects that feed on the plants will become resistant to the toxin. Researchers are already trying to head off that possibility. Different strains of Bt produce different toxins, and entomologist Leigh English of Monsanto Corp in St. Louis, which has pioneered the technology, says the company is transforming crop plants with several different Bt toxin genes at once. Resistance to such multiple toxins is unlikely to develop rapidly.

    Other companies, such as Ecogen in Langhorne, Pennsylvania, and Syngenta (formerly Novartis) in Greensboro, North Carolina, are also developing products that combine genes from different Bt strains to increase activity against key pests, including the armyworm and cotton bollworm. Farmers are also encouraged to plant unmodified plants as insect refuges. These steps will “allow us to stay ahead of the possibility of insects developing resistance to Bt,” English says.

    Modification with Bt genes makes plants toxic to insects that eat them. Another strategy, which has already saved one crop from a devastating infection, aims to boost a plant's resistance to specific disease-causing pathogens. It involves a process somewhat akin to vaccination: revving up the plant's own defenses by exposing it to the pathogen's proteins. In this case, researchers stitch bacterial or viral genes, or their fragments, into the plant's genome.

    Exactly how this produces resistance has been somewhat mysterious, but within the past 2 or 3 years, David Baulcombe of the Sainsbury Laboratory in Norwich, United Kingdom, Herve Vaucheret of the Laboratory of Cellular Biology in Versailles, France, and others have identified a likely mechanism. They have shown that, in ways that are not yet completely understood, the viral genes trigger a plant defense mechanism that degrades viral RNAs, thus disabling the infectious agent (see Review by Vance and Vaucheret on p. 2277).

    Researchers, such as Roger Beachy—who pioneered pathogen-derived resistance, first at Washington University and now at the Donald Danforth Plant Science Center, both in St. Louis—have transformed various plants with genes from many of the 40 or more families of plant viruses. This work has produced melon, squash, tomato, tobacco, and papaya strains that are protected from a variety of viral diseases. Indeed, a modified strain of papaya produced by Dennis Gonsalves and Carol Gonsalves of the New York State Agricultural Experiment Station in Geneva, New York, and colleagues from the University of Hawaii and Upjohn company has essentially saved the Hawaiian papaya industry.

    In the mid-1980s, the researchers began attempting to modify papaya with a gene from ringspot virus, a pathogen for which there is no known natural resistance. Their initial field trial showed success in 1992, the year the ringspot virus started causing widespread damage to Hawaiian papaya trees. Now, about 60% of papaya trees in the state are protected by this means.

    Last year, the Gonsalves team took another step forward. They introduced a chimeric transgene, containing sequences from the turnip mosaic and tomato spotted wilt viruses—two of the top 10 viral pathogens of vegetables—into tobacco. As reported in the August 2000 issue of the Journal of General Virology, the resulting plants became resistant to both viruses—the first time researchers achieved multiple virus resistance by transferring viral genetic material. Others are hot on their heels.

    In a paper published electronically on 7 May by the Journal of Biological Chemistry, Claude Fauquet of the Danforth Center and his colleagues describe results that may lead to a way to make plants resistant to multiple geminiviruses. About 150 sorts of these viruses are plant pathogens, causing, for example, tomato yellow leaf curl and tomato mottle diseases. Hoping to find a viral component that might protect against many, if not all, of these viruses, the Fauquet team turned to the gene encoding a protein needed for the replication of geminivirus DNA.

    The researchers found that a truncated version of the gene could, when introduced into plants, inhibit the replication of the geminivirus from which they obtained the gene. But it also inhibited replication of three other geminiviruses, although to a lesser degree. This suggests that a few transgenes could disrupt replication of a wide range of geminiviruses. “We're on a search for a universal strategy for controlling plant viruses,” says Fauquet.

    Plant engineers have achieved their earliest successes against viruses, but they are now turning their attention to bacteria and fungi, with some promising results. As reported in the November 2000 issue of Nature Biotechnology, William Kay and Santosh Misra of the University of Victoria, British Columbia, and their colleagues engineered potatoes with a chimeric gene encoding segments of two insect proteins: a cecropin, an antimicrobial peptide made by moths and other organisms, and melittin, a component of bee venom. The plants proved resistant to the potato blight fungus, and the harvested tubers were also protected against a bacterium that causes “soft rot” in stored potatoes (Science, 16 March, p. 2070). Feeding the modified raw potatoes to mice produced no obvious toxic effects.

    Genetic options

    The anti-GM movement has focused its opposition largely on the transplanting of genes from entirely different organisms into crop plants. Some researchers are trying what may be a less contentious strategy: tinkering with a plant's own disease-resistance genes. For example, plant pathologist Brian Williamson and his colleagues at the Scottish Crop Research Institute in Dundee have found that polygalacturonase-inhibiting protein (PGIP) can control the gray mold Botrytis cinerea, which is the most serious cause of disease in ripe raspberries. The problem is that PGIP is normally expressed only in immature green fruit.

    The researchers would like to alter the PGIP gene's regulatory sequences in raspberry so that the protein would be made all the time, but so far they haven't been able to produce such modified raspberry plants. They have, however, introduced the raspberry PGIP gene, with a regulatory element from cauliflower mosaic virus that keeps the gene continuously active, into chickpea—a valuable crop in India, the Middle East, the southern Mediterranean, and Australia that is vulnerable to fungal diseases. “The germ plasm has been exported to our partners at ICRISAT [the International Crops Research Institute for the Semi-Arid Tropics in Patancheru, Andhra Pradesh, India] for testing,” says Williamson. If successful, it would be a significant breakthrough, he says, because plant breeders have worked without success for many years to make this nutritious legume resistant to gray mold.

    Researchers have numerous other candidates for such genetic manipulations. In the last decade or so, they have isolated several dozen disease-resistance genes (defending against, for example, tobacco mosaic virus and Pseudomonas syringae) from the experimental plant Arabidopsis and from about a dozen crop plants, such as potato, tobacco, flax, rice, and tomato. Work by plant molecular biologist Barbara Baker at the U.S. Department of Agriculture (USDA) lab in Albany, California, and others has shown that, in many cases, disease-resistance genes from different species are similar. This suggests that the resistance mechanisms they encode have been evolutionarily conserved and may be general.

    At this point, however, it's too soon to tell whether it will be possible to beef up disease defenses in crop plants by giving them additional copies of resistance genes or by altering the genes' regulatory sequences to make them more active. Some practical problems remain to be solved. For example, one source of unwanted variability in transgenic plants is the unpredictable placement of transgenes. Also, unknown point mutations generated during tissue culture, which is needed to grow transformed cells into whole plants, can induce unexpected traits or loss of desirable traits.

    Meanwhile, an alternate approach to crop protection is showing promise: the use of externally applied chemical stimulators that elicit plants' natural defensive mechanisms. These include the so-called hypersensitive response (HR), which causes cells to die in the immediate vicinity of infection sites, thereby preventing further pathogen spread, and systemic acquired resistance (SAR), which begins with necrotic lesions but also activates a signaling system that markedly reduces disease symptoms after subsequent infection.

    Researchers have begun to identify compounds that elicit these defenses. Steven Beer and his colleagues at Cornell found that Erwinia amylovora, the bacterium that causes fire blight in apple and pear trees and related plants, encodes harpin, a glycine-rich protein that elicits HR and SAR against pathogens and insects. As an added benefit, it also enhances growth. The harpin technology was licensed to Eden BioScience in Bothell, Washington, and last summer its harpin product, known as “Messenger,” went on sale for use on a broad array of plants, including strawberry, cotton, and tomato.

    In a related strategy, Alison Tally and others at Syngenta took a page from the drug developers' book. In 1989, after screening at least 40,000 compounds for their ability to trigger defense mechanisms in plants, they identified an isonicotinic acid derivative, known as “Actigard,” that induces SAR. Once activated by this compound, a plant's defense mechanisms may remain active for many weeks, even when the inducer is degraded, says Tally. Actigard was registered in August 2000 by the U.S. Environmental Protection Agency and is being introduced into the marketplace to prevent bacterial spot and speck infections of tomatoes, downy mildew on spinach, and blue mold on tobacco.

    An ecological approach

    After bolstering plants' own defenses, the next best strategy for protecting crops is managing the environment to give plants a competitive edge over pathogens. This may involve using biocontrols, organisms or their products that can kill plant pathogens, or tinkering with physical aspects of crop growth, such as altering light, temperature, or the times of planting and harvesting.

    The biocontrols include Bt, both the whole organism and the toxic proteins, which were sprayed on crops to control insects for 50 years before researchers began genetically engineering the toxins directly into plants. Also, growers have had decades of experience with pheromones, chemicals that elicit certain behavioral responses from insects. For example, traps containing pheromones that attract insects searching for a mate are used to determine insect populations and the need for pesticides.


    More recently, a new product, called CideTrak, which was developed by scientists at the USDA and Trece Inc. of Salinas, California, couples an insect feeding stimulant with a low-dose pesticide. When sprayed on corn, this preparation kills adult corn rootworms, preventing egg-laying in the soil. This holds down next year's population of ravenous worms, thus reducing by 90% the amount of pesticide needed to control the rootworm.

    Whole organisms can be used as insecticides as well. These include baculoviruses, which are safe for nontarget insects, humans, and the environment, and insect-killing fungi, which infect a broader range of insects than do other microbes. The largest use of baculoviruses is in Brazil for protection of soybeans against the velvetbean caterpillar. The fungal insecticide Beauveria bassiana is used on a moderate scale in eastern Europe and China to control the silverleaf whitefly, which infects hundreds of plants.

    Cultivation practices can also help control plant diseases. For example, vegetable growers have begun adopting a strategy long known to wine enthusiasts. Some of the best grapes are grown by grafting vines that produce superior fruit onto the rootstocks of varieties that produce less desirable grapes but are resistant to soil-borne pathogens. In southern Europe, tomatoes are grafted onto the rootstocks of eggplant and pepper, and melons are grafted onto cucumber rootstocks, often with the aid of machines. One thing that helps make this feasible is cultivating plants that produce over longer periods. “It is now possible to grow productive tomato vines to a length of more than 20 feet [6 meters],” says plant virologist Jane Polston of the University of Florida, Bradenton.

    Even plastic, generally thought to be detrimental to the environment, can be the farmer's friend. For decades, some growers have protected vegetables with reflective plastic mulch, which repels many insect pests, such as troublesome whiteflies and aphids. Lately, there have been some new entries to this field. For example, Yehezkel Antignus and his colleagues at the Volcani Center in Bet Dagan, Israel, are testing ultraviolet (UV)-absorbing, clear plastic films to protect greenhouse crops against virus-carrying insects, such as aphids, whiteflies, and thrips.

    Kibbutz workers first noticed that tomatoes grown under such plastic, used to protect them from sunburn, escaped viral diseases. A study of insect behavior beneath the plastic revealed why: It disrupts the spread of microbes carried by insects that need UV light to navigate. “By interfering with vision behavior, contact between the vector and the plant may be prevented, and, therefore, virus spread is decreased,” says Antignus.

    Clear plastic can also help when used to cover the soil for 3 to 6 weeks before crops are planted. During this period, temperatures can rise to about 50°C, enough to kill some fungal spores, weed seeds, and nematodes. The technique has been used in Florida to protect vegetables and ornamentals, in Egypt and Italy to protect tomatoes and carnations, and in Italy and Turkey to protect peppers. In some cases, this technique replaces synthetic chemicals that are being banned or restricted, says plant pathologist R. J. McGovern of the University of Florida, Bradenton.

    But despite the wide range of pest-control strategies under development, researchers know that they, like the Red Queen in Through the Looking-Glass, have to keep running just to stay in place. History has shown that no control, no matter how clever, is immune to pest evolution. “The battle against crop pests is ongoing, with short periods of relief when science temporarily gains the advantage,” says Berkeley's Schroth. Adds plant pathologist Carolee Bull of the USDA in Salinas, California, who has just begun studies to learn why some organic farming techniques succeed while others fail: “There is an enormous amount of work to be done.”

  18. The Push to Pit Genomics Against Fungal Pathogens

    1. Elizabeth Pennisi

    Despite the damage done by these serious plant pests, researchers have barely begun sequencing their genomes

    Ask a plant pathologist to draw up a “most wanted” list of dangerous microbes, and, chances are, many fungi would be near the top. Fungi and funguslike organisms called Oomycetes destroy crops, kill trees, ruin lawns and golf courses, and contaminate foods and animal feed with deadly toxins. Their dirty work causes some 10,000 different diseases in plants alone. Notorious examples include the late blight of potatoes, which caused the great Irish famine in the 19th century, and Dutch elm disease, which wiped out many elms in the United States in the mid-20th century. And now another Phytophthora species, a relative of the late blight potato pathogen, is taking out century-old live oaks in California.

    Given this trail of destruction, most plant pathologists would put fungi near the top of another most wanted list: microbes whose genomes should be sequenced. Determining the complete sequences of pathogenic and nonpathogenic fungi could be a big help in determining what makes some fungi the microbial equivalents of Bonnie and Clyde while others never cause trouble—information that could help in the design of much-needed antifungal agents.

    But although academic researchers have sequenced the genomes of dozens of bacteria, including important pathogens such as those causing cholera and Lyme disease, they have so far completed the sequence of only one fungal genome: that of yeast. A few others—such as those of the nonpathogenic bread mold Neurospora crassa and white rot fungus, which destroys deadwood—are in draft form. But even the white rot fungus was chosen more to test a new sequencing strategy than for its economic importance. Although several biotech companies are sequencing the genomes of fungal pathogens, what they are learning largely remains behind closed doors. For the most part, “fungi have been left out in the cold,” says Ralph Dean, a plant pathologist at North Carolina State University in Raleigh.

    Rice menace.

    When the spores of the rice blast fungus (bottom) infect rice plants, they can cause the rice head to droop and die.


    Money is the main problem. “The government funding agencies haven't committed the resources to get fungal genomics going,” says Olen Yoder, a plant pathologist at the Torrey Mesa Research Institute in La Jolla, California. The National Institutes of Health, the Department of Energy (DOE), and the National Science Foundation (NSF) have poured hundreds of millions of dollars into genome sequencing; the draft of the human genome sequence alone cost more than $300 million. In contrast, plant pathology's fungal camp has had virtually no support. And fungi, with genomes that are several times larger than that of the average bacterium, are relatively expensive to sequence and assemble.

    Still, there are signs that the situation is beginning to improve. As the human genome project winds down, the big sequencing centers are beginning to take an interest in fungal genomes. In September 2000, the Whitehead Institute/Massachusetts Institute of Technology Genome Sequencing Project in Cambridge got NSF support to sequence the genome of Neurospora crassa. With that under its belt, the Whitehead would like to start on a new “Fungal Genome Initiative,” with the goal of sequencing the genomes of 15 fungi, at least some of which will be plant pathogens. Moreover, this past spring, the U.S. Department of Agriculture (USDA), in conjunction with NSF, initiated a new $9 million program for sequencing the genomes of agricultural and forest pests, which could get the Whitehead's fungal initiative or other fungal sequencing efforts off the ground.

    These efforts won't be sufficient by themselves, but they should at least begin to provide plant pathologists with a new arsenal of knowledge for neutralizing the threats that fungi pose to both food crops and ornamental species. And the information could be applicable to much more than plant diseases. Some of the fungi that cause problems in people “are the same pathogens that cause plant diseases,” says Ann Vidaver, a plant pathologist at USDA headquarters in Washington, D.C. Thus, procuring the genome sequences of some of these pests and assessing wholesale gene expression or protein-protein interactions can benefit biomedicine as well as agriculture. “By studying plant pathogens, we will be able to get a better handle on treating human fungal diseases,” Vidaver predicts.

    Slow start

    For now, however, researchers must take a piecemeal approach to their genomics studies. Consider rice blast fungus, Magnaporthe grisea, the most important fungal pathogen of rice. Researchers would love to bring this microbial marauder under control: It causes an annual loss of 157 million tons of rice worldwide. “[The fungus] evolves and mutates very rapidly,” Dean points out, and quickly overcomes resistant strains. Rice blast is such a serious problem that in 1998 researchers from the United States, France, South Korea, and Japan formed an international consortium to sequence the M. grisea genome.

    The consortium has been slow to get going, however. Not until 2000 did Dean get support from the USDA Initiative for Future Agriculture and Food Systems for a pilot effort: deciphering the genetic code of chromosome 7, the smallest of the fungus's seven chromosomes. Dean has just started sequencing its 4.2 million bases. In addition, Daniel Ebbole of Texas A&M University in College Station got funding, also from USDA, to generate 30,000 expressed sequence tags (ESTs), short bits of genes that help researchers pick out specific genes. So far, he has produced 5000.

    In a similar fashion, Phytophthora researchers have made some initial forays into genomics in their attempts to find genes involved in virulence and in controlling plants' responses to these pests, which attack a wide variety of plants. Brett Tyler, a molecular plant pathologist at the University of California, Davis, and his collaborators got $1 million in 2000 from the same USDA initiative to look at Phytophthora ESTs. But, he adds, “we've been working since 1997 to get interest in the total genome sequence” without any success yet.

    Plant pathologists have also been producing ESTs from another fungal pest, Fusarium graminearum, which infects wheat and other cereals, causing a disease called scab and producing enough toxins to make the grain poisonous. This is another microbe facing some serious charges: Wheat and barley scab “is probably one of the worst plant diseases in the U.S. in the last 10 years,” says Corby Kistler, a geneticist with the USDA's Cereal Disease Laboratory in St. Paul, Minnesota.

    As part of a 5-year, congressionally mandated program to combat F. graminearum, the USDA has provided enough support to Kistler and his colleagues at Purdue University and Michigan State University to produce 30,000 ESTs from the fungus at various stages of development and under a variety of conditions. “People are mining [those ESTs] for genes that are expressed during infection,” says Anne Desjardins, a USDA plant pathologist based in Peoria, Illinois. “But what we really need are complete genomes of these organisms.”

    Wasted wheat.

    Green no more, the pale head of a wheat stalk (top) is a telltale sign of wheat scab, caused by the fungus Fusarium graminearum (bottom).


    Indeed, the few fungal genomes that have been sequenced are making Desjardins and her colleagues hunger for more. Dean and others are eagerly matching genes they study in their pathogens with the Neurospora sequence to begin to understand what makes some fungi virulent and others harmless. In addition, DOE's Joint Genome Institute in Walnut Creek, California, which sequenced the genome of the white rot fungus, Phanerochaete chrysosporium, is making the data available to plant pathologists. Even though these sequences are incomplete and the genes in them have not yet been fully identified and characterized, people are “very excited,” Dean says. He and his colleagues are confident that even more insights will come once additional genome sequences are in hand.

    Whitehead researchers are eager to provide those genome sequences and want to sequence the genomes of a range of fungal species, including fungi that cause human disease as well as those that damage plants. “We expect that fungal sequencing will be a main thrust of the Whitehead sequencing center,” notes Whitehead molecular biologist Bruce Birren. He hopes to snare some of the $9 million that USDA and NSF plan to spend to sequence the rice blast and wheat scab genomes, among others. And other fungal researchers are hopeful that they too will get support for sequencing the genome of their organism.

    There is a catch, however: The new $9 million USDA-NSF program won't buy many fungal genome sequences. Moreover, no award can exceed $2 million, which is only about one-third the amount needed to complete a fungal genome sequence. “The door has been opened,” says Dean, “but it has not been opened very wide.” So he worries that no genome will be completely sequenced.

    Although public efforts are gearing up slowly, sequencing in companies is well under way. In June 2000, for example, Exelixis Inc., based in South San Francisco, announced that it had completed a working draft sequence of the genome of corn smut fungus, Ustilago maydis. In Germany, Lion Bioscience has sequenced the genome of the same fungus. Yoder's team, which is sponsored by the agbiomed conglomerate Syngenta, a Novartis spin-off, is completing a draft of the genome sequence of Cochliobolus heterostrophus, a pathogen of corn.

    The company is also sequencing the genomes of the wheat scab fungus and of Botrytis cinerea, which damages many different crops, including strawberries and grapes. Syngenta researchers are also tooling up for large-scale gene expression and protein identification and characterization studies. “In the works are complete protein profiles of various stages in the life of the fungus,” Yoder explains. Currently, these company-produced genome sequences are available only through collaborative agreements.

    Birren hopes the Whitehead's Fungal Genome Initiative will pull in private as well as public support. It will take high-quality genome sequences of at least a half-dozen fungi for fungal genomics to mushroom, he adds. Yoder agrees that his academic colleagues need a bigger boost than NSF and the USDA are offering. But the prospects for a public-private collaboration are far from certain, as no definite commitments have been made at this point.

    Yet while Yoder and others worry that the federal agencies might never provide adequate support for fungal genomics, the field should continue to grow, he says, “simply because there's so much interest in it.”

  19. Florida Fights to Stop Citrus Canker

    1. Kathryn Brown

    State scientists hope to wipe out a disease that threatens Florida's citrus industry—but local homeowners may stop them in their tracks

    Every evening, in the quiet of his oak-shaded home, Wayne Dixon pauses at the mirror. He asks two questions: “Do I like what I've done today?” and “How do I feel about canker?”

    Even for a plant pathologist, this nightly reflection on a bacterial disease may seem a tad odd. But then, citrus canker dominates many of Dixon's days. As head of the plant industry division of Florida's Department of Agriculture and Consumer Services (FDACS), he stands at the center of a controversy that has roiled the state for 6 years. Citrus canker, which scabs oranges, grapefruit, and other citrus with corklike lesions, has been storming across southern Florida since 1995. And so have chain saw-wielding work crews.

    There's no cure for canker. So, in hopes of stopping the disease before it ravages central Florida's $8 billion commercial orchards, the crews have chopped down more than 1.8 million infected and exposed trees—roughly a third of them in Miami-Dade and Broward counties. In fact, the canker eradication effort, which has cost over $200 million, is the biggest single government-run program ever devoted to a plant disease. But where chain saws are screaming, so are homeowners who want to save their trees. Threats, sobs—and even lawsuits aimed at stopping the saws—greet canker eradication efforts.

    The citrus industry is just as adamant that the efforts continue. Although growers could try to slow the disease's spread with windbreaks, copper sprays, and inspections, those controls—and the fruit lost anyway—could cost well over $100 million a year, driving some growers out of business, says Andrew LaVigne, vice president of Florida Citrus Mutual, an industry trade group.

    And the stakes go well beyond Florida's borders. Citrus canker is caused by the Asian strain of the bacterium Xanthomonas axonopodis pv. citri, which may have sneaked into the country on infected fruit from overseas. Such exotic invaders are becoming an international nuisance.

    For instance, plum pox, an Old World disease, has struck in Pennsylvania. At the same time, some homegrown pests are invading new territory. In Napa Valley, a mere smudge of a bug from the southeastern United States—the glassy-winged sharpshooter—threatens California's wine industry. Can regulators keep these exotics in check? Citrus canker is like a training ground, Dixon says. “And we're getting a lot of practice.”

    Early warnings

    The first hint of trouble came in 1995, when an employee with the U.S. Department of Agriculture's (USDA's) Animal and Plant Health Inspection Service (APHIS) noticed canker scabs on a citrus tree while checking fruit fly traps near Miami International Airport. At that time, scientists estimated that roughly 3.6 hectares of residential neighborhoods—possibly including 200,000 trees—were exposed to canker.


    This grapefruit and leaves show the characteristic lesions of citrus canker. In the bottom image, an FDACS crew member cuts down a citrus tree in the yard of a Florida resident in hopes of stopping canker's spread.


    Because there's no way to detect infected trees before they develop symptoms, the scientists who found the canker outbreak near Miami proposed quickly clearing all citrus trees in the exposed area and then paying homeowners for their losses. But state agriculture administrators balked, unwilling to risk public outcry over the unexpected tree removal.

    As a result, Florida's Citrus Canker Eradication Program (CCEP)—a joint effort by the FDACS and USDA-APHIS—got off to a slow start, with few funds and old science. State crews chopped down only actively infected trees and pruned back all citrus within 38 meters of the trees they removed. Relying on a 1980 study done in Argentina, which suggested that canker bacteria could spread an estimated 32 meters to other trees during windy storms, the scientists hoped their strategy would wipe out the disease. They were wrong.

    By 1996, citrus canker had jumped across 23.6 ha. Work crews began removing, rather than just pruning, all citrus within 38 meters of an infected tree. Still, by the end of 1997, the citrus canker quarantine area stretched for 93.5 ha. With costs at $8.7 million, crews had destroyed citrus on more than 34,000 properties, mostly in southeast Florida. No end was in sight.

    Meanwhile, city residents, many of whom saw “exposed” trees as perfectly healthy—and worth keeping—were getting angry. “It's like a war zone,” says Helen Ackerman, a homeowner in Plantation, Florida. “All of a sudden, you hear, ‘We're coming into your backyard and cutting down your trees, and you have nothing to say about it.' Isn't this America? Can they do this?”

    In 1998, amid growing public outrage, Florida's Commissioner of Agriculture called a temporary halt to the cutting of citrus trees that had been exposed but not obviously infected. CCEP officials also asked Tim Gottwald, a plant epidemiologist with USDA's Agricultural Research Service in Fort Pierce, Florida, to try to find out why canker was spiraling out of control.

    During the course of an 18-month study, Gottwald, microbiologist James Graham of the University of Florida's Citrus Research and Education Center in Lake Alfred, and colleagues tracked citrus canker as it moved, unchecked, across some 2.6 ha of roughly 19,000 healthy and diseased trees. The results came as a shock.

    The scientists found that canker can travel a whopping 15 times farther than previously thought, in some cases spreading over 580 meters during a single storm. The study, which appeared in the January 2001 issue of Phytopathology, concluded that the only option was to wipe out all citrus within 580 meters of an infected tree—a decision that came to be known as the “1900-foot rule.”

    Outside scientists concur with that conclusion. “To me, the data look very methodical, very reasonable,” says plant epidemiologist Christopher Mundt of Oregon State University in Corvallis. Harold Scherm, a plant epidemiologist at the University of Georgia, Athens, adds that the new data offer a more realistic look at how quickly canker can spread in Florida's dense neighborhoods. “In this case,” Scherm says, “the 1900-foot rule is the right thing to do.”

    With the study in hand, the CCEP cranked into high gear. The state and USDA kicked in over $100 million in additional funding. Florida Citrus Mutual also chimed in, voicing support for the new 1900-foot rule. During the spring of 2000, over a period of 8 weeks, the eradication program boomed from a staff of roughly 300 to 1800—many of them temporary workers who went door to door, checking and marking trees, talking to residents, chopping citrus. “The program,” says Gottwald, “was going at it full steam.”

    A bitter divide

    But the homeowners were definitely not on board. Part of the problem is a severe communication gap between them and CCEP staff. “We have been totally left out of the process,” asserts Jack Haire, a resident of Fort Lauderdale. “When the 1900-foot study was agreed to in 1998, the public was not invited. The citrus interests were present, though. And the fact is, it's cheaper for them to cut our trees down here than to plan control remedies on their own land.”

    In response, Liz Compton, spokesperson for the Florida agriculture department, says the CCEP has spent over $1 million on newspaper ads, bilingual radio programs, hotlines, community liaison officials, billboards, and other educational efforts. For years, she says, open meetings on citrus canker attracted hardly any homeowners. “People just didn't hear it until the chain saws were in their backyard,” Compton says. Still, many charge that the CCEP's public relations effort has fallen far short of its goals. “In the future, in other plant disease scenarios, something has to be done differently,” remarks Scherm.

    As for the charge that the citrus industry is benefiting at the expense of homeowners, LaVigne points out that, at press time, eradication crews had chopped down over 1.3 million commercial citrus trees—compared to just 579,000 residential trees. Cutting crews have wiped out much of Miami's lime industry, he adds.

    But the frustrated homeowners aren't about to give up the fight. They've taken to the courts—and scored some victories. Last November, a circuit judge in Broward county halted the cutting of exposed trees in the 1900-foot canker exposure radius, allowing only the removal of actively infected citrus. The state has appealed the verdict. Meanwhile, a Miami-Dade circuit judge ruled in May that the agriculture department must notify homeowners if a tree will be removed; the homeowners then have 10 days to object before cutting crews arrive. And lawsuits filed by homeowners still loom in Palm Beach and Delray Beach.

    Scientists say ongoing court clashes threaten the entire eradication program. “Delay is often tantamount to failure in an eradication effort,” says Tim Schubert, a plant pathologist with FDACS. The CCEP estimates that over 6000 exposed citrus trees in southern Florida have become infected with canker as the lawsuits drag on. Schubert is as frustrated as any homeowner. “How long will it take to settle the issue?” he asks. “And will we still have an achievable goal at the end of it all?” Those are the kinds of questions that prompt Dixon's nightly reflections. He calls the situation “maddening.” And that may be the one point all sides agree on.

  20. Portrait of a Pathogen

    1. Kathryn Brown

    As bacterial diseases go, citrus canker is not the most deadly. Rather than kill trees, canker-causing bacteria simply settle into the surface of fruit rinds, leaves, and twigs. Soon, the bacteria cause blistered or corky lesions on the infected tissue.

    But blemishes are the least worry for Florida's citrus growers, most of whom sell their fruit for juice, anyway. Citrus canker's real damage is to the fruit's internal clock. As bacteria make a home in citrus tissue, they churn out excess ethylene, a plant hormone, which causes fruit to drop prematurely. That means citrus growers end up throwing out vast quantities of rotting fruit that dropped too soon. “You can't pick the fruit off the ground, because it's not mature enough,” explains Andrew LaVigne, vice president of Florida Citrus Mutual, an industry trade group. “At the same time, you've got a smaller crop on the tree.”

    Canker is hard to control, because it spreads quickly and quietly. When it rains, citrus lesions ooze the bacteria, which then ride the wind, splattering onto nearby trees. Canker bacteria seep into surface wounds on citrus tissue, like those caused by leaf miner insects. Strong winds (over 29 kilometers per hour) can also drive the bacteria directly into leaf stomates.

    By the time an infected tree reveals telltale canker scabs, it has probably already infected dozens of surrounding trees, says James Graham, a microbiologist at the University of Florida's Citrus Research and Education Center in Lake Alfred. “The problem is knowing where canker is before it shows itself,” says Graham. “You don't know exactly where to point the control measure—or how far-flung the disease is.”
