News this Week

Science  16 Jul 1999:
Vol. 285, Issue 5426, p. 306

    Richardson Reverses Course on DOE Reorganization

    1. David Malakoff

    In a switch that surprised supporters and opponents alike, Energy Secretary Bill Richardson last week endorsed a controversial proposal to reorganize the Department of Energy's (DOE's) nuclear weapons program into a semi-independent agency. But it's not yet clear where this change of heart will lead, as lawmakers and DOE officials continue to joust over key details of the plan, which was prompted by allegations of Chinese spying at DOE's weapons laboratories. Although proponents say the changes are needed to prevent future espionage, critics charge they will harm the labs' extensive civilian research programs and allow officials to hide environmental and safety problems from the public.

    Last month, in the wake of a White House report that harshly criticized DOE management and security, Republican senators Pete Domenici (NM), Jon Kyl (AZ), and Frank Murkowski (AK) introduced legislation calling for the first major shake-up of the agency's structure since it was created in 1977 (Science, 2 July, p. 18). Their plan would put DOE's sprawling weapons complex, which employs more than 30,000 people at dozens of research and bomb-making facilities around the United States, under the control of a new, largely independent Agency for Nuclear Stewardship led by a high-ranking DOE official. The agency is needed, the sponsors say, to make DOE managers more accountable for protecting secrets.

    Richardson repeatedly denounced the plan, charging that it would create “a fiefdom within a fiefdom” and unconstitutionally undermine his authority over nuclear weapons research and production. He also said it would “be a disaster” to place the labs' unclassified science—which includes everything from materials research to climate studies—under the security agency's control, as it would make it harder for researchers to share information and recruit talented colleagues. DOE's Office of Science, which supports some $2.7 billion worth of nondefense science a year, would still be able to fund research at the labs, but it would no longer have direct authority over the work. Other critics worried that the agency would be able to block public scrutiny of environmental cleanup and worker safety in the name of national security. “Information that makes the labs look bad could suddenly become classified,” fears Maureen Eldredge of the Alliance for Nuclear Accountability, a Washington, D.C.-based watchdog group.

    On 8 July, however, Richardson told The Washington Post that he was ready to cut a deal with the Senate trio, apparently encouraged by a revised draft of the bill that made it clear he would retain ultimate control over the new agency. The announcement came “as a big surprise—even to some senior DOE folks,” says a Senate staffer. But hopes for a quick compromise dimmed a few days later, when a meeting between Senate aides and DOE staff ended with little substantive negotiation. “Staffers were there with sharp pencils ready … but [DOE officials] wanted to talk more generally,” says another aide.

    DOE officials expressed concern, for example, about the bill's requirement that the agency have its own security and counterintelligence staff, rather than answer to an agencywide czar. They are also uneasy about plans to insulate the agency from the direct oversight of the department's environment and worker safety officials, who would instead be restricted to making recommendations to the secretary. Such changes “could create a serious bottleneck in decision-making,” says a DOE official. A Republican aide, however, says the shift would “put responsibility where it belongs—at the top of the pyramid.”

    The two sides planned to meet again this week, and “the secretary is hopeful an agreement can be reached,” a DOE spokesperson said. But the senators may not let the discussions linger: If there is little sign of progress, aides say they could move to attach their measure—which appears likely to pass the Senate—to national security legislation as early as 16 July.

    The reorganization plan faces an uncertain reception in the House, however, where the Science and Commerce committees met on Tuesday, as this issue went to press, to hear from a panel of plan critics, including Eldredge. The joint hearing, an aide says, was designed “to raise some issues the Senate doesn't seem to be focusing on, such as the reorganization's impact on science.” Scientists working on cleaning up DOE's many contaminated nuclear weapons sites, for instance, could face a bureaucratic maze if they want to share their results with colleagues outside the new agency, says political scientist Don Kettl of the University of Wisconsin, Madison, a former department adviser who testified at the hearing. “You could reduce DOE's ability to respond [to pollution problems] by building walls that are too high,” he says.

    Whether such sentiments will convince the House to reformulate the Senate plan won't be known until later this month, when lawmakers from both bodies will meet to hammer out an agreement on the issue. Whatever the outcome, however, a House aide predicts that “DOE's structure is going to change; the only question is how.”


    Anti-Immune Trick Unveiled in Salmonella

    1. Evelyn Strauss

    The Salmonella pathogen is best known as an intestinal bug. But various species also cause severe systemic illnesses such as typhoid fever, and part of the reason people get so sick is that their immune systems cannot quell the infection immediately. Now researchers studying a particular Salmonella protein have discovered a surprising new weapon that may help explain the pathogen's virulence: At least one species can create an intracellular traffic jam within certain of the host's immune cells.

    Once inside those immune cells, Salmonella enterica, the bacterium that causes food poisoning in humans and a typhoid fever-like illness in mice, shoots a protein called SpiC into the host cell's cytoplasm. Somehow that protein clogs an intracellular transport system that would normally dump the organism into a toxic cellular chamber called the lysosome. The blockage is so far-reaching that it also prevents other deliveries to the lysosome and other cellular locations, according to work in the 15 July EMBO Journal by Eduardo Groisman, a microbiologist at Washington University School of Medicine in St. Louis, and colleagues.

    “This [finding] opens up a new area of research,” says Jorge Galán, a microbiologist at Yale University. Researchers already knew that Salmonellae have a specialized protein export machinery that they use only within host cells and that helps them replicate, but SpiC is the first secreted protein of this system to be identified. Galán adds that “there has to be a target for that protein in the host cell.” SpiC may lead scientists to that target, illuminating both normal intracellular traffic patterns and how Salmonella jams them.

    Immune cells such as macrophages normally kill bacteria by first engulfing them—encircling them with the cell membrane and thus forming a vesicle. The vesicle then moves through the cell on a regular itinerary, stopping to fuse with other vesicles and transfer its contents. Eventually it docks with the lysosome—a death chamber where toxic chemicals and enzymes chew up the cargo. Some research suggests that Salmonella survives in vesicles that have lysosome-like characteristics. But other experiments indicate that the organism manages to avoid this fate.

    Seeking a clue to Salmonella's activity once inside a macrophage, the researchers focused on an apparently unique gene, spiC, that they had previously sequenced from the bacterium. The team created a mutant strain that doesn't produce SpiC and found that it grows poorly in macrophages and is much less virulent in mice than are wild-type bacteria. This suggested that the protein is central to Salmonella's harmful effects.

    The team labeled cells with gold particles, which collect in the lysosome and highlight it, then infected the cells with wild-type Salmonella or SpiC mutants. The researchers found that wild-type Salmonella were less likely to end up in the gold-labeled lysosomes than were SpiC mutants. In a separate experiment, they used radioactively labeled molecules to monitor vesicle traffic. Even for vesicles that didn't contain Salmonella, they found, transport seemed to be inhibited in cells infected with wild-type bacteria, while lysosomes received their usual deliveries in cells infected by SpiC mutants.

    To find out whether SpiC alone could block vesicular transport, the team analyzed the protein's effect on the movement of transferrin, a protein that normally uses vesicular transport to ferry iron from outside the cell to compartments inside and then returns to the surface. The researchers found that cells infected with a genetically engineered virus that produces SpiC both brought less transferrin into the cell and recycled it to the surface less efficiently than did those carrying virus without SpiC. Similarly, in a system of cell extracts, the team found that purified SpiC prevented vesicles from fusing.

    Other bacteria trapped in vesicles have evolved ways to prevent their compartment from fusing with the lysosome, but SpiC is the only known bacterial protein to tie up global vesicular traffic. “This is a totally new way of altering trafficking,” says Galán. “It points at a mechanism that's very different from that of any other bug. … It has to be interfering with some key regulator of the trafficking pathway.”

    The new work may open the way to resolving the long-standing controversy about whether Salmonella-bearing vesicles fuse with lysosomes, but it also raises questions. Vesicles normally fuse with lysosomes within 20 minutes of infection, but researchers can't detect SpiC until about an hour later. SpiC production may begin while the bacteria are in other cells, but before they enter the macrophages, suggests Samuel Miller, a microbiologist at the University of Washington, Seattle. And because macrophages rely on vesicle fusion to secrete factors that stimulate and attract other cells of the immune system, SpiC's blockage of vesicle fusion might also affect macrophage activity in unanticipated ways, hampering immune system function, says Ralph Isberg, a microbiologist at Tufts University School of Medicine in Boston.

    Just as one crucial accident can slow activity throughout a city, SpiC's traffic snarl may have profound effects on its host cell.


    Physicists Tame a Single Photon

    1. Andrew Watson*
    1. Andrew Watson is a science writer in Norwich, U.K.

    “To catch a baseball without stopping it” may sound like a Confucian riddle, but that is the essence of a groundbreaking quantum manipulation experiment reported in this week's issue of Nature: A team of French physicists has managed to detect a single photon repeatedly without destroying it. “The basic idea is that we can trap a single photon in a box … and monitor and make repeated measurements on it as though it were a particle in a box,” says Serge Haroche, who led the team at the Ecole Normale Supérieure (ENS) in Paris. The experiment is a unique demonstration of a phenomenon known as quantum nondemolition—the repeated, nondestructive measurement of a quantum state—which a few teams of physicists have achieved before, but never with anything as delicate as a single photon. “I think it's marvelous,” says Wojciech Zurek, a quantum measurement guru at the Los Alamos National Laboratory in New Mexico. “They have implemented one of the goals, one of the mileposts, which has defined the field of quantum measurement for close to 20 years.”

    It is a fact of life in quantum mechanics that an observation or measurement alters or destroys the object that is being observed. But theorists know it need not be so. In principle it should be possible to observe a quantum system without destroying it, and repeat the observation later and get the same result. Achieving nondemolition is extremely difficult, however, because of the fragile nature of quantum states. Over the past decade or so, several teams have managed it using interferometry, a technique that involves blending two light waves in such a way that minute changes in either of the two beams modify the recombined beam. Such a setup can reveal the impact of a “signal” light beam that disturbs the path of one of the two interferometer beams before they are recombined. The signal beam continues unperturbed, but the imprint of its passing is recorded in the altered interference pattern.

    This technique requires bright light beams. The ENS researchers wanted to see if they could achieve nondemolition with a single photon, much too feeble to disturb the path of a detection beam. Instead, they harnessed the sensitive quantum energy ladder of electrons around an atom. The first step is to trap a photon. The researchers built an open-sided cavity 3 centimeters long and 5 centimeters in diameter bounded at either end with spherical niobium mirrors, which reflect photons of the correct microwave wavelength. Then they cooled the trap to 1 degree above absolute zero, still warm enough to guarantee a single thermally induced microwave photon bouncing between the mirrors.
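
    The claim that a cavity cooled to about 1 kelvin still hosts a stray thermal photon can be checked with the Planck (Bose-Einstein) occupancy formula for a single cavity mode. This is a back-of-the-envelope sketch; the frequencies below are illustrative microwave values, not figures taken from the experiment:

```python
import math

H = 6.626e-34   # Planck constant, J*s
KB = 1.381e-23  # Boltzmann constant, J/K

def mean_thermal_photons(freq_hz, temp_k):
    """Mean thermal occupancy of one cavity mode: 1 / (exp(h*nu/kT) - 1)."""
    x = H * freq_hz / (KB * temp_k)
    return 1.0 / math.expm1(x)

# Illustrative microwave frequencies at T = 1 K
for f_ghz in (5, 20, 50):
    print(f"{f_ghz} GHz: n_bar = {mean_thermal_photons(f_ghz * 1e9, 1.0):.3f}")
```

At 1 K the thermal energy corresponds to roughly 20 GHz, so lower-frequency microwave modes still carry of order one thermal photon, while occupancy falls off sharply above that scale.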

    To detect the photon, the researchers shot a rubidium atom through the cavity. But before they sent it on its journey, the atom was pumped up with energy, so that its outermost electrons were not in their lowest energy states but in orbits far from the nucleus, a state known as a Rydberg atom (Science, 19 July 1996, p. 307). In this long-lived, bloated state, the atom is very sensitive to microwaves, guaranteeing the strongest possible interaction with any microwave photons lurking in the cavity.

    The aim was to use the swollen Rydberg atom as a detector to see if a photon is resident and, if it is, to leave it pinging around within the cavity in its original state. The cavity is just the right size, and the atom's speed carefully set, so that during its passage through the cavity there is just enough time for the atom to absorb the photon and reemit it before the atom reemerges.

    At first sight, the exiting atom appeared unchanged from when it entered. “So you have the feeling that nothing has happened,” says Haroche. But the cycle of absorption and emission does leave an imprint on the atom wave by altering its phase: The exiting atom was now out of step with its state on entry into the cavity. A separate system compared the phases before and after, revealing a half-wave phase shift—the signature of a cavity that contains a single photon. The researchers found that sending a second atom through the cavity produced the same result. “It shows that the first atom has made a measurement and left the photon behind for the second atom to read it,” says Haroche.

    Other physicists have lauded the technical skills of the ENS team. “It's an amazingly complex experiment, and there are several pieces of it, each of which is an amazingly complex experiment alone,” says Zurek. “They have thought up some neat tricks to solve the experimental difficulties they're faced with,” adds Oxford University's Andrew Steane. “It's a piece of work which probably no one else in the world could have done.”


    Belgian Socialist Tapped to Head EU Research

    1. Robert Koenig

    Philippe Busquin, a Belgian Socialist Party official with a background in physics, has been selected to become the European Union's (EU's) new chief research executive. Romano Prodi, the European Commission's incoming president, last week presented his new team of commissioners, with Busquin as his candidate for research commissioner. After holding hearings, the European Parliament is scheduled to vote on Prodi's new team by mid-September.

    In his new job, Busquin will lead the EU's research directorate—known up to now as DG-XII—and administer the 4-year, $17 billion Fifth Framework research program. The portfolio had previously included education, but—contending that research and technology “represent a full portfolio”—Prodi decided to marry education with culture and has put forward Viviane Reding, a former Luxembourg journalist, to head this new directorate. But Prodi plans to shift agricultural research into Busquin's directorate.

    The new commission is being formed now because of the mass resignation last March of the previous incumbents in the wake of a scathing report by a European Parliament investigative panel that had alleged cronyism and mismanagement among Brussels officials, with Busquin's predecessor, Edith Cresson of France, one of the most heavily criticized (Science, 19 March, p. 1827). Prodi called the new candidates “a top-quality team in which jobs have been allocated to match the proven abilities and experience of each commissioner.” He said he would demand that the commissioners streamline the Brussels bureaucracy, live up to high ethical standards, and “give clear direction and leadership.”

    Busquin, 58, is known mainly as the leader of the Socialist Party in Belgium's French-speaking region. He received a physics degree from the Free University of Brussels in 1962 and was an assistant physics lecturer at the university's medical faculty from 1962 to 1977. He studied ecology and environmental issues at the Free University in 1976, and was chair of the board of directors of Belgium's Institute of Radioelements from 1978 to 1980. He entered local Belgian politics in 1977 and later held various national and regional ministerial posts until becoming vice president of the Socialist International, a federation of socialist parties, in 1992. He was elected as a member of the European Parliament last month.


    Commercial Firms Win U.S. Sequencing Funds

    1. Eliot Marshall

    Several new groups are joining the government's human genome sequencing project this month, including—for the first time—two commercial firms. The National Human Genome Research Institute (NHGRI) in Bethesda, Maryland, quietly awarded three yearlong grants totaling $15 million on 1 July. The winners can expect to be funded for at least two additional years at the current rate, NHGRI notices say. The objective is to scale up production of human DNA sequence and help deliver a 90% complete “working draft” of the human genome for public release next spring and a 99.99% finished version by 2003. NHGRI turned down some academic centers while funding commercial outfits, indicating that it is serious about rewarding efficiency.


    The latest grants raise the total NHGRI kitty for human genome sequencing to nearly $100 million per year through 2002. The principal investigators (PIs) leading the newly funded teams are Maynard Olson at the University of Washington, Seattle ($7 million per year); Douglas Smith, co-director of the sequencing center at Genome Therapeutics Corp. of Waltham, Massachusetts, the first commercial firm to take part ($5 million); and Ronald Davis of Stanford University ($3 million). According to documents released by NHGRI, Olson expects to sign a contract with another company, Incyte Pharmaceuticals Inc. of Palo Alto, California, for about $3 million worth of DNA sequencing per year. NHGRI plans to continue funding these teams through 2002.

    The newcomers join university-based groups that won larger NHGRI grants in March, including the Whitehead Institute/MIT Sequencing Center in Cambridge, Massachusetts, Washington University in St. Louis, and the Baylor College of Medicine in Houston, Texas (Science, 19 March, p. 1822). They are part of an international network that includes the U.S. Department of Energy's Joint Genome Institute in Walnut Creek, California, and the nonprofit Sanger Centre in Hinxton, U.K.

    Smith says his group will work closely with the Sanger Centre, focusing mainly on sequencing chromosome 10. The Stanford group, says Davis's colleague Nancy Feldspiel, will contribute some DNA data but, more significantly, develop robotic instruments to make genome work more efficient. Olson will supervise a consortium that includes sequencers at Incyte focusing on chromosome 7 and on automated methods of finishing. All members of this network, including the companies, agree to release raw DNA data on a daily basis, refrain from patenting raw data, and publish finished data within 6 months of “validation.”

    Geneticist David Cox of Stanford University, whose lab did not get funded in this competition, says: “I think it's a great idea that we're looking for the most efficient ways to get high-quality sequence data.”


    Keeping Bone Marrow Grafts in Check

    1. Michael Hagmann

    Cancer patients who have received aggressive chemo- or radiotherapies often need bone marrow transplants, because the treatments wipe out their immune systems as well as their tumors. But bone marrow transplants (BMTs) often come at a price. Because the donor and recipient tissues usually differ genetically, about two out of three patients develop graft versus host disease (GVHD), in which donor T cells turn against their new host and wreak havoc in organs such as the skin, liver, and intestines. Fever, rashes, and diarrhea ensue, and in severe cases GVHD can be lethal, making it the primary cause of death after BMTs.

    To curb GVHD, clinicians either sift out all the T cells from the donor marrow or treat recipients with powerful immunosuppressive drugs. Both approaches leave patients extremely vulnerable to infections, however. A report on page 412 now suggests another, and perhaps less dire, strategy. A team led by immunologist Stephen Emerson of the University of Pennsylvania School of Medicine in Philadelphia has found that GVHD can be suppressed in mice by inactivating the recipients' antigen-presenting cells, or APCs. APCs display snippets of foreign proteins to T cells, sparking an immune response. Suppressing these cells effectively blinds the donor T cells to host cells, the team found. In contrast, the T cells should still be capable of responding to viruses or other pathogens presented by donor APCs from the transplants.

    The study “offers a new approach to tackle a problem that has pestered us for the last 25 years from an entirely different angle,” says bone marrow transplant specialist Joseph Antin of the Dana-Farber Cancer Institute in Boston. If it pans out in humans, Emerson adds, it may “improve the safety of bone marrow transplants so they could be used much more widely.” They might, for instance, replace the faulty bone marrow in patients with sickle cell anemia or other blood diseases.

    Emerson and his collaborator, Mark Shlomchik at Yale University School of Medicine, initially wondered whether GVHD is caused by donor APCs or by the recipient's own APCs, some of which survive the chemotherapy. To answer that question, the researchers created a new strain of mice whose bone marrow-derived cells no longer carried the proteins known as major histocompatibility complex (MHC) antigens. Because antigens must be displayed on MHC proteins, this rendered the APCs of the mouse strain incapable of presenting antigens to any T cells.

    Emerson and his team then irradiated the altered mice, along with “normal” mice that were otherwise genetically identical, and performed BMTs on all the animals. For the transplants, the researchers used bone marrow from a strain of MHC-identical mice that differed only in minor surface markers—a match like the one doctors seek for human patients, says Emerson.

    The team found that GVHD occurred less often and in a milder form in mice whose APCs had been crippled by their lack of MHC molecules. These animals lost about 46% less weight due to GVHD-induced diarrhea, and only two out of 16 died, compared to 14 out of 16 in the control strain. This indicates, says Emerson, that “the great majority of the APCs that trigger GVHD are host-derived rather than donor-derived.”

    Of course, it is impossible to use the same APC-inactivating strategy on human cancer patients, but other work by Emerson and his colleagues suggests that host APCs can be inactivated with antibodies. The researchers irradiated mice and then injected them with an antibody that binds to the cell surface of APCs. When they dissected the animals' lymph nodes and spleens, organs where APCs abound, they found that “the antibody had covered all the [APCs] present,” says Emerson. If the antibody were coupled with a toxin, he adds, it might be able to eliminate the remaining host APCs and thus prevent GVHD.

    The study is a “proof of principle; it shows that host APCs play a major role” in GVHD, says Voravit Ratanatharathorn, a BMT specialist at the University of Michigan, Ann Arbor. But he cautions that “how to block the APCs in [patients] is yet another problem.” So far, he notes, the Emerson team has not shown that anti-APC antibodies work in mice, let alone humans.

    He and others point out that after a BMT, the donor immune cells also play a role in keeping some cancers, such as leukemias, from recurring. If so, then inactivating the patients' APCs may lessen this protective effect. As immunologist Jonathan Sprent of The Scripps Research Institute in La Jolla, California, asks, “If the [donor] T cells never see [host] APCs, are they ever going to be activated” against leukemic cells if the cancer were to recur after the BMT?

    But others think that the strategy of inactivating recipient APCs is worth exploring, as the alternative often involves eliminating the T cells from the graft. Compared to recipients of complete bone marrow, “the engraftment of T cell-depleted bone marrow is much worse, and the relapse rate in leukemia patients is much higher,” says immunologist H. Joachim Deeg of the Fred Hutchinson Cancer Research Center in Seattle.


    A Microscope With an Eye for Detail

    1. Meher Antia*
    1. Meher Antia is a writer in Vancouver.

    Once, it was a microscope's optical precision that limited the detail it could see. Today microscope design has reached the point where the limits of resolution are set by the laws of physics. But even basic laws can sometimes be flouted with creative thinking. In the 15 July issue of Optics Letters, researchers describe a principle they say can dramatically improve the resolution of fluorescence microscopes, which have traditionally been constrained by the so-called diffraction limit.

    Diffraction limits resolution because light waves passing through any aperture, such as a lens, spread out slightly, countering the focusing effect of the lens and making it impossible to focus the light to an arbitrarily small point. “For decades most people had accepted the diffraction limit,” says Min Gu, a physicist at Victoria University of Technology in Australia, “but it can be overcome.” Stefan Hell, a physicist at the Max Planck Institute of Biophysical Chemistry in Göttingen, Germany, and his colleagues have now shown as much with a clever combination of two laser beams. One illuminates and images the sample, while the second sculpts the first to reduce the effects of diffraction.

    Because of diffraction, most microscopes using light have a resolution no smaller than about 200 nanometers (nm), about the size of a large virus. Electron microscopes can do better, and light microscopes can also beat the diffraction limit by simply omitting the lens. The scanning near-field optical microscope (SNOM), for example, images objects with a resolution as fine as 80 nm or so by squeezing light through a tiny opening in a fiber and scanning the fiber tip across the object, collecting reflected light.
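
    The ~200 nm figure follows from the Abbe diffraction limit, d = λ / (2·NA), where λ is the illumination wavelength and NA is the numerical aperture of the objective. The wavelength and aperture below are typical values for visible-light microscopy, chosen for illustration rather than drawn from the article:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: smallest separation a lens can resolve."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (500 nm) through a good oil-immersion objective (NA = 1.4)
d = abbe_limit_nm(500.0, 1.4)
print(f"{d:.0f} nm")  # close to the ~200 nm limit quoted for optical microscopes
```

Because NA for real lenses tops out near 1.4–1.5, the only conventional route to finer resolution is shorter wavelengths, which is why schemes like STED, which shrink the effective fluorescent spot instead, are attractive.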

    But nothing can quite replace an optical microscope based on focusing lenses. Cells have to be killed and dehydrated to be viewed with an electron microscope, and SNOMs construct an image slowly. And neither kind of microscope can image structures in the interior of a cell, as an optical microscope can when proteins or other cellular components are tagged with dyes that light up when photons from a laser excite them.

    Diffraction prevents the laser beam—and hence the spot of fluorescence—from being focused to a spot any smaller than about 200 nm. So any two features closer than 200 nm apart will fluoresce together and be mistaken for one. But Hell thought that if he could suppress the fluorescence from part of the beam, objects closer together than 200 nm could be illuminated and detected separately.

    A few years ago, he had shown that theoretically, a second beam of laser light that partially overlapped the first one could force the excited dye molecules to take another path down to their ground state, in a process known as stimulated emission. They would give off light at a different wavelength, reducing the size of the fluorescent patch. “If you look down from the lens, the spot [of light] is round, like a hamburger,” says Hell. “Now imagine taking a bite off the outer part of the hamburger.” The sculpted beam could be scanned across the sample, lighting up features smaller than the diffraction limit one by one.

    Hell and his colleagues have now demonstrated this technique in the laboratory with a test sample consisting of scattered nanocrystals of a fluorescent compound, pyridine. With a burst of an ultraviolet laser, they sparked fluorescence in the crystals. They then sent in the second laser pulse, known as the stimulated emission depletion (STED) pulse, to take a bite out of the first one. The result was dramatic: Where two pyridine nanocrystals appeared as a single blur without the STED beam, they could be distinctly resolved once the STED beam was turned on.

    Gu says he is impressed by the 30% improvement in resolution, which allowed the STED microscope to distinguish crystals as little as 100 nm apart. Peter So, a mechanical engineer at the Massachusetts Institute of Technology in Cambridge, thinks the resolution could eventually reach 30 nm, fine enough to distinguish structure in individual DNA molecules. Advances like Hell's are a sign, So believes, “that we are in the midst of a renaissance in optical microscopy.”


    NASA Plans Close-Ups of Mercury and a Comet

    1. Laura Helmuth

    NASA last week selected two spectacular shows as part of its Discovery program of quick and cheap space missions. In 2008 and 2009, a spacecraft will scrutinize Mercury, and in 2005, another mission will shoot a massive copper cannonball into a comet to learn more about its innards. The scheduled date for the cometary fireworks, which space enthusiasts can watch from Earth: 4 July.

    The spacecraft Messenger, to be launched in spring 2004, will orbit Mercury for 1 year after two brief flybys. Loaded with cameras to map the planet's surface and spectrometers to analyze its crust and tenuous atmosphere, Messenger will transmit the first close-ups of Mercury since the Mariner 10 mission in 1974–75. Messenger should shed light on how planets form and why some, like Mercury and Earth, have retained their magnetic fields while others, like Mars, have shed theirs, says planetary scientist Sean Solomon of the Carnegie Institution of Washington's Department of Terrestrial Magnetism, who leads the $286 million mission.

    The extremely dense planet consists mainly of a large metal core, says planetary geophysicist Raymond Jeanloz of the University of California, Berkeley. A giant impact, much like the one that chipped off Earth's moon, may have splashed off most of Mercury's mantle, he says. Messenger's gravity mapping studies will probe for evidence of crust-busting impact sites. The mission should also reveal whether volcanoes have shaped Mercury's surface and if ice exists in the shadows of its polar craters, says planetary scientist Faith Vilas of the Johnson Space Center in Houston.

    In January 2004, a $240 million mission called “Deep Impact” will take off for comet Tempel 1, which orbits the sun every 5.5 years. When it arrives a year and a half later, an observation module will release an “impactor”—essentially a 500-kilogram copper bullet—which will slam into the comet's surface at a speed of 10 kilometers per second. A camera onboard the bullet will transmit images as it hurtles toward its target; the hovering observer module will record both the crash and the size and shape of the resulting crater, and analyze solid and gaseous material released by the blast.
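
    For a rough sense of scale, the impact energy follows directly from the figures above. This is a back-of-envelope sketch; the TNT conversion factor is the standard 4.184 gigajoules per ton, not a mission specification:

```python
# Kinetic energy of Deep Impact's copper impactor, from the mission figures.
mass_kg = 500.0        # impactor mass
speed_m_s = 10_000.0   # 10 kilometers per second

energy_j = 0.5 * mass_kg * speed_m_s ** 2   # E = 1/2 m v^2
tnt_tons = energy_j / 4.184e9               # 1 ton TNT ~ 4.184 GJ

print(f"{energy_j:.2e} J (~{tnt_tons:.1f} tons of TNT)")
```

    The result, roughly 25 billion joules, is the equivalent of several tons of TNT delivered to the comet's surface.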

    The crash may help answer questions about the composition of comets and their chemical histories, says Lucy McFadden of the University of Maryland, College Park, one of the project's scientists. Comets formed from primordial material condensing at the edge of the solar system, but their interiors may have heated and undergone chemical changes during their tours through the solar system. So far, scientists have only been able to model these processes and simulate comets' internal properties. “This is an in situ experiment that will constrain these theories,” says McFadden. Indeed, Deep Impact marks planetary science's graduation from classic, observational studies to active experimentation, notes Alan Stern of the Southwest Research Institute's Department of Space Studies in Boulder, Colorado.

    The approval of Deep Impact follows close on the heels of a NASA decision to scrap a mission, called Champollion, that would have attempted a soft landing on Tempel 1. The lander would have drilled into the comet's surface and analyzed core samples at different depths. Although its estimated cost was roughly the same as that of Deep Impact, Champollion fell within NASA's New Millennium Program and was competing for scarcer funds than the Discovery Program missions.

    Deep Impact should entertain the Earth-bound as well as further space science: If skies are clear, the celestial collision will be visible with a pair of ordinary binoculars. But don't expect too much: The comet will look like a small smudge, and the impact will show up as a mere pinpoint of bright light.


    Keck Helps Five Careers With $1 Million Grants

    1. Jeffrey Mervis

    All spring Yale University biophysicist Mark Gerstein had been on tenterhooks. As one of 10 finalists in the W. M. Keck Foundation's new Distinguished Young Scholars in Medical Research program, Gerstein was on the verge of getting a flexible, $1 million grant—a significant bounty for a researcher of his age (33). But after making his pitch in April, Gerstein's phone went silent—until last week, when he learned that he was one of five junior faculty members in the United States to make the final cut. “I was ecstatic,” he says. “But I'm also relieved—that's a lot of grant applications that I won't have to write for a few years.” Gerstein's $1 million award will support his research in genomics and bioinformatics.

    The Los Angeles-based foundation, created in 1954 by the founder of the Superior Oil Co., has long been a supporter of higher education and research. But last year its trustees decided to take advantage of a growing endowment, now $1.4 billion, to back the next superstars of biomedical research—a total of 25 over the next 5 years. “We wanted to identify the people who seem likely to become the really outstanding scientists over the next 20 to 30 years,” explains William Butler, chancellor of Baylor College of Medicine in Houston, Texas, and chair of the scientific advisory board that helped to design the program and select the first batch of winners. “Keck deserves a lot of credit for coming up with such an exciting concept.”

    The four other winners are: Bruce Clurman, a cancer biologist at the Fred Hutchinson Cancer Research Center in Seattle; Judith Frydman, a biochemist at Stanford University working on protein folding; Partho Ghosh, a structural biologist at the University of California, San Diego; and Phyllis Hanson, a cell biologist at Washington University in St. Louis who studies protein and membrane dynamics in the neural system. All have taken up their first faculty position within the last 3 years, and all say that the money will allow them to scale up their research in a way that would otherwise be impossible.

    The new program outdoes all other private efforts to support young faculty. The size of the awards beats the $625,000 over 5 years offered by the Packard Foundation and dwarfs the typical start-up grant from private and public bodies. Its closest competitor is a one-time awards program sponsored by the James S. McDonnell Foundation, which earlier this year selected 10 Centennial Fellows to receive $1 million each to mark the 100th anniversary of its founder's birth (Science, 29 January, p. 629). “We wanted to give them a chance to go forward full-bore, and our advisers said that a million dollars should buy them what they need,” says Roxanne Ford, who heads Keck's medical program.

    For Gerstein, the money means a chance to hire a crackerjack computer programmer and systems analyst to design and maintain a database to analyze the entire genomes of various pathogenic organisms. “I'm looking for someone who could command $150,000 in Silicon Valley,” he says. “I can't pay that much, of course, but I want someone who can create a computational environment that my students and postdocs can take advantage of and not have to do it themselves.”

    The competition was by invitation only. Keck asked 30 top-ranked universities and medical institutions to submit one application from their single most promising young faculty member. From those, the foundation chose 10 finalists for face-to-face interviews. Next year Keck will make the same offer to another group of 30 institutions, with some holdovers, and after 5 years its board of trustees will evaluate the impact of the program and decide its fate.


    Organic Molecule Rewires Chip Design

    1. Robert F. Service

    Only one thing hasn't changed over the decades in the computer industry: the equation that smaller equals faster. But as the transistors and other circuitry on computer chips continue to shrink, errors become easier to make. And a single broken wire or faulty transistor among the millions of devices on a chip will often render it useless. Now a team of California-based researchers is offering a revolutionary—and potentially cheap—way to sidestep the need for precision as circuit features get still smaller: a strategy for laying down millions of wires and switches without worrying too much about quality control, then electronically configuring the best connections. The approach is akin to the way the developing brain strengthens active neural connections while letting inactive ones wither away.

    Simple switch.

    The electronic state of V-shaped organic molecules controls whether electrons can hop from the bottom to the top wire.

    The key to the new design is its simplicity. Instead of spending billions of dollars on fabrication plants to ensure patterning perfection, the new approach would lay down its millions of wires and switches in a simple grid and then allow a computer to use those wires to configure the grid into proper circuits. At this point, the researchers, led by Jim Heath at the University of California, Los Angeles, and Stan Williams at the Hewlett-Packard Laboratories in Palo Alto, remain a long way from configuring millions of switches; the network that they describe on page 391 contains just four switches. And the five connecting wires are up to 11 micrometers wide, a hefty size compared to those in today's chips. Nevertheless, the novel approach “is pretty remarkable,” says Dan Herr of the Semiconductor Research Corp., an industry-backed center in Research Triangle Park, North Carolina. “If something like this pans out, it would have a tremendous impact on the semiconductor market.”

    Chip designers worry that when the size of features on individual chip components drops below 100 nanometers—perhaps by the middle of the next decade—engineers will have little room for error: A wire misplaced by just a few tens of nanometers could cause a circuit to fail. And although patterning technology is improving, “we do not have the ability to make highly intricate patterns on that length scale,” says Paul Alivisatos, a nanoscale patterning expert at the University of California, Berkeley. “But we can make simple patterns of that size.”

    So that is what Heath and his colleagues did. Instead of trying to precisely control the positioning of wires and switches, they opted to lay down a simple grid of devices, all electronically linked. Later, they figured, they could activate certain paths within this electronic maze and shut down others, configuring the switches electronically to work together as a circuit. If particular switches in the grid were out of place or otherwise defective, the configuration would simply bypass them.
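
    The configuration idea can be sketched in a few lines of code. This is a hypothetical illustration of the general principle only, not the researchers' actual procedure: after testing, logical connections are assigned only to crosspoint switches that work, so defective junctions in the grid are simply bypassed.

```python
# Defect-tolerant configuration sketch: route each desired logical link
# through a crosspoint switch known to be functional, skipping bad ones.

def configure(n_rows, n_cols, defective, wanted):
    """Map each wanted logical link to a working (row, col) crosspoint."""
    working = [(r, c) for r in range(n_rows) for c in range(n_cols)
               if (r, c) not in defective]
    if len(working) < len(wanted):
        raise RuntimeError("not enough working switches")
    return dict(zip(wanted, working))

# A 4x4 crossbar with two bad junctions; three logical links are needed.
mapping = configure(4, 4, defective={(0, 0), (2, 3)},
                    wanted=["A", "B", "C"])
print(mapping)
```

    The grid itself stays dumb and uniform; all the intelligence lives in the configuration step, which is what lets fabrication tolerate imperfection.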

    For their demonstration, the researchers used conventional chip patterning techniques to lay down a parallel set of four aluminum wires on top of a silicon chip coated with a layer of insulating silicon dioxide. Next, they coated the whole surface and wires with a layer, just one molecule thick, of organic molecules called rotaxanes. Finally, they channeled a vapor of titanium and aluminum through a mask carrying a thin slit oriented perpendicular to the other wires. As the vapor condensed, it formed the top wire that crossed each of the four others.

    Each of these junctions—with a patch of rotaxane molecules sandwiched between the perpendicular wires—formed a switch. Unaltered, each switch was “on,” allowing current to flow from the bottom wire, through the rotaxanes, and into the top wire. “[The rotaxanes] are like a stone in a river,” says Heath. “It's hard for the electrons to make the entire leap across the river [from one wire to the other], but they can easily make half the leap and then jump again to the other side.” But applying a small positive voltage between two perpendicular wires oxidizes the rotaxanes, permanently removing electrons and altering their electronic behavior to prevent current from flowing through them. “Now the only way electrons can get across is to jump over the entire river in one leap,” says Heath. Because only a paltry few can manage it, the current drops precipitously and so the switch is “off.”

    This demonstration circuit is not very flexible, because the rotaxanes cannot be restored to their original state once oxidized, so turning off any given switch is irreversible. That's sufficient for read-only memory and certain kinds of logic circuits, as the California researchers showed when they linked several switches in circuits to perform basic logical operations. But the researchers are currently investigating similar organic molecules in hope of finding one that will switch back and forth. Such reversibility, says Heath, would allow them to create a novel version of computer memory that could be written and erased many times over.

    For now, says Heath, the novel circuitry has a long way to go before it's ready to challenge the Pentium. One project, he says, is shrinking the hefty wires. In theory, the switches should be unaffected as the wire dimensions fall, because the switching is performed by the rotaxanes, which would remain unchanged. In fact, Heath says that he and his colleagues are already working on a scheme to forge the wires out of carbon-based nanotubes, which can be just a single nanometer across yet micrometers long. If a computer based on nanotube wires and molecular switches could be built, says Heath, “you would get 100 workstations in a grain of sand.” That would keep the computer industry humming along for quite a few years.


    U.S., European Backers Differ on E-biomed Plan

    1. Eliot Marshall

    U.S. and European groups hoping to start an Internet publishing outlet known as “E-biomed” appear to be on divergent paths, raising a question about whether they can agree on a format. Whereas the Americans want to begin the project with an unedited, unreviewed preprint depository, the principal European advocate—Frank Gannon, executive director of the European Molecular Biology Organization (EMBO) in Heidelberg, Germany—states in a position paper released on 7 July that he does not support such a scheme. In his paper, “EMBO and the electronic publishing initiative,” Gannon says he welcomes electronic publication but draws the line at a “non-reviewed depository.” “EMBO would have no role” in the latter, he writes.

    Gannon is concerned that an unedited outlet could “severely undermine biomolecular research,” and he thinks it “requires some element of monitoring.” He believes that monitoring must “go beyond” culling out “injurious or insulting passages,” as U.S. advocates have suggested. In Gannon's view, all articles with EMBO's stamp should be cleared at a minimum by a panel of “assessors,” which he views as less demanding than full peer review. Gannon sees a need to distinguish between properly vetted reports and those that may be “incomplete” or “erroneous.” His “simple” solution: create a streamlined process that checks to see that the experiments described are “correctly designed, the data are factually correct, and the conclusions are not exaggerated.”

    This approach differs from the original E-biomed concept. It was the brainchild of several U.S. biomedical researchers, including Stanford University geneticist Pat Brown, National Center for Biotechnology Information director David Lipman, and National Institutes of Health (NIH) director Harold Varmus. Varmus first mentioned that the federal government might get behind the proposal in comments to NIH's budget-writing overseers in the House in March (Science, 12 March, p. 1610). Since then, he has refined the idea in two commentaries published on NIH's Web page and in talks to scientific groups, including meetings of gene therapists, science writers, and Chinese researchers.

    Although the details have changed, E-biomed's core format has remained the same. Brown and several colleagues in his field of genetics first envisioned it as a way to share large files of gene expression data rapidly without going through traditional peer review, editing, and paper printing. The basic idea was to support free access and immediate publication in an e-print depository, which would accept the work of any scientist, with screening to remove only obscene or gratuitous material. As the plan evolved, its advocates at NIH broadened the scope to include research across all the life sciences. They also added new layers, giving authors the option of submitting to an unreviewed section of the depository or to a section that would include multilayered review schemes, perhaps run by existing peer-reviewed journals. Publishers have mixed policies on whether they would accept e-print articles for print publication, but few have welcomed E-biomed.

    Meanwhile, Varmus has suggested in recent talks that E-biomed be launched with what Lipman calls the “noncontroversial” element—the release of genetic data files, as originally proposed. This “would be a healthy place to start” the experiment “and see how we manage it,” Varmus told a meeting of science writers in Washington, D.C., on 30 June.

    Gannon, for his part, seeks to minimize differences with NIH. “I think that we are both still working toward the same general goal of a single searchable site,” he wrote in response to an e-mail query: “We have different appreciations at present about how that can be achieved.”


    Repairing the Genome's Spelling Mistakes

    1. Trisha Gura*
    1. Trisha Gura is a free-lance writer in Cleveland, Ohio.

    Short lengths of synthetic DNA and RNA can trick cells into changing single bases in their genomes, possibly opening a new route to gene therapy

    On the computer, correcting spelling errors takes nothing but a quick keystroke or two. Now, researchers are trying to harness the cell's own spell-check program—its DNA repair machinery—to tackle a much more difficult problem: fixing errors in the flawed genes that cause such hereditary diseases as sickle cell anemia and cystic fibrosis.

    In recent work, some of it reported at this year's meeting of the American Society of Gene Therapy (9 to 13 June in Washington, D.C.), researchers have shown that they can remedy defects caused by single DNA spelling mistakes in both cultured cells and experimental animals. The technique has been dubbed chimeraplasty, because it relies on hybrid molecules of DNA and RNA called chimeras. In essence, these molecules contain DNA with the correct version of the misspelled letter flanked by RNA that perfectly mirrors the rest of the target gene segment. By pairing up with the defective gene, the chimeras can trick the cell's DNA repair machinery into replacing the wrong nucleotide in a gene with the right one.

    If chimeraplasty works in human patients—and much more work will be required to show that it does—it could offer some significant advantages over current gene therapy strategies, which leave the defective gene in place and equip the cell with a spiffy new copy. Conventional gene therapy often relies on a virus to “infect” the diseased cells with the correct gene—a strategy that carries the risk of inflammation and other harmful reactions. What's more, because the replacement gene may land anywhere in the genome, it may not be subject to the same regulatory checks as the normal version, which could lead to its product being produced at the wrong time or in abnormal amounts.

    Chimeraplasty, in contrast, can use nonviral carriers, such as the tiny membranous sacs known as liposomes, to shuttle the DNA/RNA molecule into the cell. And the hybrid itself doesn't hang around; it's degraded within 48 hours. But perhaps the biggest advantage of chimeraplasty is that it targets the endogenous gene. “All it does is change its spelling,” says Michael Blaese, who left the National Institutes of Health last year to become chief scientific officer and president of the pharmaceutical division at the biotech firm Kimeragen in Newtown, Pennsylvania. “The gene's context, its regulatory regions, and its position on the chromosome all remain absolutely normal.”

    Even if clinical success turns out to be as elusive for chimeraplasty as it has been for conventional gene therapy, the technique could still prove useful in genetics research. With appropriate changes in the DNA sequence of the DNA/RNA hybrid molecules, chimeraplasty can be used to create specific mutations as well as to cure them. Thus, it could aid in the development of animal models of human diseases and the generation of “knockout” mice, which help researchers probe the normal function of novel genes. The technology also works in plants, where researchers have begun applying it to crop development (see sidebar). “The beauty of chimeraplasty is that it appears to be a universal process,” Blaese says.

    The technique itself is the 6-year-old brainchild of molecular biologist Eric Kmiec at Thomas Jefferson University in Philadelphia, who studies homologous recombination, a natural process in which DNA strands with complementary sequences pair up and then swap closely matching segments. The process works very inefficiently in mammalian cells, except for germ cells undergoing meiosis. When doing routine assays, however, Kmiec found that the rate of recombination rises for active genes being copied into messenger RNAs—the recipes for proteins. He began toying with the idea that synthetic RNA molecules might be put to use in gene repair: added to organisms with genetic defects, they might trick the cell into recombining good DNA into the mutant sites.

    But, as hopeful as it sounded, that idea also presented a dilemma: In the cell, RNA degrades faster than DNA and thus might not stick around long enough to be useful for gene repair. It was only after months of pondering that Kmiec hit on the idea of making a hybrid. “Sometimes you get so close to the problem that you just don't see the answer,” he notes.

    Hybrid vigor

    It later turned out that the chimeras did not work quite the way Kmiec thought, but initial tests showed that they do work, nonetheless. For his first target, Kmiec chose a gene called ras, which can be converted into a cancer-causing oncogene by changing a particular thymine base to a guanine. The molecule he designed to do this consisted of a five-base DNA segment flanked by two 10-base RNA segments that were modified to boost their stability. The sequence of this DNA/RNA hybrid mirrored that of the critical ras gene segment, except for the thymine-to-guanine change.
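
    The layout of such a chimera can be sketched schematically. The function and sequences below are invented for illustration; only the overall architecture comes from the description above: a short DNA core carrying the changed base, flanked by RNA arms that mirror the target.

```python
# Hypothetical sketch of the chimera layout: a 5-base DNA core containing
# the single-base change, flanked by two 10-base RNA arms matching the
# target gene segment. Sequences here are placeholders, not real genes.

def design_chimera(target, pos, new_base, core=5, arm=10):
    """Return (rna_left, dna_core, rna_right) mirroring `target` except
    that the base at index `pos` is replaced by `new_base`."""
    edited = target[:pos] + new_base + target[pos + 1:]
    start = pos - core // 2                    # center the DNA core on pos
    dna = edited[start:start + core]
    left = edited[start - arm:start].replace("T", "U")    # RNA uses U for T
    right = edited[start + core:start + core + arm].replace("T", "U")
    return left, dna, right

# Placeholder target with a single T to be changed to G at position 12.
left, dna, right = design_chimera("A" * 12 + "T" + "A" * 12, 12, "G")
print(left, dna, right)
```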

    Kmiec then introduced these chimeric molecules, or “oligos,” as small nucleic acids are sometimes called, into normal cells in culture. He subsequently found that some of the cells began showing the characteristic signs of cancerous transformation—apparently because the chimera had paired up with the normal ras gene in the cells and introduced the oncogenic mutation.

    Soon after, Kyonggeun Yoon, a chemist and molecular biologist in Kmiec's lab, showed that a DNA/RNA chimera could correct mutations as well as introduce them. She corrected a single-base mutation in a human gene for the liver enzyme alkaline phosphatase, which had been introduced into hamster cells. What's more, her results indicated that the change occurred in up to 30% of the cells. In contrast, traditional gene therapy works in less than 2% of cultured cells. “I wouldn't have believed it if I hadn't done it myself,” says Yoon, who notes that she spent months verifying the work.

    And less than a year after the initial hamster cell experiments, Allyson Cole-Strauss, Yoon, and others in Kmiec's lab extended the work, showing that they could correct a known disease-causing mutation, the single-base change in the β-globin gene that causes sickle cell anemia, in cells from patients with the disease (Science, 6 September 1996, p. 1386). The researchers estimated that 5% to 11% of the treated cells ended up with the normal gene.

    Getting to the point.

    In the proposed chimeraplasty mechanism, the RNA/DNA chimera (top) binds to double-stranded DNA, and then the mismatch where two cytosines (Cs) pair is corrected by replacement of one cytosine with a guanine (G).


    To many gene-therapy experts, these early results sounded just too good to be true. As gene-therapy pioneer Mario Capecchi of the University of Utah, Salt Lake City, and his colleague Kirk Thomas pointed out in a letter in Science, the gene correction rates were three to six orders of magnitude higher than would be expected from the proposed mechanism of homologous recombination. A second letter from a team of European and U.S. researchers suggested that contamination with normal cells could have skewed the results, making the “cure” rate appear higher than it was. (The letters appeared in the issues of 7 March 1997 and 25 July 1997.)

    Since then, Yoon has addressed the doubts with a series of experiments that were published in the December 1998 issue of Nature Biotechnology. After moving to the dermatology department of Jefferson Medical College in Philadelphia, she introduced a DNA/RNA chimera into albino mouse cells that can't make the pigment melanin because they have a single-letter mutation in a gene needed for melanin synthesis. Because those cells are colorless and normal melanin-producing cells are black, cells in which the chimera has corrected the gene defect are easy to spot.

    Yoon simply picked up the cells that had turned black and grew them individually in cultures to show that the chimeras did create a working gene and that it could be passed on to daughter cells. “It was re-proof of principle,” says Tom Wolfe, vice president of technology development at Sequitur, a biotech company located in Natick, Massachusetts.

    Now Howard Gamper in Kmiec's lab is taking a closer look at the mechanism by which the chimeras correct genes and has concluded that it's not homologous recombination after all. Instead, he and Kmiec suggest that chimeraplasty works via mismatch repair, an error-correcting mechanism that detects when one strand of newly synthesized DNA doesn't pair properly with the complementary strand because it contains an incorrect base. The mismatch repair enzymes snip out the offending base and replace it with the correct one. The two investigators believe that chimeraplasty, by creating an intentional mismatch, may co-opt this mechanism to change a single letter in the original DNA strand. Supporting this picture, Kmiec's team found that cells in which mismatch repair genes are mutated perform chimeraplasty less efficiently than normal cells.

    Toward the clinic

    Researchers are now finding that the technique works in whole animals. In 1998, for example, liver expert Clifford Steer of the University of Minnesota, Minneapolis, and his colleagues showed that they could create rats with a hemophilia-like condition by using chimeras, transported in liposomes, to introduce a mutation into the animals' gene for a factor needed for normal blood clotting. In preliminary experiments, the group also did the reverse; they used DNA/RNA chimeras to reverse the hemophilia-like state in dog liver cells with a clotting-factor mutation.

    And at the gene therapy meeting, Li-Wen Lai, a geneticist and molecular biologist at the University of Arizona Health Sciences Center in Tucson, reported that she and her husband, nephrologist Yeong-Hau H. Lien, have used DNA/RNA chimeras to correct a metabolic disease in mice. The mutation disables a kidney enzyme called carbonic anhydrase II and results in dangerously high acid levels in the bloodstream. Lai and her colleagues designed a chimera to reverse it, bound the molecules in liposomes, and then injected them into the ureters of the mice. The gene was corrected in 1% to 15% of the animals' kidney cells. Lai says she and Lien are now working with the animals to see if their blood acidity decreases and if the change lasts over time.

    But even though evidence is building that chimeraplasty can work, the success rate can vary from cell type to cell type, and even from experiment to experiment. For example, Yoon repeated her melanocyte experiment 30 times and found cells turned black anywhere from 0.01% to 15% of the time. Such variations convince Capecchi that “it's a little early to talk about human trials,” although he now says that chimeraplasty “certainly has potential.”

    Researchers also worry about safety, although that's a concern with standard gene therapy, too. Perhaps the chimeras will start “fixing” other parts of the genome that aren't broken, in essence creating mutations like those that lead to cancer. “Just how much less frequent is a nonspecific change than a specific change?” Blaese questions. “Those are the issues we are trying to address.”

    Even so, the first human trial of chimeraplasty may be on the horizon. At the gene-therapy meeting, Steer's group reported results from their recent work on Gunn rats, which carry a single-base deletion in the gene for a liver enzyme that detoxifies the yellow pigment bilirubin. The rats accurately model a rare human hereditary condition called Crigler-Najjar disease, in which patients can't metabolize bilirubin, which builds up to toxic levels. The patients end up severely jaundiced and have to spend 12 to 16 hours a day under blue light, which promotes bilirubin breakdown. If untreated, the disease is lethal, and the only cure is a liver transplant.

    Steer and his colleagues have now found that an appropriate chimera corrects the gene defect in a substantial proportion of the Gunn rats' liver cells. Up to 40% of the cells revert to normal, Steer says, as indicated by tests for the genomic DNA, messenger RNA, and protein sequences. Even more encouraging, the bile of the treated rats contains telltale liver metabolites that signify normal enzyme activity.

    Steer attributes the success of the therapy to the system he used to shuttle the chimeras into the rats' liver cells. The molecules are encapsulated in liposomes carrying surface molecules that specifically target them to receptors on the liver cells. “I've had people get up at meetings and say, ‘I don't believe your data,’” Steer recalls. “But as more labs are becoming successful, people are beginning to accept this.”

    Indeed, Blaese is gearing up with Steer to try the technique in three patients with Crigler-Najjar syndrome. The two groups are now doing safety studies in order to obtain Food and Drug Administration approval to proceed to human trials. If those studies show that the chimeras aren't targeting other DNA sequences and are safe in humans, then researchers could move on to target a very long list of human genetic diseases of the liver. “I wouldn't have to go back to drug discovery,” Blaese notes. “I could just go to the human genome project, read off what the gene is, and change the spelling of our molecule.”


    Surgically Altering Plant Genes

    1. Trisha Gura*
    1. Trisha Gura is a free-lance writer in Cleveland, Ohio.

    While gene therapists seek to correct the mutations that cause genetic disease, plant geneticists are more interested in creating new mutations—ones that might improve crop plants, making them more resistant to spoilage or herbicides, say, or boosting their nutritional value. Now plant researchers have a potential shortcut to that goal: a new technique called chimeraplasty, which is also being explored as a possible strategy for human gene therapy (see main text).

    One standard method to induce mutations in plants is to expose plant cells in culture to a mutagen, such as radiation, and then screen them to see if any have acquired the desired trait, such as herbicide resistance. But that is a slow, imprecise process. A more direct approach is to genetically engineer the plant to carry a foreign gene for the trait, but that approach has come under fire, particularly in Europe, because the foreign DNA remains a permanent part of the plant. Chimeraplasty, in contrast, should allow researchers to specifically mutate whatever gene they want—without permanent introduction of foreign DNA.

    The technique relies on hybrid DNA/RNA molecules—chimeras—that match the target gene region, except for the one base to be changed. By binding to the target gene, a chimera apparently triggers the plant cell's own DNA repair machinery to “correct” the mismatch between the target gene and the hybrid. As a result, the base change gets introduced into the gene.

    Plant molecular biologists, including Gregory May of the Samuel Roberts Noble Foundation in Ardmore, Oklahoma, and Charles Arntzen of the Boyce Thompson Institute for Plant Research at Cornell University in Ithaca, New York, have tested the technique, delivering the chimeras to plant cells by shooting them in with a gene gun or applying an electric current to open pores in the cell walls, through which the molecules can slip. The results are largely unpublished. But in the book Methods in Molecular Biology: Gene Targeting Protocols,+ May and his colleagues report that they used chimeraplasty to make tobacco cells resistant to sulfonylurea herbicides.

    By altering a single base in the gene coding for an enzyme called acetolactate synthase, which is needed for amino acid synthesis, the chimera abolished the enzyme's sensitivity to the herbicide. The investigators also tried the technique in plant cells carrying a mutated version of the gene for the green fluorescent protein and found they could correct the defect, causing the cells to glow green.

    Because the chimeras, which are small molecules, eventually break down after altering the gene, the technique doesn't leave any foreign DNA behind. As a result, chimeraplasty may raise fewer hackles among opponents of genetic engineering. “What we are seeing is a different tool for genetically changing crops,” says Arntzen. “You can surgically inactivate some genes and either change the processing or nutritional value of others. It's really wonderful.”

    • + Published in 1998 by Humana Press in Totowa, New Jersey.


    Holes in the Sky Provide Cosmic Measuring Rod

    1. James Glanz

    The shadows cast by distant galaxy clusters against the microwave glow of the sky are offering a new gauge of the universe's expansion rate and makeup

    CHICAGO -- Most astronomers measuring the universe on the largest scales chase bright lights. They look for beacons called standard candles—flickering stars, supernovae, or certain galaxies—and turn their observed brightnesses into cosmic distances, clues to the universe's expansion rate, its age, and whether it is permeated by a strange energy called the cosmological constant. But one group of observers is chasing shadows instead: the dark silhouettes cast by distant clusters of galaxies against what John Carlstrom, a radio astronomer at the University of Chicago, calls “an amazing backlight”: the glow known as the cosmic microwave background radiation (CMBR). The size of the shadows in the CMBR provides a cosmic measuring stick independent of any now in use.

    Cosmic measuring sticks based on standard candles generally rely on a “distance ladder” in which short-range beacons are used to calibrate others that can reach deeper into the cosmos. But the shadows created by the so-called Sunyaev-Zeldovich (SZ) effect can be seen out almost to the edge of the universe, and they can be converted into distances without any intermediate steps. “This method goes straight out to very large distances in one go,” says Mike Jones, an astronomer at Cambridge University in the United Kingdom.

    Conceived decades ago by two Russian scientists, the technique is only now showing its potential, thanks to more sensitive instruments for measuring the microwave background and satellite x-ray images of clusters. At a recent American Astronomical Society (AAS) meeting here, Erik Reese of the University of Chicago and Brian Mason of the University of Pennsylvania each presented new results on distances to a half-dozen clusters, based on work by multi-institutional teams. Combined with separate observations of how fast those clusters are rushing away, the distances give the expansion rate, called the Hubble constant, which can be combined with other cosmic measurements to give an age. Uncertainties in the technique are still large, but results so far provide a comfortable fit with a recently announced value for the Hubble constant. And, as more and more SZ observations roll in, they could help determine not only the expansion rate but also how it has changed over billions of years of cosmic history. That, in turn, could provide a crucial check on observations of distant supernovae suggesting that a cosmological constant is causing the expansion to accelerate (Science, 18 December 1998, p. 2156).

    Everything in the visible universe lies in front of the CMBR, because it dates from when the cosmos was a mere 100,000 to 300,000 years old. The radiation started out with energies corresponding to the temperatures of the hot young universe but cooled as it expanded. Arriving at Earth 14 billion years or so later, the CMBR photons carry energies equivalent to just 2.7 kelvin, which implies radio and microwave wavelengths.
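    The link between a 2.7-kelvin glow and microwave wavelengths follows from Wien's displacement law, which gives the wavelength at which a blackbody's emission peaks. A quick back-of-the-envelope check (the constant is standard; the calculation is an illustration, not from the article):

    ```python
    # Wien's displacement law: a blackbody at temperature T emits most
    # strongly near lambda_peak = b / T, with b = 2.898e-3 m*K.
    WIEN_B = 2.898e-3  # Wien displacement constant, meter-kelvin

    def peak_wavelength_mm(temperature_k):
        """Peak blackbody wavelength, in millimeters, at a given temperature."""
        return WIEN_B / temperature_k * 1e3  # meters -> millimeters

    # The CMBR today, cooled to about 2.7 K, peaks near one millimeter --
    # squarely in the microwave band, as the article notes.
    print(round(peak_wavelength_mm(2.725), 2))  # ~1.06 mm
    ```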

    In the early 1970s, astronomer Rashid Sunyaev and physicist Yakov Zeldovich realized that galaxy clusters, collections of thousands of galaxies sprawling across millions of light-years, would make dents in the CMBR—cool spots measuring arc minutes across. (An arc minute is a thirtieth of the width of the full moon.) “These are one of the few things that produce holes in the sky” rather than bright points of emission, says Reese.

    The holes, in which the CMBR's temperature drops by thousandths of a kelvin, are caused not by the galaxies themselves but by gas that collects in the gravitational “well” of the clusters. Heated to temperatures of 100 million kelvin as it plummets into the clusters, the gas—several times the mass of the galaxies—is an ionized soup of nuclei and free electrons, which interact with passing microwaves. “About 1% of the time, a photon will scatter off one of these free electrons,” says Carlstrom, and will generally be knocked up to a higher energy. That boost shifts the photon out of the band sampled by ground-based instruments, producing the SZ deficit. Just how many photons are scattered—and how deep a hole gets punched in the CMBR—depends on the density of the gas, its temperature, and the diameter of the gas blob along the line of sight to Earth.
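    The depth of the dip can be estimated from the standard thermal SZ formula: at the CMBR's long wavelengths, the fractional temperature drop is about twice the Compton y-parameter, which is roughly the scattering probability times the electrons' thermal energy over their rest energy. Plugging in the figures quoted above, a ~1% scattering probability and 100-million-kelvin gas, recovers the "thousandths of a kelvin" scale (a rough sketch; the constants are standard, the inputs are the article's):

    ```python
    # Order-of-magnitude estimate of the SZ temperature decrement.
    # In the long-wavelength (Rayleigh-Jeans) limit, dT/T = -2y, where the
    # Compton parameter y ~ tau * (k*T_e / m_e*c^2): tau is the optical
    # depth (the ~1% scattering probability) and T_e the electron gas temp.
    K_BOLTZ_EV = 8.617e-5   # Boltzmann constant, eV per kelvin
    M_E_C2_EV = 511_000.0   # electron rest energy, eV
    T_CMB = 2.725           # CMBR temperature today, kelvin

    def sz_decrement_mk(tau, t_electron_k):
        """SZ temperature dip, in millikelvin, for a given optical depth and gas temperature."""
        y = tau * K_BOLTZ_EV * t_electron_k / M_E_C2_EV
        return -2.0 * y * T_CMB * 1e3  # kelvin -> millikelvin

    # ~1% scattering off 100-million-kelvin cluster gas:
    print(round(sz_decrement_mk(0.01, 1e8), 2))  # about -0.9 mK
    ```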

    Astronomers quickly realized that an absolute measure of cosmic distances was hidden in those shadows. The first step in determining it is to call upon observations of the same clusters by x-ray satellites such as the German Roentgen Satellite and the Japanese ASCA satellite to measure the spectrum of the x-rays, which indicates the gas's temperature, and the x-ray brightness. Combined with the depth of the SZ hole, that gives astronomers enough information to determine both the gas's density and its diameter in light-years. By simple geometry, that length can be combined with the angle the gas blob subtends on the sky to give the cluster's distance. Finally, straightforward optical measurements of how fast the cluster is hurtling away—called the redshift—are thrown into the mix, and the Hubble constant pops out.
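    The last two steps, the simple geometry and the final division, can be sketched with illustrative numbers. The cluster values below are hypothetical, chosen only to show the arithmetic; real analyses must also deal with curvature and cluster shape, as described next:

    ```python
    import math

    C_KM_S = 299_792.458  # speed of light, km/s

    def cluster_distance_mpc(diameter_mpc, angular_size_arcmin):
        """Small-angle geometry: distance = physical size / angle subtended (radians)."""
        theta_rad = math.radians(angular_size_arcmin / 60.0)
        return diameter_mpc / theta_rad

    def hubble_constant(redshift, distance_mpc):
        """H0 in km/s per megaparsec, using the low-redshift recession speed v = c*z."""
        return C_KM_S * redshift / distance_mpc

    # Hypothetical nearby cluster: gas blob 1 Mpc across, subtending
    # 20 arc minutes, receding at redshift 0.04 -- numbers picked only
    # to make the arithmetic concrete.
    d = cluster_distance_mpc(1.0, 20.0)    # ~172 Mpc
    print(round(hubble_constant(0.04, d))) # H0 ~ 70 km/s/Mpc
    ```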

    It's not quite that simple in practice, because space is curved on very large scales and the gas blob doesn't have a sharp edge. What's more, not every blob will be spherical, with the same dimensions along and across the line of sight. So astronomers have to combine the values from a number of clusters—assumed to be spherical on average—to get a true value. Various other subtleties, such as “cooling flows” within the gases that can throw off the temperature measurements, led to some Hubble constants that were unrealistically small when astronomers began exploiting the SZ effect in the 1980s and early 1990s, says Steven Myers, a radio astronomer at the University of Pennsylvania.

    The latest measurements take that “cautionary tale” into account, says Myers. At the AAS meeting, he, Mason, and Anthony Readhead of the California Institute of Technology (Caltech) in Pasadena presented results for five relatively nearby clusters observed with a 5.5-meter dish at the Owens Valley Radio Observatory (OVRO) in California. “These clusters are all well-studied in the x-ray,” says Myers. The result was a Hubble constant of about 71 kilometers per second per megaparsec (3.26 million light-years). Although the uncertainty is about 20%, the central value meshes nicely with the figure of 70 recently announced by the so-called Hubble Key Project, which combined measurements of several different standard candles (Science, 28 May, p. 1438).

    Carlstrom, Reese, Chicago's Joseph Mohr, Marshall Joy of the NASA Marshall Space Flight Center in Huntsville, Alabama, and several other collaborators looked at much more distant clusters, some of them nearly halfway to the edge of the visible universe. To discern the much smaller shadows cast by such remote clusters, they used arrays of radio dishes at OVRO and the Berkeley-Illinois-Maryland Association (BIMA) facility at Hat Creek, California, rather than single dishes. Each array is an “interferometer,” in which signals from the various dishes are combined for a sharper picture of the radio sky. Although BIMA was designed for millimeter waves, the team adapted it to measure the centimeter waves of the CMBR by jamming the dishes together and outfitting them with special receivers (see illustration).

    Using the millimeter array has another advantage, adds Carlstrom: The team can get “lots and lots of observing time in the summer,” when the atmosphere becomes too humid for observations of millimeter waves, which are absorbed by water vapor. Carlstrom's team has now measured the SZ effect in 27 very distant clusters, Reese reported at the meeting, and has derived Hubble constants for six of them. Their central values range from 60 to 67, depending on what the team assumes about the curvature of space, with about 20% to 25% uncertainty.

    So far, such measurements aren't precise enough to strongly confirm or challenge the results of other techniques. But Barry Madore of Caltech, a member of the Key Project, welcomes the prospect of having an independent measure of the Hubble constant. “I think it's wonderful work,” he says. “What they're doing now is the right thing to do, increasing the sample size … so that they get a better representation of what their objects really are doing.” Half a dozen other groups around the world are attempting the same thing, as SZ pioneer Mark Birkinshaw of the University of Bristol in the United Kingdom described recently in Physics Reports. “The potential of this method for measuring the Hubble constant is only starting to be realized,” he wrote.

    The SZ technique could also reveal other vital statistics of the cosmos. Besides galaxies and glowing gas, clusters must contain even larger amounts of invisible “dark matter” to generate the gravity that holds the gas in place. “These clusters are kind of a garbage can which should contain the universal mix,” says Carlstrom. If so, comparing a cluster's SZ shadow, which reveals how much ordinary matter it contains, with its total mass should reveal the cosmic ratio of ordinary to dark matter. Because cosmologists can use other measurements to calculate how much ordinary matter must have been forged in the big bang, they can parlay that ratio into the grand total of all matter in the cosmos.

    But perhaps the field's greatest ambition is to use the SZ effect to measure not just the expansion rate, but how it has changed over billions of years. Observations based on exploding stars called type Ia supernovae have suggested that the expansion is actually speeding up, an indication that the common gravitational attraction of all matter in the universe is being overwhelmed by a mysterious repulsion. Distant clusters like those found by the Chicago group could eventually check the supernova results. Astronomers need better x-ray pictures of those far-off clusters, however, to extract precise Hubble constants from them.

    And those pictures should be on the way. The U.S. Chandra satellite and the European XMM satellite, to fly within the next year, should ride to the rescue with much more sensitive x-ray detectors and spectrometers. “It'll be a revolution when the new x-ray satellites are really able to do those SZ images justice,” says Myers.


    Lynx and Biologists Try to Recover After Disastrous Start

    1. Keith Kloor*
    1. Keith Kloor is a free-lance writer based in New York.

    An effort to bring lynx back to Colorado is mired in controversy after five animals starved to death earlier this year

    The distress call grew louder as Tanya Shenk snowshoed across drifts high in the Rocky Mountains last February. Shenk, a wildlife biologist for the state of Colorado, was in no hurry: The signal from the radio collar meant the victim was dead, perhaps killed by a predator. But when Shenk finally found the lynx curled under a spruce in Rio Grande National Forest, she realized to her horror that the emaciated beast, its telltale tufts of black hair shooting from the tips of its ears, had starved to death.

    Similar scenes have played out four more times, including once last month, unleashing a torrent of criticism over a $1.4 million program to bring the dwindling Canada lynx back to Colorado. News of the starved animals has outraged critics, some of whom maintain the effort was doomed from the start: Even before the first lynx were flown in from Canada late last year, analyses had suggested that the animal's main winter prey, the snowshoe hare, was in perilously short supply. And some wildlife biologists worry that the furor over the Colorado program will provide ammunition to critics of other wildlife reintroduction programs around the country.

    Scientists at the Colorado Division of Wildlife (CDOW) regret the starvation deaths but defend the reintroduction, which they claim may be their best shot at bringing the lynx back to Colorado. CDOW biologists cite a biological imperative to their timetable: Lynx in Canada are booming, so the population—this year, at least—could stand to lose the few dozen individuals shipped stateside. Some observers applaud the effort. Bringing back species like the lynx “is going to take a level of risk that none of us are really comfortable with,” says the U.S. Forest Service's Bill Ruediger, endangered species program leader for the Northern Rockies. “I give them credit for doing it.”

    Other experts disagree, contending that individual animals should not suffer to such a degree for the sake of a species. Death and the struggle for survival in unfamiliar terrain go hand in hand: Managers expect up to half the animals in any given carnivore reintroduction program to die, felled by other predators or hit by cars after ranging far from release points. “We certainly expect them to have a high mortality rate,” says Richard Reading, co-chair of Colorado's Lynx and Wolverine Advisory Team (LWAT) and director of conservation biology at the Denver Zoological Foundation. “But what you don't expect is starvation.” “Without the right habitat and enough food source,” adds Scott Mills, a lynx researcher at the University of Montana, Missoula, “reintroduction just becomes a death sentence.”

    The lynx as a species managed to stave off one death sentence a quarter-century ago, when fur trappers hunted the animal nearly to extinction in the lower 48 states. A furtive creature about twice the size of a housecat, the lynx once ranged from northern Canada and Alaska to southern Colorado, but now only about 500 remain south of the Canadian border. Only once before have wildlife biologists attempted to bolster lynx numbers in North America, when 83 cats from the Yukon Territory were released in New York's Adirondack Mountains in the late 1980s. The program failed miserably: Most lynx were killed by cars, and few if any survivors or their offspring are believed to be around today, says carnivore biologist John Weaver of the Wildlife Conservation Society.

    Since the last confirmed sighting in Colorado in 1973, CDOW biologists have tried everything from baited traps to cameras mounted deep in the woods to spot any lynx lurking in the Rockies, without success. Deeming the effort futile, CDOW director John Mumma and several staff biologists concluded during a rafting trip in 1997 that the lynx deserved a shot at reestablishing itself in the southernmost fringe of its historic range, says Gene Bryne, the recovery team's lead biologist.

    In late 1997, CDOW managers decided they had to act fast if they wished to run their own show. The U.S. Fish and Wildlife Service (FWS) at the time had begun considering a petition to list the lynx as threatened with extinction in the lower 48 states; a ruling is expected later this year. If FWS opts for extending federal protection to the lynx, it would call the shots on how and where to do so. “We had a short window of opportunity,” says Bryne. “This was the last chance for the state of Colorado to reintroduce lynx.”

    A biological clock was also ticking. Lynx and snowshoe hare populations go through 10-year cycles of boom and bust. When hares are depleted, lynx begin starving—until hares rebound and the lynx follow suit. Lynx in Canada were cresting, so the Colorado team could gain permission to import some for a reintroduction. Wait any longer, Bryne says, and lynx numbers would plummet—and Canadian officials would never send lynx stateside. “If Colorado would have come to us in a couple of years, we probably would have said no,” says Harvey Jessup of Yukon's Department of Renewable Resources. “We don't encourage trapping when the lynx population crashes.”

    Before bringing in lynx, however, the Colorado team set out to determine which regions had enough snowshoe hares to support the cat. The failed Adirondacks effort offered few lessons, aside from the obvious need to release lynx far from any highways, says Bryne. Thus the CDOW scientists were starting mainly from scratch. “The lower 48 has always been considered a fringe area for lynx, so there's not a lot of data available,” says Terry Root, an FWS biologist in Cody, Wyoming. The bottom line, says recovery team leader Rick Kahn, a terrestrial biologist, is that “there wasn't a cookbook that anyone could open and say, ‘Here's a recipe’” for a reintroduction program.

    Rather than mount a laborious and expensive census that could take years, CDOW biologists in summer 1998 counted pellets of hare scat at randomly chosen sites in 11 Colorado habitats. Using a well-known formula developed by Charles Krebs, a zoologist at the University of British Columbia, the CDOW team extrapolated from the scat to snowshoe hare density. Their findings, presented to the LWAT on 4 September, pointed to the San Juan Mountains as the most promising home for lynx, offering about 1.4 hares per hectare—almost three times the minimum density thought necessary to sustain the cat.

    Later that month, LWAT member Kim Poole, a lynx researcher at Timberland Consultants in Nelson, British Columbia, used an updated equation from Krebs to calculate that the San Juan Mountains had only about 0.7 hares per hectare, a shade above the 0.5 hare minimum. Deeming that the lynx would find enough to eat, the Colorado Wildlife Commission approved the program that November. But Poole, after an e-mail exchange with Krebs, learned that the updated equation still overestimated hare density. Krebs devised a version that takes into account Colorado's patchy Rocky Mountain landscape. Using the latest formula, CDOW research biologist Dale Reed calculated that the San Juan Mountains may have only about 0.4 hares per hectare.

    Alarmed by the new data, Reed says he counseled delaying the reintroduction until more data were in hand. “I said very firmly, ‘It looks like we have marginal habitat,’” says Reed, who has since left the recovery team. But Kahn, Bryne, and others acknowledge that they chose to forge ahead. “We knew going into this that we had marginal prey base,” says Tom Beck, a state biologist on the recovery team. “We always hedged our bets that they would have a more diverse diet.”

    The recovery team brought in the first lynx from Canada and Alaska in January, let them acclimatize for several days in holding pens in the San Juan Mountains, then began releasing them sporadically. But after four lynx starved to death within 6 weeks of their release, Reading and the other LWAT co-chair, Brian Miller of the University of Denver, on 23 March wrote to Beck—CDOW's liaison to LWAT—recommending that the recovery team suspend further lynx releases until it had found locations with more prey. “[I]t will be a public relations nightmare to continue releases if conditions for success do not exist,” Reading and Miller wrote. If good habitat cannot be found, they added, “we recommend abandoning the lynx program.” Changing protocols, CDOW in April decided to delay the release of more lynx until later in the spring, opting to fatten up the captives and let them loose when alternative prey—red squirrels, marmots, and others—were coming out of their winter hibernation.

    But as Reading and Miller had predicted, the starvation deaths sparked a furor. Outraged animal rights activists protested along with ranchers, who feared that the lynx may trigger new restrictions on land use. Environmentalists and conservation biologists also piled on. Carrying out a reintroduction, despite analyses suggesting that the habitat is unsuitable, is “shoddy science,” charges Reed Noss, chief scientist at the Conservation Biology Institute in Corvallis, Oregon. “The program was rushed without adequate background research,” he says.

    Other wildlife biologists agree. According to Krebs, the hare scat analysis was done poorly. In electing not to clear the plots of scat and wait a year, he says, “they did it all wrong. … You get snowshoe hare pellets from the last 10 years, and then you get 10 times the density” of a single year, he says. As for the CDOW's hopes that lynx would find alternative prey in winter, “that's just whistling in the dark,” says Krebs. “They eat nothing but snowshoe hare. … They are the ultimate in specialist carnivore.”

    Bryne agrees that it would have been better to have cleared the scat plots and waited a year, but he says the analyses were adjusted to account for overestimations based on old scat. Others contend that the scat sampling failed to zero in on the most promising areas to release lynx. “Hares exist in clumps, so instead of coming up with a 0.4 density in a general area, we need to identify where these pockets of prey are distributed,” says biologist Gary Patton, FWS's representative on the recovery team. He and Bryne disagree with Noss's assertion that the science was shoddy. Besides, says Bryne, deaths in reintroductions are unavoidable. “I wish there were no mortalities,” he says, “but that's not realistic.”

    Some experts call CDOW's strategy reasonable. “To restore a food chain and a population, it might be necessary to sacrifice individual animals. That's not sloppy science,” says Paul Nickerson, chief of the FWS Northeast Endangered Species Division. Robert Ferris, a wildlife biologist with Defenders of Wildlife, agrees. “Anytime you have an opportunity to restore an endangered species to its habitat, you do it. These are animals that are destined for the fur trade; either we see them in coats or see them in Colorado.”

    All sides are watching nervously to see how the lynx fare this summer. If the lynx do well enough, state biologists would proceed to stage two, releasing another 40 to 50 lynx next spring. Although no one knows the magic number for a viable population, Patton thinks “you could have a very successful program out there and lose over half the animals.” But the mounting criticism has taken its toll; if more than half the lynx starve to death this summer, the CDOW team has vowed to shutter the program. “That's as much a social and political protocol as it is a biological [one],” says Kahn.

    The stakes are high if the animals survive. “What happens in Colorado will factor into future recovery plans for other states that want to reintroduce the lynx,” says FWS lead lynx biologist Lori Nordstrom. A failure would have “broad implications,” says Dennis Murray, a lynx researcher at the University of Idaho. “Other states might say it's not worth the headaches or politics.” The Forest Service's Ruediger hopes the CDOW effort does not fall victim to unrealistic expectations. “There are all kinds of nuances that can go wrong, and you have to figure it out by trial and error. I don't know any other way to do it.”


    Does a Globe-Girdling Disturbance Jigger El Niño?

    1. Richard A. Kerr

    An unpredictable atmospheric oscillation that may help transform an ordinary El Niño into a monster could frustrate efforts to refine forecasts

    In early 1997, after some rough years in the El Niño prediction business, forecasters thought they might have managed to scout out an El Niño well in advance: One model after another predicted a modest warming of the tropical Pacific Ocean by the winter of 1997–98—the start of a moderate El Niño. As it turned out, forecasters got their warming, but it was several times stronger than any model had predicted. The Pacific sizzled with record heat, driving weather disruptions that caused thousands of deaths and billions of dollars in damages and outdoing even 1982–83's “El Niño of the Century.”

    Researchers have since suggested that decades-long swings of temperature in other parts of the Pacific could have helped drive El Niño to extremes (Science, 19 February, p. 1108), but now many climate scientists suspect that a more mercurial factor may also help stoke the fires of Pacific warming: capricious bursts of wind along the equator, each lasting only a month or two.

    This prime suspect for fogging up forecasters' crystal ball goes by the name of the Madden-Julian Oscillation or MJO, after the researchers who discovered it 30 years ago. Every 30 to 60 days, it sends bursts of wind across the western Pacific, where El Niño gets its start. New modeling and empirical studies suggest that these bursts of wind can trigger an El Niño already poised to strike or intensify one by changing temperature patterns in the western Pacific. The MJO doesn't cause El Niño, says oceanographer Michael McPhaden of the National Oceanic and Atmospheric Administration's (NOAA's) Pacific Marine Environmental Laboratory (PMEL) in Seattle, but it probably helps determine a warming's strength. Unfortunately, that knowledge may not make monster El Niños much easier to call, because the intensity of the MJO remains unpredictable. The MJO probably “adds a measure of unpredictability to El Niño,” says oceanographer William Kessler of PMEL.

    From a satellite, the MJO manifests itself as an eastward-moving patch of tropical clouds that extend high into the troposphere (Science, 7 September 1984, p. 1010). Spawned in an interaction between rising columns of air and high-altitude winds, the intensified updrafts marked by the cloud patch are part of a globe-circling oscillation of the atmosphere. The cloud patch appears in the Indian Ocean, races across Indonesia and the western Pacific, then peters out in the eastern Pacific. At the surface, a passing cloud patch first draws stronger trade winds out of the east, then triggers heavy rains under the clouds, and finally causes weakened trades or even westerly winds as it leaves. This surface expression of the MJO usually fades as it reaches the eastern Pacific, but 15 kilometers up the MJO's wave of high-altitude wind variations circles the globe in 30 to 60 days, when the next oscillation may begin.

    Researchers are now looking closely at the MJO and related tropical wind shifts, because “every El Niño since 1950 has been associated with” particularly powerful bursts of winds in the western Pacific like those of the MJO, says McPhaden. The winds presumably help unleash warm western Pacific water into the eastern Pacific. And the official forecasting model developed by the Bureau of Meteorology Research Center in Melbourne, Australia, suggested that MJO-related winds may create the best sort of wind pattern for doing so, modelers Andrew Moore of the University of Colorado, Boulder, and Richard Kleeman of the Bureau of Meteorology reported in May's Journal of Climate.

    To McPhaden, the '97-'98 event reveals the handiwork of the MJO. Late in 1996, the tropical Pacific was apparently primed for a warming. The prevailing trade winds blowing from the east had piled more than the usual amount of warm water in the west—water waiting to slosh eastward in the next El Niño once the trades weakened or reversed. But the warming did not begin to spread eastward until powerful bursts of wind blew from the west in synch with the MJO in December 1996 and March 1997, says McPhaden. These westerlies pushed warm water eastward and also sent slow ocean waves to the east, cutting off the normal upwelling of deep, cold water to the surface there. In 12 months, while a half-dozen unusually strong westerly wind bursts struck the western Pacific, the warming across the Pacific quickly surpassed the predicted 1°C and peaked at 2.5° to 3°C.

    “That El Niño would probably have occurred anyway,” says McPhaden, but “these wind events affect both the amplitude and timing of El Niños.” They may do so not just by pushing warm water to the east, Kessler and Kleeman found when they modeled the effect of the winds, but also by cooling the western source region, largely by boosting evaporation. When they ran the Bureau of Meteorology's forecast model with and without the cooling effect of the MJO, they found that the MJO strengthened a feeble El Niño by 30%.

    It might seem odd that cooling the western Pacific would strengthen an El Niño, but in this case, the cooler water weakened the temperature difference across the Pacific that drives easterly trade winds, which normally keep the warm water in the west. Weakening those winds allowed more water to spill westward. The resulting El Niño was “still not strong enough,” says Kessler, “but [the MJO] is acting in the right direction.”

    And other factors could boost the MJO's push. Many of the MJO's wind bursts early in the '97-'98 El Niño were unusually large, Kessler notes, enhanced by various forces, such as surges of cold air from the north. The cold surges fueled a pair of cyclones straddling the equator, which in turn funneled westerly winds between them, according to meteorologists Lisan Yu of the Woods Hole Oceanographic Institution in Massachusetts and Michele Rienecker of NASA's Goddard Space Flight Center in Greenbelt, Maryland.

    Many of Kessler's colleagues cautiously endorse the notion that the MJO fine-tunes El Niño's strength. Meteorologist and longtime El Niño watcher Vernon Kousky of NOAA's Climate Prediction Center in Camp Springs, Maryland, agrees that the MJO “may very well be a feature that helps determine the character of a particular El Niño.” And modeler Mark Cane of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, thinks Kessler “makes a good case for the MJO being an amplifier.” Still, at this point it's not clear just how big an effect the MJO exerts on El Niño, cautions modeler David Battisti of the University of Washington. “At some level these things matter,” he says. “The question is, how large an impact do they have?” He thinks Kessler and Kleeman's simulation of how much MJO's cooling effect amplifies an El Niño may have been unduly sensitive.

    But if the interaction of the MJO and El Niño is real, it “adds some level of unpredictability” to forecasting, says Kessler. “Nobody claims to predict the MJO,” he notes, “and most [models] have a poor rendition of it. You may be able to predict an El Niño next year, but it will be much harder to predict its amplitude.” Researchers may find out whether the MJO throws a wrench into the forecasts when they test their skills during the next El Niño, due no earlier than next year.


    Nobel Foundation Seeks Looser Financial Reins

    1. Joanna Rose,
    2. Annika Nilsson*
    1. Rose and Nilsson are writers in Stockholm.

    Long hamstrung by the stipulations of Nobel's will, the world's most famous prize fund wants the freedom to beef up its awards

    STOCKHOLM -- The Nobel Foundation generates a flood of publicity in October, when it announces the winners of the world's most prestigious science prizes, and it usually operates for the rest of the year well out of the limelight. Not so this year. In April, Sweden's newspapers carried warnings that the foundation was facing economic hardship. “Direct returns [from the Nobel Foundation's assets] do not cover the prize money any longer,” reported Dagens Industri, a Swedish financial newspaper, under the headline “Nobel—a case for the government.”

    This sudden concern over the financial wellbeing of the foundation, which bankrolls five of the Nobel awards (excluding economics), was prompted by a request from the fund's managers for the Swedish government to relax the rules regulating its investment policies. The publicity sent foundation officials scrambling to reassure the public that the foundation is not in any peril. “As for our ability to finance our current expenditures, there is no problem whatsoever,” says Michael Sohlman, executive director of the foundation. The proposed changes, says Sohlman, would simply do away with outdated restrictions and give fund managers more flexibility to invest in equities rather than fixed-interest bonds. The aim, says Sohlman, is not to stave off erosion of the foundation's finances, but to improve the prospects for increasing the prize money in future years.

    Nevertheless, the very idea of changing the rules governing the high-profile award fund has caused some alarm and highlighted the Nobel Foundation's almost iconic status in Sweden. “The Nobel Foundation is the most well-known fund in the world, and I suppose the application for permission to change its statutes signals their ambition to keep a high revenue profile,” says Henning Isoz, a financial expert who drafted Sweden's current fund legislation and now works for the consulting firm Ernst & Young. The discussion has also focused a spotlight on the foundation's financial management and on the bizarre circumstances surrounding the creation of the fund. These involved spiriting Nobel's fortune from Paris to Scotland and overcoming the resistance of patriotic Swedes to giving any of the awards to foreigners.

    Alfred Nobel, millionaire businessman and inventor of dynamite, signed his final will at the Swedish-Norwegian Club in Paris on 27 November 1895 and died a year later in San Remo, Italy, on 10 December 1896. The part of the will relating to the prize is not much longer than a page. It states: “The whole of my remaining realizable estate shall be dealt with in the following way: the capital, invested in safe securities by my executors, shall constitute a fund, the interest on which shall be annually distributed in the form of Prizes to those who, during the preceding year, shall have conferred the greatest benefit on mankind.” However, fulfilling Nobel's wish became a long and acrimonious process that lasted until 1900, when the Nobel Foundation was established.

    The will attracted attention across the globe and caused great controversy in Sweden and Norway, which were united at that time. Some of Nobel's relatives contested it, and some of the bodies designated by Nobel to award the prizes (the Royal Swedish Academy of Sciences for chemistry and physics, the Karolinska Institute for physiology or medicine, and the Swedish Academy for literature) were reluctant to assume the task. However, the Storting, Norway's parliament, which the will called on to appoint the peace prize committee, took on the job without hesitation. But in Sweden, Nobel's insistence on an international prize drew severe criticism. King Oscar II declared that the money should remain in the Nobel family, or at least not be spent on a dubious “Peace Prize.” Hjalmar Branting, a famous social-democratic leader and himself later a Nobel peace laureate, expressed his disgust with a capitalist easing his bad conscience by giving away money.

    Political acceptance wasn't the only problem. According to the will, all of Nobel's shareholdings in companies around the world had to be liquidated. This required some nimble—and risky—footwork on the part of Ragnar Sohlman (Michael Sohlman's grandfather), a 26-year-old assistant to Nobel. Sohlman gathered all of Nobel's shareholdings, cash, and other financial assets quickly and in great secrecy, so that the will could be executed under Swedish law rather than incur French taxes. On one occasion, in Paris in 1897, he rushed all over the city in an open horse-drawn cab, a loaded pistol in each pocket, collecting the Nobel fortune. The Rothschild bank of Paris, the only institution capable of safely handling such a large fortune, transferred everything to Scotland for sale.

    The sale netted 31.5 million Swedish kronor ($3.7 million), and the Nobel Foundation was created. Nobel stipulated that the foundation's money should be invested in “safe securities,” which was interpreted at that time as meaning gilt-edged government bonds. But when governments began to abandon the gold standard in the 1920s, bonds could not keep up with inflation and the value of the foundation's assets began to erode. Taxes were also nibbling away at the prize money. Sweden's Parliament exempted the foundation from income tax in 1946, but by then it had become the biggest taxpayer in Stockholm. By the mid-1950s, only one-third of the original value of Nobel's legacy remained.

    Investment roller coaster. After a precipitous slump, the Nobel Prizes took 90 years to regain their real 1901 value.

    The foundation's fortunes began to turn around in the early 1950s, after skilled financiers joined the foundation board. By this time, the idea of “safe securities” had widened, and in 1953 the foundation was given permission to invest some money in equities and real estate to compensate for inflation. These restrictions continued to be loosened, so that today the foundation is required to keep only 30% of its assets in fixed-interest bonds. Geographical limitations have also been relaxed, so now only 13% of the foundation's investments are located in Sweden, with 18% elsewhere in Europe, 22% in the United States, 3% in Japan, and 1% in emerging markets. The remaining 43% is spread between real estate holdings (6%) and fixed-interest investments (37%). Over the past 5 years, the average yearly return on the foundation's stock portfolio was 12.5%, with a high of 25.3% in 1998.

    By 1998, the foundation's invested capital had grown to $372 million, from which it spent a modest $11.8 million. The 1998 Nobel laureates were each awarded $894,000, bringing the total for the five prizes—in physics, chemistry, physiology or medicine, literature, and peace—to $4.5 million. The prize-awarding institutions spent $3.8 million assessing candidates. The remaining annual expenditure supports the Nobel symposia, administration, and the prize-giving ceremonies in December.

    Despite the apparent health of the foundation's finances, Sohlman and his colleagues are looking for ways to increase the value of the prizes. The very first Nobel Prizes, awarded in 1901, were for 150,800 kronor, the equivalent of about 15 to 20 years of a professor's salary. “It was not until 1991 that we managed to reach the same real value for the prize as in 1901,” says Sohlman. “Our ambition is to gradually reduce the ratio between total expenditures and market value of capital. This will make it possible to give priority to future growth in the awards.”

    To do that, the foundation has proposed that it should be freed from the requirement to invest at least 30% of its assets in bonds. It also wants to be able to spend realized capital gains from its property, stock, and bond investments on annual expenditures. At present, it is only allowed to spend direct returns. “The question of using realized capital gains has arisen recently because revenues from the foundation's investments have not matched the rapid growth in the value of their capital on the stock market,” says Bertil Kallner, chief lawyer for the Swedish National Judicial Board for Public Lands and Funds, which oversees the Nobel Foundation.

    Whether the foundation will win this freedom, and whether future Nobel laureates can look forward to ever more lucrative prizes, now lies in the hands of the National Judicial Board. A decision is expected before the end of this year. But, as Sohlman points out, money isn't everything: “Of course the money is of importance for the prize. But what really counts is the extensive work done evaluating the candidates and creating the worldwide intellectual network which most probably would not be there without the prize.”


    A Wobbly Start for the Sahara

    1. Mark Sincell*
    1. Mark Sincell is a free-lance writer in Tucson, Arizona.

    Could a tiny change in the angle of Earth's orbital axis trigger a cascade of events that turned an ancient Eden into the Sahara desert? Yes, says a report in the 15 July issue of Geophysical Research Letters.

    Studies of fossilized pollen have shown that grasses and shrubbery covered what is now the Sahara until some unknown environmental catastrophe dried up all the water, leaving nothing but sand. The exact timing is uncertain, but one interpretation of the pollen data suggests that a relatively mild arid episode between 6000 and 7000 years ago was followed by a severe 400-year drought starting 4000 years ago. Such a disaster might have driven entire civilizations out of the desert, leading them to found new societies on the banks of the Nile, the Tigris, and the Euphrates rivers. But the cause of the postulated droughts remained a mystery.

    Now, climatologist Martin Claussen and co-workers at the Potsdam Institute for Climate Impact Research in Germany are proposing that Earth's changing tilt triggered the rapid drying of the Sahara. Like a spinning top slowly wobbling on its tip, Earth has shifted its orientation: its tilt has decreased from 24.14 degrees to 23.45 degrees over the last 9000 years, resulting in cooler summers in the Northern Hemisphere. When Claussen introduced cooler Northern summers into a computer simulation of Earth's atmosphere, oceans, and vegetation, the monsoon storms that provide water to the Sahara grew weaker, killing off some of the native plants. The initial reduction in vegetation further reduced rainfall, says Claussen, starting a vicious cycle of desertification that began to accelerate about 4000 years ago. Less than 400 years later, Claussen's team found, the drought caused by the vegetation-feedback mechanism could have wiped out almost all plant life in the desert.

    “This is a very exciting result,” says climatologist John Kutzbach of the University of Wisconsin, Madison. It is the first time anyone has demonstrated that a change in Earth's tilt can cause a sudden vegetative feedback, he says. “It opens up a whole new class of research problems involving the biosphere,” such as predicting feedback effects in global warming.


    Engineering Plants to Cope With Metals

    1. Anne Simon Moffat

    Newfound genes and enzymes could enable crops to flourish on metal-rich soils and help other plants clean up heavy metal contamination

    Along with disease, drought, and pests, metals are a key enemy of plant growth. Aluminum, for example, the most abundant metal in Earth's crust, is normally locked up in minerals. But in acid soils, like those of the southeastern United States, Central and South America, North Africa, and parts of India and China, aluminum is set free as ions that poison plant roots, probably by making the cells rigid and unable to lengthen. The result is stunted plants and poor harvests, a problem on up to 12% of soils under cultivation worldwide.

    For decades, plant breeders coped with metals in soils by crossing metal-sensitive plant varieties with the few species that thrive despite their presence. But tolerant crops are few, and classical plant breeding is slow because crop genomes are large and complex. Lately, however, crop researchers have turned to genetic engineering to improve traits ranging from pest resistance to nutritional value (see Reviews beginning on p. 372)—and now they are taking the first steps toward producing metal-tolerant plants as well.

    Within the last year, several research groups have identified metal-resistance genes, or their approximate locations, in mutant plants and other organisms. In some cases, they have gone on to identify the enzymes made by the genes, which help cells cope with metals by excluding them, sequestering them within the cell, or transforming them into volatile forms that can escape to the air. “This recent work is exciting,” says plant biochemist Himadri Pakrasi of Washington University in St. Louis. “Now we have mechanisms” for coping with toxic metals—and the possibility of inserting them into crops to boost their growth.

    The findings could also aid efforts to use other plants as cost-effective agents of environmental remediation—growing them on soils contaminated with mercury, copper, or cadmium, for example, where they would extract and store the metals. The plants could then be harvested and incinerated (Science, 21 July 1995, p. 302). U.S. Department of Agriculture (USDA) agronomist Rufus Chaney estimates that the cost of using plants to clean polluted soils could be “less than one-tenth the price tag for either digging up and trucking the soil to a hazardous waste landfill or making it into concrete.”

    One advance came about a year ago, when Stephen H. Howell of the Boyce Thompson Institute for Plant Research in Ithaca, New York, Leon V. Kochian of the USDA's Plant, Soil and Nutrition Laboratory, also in Ithaca, and their colleagues used chemicals to create random mutations in the small experimental plant Arabidopsis, then screened the mutants for aluminum tolerance. They found two that could thrive in soils containing four times the level of aluminum that stunted the growth of normal plants.

    One of the mutants coped with aluminum by secreting organic acids, such as citric and malic acids, which bound the metal ions outside the cell as aluminum malate and aluminum citrate before they had a chance to enter the root tip. Arabidopsis is not the only plant known to employ this strategy for coping with aluminum, Howell says. “Emanuel Delhaize [of the CSIRO in Canberra, Australia] and others discovered about 5 years ago that wheat resistant to aluminum released a variety of organic acids,” he says. But finding this defense in Arabidopsis opens the way to tracking down the gene. Already, Howell and his colleagues have mapped the still-unidentified gene to chromosome 1 of the plant's four chromosomes.

    A second Arabidopsis mutant had a very different way of dealing with aluminum. Plants with this mutation, which mapped to chromosome 4, increased the flux of hydrogen ions into the root tip, alkalinizing the medium outside the root. The slight increase in external pH, by as little as 0.15 unit, was enough to transform Al3+ ions—the form in which free aluminum travels through groundwater—into aluminum hydroxides and aluminum precipitates, which don't enter and harm the root. Howell says the next phase of the research is to isolate and clone the genes—and perhaps introduce them into other plants.

    Two other metals, cadmium and copper, build up in soils contaminated by industry or heavy fertilizer application. They harm plants by producing free oxygen radicals, which damage cells, or by displacing essential metal ions such as zinc from plant enzymes, disabling them. Many plants cope with these metals by binding them in complexes with a class of peptides called phytochelatins and sequestering the complexes inside their cells. Now three groups have isolated genes for the enzymes, called phytochelatin synthases, that make the metal-binding peptides when the cell is exposed to toxic metals. The groups—led by Christopher Cobbett of the University of Melbourne in Australia, Phil Rea of the University of Pennsylvania, Philadelphia, and Julian Schroeder of the University of California, San Diego—identified the genes in Arabidopsis, wheat, and yeast. After searching genome databases, they also found counterparts of the plant genes in the roundworm Caenorhabditis elegans.

    The finding implies that animals may deal with unwanted metals in the same way as plants. “There doesn't seem to be a kingdom barrier,” says Cobbett. The gene sequences also shed light on how the enzymes sense metal levels: They indicate that one end of the phytochelatin synthase molecules contains many cysteine amino acids, residues that bind heavy metals.

    Looking to the future, Schroeder says that scientists would like to fine-tune the regulation of the phytochelatin synthase genes so that they are expressed at the highest levels in the shoots and leaves, rather than in the roots. The resulting plants would make better allies in environmental cleanup, because it is far easier to harvest the aboveground portions than to gather metal-laden roots.

    Richard Meagher of the University of Georgia, Athens, also hopes to manipulate plants to create a green cleanup crew, in this case for mercury, a lethal waste product found at various industrial sites. Microbes in soil and aquatic sediments transform the element into methyl mercury, which is a particularly serious problem because it accumulates in the food chain and causes neurological damage in humans. A few years ago, Meagher made use of a bacterial gene encoding mercuric ion reductase, an enzyme that converts the ionic mercury in mercuric salts to elemental mercury, the metal's least toxic form. When this gene was placed in Arabidopsis, canola, tobacco, and even yellow poplar, it allowed the plants to grow on mercury-laden media and release the metal into the air. By eliminating mercury from these sites, the plants block the formation of methyl mercury.

    Some might cringe at the notion of plants emitting trails of mercury vapor, but Meagher argues that, compared to the global pool of mercury in the air, the amount emitted from contaminated sites would be a trace. And he has an additional strategy for cleaning up mercury contamination. Last month, he and his colleagues reported in the Proceedings of the National Academy of Sciences that they had endowed Arabidopsis with a second modified bacterial gene that enabled the plants to break down methyl mercury directly. The gene, encoding an organomercurial lyase, catalyzed the split of the carbon-mercury bond, releasing less-toxic ionic mercury. Meagher hopes that the same enzyme can be engineered into trees, shrubs, and aquatic grasses, allowing these plants to detoxify dangerous methyl mercury. “Our working hypothesis is that the appropriate transgenic plant, expressing these genes, will remove mercury from sites polluted by mining, agriculture, and bleaching, for example, and prevent methyl mercury from entering the food chain,” says Meagher.

    Pakrasi cautions, however, that before transgenic plants can be widely planted on contaminated soils, researchers will need to do extensive field trials. Transgenic plants adept at handling metals in laboratory and greenhouse experiments may not perform as well when the soil, moisture, and climate vary, for example. “There are a lot of unknowns,” Pakrasi says.


    Crop Engineering Goes South

    1. Anne Simon Moffat

    The staple crops of the developed world—wheat, corn, rice, and soybeans—get most of the attention from genetic engineers, who are endowing them with genes for resistance to disease and herbicides. Now some researchers are turning their attention to so-called nonprimary crops, often native to the subtropics or tropics, that have untapped potential for producing food, fiber, fuel, and medicines. One such crop, sweet potato, has already been genetically altered to improve its protein quality and may soon be planted commercially (Science, 18 December 1998, p. 2176). Enhanced versions of other crops that produce food or products for export in the developing world are in the pipeline.

    Bringing the Potato Back Home

    Although potatoes are everywhere in the Western diet—baked, fried, and made into chips and starch thickeners—per capita consumption is relatively low in developing countries, even in South America, where the crop originated. The problem? The huge gains in U.S. yields—a doubling over the last 50 years, achieved via classical plant breeding and irrigation, pesticides, and fertilizers—have been made with potatoes that cater to U.S. tastes and climate. But genetic engineering could allow the developing world to close the gap.

    At last May's China Workshop on Plant Biotechnology in Beijing, Alejandro Mentaberry of the Research Institute of Genetic Engineering and Molecular Biology in Buenos Aires announced that researchers there had modified potato types commonly grown in Chile, Argentina, Uruguay, Cuba, and Brazil. Using Agrobacterium as a gene shuttle, the researchers created 16 transgenic lines, each carrying a different two-gene combination coding for resistance to various viral, fungal, and bacterial diseases. Several of the transgenic lines, which express antimicrobial proteins such as lysozyme and attacin from the giant silk moth, are resistant to the Erwinia bacterium, a serious potato pathogen, and are being field tested in Chile and Brazil.

    A consortium of 13 Latin American and European laboratories now hopes to introduce an array of six genes—for resistance to various fungal, bacterial, and viral pests, for herbicide resistance, and for the natural insecticide Bt toxin—into a single tropical cultivar. Some researchers worry that such a heavy load of foreign genes will depress yields. Indeed, a single gene for an insecticidal toxin may have reduced yields when it was added to another strain of potatoes tested in upstate New York. But Roger Beachy, president and director of the new Donald Danforth Plant Science Center in St. Louis, who has put 14 genes into one rice strain, says: “I'm optimistic about their goals. The precedent for that sort of project is pretty good.”

    A Healthier Cassava

    Known to Western societies as a source of tapioca, cassava is a mainstay of the developing world's diet. The leaves and starchy roots of this shrub, when powdered, boiled, fried, or fermented, make up the world's third largest source of calories, after rice and corn. About 60 years ago, British scientists working in East Africa began a program of plant breeding to increase the size and number of edible roots. Yields improved at first, but over the years the gains plateaued in the face of increasing losses to fungal, viral, and bacterial diseases. Now scientists are trying to push up yields again with the tools of biotechnology.

    The impetus comes from three organizations: the International Laboratory for Tropical Agriculture and Biotechnology (ILTAB), originally based in La Jolla, California, but now moving to the Donald Danforth Plant Science Center in St. Louis; CIAT (the Center for International Tropical Agriculture) in Cali, Colombia; and the Cassava Biotechnology Network, a federation of small companies, scientists, and cassava growers. They joined forces about 10 years ago and, as a first step, set out to map the cassava genome—a continuing effort that has now produced about 300 markers. In 1995 ILTAB researchers also developed techniques for introducing foreign genes into cassava cells with Agrobacterium and gene gun technology, then regenerating transformed cells into whole plants.

    These transformation systems are “not yet routine,” says ILTAB director Claude Fauquet. Still, researchers have already succeeded in transforming cassava with a gene encoding a truncated geminivirus protein, which makes the plant resistant to the African cassava mosaic virus by an unknown mechanism. They have also created disease-resistant cassava plants by introducing a gene that expresses replicase, an enzyme that disrupts the life cycle of invading viruses.

    If these and other efforts succeed in the field, cassava yields could increase 10-fold, to 80–100 tons per hectare, says Fauquet. But he says the effort is too small to pay off quickly. Cassava, he says, “is a poor crop in terms of those studying it, commanding the attention of only about 50 scientists worldwide. But it is rich in potential for feeding the developing world.”

    Richer Oils From a Palm

    One target of agricultural biotechnology could bolster the developing world's foreign exchange as well as its food resources. Just last month, Massachusetts Institute of Technology microbiologist Anthony Sinskey and colleagues at the Palm Oil Research Institute of Malaysia (PORIM) launched a multimillion-dollar project to genetically engineer the oil palm to produce everything from improved oils to, conceivably, biodegradable plastics.

    The palm, which prospers on plantations in Malaysia, Indonesia, and Central America, produces eight to 10 times more oil per hectare than canola or soybean, the top oil producers in temperate climates. And the market for the oil could expand if genetic engineers could redirect enzymatic pathways to produce an oil richer in oleic acid, which goes into cooking oils, or stearic acid, used as a cocoa butter substitute and a raw material for soaps and shaving creams.

    Over the past decade, Malaysian researchers have learned how to regenerate palms from cell culture, a necessary step in producing a transformed plant from a single genetically engineered cell. And 2 years ago, PORIM's Suan-Choo Cheah and G. K. Parveez successfully shot a gene for herbicide resistance into cultured cells with a gene gun, then generated transformed palm seedlings. The next step, says Sinskey, is to try modifying the biochemical pathways that produce the oil.

    Sinskey, a pioneer in identifying genes for natural polymers, is even thinking of engineering oil palms to produce natural polyester compounds that could serve as biodegradable plastics, as a number of researchers have done with Arabidopsis, soybean, or canola. Because palms are so productive, Sinskey says, they might have a better chance of making plastics economically.

    Still, palms aren't an easy target for genetic engineers. Unlike annual row crops, such as corn and soybean, palms take a long time to show results. Once a seed is planted, it takes 3 to 5 years to get first oil production. “You can't get three plant generations per year, as you do with corn,” says Monsanto's Steve Lehrer, who heads that company's specialty crops research. And palms generated from single cell culture aren't always vigorous. Even so, says Sinskey, “I am quite confident that palms can be genetically manipulated.”

    A Better Banana

    To the Western world, bananas and their close relatives, plantains, are a snack and dessert. But in western and central Africa, they provide more than one-quarter of all food calories, and they feed tens of millions in Central America and Asia, too. Indeed, the United Nations Food and Agriculture Organization ranks bananas as the world's fourth most important food crop. But despite their economic importance, they have received only erratic study. Tropical species have not been a top priority for crop scientists in the developed nations, and bananas are a difficult challenge for traditional plant breeding. Bananas produce fruit without pollination and reproduce vegetatively, and almost all important banana cultivars have three sets of chromosomes, making it difficult to add desirable traits by classical breeding.

    Now genetic engineers are finding that they can bypass those obstacles by slipping new genes directly into the banana genome. Ultimately they hope to create plants resistant to the fungal diseases that threaten this staple, and also to engineer in genes for foreign proteins that, when eaten, act as antigens, creating edible vaccines.

    “Today, banana research is where rice research was in 1990,” says plant biologist Charles Arntzen of the Boyce Thompson Institute for Plant Research (BTI) in Ithaca, New York. Researchers have just learned how to introduce foreign genes into the plants, a step that four groups reported at last spring's First International Symposium on the Molecular and Cellular Biology of the Banana, held at BTI. One group, from Australia, used a “gene gun” to inject DNA coding for several marker genes into tissue from the Cavendish banana—a dessert variety that is widely exported—which grew into plants expressing the genes. Three other international teams used the plant bacterium Agrobacterium, which injects DNA into cells it infects, to insert marker genes into various banana types.

    Researchers at the Catholic University Leuven in Belgium have now gone a step further, introducing genes encoding antimicrobial proteins into banana cells to generate plants resistant to Mycosphaerella fijiensis, the most serious fungal disease of banana. Says Randy Ploetz of the University of Florida, Gainesville, “An engineered banana may be produced in the near future that will resist at least some fungal diseases.”

    Arntzen and colleagues are also hoping to engineer bananas to produce antigens so that they can be used as edible vaccines against diarrhea caused by Escherichia coli and the Norwalk virus. Last year, he, Hugh Mason, and their BTI colleagues demonstrated the principle when they transformed potatoes to produce an E. coli protein, which elicited immune responses in human volunteers who ate the raw potatoes; they now have similar, unpublished results with potatoes and the Norwalk virus. But bananas are a better vaccine medium, Arntzen says, because raw bananas, unlike raw potatoes, are tasty.
