News this Week

Science  21 Jan 2000:
Vol. 287, Issue 5452, pp. 402



    Windfall Breeds Fresh But Vulnerable Crop of Grants

    1. Jocelyn Kaiser

    The turn of the century has brought some belated holiday cheer for agricultural scientists: a new pile of money for research. Last week, U.S. Department of Agriculture (USDA) Secretary Dan Glickman unveiled a program that will double the agency's spending on peer-reviewed grants—providing an additional $120 million—for work on everything from deciphering the genomes of prize heifers to making automobile fuel from crops. “We're ecstatic,” says Terry Nipp, a lobbyist for land-grant universities. There's a chance, however, that Congress will wipe out the funds and turn the community's newfound joy to grief.

    Called the Initiative for Future Agriculture and Food Systems, the new program first appeared in a 1998 law crafted by the Senate agriculture committee. The panel's chair, Richard Lugar (R-IN), has argued that ag research needs a big boost if the world is to feed its growing population. Lugar came up with an unusual—and controversial—source of funding for the initiative, authorized at $600 million over 5 years: cash returned by the states each year to the U.S. Treasury from savings in the food stamp program. Angered by this end-run around the usual budget review process, House appropriators in 1998 insisted on barring any spending on the new program (Science, 16 October 1998, p. 392).

    USDA can resurrect the program now because the law tying its hands expired last October, while the authorizing legislation had made the money available in 2-year pots. That means the initiative has until 1 October 2000, the end of this fiscal year, to blow its wad of unspent 1999 funds—if the wad isn't taken away. After Congress convenes later this month, House appropriators may try to put the kibosh on the program before the first grants go out the door.

    The new program is meant to complement USDA's existing competitive grants program, the National Research Initiative (NRI), which also plans to spend about $120 million on ag research in 2000. Unlike NRI, which funds mostly basic research, the new initiative will have an applied edge. The law tags five areas for support: sequencing and analyzing genes of livestock, crops, and useful microbes, and assessing the risk of altered organisms; managing natural resources and pests; food safety and nutrition; improving the productivity of small- and medium-sized farms; and developing new products, such as biofuels from corn or soybeans.

    USDA has tentatively divvied up the money in that order of priority, with genomics—perhaps the hottest area in plant research now (see p. 412)—getting the lion's share. Projects that could find funding, experts say, might include studying whether altered plants can transfer foreign genes to weeds or adding bioinformatics training to a genomics project. “But [spending] will be largely driven by the proposals we receive,” says Eileen Kennedy, USDA deputy undersecretary for research, education, and economics.

    Wasting no time, USDA expects to issue a request for proposals next month. Ag researchers should renew friendships with colleagues in other departments and at other universities as soon as they can: Invoking grantmaking watchwords now in vogue, USDA says the proposals must be multidisciplinary and multi-institutional and have a clear connection to stakeholders, says Chuck Laughlin, administrator of the agency's Cooperative State Research, Education, and Extension Service. To help scientists reach out to their communities while drafting proposals, USDA plans to hold several workshops in March at which researchers can brainstorm with farmers about their needs.

    There's a “great need” for this kind of program, says plant biochemist Bob Buchanan of the University of California, Berkeley. He says it might support work that “falls through the cracks” between agencies, such as his own group's clinical testing of milk treated with an enzyme so it doesn't cause allergies. “It's hard to get anybody to pay for that,” he says.

    Muting the celebration, however, is the prospect that Congress may seek to stop the program. A House agricultural appropriations subcommittee staffer says his panel is opposed to funding research without its approval. “The subcommittee does not like the process to be circumvented,” he says. Kennedy, however, feels that the program—which has strong Senate support—is safe. “We would not go forward with this if it were thought that there wasn't a good chance that the money will be there,” she says.

    More doubtful is the program's longevity. The 2000 budget law bars funding the initiative, which could spell death for it after the borrowed time—and borrowed 1999 dollars—run out. “We may be looking at a 1-year opportunity,” says Laughlin. Another worry is that appropriators may exact revenge by cutting NRI funds. “It's the age-old problem. Is it new money or is it coming out of somebody's hide?” asks Duke University's Jim Siedow, speaking for the American Society of Plant Physiologists. If the NRI does feel the blade, he says, “I don't think I'd be too enthusiastic.” But if ag scientists are lucky, congressional budget hawks won't play the Grinch—at least not this year.


    Gamma Ray Satellite Faces Premature End

    1. Govert Schilling*
    1. Govert Schilling writes from Utrecht, the Netherlands.

    NASA may soon deliberately crash a fully operational scientific satellite into the ocean. The reason? If it doesn't send its highly successful Compton Gamma Ray Observatory (CGRO) into a directed suicidal dive through Earth's atmosphere, the mammoth satellite could later plummet in an uncontrolled reentry, posing a potential risk to populated areas.

    NASA's painful dilemma arose in November, when one of the observatory's three onboard gyroscopes failed. These delicate instruments are used to precisely control the spacecraft's orientation. The gyro loss poses no threat to the scientific work of the observatory, says Charles Meegan of NASA's Marshall Space Flight Center in Huntsville, Alabama: “We could do science pretty well even if we lost another one.”

    But the satellite, which is as large as a truck and weighs 17 tons, needs at least two functioning gyroscopes to make a safe reentry once its mission is completed. (Each gyro controls movement along two of the three axes.) “You have to thread a very narrow needle,” says Alan Bunner, the director of NASA's Structure and Evolution of the Universe program. If another gyro were to fail, NASA would run the risk of having some of the pieces crash into populated areas. As a result, NASA's Office of Space Science has proposed bringing the observatory down in mid-March, before it's too late.

    Marshall's Gerald Fishman, the principal investigator of CGRO's Burst and Transient Source Experiment (BATSE), says he “never realized that the loss of one gyro would have such dire consequences.” BATSE has been the spacecraft's star instrument ever since its launch in April 1991, providing astronomers with their first reliable clues on the true nature of gamma ray bursts, the most energetic explosions in the universe. Other CGRO instruments study exotic objects in the universe, like neutron stars, black holes, and active galaxies.

    Engineers at NASA's Goddard Space Flight Center in Greenbelt, Maryland, which handles CGRO's communications systems, haven't given up. They are working hard to find a way to control the spacecraft with only two gyroscopes. If they succeed, CGRO is safe at least until a second gyro fails. According to a Goddard employee who requested anonymity, a solution is nearly at hand, but there may not be enough time to convince NASA headquarters. “We would need to develop completely new flight software,” he says, “which normally would take us 6 months.” And that may be too long to wait.

    Another way to prevent CGRO from making an uncontrolled reentry in the near future would be to boost its orbit well beyond its current altitude of 500 kilometers. But even that's not good enough. Although at an altitude of 800 kilometers CGRO would stay in orbit for more than a century, Bunner says that international regulations “don't let us pose any risk to future generations.” NASA has promised to make a decision by 16 February.


    E.U. Grabs Food Safety by the Horns

    1. Robert Koenig

    From “mad cow disease” to dioxin-contaminated poultry, food safety has flared into one of the most divisive issues in the European Union (E.U.), the world's largest importer and exporter of food products. With no single agency responsible for establishing and enforcing food safety standards across the E.U.'s 15 member states, national governments—often backed by their own research labs and food agencies—are frequently at each other's throats. Take the current “beef war” between Britain and France over conflicting analyses of the threat posed by bovine spongiform encephalopathy (BSE): The European Commission recently had to resort to a lawsuit to try to force France to accept the advice of an E.U. scientific panel and lift its ban on British beef.

    Last week, the commission proposed a slew of measures aimed at strengthening and harmonizing its scientific analyses of European food safety issues (see box). The centerpiece would be a new European Food Authority—a permanent scientific advisory body with its own research budget and scientific staff—and a scientific secretariat to coordinate advice from scientists who serve on E.U. advisory panels. The E.U.'s research commissioner, Philippe Busquin, told Science that the new Food Authority “should become the scientific point of reference for the whole E.U.” on food safety issues.


    · A “European Food Authority” to advise the commission, with its own research budget and a staff of several hundred scientists and others

    · A larger “scientific secretariat” for the commission's science panels to help coordinate links to E.U. risk management officials

    · A strengthened “rapid alert” system for food safety problems

    · A system for rapid identification of scientific experts in the E.U. to help with food safety research

    · Stronger links between the new Food Authority and national food safety and research agencies

    Health and Consumer Protection Commissioner David Byrne, who issued the proposals last week in the form of a white paper, contends that the proposed new authority—which must be approved by the European Parliament and the E.U.'s member nations—would establish “world-class food safety standards and systems” once it is in place, possibly in 2 years' time. Byrne sees the commission's current food safety advisory panels as “a core part” of the new organization, but it would have its own budget to commission “ad hoc and targeted research,” in cooperation with the E.U.'s Joint Research Centre and with scientific agencies in E.U. member nations.

    Some European scientists are welcoming the recommendations, but others complain that the new authority would lack teeth: With no direct power to inspect suspect food shipments, punish violators of European food laws, or dictate policy to E.U. member nations, it would have nowhere near the clout of the U.S. Food and Drug Administration (FDA). Some are also disappointed that the proposed organization would play no role in public health policy. The E.U. has a drug-analysis lab—the European Medicines Evaluation Agency in London—but has no equivalent to either the FDA or the U.S. Centers for Disease Control and Prevention.

    As public debate on food safety has raged over the past several years, the commission has struggled to beef up its authority. In 1997, the commission sought to insulate its scientific advisory panels on food safety from outside pressures—especially from industry—by moving them to the consumer directorate. But that shift solved only some of the problems, says physiologist Philip James, director of the Public Health Policy Group, a U.K. think tank, and a member of the commission's Scientific Steering Committee—an independent advisory panel that deals with multidisciplinary issues related to public health. “The commission's demands on scientists have at times been ridiculous, the remuneration for scientists has been too low, and the size of the ‘scientific secretariat’ support staff has been ludicrously small,” he says. Indeed, the white paper itself says “The existing [scientific advice] system is handicapped by a lack of capacity, and has struggled to cope with the increase in the demands placed on it.”

    James and two other members of the steering panel—German toxicologist Fritz Kemper and French food safety expert Gérard Pascal—last month issued a 74-page report criticizing the scientific advisory structure and suggesting the creation of a wider European authority that would cover both food safety and public health. They argued that the European public's confidence in scientific and government analyses “has declined because of a perceived bias toward political and industrial rather than consumer interests.” Says Kemper, a Münster University professor who was a key player in developing the 1997 reforms: “We have to restore the confidence of European consumers, which was badly damaged by the BSE and other food safety debates.” The new proposals are “a step toward improving the commission's scientific advice system on food safety—but it is only an initial step,” says James. “Ideally, we would have a powerful agency like the FDA, but in Europe we have to do things one step at a time,” adds Kemper.

    But even this first step may be difficult. The European Parliament gave an initially tepid response to the white paper's proposals, and some consumer and food industry groups criticized the plan. David Barling, a researcher at the Centre for Food Policy in London, sees possible tension between the new E.U. Food Authority and the national food agencies that operate in eight of the 15 E.U. member nations. “There are some obvious fault lines,” he says, including “the potential for future conflict when you create a European Food Authority at the same time that national food safety agencies are emerging.”


    Getting Researchers to Pull Together

    1. Robert Koenig

    Like a fragmented empire of powerful fiefdoms, research in the European Union (E.U.) tends to be driven by the policies and leading laboratories of its 15 member nations rather than by any overarching vision across the whole community. Although that landscape is unlikely to change significantly anytime soon, the E.U.'s new research commissioner, Philippe Busquin, this week proposed steps to make the borders on the map of European research a bit less distinct.

    Decrying the “fragmentation, isolation, and compartmentalization of national research efforts” in Europe, Busquin delivered a white paper policy statement on 18 January that outlined his concept for a “European Research Area.” “There is no real research policy in Europe now, and the coordination of the member states' national policies and the European Commission is insufficient,” says Busquin. “The research effort is often too little, too late, and too much centered on the national context—especially in comparison with our main competitors, the U.S. and Japan.”

    A significant part of Busquin's plan is the creation of a new “council of high representatives” from pan-European research centers such as the CERN particle physics lab, the European Molecular Biology Laboratory (EMBL), the European Space Agency, and the European Southern Observatory. These and other centers have a major influence on European research, but are run independently by different groups of countries, not by the E.U. Busquin's idea is to get them working together, discussing funding sources, international cooperation, and access for non-E.U. scientists.

    Initial reaction to this proposal has been positive. “A substantive council that could identify and follow up opportunities for cooperation with the E.U. and among the centers would certainly contribute to the dynamism of European science,” says EMBL director-general Fotis C. Kafatos. Although the centers focus on widely differing fields, Kafatos says he foresees “the possibility of fruitful collaborations … for example in informatics, in modeling complex systems, and in engineering tasks.”

    The council is the most visible of several proposals Busquin included in his white paper. He also wants to commission a “benchmarking” study of research efforts in all E.U. nations; find ways to network the best European research centers into “virtual centers of excellence”; help to create an “E.U. patent” that would be simpler and cheaper than the current European patent; encourage national research centers to give technical support to start-up companies; and attract outside researchers to work in the E.U. Busquin's vision also includes a “thorough rethink” of plans for the E.U.'s next flagship research program, Framework 6, focusing more on areas that should be tackled at a Europe-wide level. The current framework funds a large number of cross-border projects in many fields, but accounts for only about 5% of public research expenditure in Europe.

    Busquin is hoping his white paper will spark a 6-month debate in European labs and the European Parliament over the best way to make E.U. research more cohesive. He must win the support of both the European Parliament and E.U. member governments to implement many of his proposals, but he aims to ask research ministers to endorse the plan in June.


    Did the Dinosaurs Live on a Topsy-Turvy Earth?

    1. Richard A. Kerr

    Earth's surface is constantly on the move. Over tens of millions of years, the drift of continents carries land from tropics to poles, from poles to temperate latitudes. But another agent of change may be at work beneath our feet, occasionally jerking not just a continent but the whole globe into new climate zones. On page 455 of this issue of Science, two researchers report evidence that 84 million years ago the whole Earth rolled like a ball, turning 15° to 20° in a couple of million years. That would be enough to rotate Washington, D.C., into the tropical latitudes of the Caribbean today. To paleoclimatologists, such a rapid whole-Earth tumble would trigger climate shifts that seem to have come out of nowhere. This “true polar wander,” so called because the geographic poles could actually travel across the globe, would move land and sea 10 times faster than continental drift ever has. And it may be symptomatic of unusual turmoil within the deep interior of Earth.

    Earth's poles are a wobbly crew, as scientists have long known. The magnetic poles ramble around a bit with the vagaries of the churning core that produces the magnetic field. Averaged over tens of thousands of years, though, they stick close to the geographic poles marked by the spin axis, which wobbles predictably while continuing to point at the same stars. But despite all this wandering, experts have disagreed for decades about whether the geographic poles ever rapidly shifted their position. “Virtually every test we've done in the past 5 years suggests true polar wander has been overestimated,” says paleomagnetician John Tarduno of the University of Rochester, New York. But paleomagnetician Richard Gordon of Rice University sees something like the 84-million-year event in his own data.

    The principles of true polar wander have been straightforward enough since physicist Peter Goldreich of the California Institute of Technology and mathematician Alar Toomre of the Massachusetts Institute of Technology applied some basic physics to the problem in 1969. A spinning Earth is most stable when its most massive parts, such as an ice sheet on its surface or a lump of particularly dense rock within it, are farthest from its spin axis as marked by the poles—that is, when the extra mass is at the equator. If the mass forms or moves elsewhere, Earth will roll to bring it to the equator, while the spin axis remains pointing at the same stars. In a classic figure, Goldreich and Toomre likened the excess mass to an oversized bug heading from the equator to the pole. If it walks too slowly, the pole will wander away from it, keeping the bug forever near the equator.
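    The Goldreich-Toomre argument can be condensed into a line of standard rigid-body physics. The following sketch is not from the article itself; it simply restates why a dissipative spinning body ends up with its excess mass on the equator:

```latex
% For a body with fixed angular momentum L spinning about an axis with
% moment of inertia I, the rotational kinetic energy is
\[
  E = \frac{L^{2}}{2I} .
\]
% Internal dissipation lowers E while L is conserved, so the body relaxes
% toward the state of maximum I. A small excess mass m at distance r from
% the center and colatitude \theta contributes
\[
  \Delta I = m\, r^{2} \sin^{2}\theta
\]
% to the spin-axis moment of inertia, which is largest at the equator
% (\theta = 90^{\circ}). If the mass sits elsewhere, the whole body slowly
% rolls until the mass reaches the equator; this is true polar wander.
```

    In the bug analogy of Goldreich and Toomre, the pole's drift away from a slowly crawling excess mass is just the planet continuously re-maximizing $I$.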

    The physics worked, but it fell to paleomagneticians to show whether true polar wander has ever happened. They got the nod because they can track ancient magnetic poles, which coincide with the spin axis poles in the long run. The magnetic field is frozen into rocks as they solidify from lavas, recording the pole location. Paleomagnetician William Sager of Texas A&M University in College Station and geochronologist Anthony Koppers of Scripps Institution of Oceanography in La Jolla, California, have compiled 27 ancient pole locations dated to between 120 million and 39 million years ago by radiometric argon-argon dating. They assumed that each of the Pacific Ocean seamounts—mountain-sized piles of lava—had locked in a single magnetic field whose orientation points to the pole's location at the time the lava solidified.

    After weeding out the lower quality or undated pole positions, Sager and Koppers found an odd situation about 84 million years ago. Seamounts from that period, give or take 2 million years, yield two pole locations 16° to 20° apart. Plate tectonics couldn't have operated quickly enough to shift the poles that far in the few million years allowed, says Sager: “It looks like there was a rapid polar shift 84 million years ago.” By their analysis, the area of the Atlantic Ocean would have moved as fast as 110 centimeters per year southward compared to the top speeds of plate tectonics today of 10 centimeters per year.

    Not everyone finds the data convincing. Sager and Koppers “call it true polar wander,” says paleomagnetician Dennis Kent of Lamont-Doherty Earth Observatory in Palisades, New York, “but maybe it's something else. Maybe it's bad data.” “I think they underestimated the effect of data selection,” adds Tarduno: In Sager and Koppers's study, the 84-million-year event is defined by just four poles, he notes.

    Gordon, on the other hand, thinks something unusual happened in the Pacific around 84 million years ago. He also sees a large, although less abrupt, polar shift about that time in Pacific paleomagnetic data of a different sort: paleopoles derived from the magnetic field locked into long stripes of Pacific Ocean crust. For complete confidence, true polar wander should appear worldwide, by definition. However, the shift is far less obvious in paleopoles determined from lavas elsewhere around the world, Gordon concedes. Tarduno and Kent see that as a serious problem, suggesting a solely regional phenomenon like a redirecting of Pacific Ocean plate motions, but Gordon suspects the data from beyond the Pacific may just not be good enough yet.

    If so, he, like Sager, looks to Earth's interior for the driver of true polar wander. Perhaps a pile of ocean plates that had sunk 700 kilometers down finally broke through into the lower mantle, abruptly shifting the planet's mass distribution and triggering true polar wander. Then plate tectonics would lie behind Earth's tipsiness.


    The Promise of All Weather, All the Time

    1. Andrew Lawler

    Want to watch the weather unfold on your home computer or TV? Next year a Mississippi company called Astro Vision Inc. hopes to launch the first in a series of small satellites that would provide live, high-quality color videos from space of storm fronts, hurricanes, forest fires, and other natural disasters. The satellites have piqued the interest of NASA officials, who hope that the data will be useful to researchers, and they have caught the attention of venture capitalists, who see an opportunity to feed the Internet's insatiable demand for material.

    This week company officials said they have almost finished raising nearly $65 million in private funding for the first phase of their business plan, which calls for the launch of Avstar 1 in October 2001 and a twin 6 months later. The fund raising was aided by the prospect of some real income: Under a 1998 contract, NASA will buy $9.4 million worth of data on how tornadoes form. The company has a broader market in sight, however—the millions of people who watch The Weather Channel and monitor the Web site of the National Oceanic and Atmospheric Administration (NOAA). “We're going to change the way we view Earth's environment and weather,” boasts Michael Hewins, the company's chief executive officer. NASA officials aren't willing to go that far, but they are complimentary. “It's a neat little project, and they've got a good shot at making it happen,” says David Brannon, chief of commercial remote sensing at NASA's Stennis Space Center in Pearl River, Mississippi, who has worked closely with Astro Vision officials for the past 3 years.

    Each Avstar satellite will cost between $20 million and $25 million to build and launch, says Hewins. It would carry both a wide-angle camera with a resolution of 7 kilometers and a second camera with a narrower field to spot objects as small as a half-kilometer across. Although its resolution is poor compared with that of military spy satellites, it is similar to the 1-kilometer resolution achieved by NOAA's Geostationary Operational Environmental Satellites (GOES) and well-suited for weather watching.

    Astro Vision managers will be able to provide users with an image per minute, rather than the single image a government weather satellite typically produces every 15 minutes to a half-hour. That will enable Astro Vision to stitch together videos of hurricanes evolving, thunderstorms growing, and forest fires spreading. A customer interested in volcanoes could be alerted to an eruption, for example, or forest rangers could monitor the spread of fires. The second phase of Astro Vision's business plan calls for launching a satellite every 6 months or so, with a resolution down to 100 meters. In time, Hewins says, the company will operate a fleet of spacecraft “that can watch everything at once—eventually globally.”

    Fritz Hasler, a research meteorologist at NASA's Goddard Space Flight Center (GSFC) in Greenbelt, Maryland, says the Astro Vision approach could provide researchers greater flexibility than the GOES system, which must choose between viewing Earth from a wide angle or doing rapid scans of a smaller area. “Basically, we can only do one or the other,” says Raymond Steiner, a GOES operations specialist. And because GOES serves a broad community, it has fewer opportunities to do rapid scans for individual researchers. Although NOAA would like an additional GOES satellite that can be devoted to performing rapid scans of storm fronts and hurricanes, there is no budget for it.

    In addition, Brannon says Astro Vision's narrow-angle camera will give researchers a better view—at half-kilometer resolution versus GOES's 1-kilometer resolution—of the tops of storm cells that can produce funnel clouds, and potentially provide up to 15 minutes of additional warning of an impending tornado. Ghassem Asrar, chief of NASA's earth science office, predicts that the agency may be able to do some of its research more cheaply “by purchasing data upon delivery from the private sector instead of developing, building, and launching new satellites.”

    Not everyone is so impressed, however. Dennis McCarthy, the meteorologist in charge of the National Weather Service's Norman, Oklahoma, office, says that a combination of Doppler radar and GOES imagery can, in some cases, already provide forecasters with plenty of warning of a tornado. And he adds that rapid scans—about one a minute—by GOES “already let us see where the storms are developing.”

    Hewins insists that Astro Vision is not merely dedicated to providing data to the government. “We're a video-coverage company, providing entertainment,” he says, noting that television executives “go wild over this.” He anticipates that media, combined with Internet companies such as America Online and Microsoft, will account for roughly two-thirds of its business, with government agencies providing the rest. If so, it could prove a case of science leading to an entertainment spin-off.


    Embryos Attacked by Mom's Natural Defenses

    1. Michael Hagmann

    Ever since the 1950s, when Nobel Prize-winning immunologist Peter Medawar first likened the mammalian embryo to a tissue transplant received from a genetically different individual, immunologists have been puzzled by a basic conundrum: How do we manage to procreate? Why doesn't the mother's immune system regard the prospective Dad's share of the embryo as foreign tissue and reject it? Work reported in this issue may help answer that question.

    Immunologists trying to tackle this paradox have focused on identifying factors that might suppress the so-called acquired branch of the body's immune system, which produces T and B cells directed at very specific target antigens. Although they have found some clues, a unifying picture is still elusive. The new findings, which come from rheumatologist Hector Molina and his colleagues at Washington University in St. Louis, Missouri, and appear on page 498, for the first time point to suppression of the immune system's innate branch, which is evolutionarily older and far less specific, as key to embryonic survival.

    The Molina team has found that Crry, a cell surface protein that suppresses a key part of innate immunity called the complement system, is necessary for the embryo to survive pregnancy, at least in mice. The complement system is a group of proteins usually activated during an inflammatory response triggered by foreign invaders. “Although the bulk of immunologists still believe in acquired immunity as playing a major role during pregnancy, this is beginning to confirm the importance of the innate immune system,” says reproductive immunologist Charles Loke of the University of Cambridge in the United Kingdom.

    It remains to be seen whether human embryos use a similar trick to evade a complement attack, but Loke and others think that that is highly likely. Indeed, Christopher Holmes of the University of Bristol in the United Kingdom, who proposed a complement-mediated mechanism for fetal survival in humans about 10 years ago, says it may be interesting to see “if complement inhibition could be a way of preventing miscarriages and spontaneous abortions,” which may end as many as 50% of all conceptions.

    Molina did not set out to study the role of complement in embryonic survival. Instead, he wanted to test whether Crry helps protect the body's own cells against damage during inflammation. He and his colleagues inactivated the Crry gene in mice, expecting to see little effect as there are numerous complement regulators that might help out if one of them is missing. But to Molina's surprise, knocking out the Crry gene had drastic consequences: None of the Crry-deficient embryos made it through pregnancy. They all died in the uterus about 10 days after conception.

    In healthy mouse embryos, Crry is the only complement regulator the team found in abundance on fetal cells called trophoblasts that form part of the placenta, the interface between mother and embryo. So Molina hypothesized that “in the absence of Crry there may be an excessive complement deposition on these cells.” That turned out to be the case: The researchers found activated complement on trophoblasts of Crry-deficient embryos, but not on those from normal embryos. What's more, the knockout embryos showed a massive invasion of inflammatory cells, again in striking contrast to normal embryos. If the mothers were complement-deficient, however, they gave birth to essentially normal pups, further confirming that abnormal complement activation killed the Crry-deficient embryos.

    The picture that emerges, says Molina, is that “a lack of Crry on the trophoblasts leads to complement activation that, in turn, attracts and activates inflammatory cells. Because these cells lack specificity, they destroy the trophoblasts and, ultimately, the embryo.”

    That makes sense to immunologist Polly Matzinger of the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland. “An internally grown fetus is no different from any other tissue [in the body] in that it has to protect itself from complement, which is amazingly dangerous stuff. And it does this by expressing a potent complement regulator,” she says.

    The big question now, though, is whether something similar happens in humans. “There are several examples of fetomaternal tolerance mechanisms that are very different between humans and mice, because a human pregnancy is quite a bit more complicated,” cautions Susan Fisher, a reproductive biologist at the University of California, San Francisco.

    One difference is that humans don't have a Crry gene. But, explains Holmes, two other complement regulators “are present in huge quantities at the fetal-maternal interface to form a barrier of sorts” and could perform Crry's vital job in humans. In fact, the issue is next on Molina's agenda; he plans to tackle it by looking at complement regulators in women suffering from spontaneous miscarriages.

  8. KOREA

    Laid-Off Satellite Group Sets Up Shop

    1. Michael Baker*
    1. Michael Baker writes from Seoul.

    SEOUL—First Park Sung Dong got mad. Then he got even.

    Park, an engineer at the Satellite Technology Research Center (SATREC) in Taejon, was furious when the Ministry of Science and Technology (MOST) decided that most of his university-based institute should be absorbed by a larger government contractor. But after threatening a mass resignation last month to protest the decision, Park and his 55 colleagues have vowed to show government bureaucrats just how valuable their work can be: They have started a company that will hire researchers as they are laid off from SATREC over the next 3 years and market their skills to developing countries.

    SATREC was created in 1989 to develop Korean technical skills while doing exploratory research on microsatellites, those weighing about 100 kilograms. It is based at the prestigious Korean Advanced Institute of Science and Technology (KAIST), sometimes called Korea's MIT. But last summer MOST decided that much of SATREC's work overlapped with that of the Korean Aerospace Research Institute (KARI), which designs large commercial satellites costing several hundred million dollars, and that satellites should be developed at a government institute, not a university. Universities are places where professors teach students, officials explain, and the 20% of SATREC that remains will be better suited to filling a traditional educational role. “KAIST is not a research institute. It's a graduate school,” says Choi Jong Bae, an assistant director at MOST.

    Access to space is a priority for the Korean government, which last month announced a 5-year plan to build a rocket capable of launching small satellites. But SATREC workers feel that the government has a one-size-fits-all attitude toward satellites and is blindly consolidating two very different programs. “[KARI's] approach is to guarantee reliability and higher performance,” says Choi Soong Dal, founder and director-general of SATREC. “We are doing small microsatellites for experimental purposes.” SATREC's third and most recent satellite, for example, cost just $10 million. It included experiments measuring Earth's magnetic field, solar particles, and the degradation of electric circuits from radiation in space.

    Choi Soong Dal mounted a vigorous lobbying effort to fight off a similar attempt by MOST 5 years ago to eliminate his institute, but this time he says he's resigned to accepting whatever happens. KARI officials declined to comment on the consolidation, which was proposed last summer after a new science minister took office. Given the frequent reshuffling of the Korean cabinet, sources speculate that opponents are counting on the plan not outliving his tenure.

    In the meantime, the first seven engineers to leave SATREC pooled $300,000 of their own money and on 11 January launched a venture company called Satrec Initiative. Their business plan targets developing countries that have limited budgets and a hunger for technologies that advanced countries, bound by laws restricting certain technology transfers, cannot supply. They also hope to commercialize spin-off technology, says Park, noting that electro-optical systems for astronomy are similar to those used in semiconductor manufacturing and that materials shielding satellite circuits from radiation in space could benefit the nuclear power industry.

    Park looks forward to cashing in on such technology. That incentive didn't exist for him as a fixed-salary employee at SATREC, which didn't pursue commercial applications. “I'd like all of our engineers to become millionaires,” he says.


    How Rotavirus Causes Diarrhea

    1. Ingrid Wickelgren

    In a healthy adult, the diarrhea caused by common intestinal viruses such as rotavirus is largely an unpleasant nuisance. But to infants and toddlers, particularly those in the developing world where medical care may be scarce, rotavirus infections are a major cause of mortality, killing some 600,000 children worldwide each year. The virus is so deadly because it causes the intestine to secrete copious amounts of fluid, leading to death by dehydration in highly vulnerable infants. How rotavirus triggers the excess fluid secretion has long been mysterious, however. Now, on page 491, gastrointestinal physiologist Ove Lundgren at Göteborg University in Sweden and his colleagues provide a solution.

    They've shown that, in mice, the virus activates the enteric nervous system, the nerves that control the intestine's movements and its fluid absorption and secretion. The activated nerves apparently stimulate cells of the intestinal lining to boost their water secretion, resulting in diarrhea. The toxins released by some bacteria, including the cholera bacterium and pathogenic strains of Escherichia coli, cause diarrhea this way. But, says gastroenterologist Don Powell of the University of Texas Medical Branch in Galveston, “nobody knew this mechanism was involved in rotavirus diarrhea. It's a nice piece of work.” Understanding what causes the excess fluid secretion in rotavirus diarrhea, he adds, is an important first step in figuring out how to block it with drugs.

    Currently, there's no way to prevent the infections. A vaccine against rotavirus had to be withdrawn last fall from the U.S. market, the only place it was sold, because it caused serious intestinal blockages that killed a small number of children. Children who become infected are treated with oral or intravenous salt and sugar solutions to prevent dehydration. But the treatment does not halt the diarrhea itself. So a drug that does, and could be added to the rehydration therapy, would be a big advance, Powell says.

    Despite the evidence of the enteric nervous system's role in the diarrhea caused by the cholera and E. coli toxins, few researchers considered the idea that the same mechanism might underlie virus-induced diarrhea. Then in 1996, Lundgren, whose own lab helped prove the case for cholera toxin, read a paper indicating that little was known about how rotavirus might produce abnormal fluid secretion in the intestine. So he teamed up with microbiologist Lennart Svensson of the Swedish Institute for Infectious Disease Control in Solna and several other colleagues to determine whether intestinal neurons might be involved.

    The researchers applied three types of compounds known to block neurotransmission in the gut to rotavirus-infected intestines from newborn mice that they maintained in solution. All three compounds greatly suppressed fluid secretion by the infected mouse intestines. The researchers found, for example, that fluids from infected intestines had a higher concentration of an unabsorbed radioactive compound after treatment than before, indicating that the intestines were secreting less fluid. But the inhibitors had little, if any, effect on fluid transport in uninfected mouse intestines, indicating that the rotavirus was behind the excessive neural activity. The researchers confirmed these observations with studies in living mice. Whereas 14 of 15 rotavirus-infected mice developed diarrhea, only 6 of 14 did when the team treated the animals by injecting the anesthetic lidocaine into the abdomen.

    By quantifying the effects of lidocaine and other nerve-suppressing agents on the isolated intestines, the researchers estimate that fully two-thirds of the intestine's secretory response to rotavirus is mediated by neurons. The “simplest” explanation for this, says Lundgren, is that rotavirus infection triggers the release of chemicals in the gut that activate nerve endings beneath the intestinal lining. The activated nerves then turn on secretory reflexes in cells of the intestinal lining that discharge chloride ions into the lumen of the intestine. This action is thought to draw water into the lumen by osmosis.

    Researchers still need to fill out the details of this mechanism. In particular, they don't yet know the identity of the nerve-stimulating substance released during rotavirus infection—information that would be important to anyone trying to develop drugs that block the neural activity, says Helen Cooke, a gastrointestinal physiologist at Ohio State Medical School in Columbus.

    Also unclear is whether pharmaceutical companies will take up the challenge of developing such drugs. They tend to shy away from investing in disorders such as rotavirus diarrhea that are important in developing nations but do not represent a big market elsewhere. Still, Lundgren says, his team's work suggests that many forms of diarrhea are caused by enteric nervous system stimulation. “If we are right,” he adds, “we could use one drug for every type of diarrhea.” And the drug companies might find that a far more attractive prospect.


    Black Holes Begin to Lose Their Mystery

    1. Robert Irion

    ATLANTA—From the fringes of the universe to nearby reaches of our own galaxy, black holes are becoming more than obscure figures on the cosmic stage. Several findings released here last week at a meeting of the American Astronomical Society offer insights into the many forms that black holes can assume. Those guises, it appears, are determined largely by the ages of the black holes and their dietary habits.

    Astronomers have long suspected that most, if not all, large galaxies harbor black holes, including our own Milky Way (Science, 7 January, p. 65). In the distant universe, the most powerful black holes shine as quasars—highly energetic cores of galaxies fed by streams of hot gas spiraling into the holes. NASA's Chandra X-ray Observatory now may have unveiled the precursors to those voracious black holes—remote objects that emit torrents of x-rays but whose visible light may be shrouded by thick veils of dust. Tens of millions of the objects would dot the sky, although Chandra thus far has scanned for them only within a patch one-fifth the size of the full moon.

    Chandra identified these remote blips as some of the main sources of a previously mysterious glow of x-rays, first seen during a rocket flight in 1962, that fills the sky. The objects may have dominated the centers of galaxies that formed within a billion years of the big bang, speculate astronomers Amy Barger of the University of Hawaii, Honolulu, and Richard Mushotzky of NASA's Goddard Space Flight Center in Greenbelt, Maryland. If so, they would easily rank as the most distant black holes yet detected.

    In our galactic neighborhood, the identification of black holes has become almost routine. The Hubble Space Telescope has fingered 20 such objects at the cores of nearby galaxies, including three new ones described at the meeting by astronomer Douglas Richstone of the University of Michigan, Ann Arbor. Powerful gravitational fields around these holes, which carry perhaps 50 million to 100 million times more mass than our sun, pull stars into tight whirling orbits that Hubble easily resolves.

    But these black holes are dim shadows of the quasars that blazed billions of years ago, according to research by astronomers Neil Nagar and Andrew Wilson of the University of Maryland, College Park, and their colleagues. The team studied about 100 nearby galaxies with the Very Large Array, a set of 27 radio telescopes near Socorro, New Mexico. About 30% of the galaxies emitted weak levels of radio signals similar in detail to the emissions from quasars, Wilson says. Further analysis by the Very Long Baseline Array, a chain of radio telescopes stretching from Hawaii to the U.S. Virgin Islands, showed that the sources of those emissions were extremely compact. “The only known way to do that is with an accreting black hole,” Wilson says.

    Wilson regards such black holes as “dying quasars,” because they consume perhaps 1000 to 100,000 times less matter than their bright ancestors in the distant universe. Indeed, the key difference may be that most galaxies around us today are mature and stable, says astronomer Virginia Trimble of the University of California, Irvine. Black holes in the centers of these galaxies apparently go hungry, she says, because stars and gas no longer travel on the perturbed orbits that typically push matter toward the core of younger and more vigorously churning galaxies.

    Even closer to home, another team of astronomers may have spotted the first known black holes in the Milky Way that eat nothing at all. Two of these wanderers apparently revealed themselves when their gravity acted as lenses in space, temporarily warping and amplifying light from more distant stars (Science, 7 January, p. 67). Astronomer David Bennett of the University of Notre Dame in South Bend, Indiana, and other astronomers in the Massive Compact Halo Object (MACHO) collaboration estimate that each invisible object is a few thousand light-years away and about six times more massive than our sun, fingering them as likely black holes.

    The work by the MACHO team “looks very plausible,” says astrophysicist John Bahcall of the Institute for Advanced Study in Princeton, New Jersey, although he awaits more such lensing events for confirmation. Because black holes of this mass probably emerge naturally from the violent deaths of many giant stars, the Milky Way may brim with them. Bahcall says that these tiny brethren of the monster at our galaxy's heart could supply a sizable chunk of the elusive “dark matter” that makes the Milky Way far more hefty than it appears.


    Reaping the Plant Gene Harvest

    1. Trisha Gura

    As the databases fill up with a wealth of new plant gene sequences, researchers are turning to several innovative techniques to decipher the genes' functions

    Plant scientists are on the verge of reaping a bounteous harvest—not of golden grains of wheat or corn, but of raw data. Just last month, for example, researchers reported the first complete sequences of two chromosomes from Arabidopsis thaliana, a tiny mustard widely used as a model plant, and they expect to decipher the rest of its genome by the end of this year. Projects aimed at sequencing the genomes of major crop plants, including rice, are also beginning to bear fruit. But as these efforts pack DNA databases with a bumper crop of sequences, scientists are facing a mind-boggling challenge: how to figure out what all the new plant genes they are discovering actually do.

    In some ways, the challenge looms even greater than that facing animal researchers, whose own coffers are fast filling with genes of unknown function. Plant genomes tend to be even bigger and more complicated than those of mammals, but the scientists studying them lack the funding and attention doled out to their colleagues through high-profile ventures such as the Human or Mouse Genome Projects. Nevertheless, plant scientists are employing new techniques and innovative strategies to process their cornucopia of sequence information.

    Some of these techniques have been borrowed from mammalian genomics, such as the “microarray” technology now coming into vogue for analyzing how gene expression patterns change as conditions vary (Science, 15 October 1999, p. 444). Other methods apply only to plants: For example, researchers are using snippets of DNA known as transposable elements to generate wholesale lots of mutant plants that can then be screened for interesting trait changes. Both of these technologies can help identify genes that plants turn on or off in response to stresses such as drought or salty soils—information the biotech industry welcomes eagerly.

    But plant researchers are already venturing beyond that into the vast frontier of “metanomics.” The field tracks the effects of particular mutations or environmental changes on a plant's entire metabolic repertoire. To begin, scientists generate countless profiles or fingerprints of a plant's metabolites, such as the cadre of sugar derivatives produced during starch biosynthesis in potatoes, under various conditions. Later, researchers can get a quick idea of what biochemical pathway an unknown gene might perturb, or the side effects of a particular gene alteration, by comparing the metabolic profile of a genetically altered plant with those already logged into the databases.

    “These advances in tools and technologies are going to have a tremendously positive impact on the basic understanding of plant biology,” predicts Susan Martino-Catt, a plant molecular biologist at Pioneer Hi-Bred in Johnston, Iowa. Already, she says, “the applications are allowing us to make progress that we've not been able to make until now.”


    Getting started

    The first step in trying to figure out what a newfound gene does is to feed its sequence into a database. If the new gene matches the sequence of a gene whose function is known, it probably performs a similar role. This so-called BLAST searching can be a powerful technique, but the chances of finding a useful match will depend, in part, on the number of sequences already deposited in a database and what investigators know about them—which is one reason plant researchers are excited about the current progress with Arabidopsis.

    Two of the plant's chromosomes, about 30% of the genome, have already debuted—the results appear in the 16 December issue of Nature—while the other three are expected to come out by the end of the year. But although the Arabidopsis genome may serve as a Rosetta stone for deciphering other plant genomes, the lowly mustard will inevitably come up short in some respects. For one, 40% of the genes on the two completed chromosomes have no assigned functions. And for another, Arabidopsis sports a very tiny genome—roughly 110 megabases—which is easily dwarfed by the genomes of plants such as maize, which carries 4500 megabases of DNA.

    Fortunately, plant researchers have many other options besides BLAST searches. A quick and dirty way to guess the function of an unknown gene relies on a phenomenon called synteny, in which genes devoted to particular functions, such as leaf development, tend to cluster in the genome and maintain their chromosomal organization from one organism to another. In fact, synteny is turning out to be so pervasive in grasses—including the important crop plants wheat, rice, and maize—that researchers can draw a map in which they position the chromosomes of some seven different grass species in concentric circles and then draw straight spokes to connect similar genes in similar places on those chromosomes (see diagram). “We are not talking about related genes, but the same genes,” says Jeffrey Bennetzen, a plant geneticist at Purdue University in West Lafayette, Indiana.

    Still, wheat is not maize, and rice is not millet. “Clearly there are differences as well,” Bennetzen adds. And there is another limitation to comparative genetics: Researchers can often tell what class of protein a gene encodes, but they still may have no clue about the specific role it plays. For example, cells contain numerous kinase enzymes. They all add phosphate groups to other proteins, but act in a myriad of disparate biochemical pathways. “Five years ago, it would have been, ‘Wow, we found a new kinase,’” says Charles Arntzen at the Boyce Thompson Institute for Plant Research at Cornell University. “Today it is, ‘Oh no, we've found another new kinase.’”

    To get a handle on more specific information, genetic engineers are approaching the problem from a different direction: mutating plant genes and seeing what effect that has on the plant. Investigators already have developed a panoply of techniques, some of which have pinpoint specificity, while others create mutations in large numbers of genes. On the high-specificity end sits the fledgling technology dubbed chimeraplasty, which uses hybrids of synthetic DNA and RNA—chimeras—to create specific single-letter mutations in the genes of plants and other organisms (Science, 16 July 1999, p. 316). Researchers, including Arntzen and Gregory May of the Samuel Roberts Noble Foundation in Ardmore, Oklahoma, have successfully used chimeraplasty to mutate plant genes whose functions are already known, and they are now applying it to more obscure genes. “We are pursuing this with vigor,” May says.

    Martino-Catt's team at Pioneer Hi-Bred, which also reported success with chimeraplasty, has taken an additional, more broad-brush approach: They are creating random mutations in maize with a so-called transposable element—a bit of DNA that can jump into or out of genes in maize, interrupting their sequences and effectively inactivating them. The Pioneer group has already let the transposable element, called mutator, loose in thousands of fertile maize plants and has stored the resulting seeds. To find out whether mutator has interrupted any interesting genes, the researchers simply grow the seeds and then screen the resulting plants for changes, such as drought tolerance or sweeter kernels.

    With probes specific for mutator, the group can pull out and clone the DNA sequences containing the transposable element and, thus, figure out which genes it disrupted to produce a new trait. “It's a very robust system,” says Martino-Catt, who estimates that Pioneer can easily grow plants and fully screen about 100 genes a year using the approach. But although it works well in maize, mutator functions in few other plants. And even in corn, the transposable element can't easily expose genes with less obvious functions, such as the production of a novel metabolite that might turn out to have medicinal properties.

    A more general approach to mutating plant genes involves a technique called “activation tagging.” The tags are stretches of DNA containing transcriptional enhancers—naturally occurring sequences that permanently turn on genes when inserted on the front or back end of the gene or even just nearby. “The beauty of enhancers is that they don't have to be in any specific place within a gene in order to turn on expression,” says May, whose group is among those using the technique.

    But enhancers don't jump into genes the way mutator does in maize—and that initially thwarted hopes of using them. However, researchers have been able to engineer the plant-infecting Agrobacterium tumefaciens to shuttle viral enhancers into plant cells. For example, May has used an engineered Agrobacterium to create over 300,000 lines of the alfalfa relative Medicago truncatula with inserts in approximately 95% of the plant's genes. His team is now screening for resulting trait changes. The enhancer elements, like the mutator sequence, then also serve as flags to identify the genes responsible for the changes.

    In a similar vein, a team led by molecular biologist Guy della-Cioppa of Biosource Technologies Inc. in Vacaville, California, is using tobacco mosaic virus (TMV) to shuttle genes into plant cells to trace their function. In one type of application, the researchers have created “libraries” by separately cloning thousands of genes from a plant, such as Arabidopsis, into TMV. They then infect tobacco plants in the greenhouse with the altered TMVs and screen the resulting plants for changes, such as disease or drought resistance, conferred by the transplanted gene. Della-Cioppa says Biosource researchers have gleaned clues to various gene functions, but he declines to discuss the details for proprietary reasons.

    The technique can also be used to inactivate tobacco plant genes that are counterparts of a gene from another plant. In this case, della-Cioppa's team clones the transferred genes backward into the TMV, so that the messenger RNAs (mRNAs) they produce in the resulting plants will bind to the mRNAs produced by tobacco genes that have a similar sequence, effectively silencing the genes. Screening for trait changes should then shed light on the silenced gene and, ultimately, the inserted one. “It's a neat new aspect of virology to modify DNA sequences that we don't yet understand,” Cornell's Arntzen says.

    Questions remain about how useful the technique will be, however. For example, because TMV is used to infect only full-grown plants, the modified viruses may reveal little or nothing about the genes involved in early plant development. And there are concerns about the virus contaminating nearby plants in field tests, although della-Cioppa says that the group uses a disabled virus that is less virulent than normal strains.

    Looking at the big picture

    All these techniques involve direct manipulation of the plant genes themselves. But researchers are also stepping back to get a broader picture of how a plant alters its patterns of gene expression or biochemistry over time or in response to changing environmental pressures. Some are borrowing techniques from the Human Genome Project, such as microarrays—chips containing hundreds or thousands of gene snippets laid out in precise arrays that provide quick snapshots of the expression of whole suites of genes simultaneously.

    To find out which genes are expressed in a plant cell or tissue, a researcher isolates the mRNAs produced by the cell's genes, copies them into DNAs (called complementary DNAs or cDNAs), adds a fluorescent tag for tracking purposes, and then washes a solution of the labeled cDNAs over a chip. Each DNA snippet on the chip will only pick up the cDNA from the corresponding gene. Hence, by measuring the fluorescence of the chip DNAs, a researcher can learn which genes crank up or slow down their expression in response to a particular change, say the introduction of a gene of unknown function into a strawberry plant.

    If a series of genes wink on and off together, they are likely to be operating in the same pathway. And if over- or underexpressing an unknown gene in a plant affects the members of such a pathway, the unknown gene can be assigned to that pathway as well.

    Although microarray techniques can place new genes into known pathways, they may not reveal exactly what those genes do in those pathways. What's more, finding that a gene is making its mRNA doesn't necessarily mean that the mRNA is actually making the ultimate protein product. Thus, some scientists have turned to “proteomics,” which compares patterns of proteins, not mRNAs, under different conditions. “You have to catch the protein to know what metabolic pathway a gene is involved in,” says geneticist Hervé Thiellement, whose team at the University of Geneva in Switzerland is among those using such techniques.

    Proteomics uses two-dimensional (2D) gel electrophoresis to separate the proteins in a cell or tissue by size and pH characteristics, followed by mass spectrometry to help identify each component of the resulting gel pattern. Using this technology, researchers are building protein expression pattern databases for Arabidopsis, rice, maize, and pine trees. So far, these include comparisons of protein patterns from different plant organs and tissues, and they also show how the patterns change as a result of seasonal variations or water restriction. Visitors to the databases can call up pictures of 2D gels with links to previous publications about a protein of interest, the plant tissue from which it was derived, and genetic data such as sequence information and the localization of the corresponding genes on a linkage map.

    The 2D gel technique can be time-consuming and technically challenging, however. And, in the case of proteins with general functions such as kinases, changes in protein patterns can't always reveal exactly what a specific protein does in a particular pathway of the plant. Some answers may come, however, from the newest technique turning heads in the plant genomic community: metabolic profiling. The idea has its roots in toxicology and blood screening, where chemists have long used state-of-the-art gas chromatography to separate components of a liquid, followed by mass spectrometry to identify and quantify them.

    Plant researchers vary the application slightly: They want to create metabolic “maps” or generalized profiles of the metabolites produced by various plants and plant tissues. Profiles are likely to change when the plant is subjected to various environmental conditions or when an unknown gene is mutated, for example. Thus, after logging the results in a database, researchers can then compare how the maps change when they mutate a plant's own genes or introduce new ones.

    It was a problem with such a gene transfer experiment 3 years ago that prompted biochemist Richard Trethewey, then at the Max Planck Institute for Molecular Plant Physiology in Golm, Germany, to develop “metanomics,” as he calls it. His team was trying to reduce the sugar levels and boost the starch content of potatoes, an alteration considered desirable by makers of potato chips and by those who want to improve crop yields. To do that, Trethewey and his colleagues introduced the yeast gene for the enzyme invertase and a bacterial gene for another enzyme, glucokinase, into potatoes. Invertase converts the disaccharide sugar, sucrose, into its two constituents, fructose and glucose, which can in turn be converted to starch with the aid of the glucokinase. But much to the researchers' surprise, the transgenic tubers lost both sugar and starch.

    Not until the German team put the potatoes through an exhaustive chemical and physical analysis did they solve the puzzle: The excess invertase and the sugar molecules it produced had perturbed the potato plant's metabolism. The plants ended up rerouting the sugar molecules to create a host of alternate metabolites—none of which could ultimately fuel starch formation. Thus, the group was stuck with potatoes drained of both sugar and starch but abounding in unusual metabolites. Although the original experiment was seemingly a failure, the results drove home the realization that tracing such perturbations could provide another way to get insights into gene function.

    In collaboration with the German pharmaceutical company BASF, Trethewey and colleagues then founded a company, also called Metanomics, located in Berlin. There his team is cataloging, at a fast and furious pace, metabolic profiles from Arabidopsis plants exposed to various conditions. The team is also working with mutated Arabidopsis lines to glean the function of unknown genes.

    Others at larger agro-pharmaceutical companies are also quietly and quickly creating their own versions of metanomics. “Metabolic profiling will definitely play a considerable role in the future,” says Egon Moesinger at Novartis in Basel, Switzerland. Martino-Catt at Pioneer Hi-Bred agrees: “I see this as becoming more and more important as we move forward.”

    Moving forward in genomics may actually carry plant researchers back to their more biological roots where they can learn more about the metabolic pathways of plants. “When you have the chance to stand back and look at all the genes at one time,” Martino-Catt says, “you learn so much more by letting the plant teach you what it is doing.”


    New Age Semiconductors Pick Up the Pace

    1. Robert F. Service

    Long hampered by poor electrical qualities, low-cost electronics are now moving fast toward real applications, with the help of some clever chemistry

    If prices for consumer electronics seem cheap now, just wait. Researchers around the globe are closing in on a Holy Grail of electronics: making circuits out of cheap, throwaway materials like plastics. And if they succeed they'll likely launch a whole bevy of new applications, such as wall-sized flexible plastic displays and computerized merchandise tags that would allow shoppers to walk an entire cart of goods through a scanner and have their purchases recorded automatically.

    These experimental electronics have been on the drawing boards since the mid-1980s. That's when researchers discovered polymers able to conduct electric currents under certain conditions and not others—just as silicon and other inorganic semiconductors do. Since then, research into cheap, large-area electronics has been dominated by organic materials, which can be easily layered over large surfaces. But until recently, devices made with organics have been painfully slow in their ability to shuffle an electric current, a big problem in meeting today's demands for lightning-fast computation.

    Now, in a series of journal articles and meeting reports over the past few months, researchers have come up with several swifter alternatives, comparable in speed to the type of low-grade silicon transistors used to run laptop displays. Some are organics, others either incorporate inorganic components or are entirely inorganic. “There's a new richness of chemistries being brought to bear on this,” says chemist Howard Katz of Bell Laboratories, the R&D arm of Lucent Technologies in Murray Hill, New Jersey.

    The improvements have not just been in speed. Researchers have also made headway in creating electronic materials that would be reliable, stable over long periods, and cheap and easy to manufacture—essential attributes for commercial devices. “There has been a lot of progress in this field,” says Bell Labs chemist Zhenan Bao. Still, no one group has managed to put all these attributes into one material. “At this point, people are trying to put all the pieces together,” says Cherie Kagan, a materials scientist at IBM's T. J. Watson Research Laboratory in Yorktown Heights, New York. Nevertheless, there's growing optimism in the field that progress is beginning to gather pace. “It's starting to happen,” says Ananth Dodabalapur, a device physicist at Bell Labs who has helped push the field since the beginning.

    That's a big change for an area that until recently was struggling to find organic materials that could pass juice at even one 10-thousandth the speed attained by silicon, the stock material of the electronics age. Crystalline silicon, the material used to make the processing chips that serve as the brains of computers, can push charges at a blinding speed—1000 square centimeters per volt second (cm2/Vs), the standard measure of mobility in electronics. But the material is delicate and expensive to make, it must be handled in clean rooms to avoid contamination, and it can't be used to cover large areas such as displays. Amorphous silicon is cheaper and can be spread thinly to control laptop displays, but it only conducts at between 0.1 and 1 cm2/Vs. And films of this material must be grown in a vacuum, making it too expensive for cheap, throwaway circuitry.

    Organic semiconductors first burst on the scene in the late 1980s. They appeared to offer researchers the promise of electronic circuitry without the hassle and cost of silicon, because organics could be dissolved in solution and spun out into thin films on virtually any substrate. But the performance of early materials was anything but impressive. The first semiconducting polymer transistors conducted at only 10^−5 cm2/Vs.

    Since then, researchers have created a wide variety of more promising semiconducting organics. Among the first of this new breed was a polymer called regioregular polythiophene that could move charges as fast as 0.1 cm2/Vs, just matching the pace of the slowest amorphous silicon. And researchers soon found that if they moved from long-chain polymers to shorter organic molecules, such as one called pentacene, they could reach speeds of nearly 2 cm2/Vs.

    But these newer materials were far from perfect. Over time, polythiophene is contaminated by air. Oxygen reacts with the material and converts the polymer from a semiconductor, in which the electric current can be turned on or off, into a rather poor conductor with no switching ability. That means the material cannot be used to make transistors without going to the added expense of encapsulating finished devices in a sealant. Pentacene brought added costs as well. To make the best semiconductors, pentacene molecules must be lined up in perfectly ordered crystals, which can only be done in a vacuum, making it a poor choice for cheap, large-area electronics.

    That left researchers wondering how to make a material with the speed of pentacene and the ease of handling of polymers. Independent teams at Bell Labs, IBM, and the Massachusetts Institute of Technology (MIT) have recently reported that they are well on the way, but each group took a very different path to reach this goal.

    Perhaps the widest ranging of this new generation of materials is the one from Bell Labs. Katz reported at last month's Materials Research Society meeting in Boston that he and his colleagues have come up with a new approach to making short organic molecules that can be processed in solution like polymers. They turned to a pair of organics that harbor carbon- and sulfur-containing thiophene rings used to make the polythiophene polymer semiconductors. Abbreviated as dihexyl alpha 5T and 6T, these organics contain a core row of five or six thiophenes that are polar, meaning they are capable of separating positive and negative charges. The ring chains are flanked on either end by short nonpolar hydrocarbon tails. When spun from solution into films, the molecules tend to stand on end. And because polar and nonpolar groups tend to separate, the film automatically forms alternating layers of thiophene rings and hydrocarbon chains. After experimenting with various solvents and processing conditions, Katz's team found a set of conditions that gave them high-quality films that they used to make devices with mobilities up to 0.1 cm2/Vs. What's more, notes Katz, these organics are stable in air and thus are compatible with cheap, large-scale processing. And like inorganic semiconductors, they switch automatically from nonconducting to conducting when primed with a bit of juice from a control electrode in the heart of a transistor.

    The IBM team decided that, rather than modify their organics, they would enlist the help of inorganic materials. In the 29 October 1999 issue of Science (p. 945), Cherie Kagan, David Mitzi, and Christos Dimitrakopoulos reported creating transistors harboring a new semiconductor made from a hybrid of organic and inorganic materials. The team's goal, says Dimitrakopoulos, was “to get the best of both worlds,” with the ease of processing from organics and the high charge mobility of inorganics.

    They achieved just that by choosing inorganic and organic components that assemble themselves into layered sheets. The organic component, called phenethyl ammonium, is deficient in electrons, creating vacancies in its electronic structure that behave like positively charged particles, while the inorganic portion—tin iodide—has an excess of electrons. The result is that when the two materials are mixed in solution at room temperature and allowed to settle as a film, the negatively and positively charged components assemble themselves into alternating negatively and positively charged layers that hold each other in place. And in devices made with the films, charges seem to be taking the fast lane through the inorganics: They've measured current speeds up to 0.6 cm2/Vs, “which is really getting into the heart of where you want to be,” says Kagan. What's more, Dimitrakopoulos notes that previous work has shown that transistors made with just tin iodide can reach mobilities as high as 50 cm2/Vs. “We're just scratching the surface with these materials,” he says.

    The IBM hybrid, too, is still a work in progress. These films also quickly degrade in air and therefore must be protected. But Kagan and Dimitrakopoulos say they're not worried. “We can change the organic and inorganic layers to improve the properties of the films,” says Dimitrakopoulos, adding that such attempts are already under way.

    The MIT team, led by physicist Joseph Jacobson, went full circle, using organics to help them make films of high-mobility inorganics that they then used for their transistors. To make thin crystalline films, inorganics such as silicon traditionally must be deposited from a gaseous vapor. But Jacobson's team wondered whether they could get good electronic properties by putting down a layer of nanometer-sized crystallites on a surface and then heating them up so that they bonded together. Much to their delight, as the team reported in the 22 October 1999 issue of Science (p. 746), the approach worked. The team surrounded nanoclusters of cadmium and selenium—each of which had just 1000 or so atoms—with organic groups that allowed them to dissolve in an organic solvent. That enabled them to lay down a thin liquid film of the clusters. When the researchers then heated their film up to 350 degrees Celsius, the organic groups essentially burned off and neighboring clusters sintered together in larger aggregates 10 to 15 nanometers across.

    Jacobson and his colleagues reported that devices made with the films moved charges at 1 cm2/Vs, 10 times as fast as Bell Labs' new organic material. Moreover, the MIT researchers have devised a potentially cheap and easy scheme for printing circuits made from this material. “This is nice work and could be a good track to follow,” says Francis Garnier, a pioneer in conducting organics at the CNRS Laboratory of Molecular Materials in Thiais, France.

    Still, the approach has its drawbacks, concedes Jacobson. To burn off the organics and sinter the nanocrystals requires heating the devices to a point that would melt transparent plastic substrates envisioned for use in displays. And, like many other such materials, they must remain isolated from air.

    Improving the speed and stability of these novel electronic materials isn't the only goal being pursued. Another Bell Labs team, led by physical chemist John Rogers, reported at the meeting that they are making heady progress in printing complex circuits of plastic electronics. To do the printing, Rogers and his colleagues wet preformed stamps with a solution containing gold and then pressed the stamps onto a layer of semiconducting polymer, which itself sits atop a plastic substrate. And although the work remains in its early stages, they've already succeeded in printing a circuit with 300 transistors that they plan to use to control light emission from a flat-screen display. “We've yet to make a functioning display, but we've built the electronics and been able to show that [the transistors] have good characteristics,” says Rogers.

    For now, Rogers and his colleagues have been making their circuitry out of a variety of semiconductors such as polythiophene. That material has a modest charge mobility, but because variations on the stamping technique can make such small features—down to about 100 nanometers—they can create ultrashort “channels,” or pathways for the electrons to follow in the transistors. With a shorter distance to travel, the overall speed isn't such a drawback.

    That experience drives home a key lesson that is dawning on researchers in the field, say Bao, Rogers, and others. For any next-generation semiconductor to succeed commercially, it must meet so many demanding criteria that the one that wins may not necessarily be the one that pushes current along the fastest. But then again, it may be. Says Rogers: “It makes for a good horse race.”


    Setting the Benchmark for Plastic Electronics

    1. Robert F. Service

    If you made an ideal organic conductor, pure and virtually free of defects, how fast could it conduct charges? Physicists Bertram Batlogg, Jan Hendrik Schoen, and Steffen Berg and crystal grower Christian Kloc—all researchers at Bell Laboratories, the R&D arm of Lucent Technologies in Murray Hill, New Jersey—decided to find out. At the Materials Research Society meeting last month in Boston, the team reported that they had grown nearly perfect crystals of some 10 different organic materials, all made from small-molecule components rather than spaghettilike polymers, and put them through their paces. Such materials wouldn't be suitable for commercial devices—they would be too expensive and delicate—but they could help define the limits of the possible for cheap plastic circuitry.

    According to Batlogg, the best crystal they came up with—made from pentacene—moved charges at about 3 cm2/Vs. And the others were not far behind. By contrast, notes Batlogg, crystalline silicon can reach 1000 cm2/Vs. The difference, he says, seems to be in the bonding between neighbors in the material. Silicon atoms form strong covalent bonds with their neighbors. This allows the quantum mechanical waves of electrons on neighboring atoms to overlap, which helps grease the pathway for charges to flow. By contrast, even in nearly perfect pentacene crystals, individual molecules are only weakly bonded to one another. The result is that the heat at room temperature causes them to jostle back and forth. “It's a jumble,” says Batlogg. In that jumble, the overlap of quantum mechanical waves varies, “so the probability of [a charge] hopping from one molecule to the next is reduced,” he says.

    Francis Garnier, a pioneer in the field of organic electronics at the CNRS Laboratory of Molecular Materials in Thiais, France, calls the new work “very important,” because it helps researchers in the field know how much higher speeds they can hope to achieve. The good news is that researchers working with small organics like pentacene as well as polymers have already produced devices that show speeds between 0.1 and 2 cm2/Vs, not far from the best-case scenario. Some would say, however, that this is also the bad news, as it means there's not much room for improvement with organics. In the last few months, researchers at IBM and the Massachusetts Institute of Technology have begun experimenting with making easily processed devices from inorganic materials or organic-inorganic hybrids, which have the potential to push current much faster. These new results might wind up encouraging more researchers to follow that lead.


    New Incentives Lure Chinese Talent Back Home

    1. Dennis Normile*
    1. With additional reporting by Li Hui in Beijing.

    China is dangling higher salaries, bigger research budgets, and improved management practices to lure top young scientists now working abroad

    BEIJING—At the age of 37, Chinese-born She Zhensu had already carved out a place in the upper ranks of the U.S. research community. A tenured professor of mathematics at the University of California, Los Angeles (UCLA), who studies turbulence, he published regularly in leading journals. But last year, after 12 years in the United States, She returned home to lead a multidisciplinary experimental group at Beijing University that has also been designated a state key laboratory. The salary “cannot be compared with what I earned in the States,” She admits, but he has no complaints about the move: “It is an opportunity for me to contribute to higher education and research in my motherland. I think I made the right choice.”

    That attitude is exactly what Chinese officials were looking for when they selected She to be among the first group of Changjiang Scholars. It's one of several new programs that offer premium salaries, generous research budgets, and other perquisites (see table) to try to bring back—or retain—top-level Chinese researchers who can provide the mentoring that young scientists now often seek overseas. So far, the numbers are small. The special grants have gone to some 150 researchers like She who have shown significant promise overseas, plus dozens of scientists who trained and worked abroad before returning home. But the government sees them as a core that will elevate Chinese science for decades to come. “They are role models for both productivity and good research practices,” says Bai Chunli, vice president of the Chinese Academy of Sciences (CAS), which runs the nation's premier research labs.

    These incentive programs are just one part of a broader strategy to reverse the tidal wave of scientific talent that has flowed out of China in the past 2 decades. The Ministry of Education estimates that upward of two-thirds of the 300,000-plus scientists and engineers who went abroad to study during that period remain abroad. In addition to the special incentives, the government is improving research conditions and adopting performance rather than seniority as the basis for funding and professional advancement. The strategy appears to be working. Last year 564 researchers returned to CAS from overseas while only 432 left for foreign positions—the first-ever positive flow. “We're finally turning the tide,” says Peng Liling, who oversees the incentive programs at CAS.

    The prototype for the new programs was CAS's so-called Hundred People Program, which has given out $32 million to 177 scientists since it began in 1994. The central government supplemented it in 1998 with the 300 Talents Program, a $72.5 million program spread over 3 years. Last year the Ministry of Education began its Changjiang Scholars Program with a plan to spend $15 million annually for 3 to 5 years. Roughly half the cost is being picked up by Li Ka-shing, a Hong Kong business tycoon and philanthropist. In addition to these efforts, some of the bigger universities, including Qinghua and Beijing, have their own incentive programs, as do several cities.

    Participants in these programs can do quite well. Changjiang scholars, for example, get a yearly salary of 100,000 yuan (about US$12,000), a hefty premium over the typical professor's salary of 9600 to 24,000 yuan. When combined with housing subsidies, She says, “it allows for a fairly comfortable living in China.” Research support is even more generous. The CAS programs provide about US$242,000 over 3 years to set up and equip a researcher's lab. “It's a good amount of money,” says physicist Xue Qikun, who last year joined CAS's Institute of Physics in Beijing to continue his work on semiconductor epitaxy techniques after 7 years at Japan's Tohoku University.

    Money isn't the only incentive, however. “For me, returning to China was just returning home,” says Yuan Ya-xiang, 39, a computational mathematician at the CAS Academy of Mathematics and System Sciences in Beijing. Yuan came back from the United Kingdom in 1988, long before the first incentive programs. Others say that working in China allows them to avoid a crowded career ladder that may be biased toward local researchers. “It's difficult for foreigners to become professors in Japan,” says Xue. Even in the United States, where foreign-born scientists are common, some researchers say it's still difficult for them to hold top management posts. “At universities, you see a tremendous number of Chinese-Americans holding distinguished chair professorships, but you find relatively few Chinese-Americans in senior administrative posts,” says Feng Da Hsuan, a professor of physics at Drexel University in Philadelphia, who is on leave as general manager of SAIC, a scientific consulting firm.


    In contrast, career paths in China are wide open, thanks in large part to the 10-year hole in higher education caused by the Cultural Revolution. “We have an urgent need for young blood,” says CAS's Bai. That demographic void was a boon to Wang En Ge, who returned to China in 1994 at age 37 under the Hundred People Program after 9 years in Europe and the United States. Last year he became director of CAS's Institute of Physics and also head of the institute's Center for Condensed Matter Physics, both in Beijing. “I am trying to do my best to provide a good environment for younger researchers,” he says.

    However, moving up quickly also can have a downside. “We've been promoted 10 or 15 years earlier than we should be,” admits Yuan, who holds the titles of vice president of the Academy of Mathematics and System Sciences, director of the Institute of Computational Mathematics and Scientific/Engineering Computing, and director of the State Key Laboratory of Scientific and Engineering Computing in Beijing. These administrative duties drain time from his research on nonlinear optimization at a time when his generation should be making a major contribution to new knowledge, he says, but he feels a sense of responsibility to the community. “Chinese scientists in a position [like this] also can do a lot,” he says.

    Returnees face other challenges, including readjusting to China's notorious red tape and thin infrastructure. “For the first 9 months, I had no time for work as I was filling out various forms,” says Changjiang recipient Ou Yangqi, a 44-year-old chemical physicist who returned to Beijing University in 1998 after more than a decade in France and the United States. Physicist Wang says the equipment, while adequate, may not match what was available overseas. His institute, for example, will get its first scanning tunneling microscope system this spring. But acquiring the technology isn't the problem, he says: “We felt we first had to find the people [who can use such equipment] and then get the money for it.”

    The Center for Condensed Matter Physics is a good example of how a critical mass of returned scientists can boost output. The center published 290 papers in peer-reviewed journals in 1998, more than any other Chinese institute. Wang attributes the output in large part to the tone set by returnees, including seven recruited through incentive programs, who together account for more than 100 of the center's 180 researchers and two-thirds of its 44 group leaders. He says his rate of publication is “almost the same as when I was in the States.”

    But not all fields enjoy such success. Nearly all of the hundreds of graduates of the physics department at Beijing University, the nation's most prestigious school, have headed overseas in the past 2 decades for graduate or postdoctoral training, for example, and only 10 so far have returned to positions at their alma mater. Officials say it's particularly hard to attract scientists in the computer and life sciences because of the opportunities overseas and competition from China's growing private sector. Yuan says the Academy of Mathematics and System Sciences has managed to attract just three returning scientists under the incentive schemes, and less than a quarter of the institute's 200-plus researchers have held regular jobs overseas. Many Chinese mathematicians and computer scientists who do return from abroad head straight for private labs like Microsoft's new research lab in Beijing (Science, 13 November 1998, p. 1235), he says, or to Internet start-ups. “The [incentive] programs are not really doing as well as we had hoped,” says Yuan. Even She, who is on leave from UCLA and still teaches there part of the year, intends to “keep in touch with U.S. institutions.”

    Of course, luring back a scientist is no guarantee of success. CAS officials also admit privately that some of the researchers in the Hundred People Program have been disappointments. To improve their chances, they have tightened standards for the 300 Talents Program to the point where meeting its goal in 3 years seems unlikely. Still, even if the incentive programs work spectacularly well, their very existence points to a deeper problem that officials say will take a long time to solve: “We need to be able to have our own students educated in China, doing research in China, at a world level,” says Yuan. “We've still got quite a long way to go to achieve that.”


    Integrating the Many Aspects of Biology

    1. Elizabeth Pennisi

    ATLANTA—At the annual meeting of the Society for Integrative and Comparative Biology, held here from 4 to 8 January, researchers studying a wide range of organisms described new insights into old topics, such as why dolphins have blubber and why leafy greens are good for you.

    Blubber Springs Back

    The conventional wisdom holds that the thick layer of fat under the skin of marine mammals such as dolphins and whales provides insulation and improves the buoyancy of creatures living in cool oceans. But data reported at the meeting by functional morphologist Ann Pabst of the University of North Carolina, Wilmington (UNCW), suggest another, less conventional function for this fat layer, known as blubber.

    Over the past few years, Pabst and her colleagues have poked, probed, modeled, and examined blubber in minute detail. They found that it's more than just a jumble of fat cells. The tissue contains a mix of collagen and elastin fibers that alter the mechanical properties of blubber at different places along the body. Now, studies of dolphins trained to swim in laboratory setups suggest that these mechanical properties turn the dolphin's tail into one long spring that helps it swim efficiently. “There's a mechanical method to the madness of this tissue, both in its fat cells and its mechanical weave,” comments John Long, a biomechanist at Vassar College in Poughkeepsie, New York. “[Pabst] is really changing our view of how whales and dolphins swim.”

    Researchers have known for some time that terrestrial animals often make use of springs: A kangaroo, or even a human, stores energy in leg tendons that, when released, helps propel the body into the air. Many biomechanists have hesitated to consider that springs might aid swimming, however, as gravity plays a role in the springlike activity of legged organisms and gravity has little effect on organisms suspended in water. Instead, muscles were thought to provide all the power for swimming. But work in some invertebrates, such as squid, and a few fish suggested otherwise, and now the dolphin can be added to the list.

    The idea that the dolphin tail could provide elastic energy springs in part from work by UNCW graduate student Jonna Hamilton. Working with Pabst, she found that the blubber midway along the bodies of dolphins and porpoises contains collagen and elastin fibers in a diagonal crossweave pattern. As a result, the researchers found, the blubber there is relatively stretchy. In contrast, the fibers in the blubber near the end of the dolphin tail, or fluke, tend to be closely packed, with one set running parallel to the long axis of the dolphin and another set perpendicular to it. Consequently, the blubber there is three orders of magnitude stiffer than that in the middle of the dolphin body. By studying the engineering principles that underlie the design of humanmade composite materials, the researchers realized that this range of properties set the stage for the tail to work like a spring.

    To test that idea, Pabst and UNCW colleague William McLellan teamed up with marine mammalogist Terrie Williams of the University of California, Santa Cruz. Williams has two dolphins trained to push against force plates as they swim. The researchers painted nontoxic dots along one dolphin's tail and videotaped it swimming against a force plate. They observed that the stiffer blubber closer to the fluke hardly bends at all, whereas the blubber closest to the dorsal fin bends quite a lot, reaching its maximum distortion at the bottom of the tail's downstroke and then bouncing back. “At one end the blubber functions as a spring, but at the base it's [stiff] and absorbs unwanted energy,” comments Robert Full, a comparative biomechanist at the University of California, Berkeley. “That's really neat.”

    In addition to helping biologists better understand how whales and dolphins get along in their watery world, the results are also interesting because “they tell us about how to make things,” says Steve Nowicki, a comparative integrative biologist at Duke University in Durham, North Carolina. For example, Steve Wainwright of Duke University and his colleagues are building new kinds of flexible propellers, called propulsors, that resemble the dolphin's fluke. The studies of the properties and movement of the blubber and associated skeletal and muscle components near the fluke are now helping the team figure out the best way to attach this newfangled propeller to the rest of an underwater craft. And the U.S. Navy is interested, too, Pabst and others note, because of the potential to build quieter, more fuel-efficient submersibles.

    Yet Another Role for the Versatile NO

    Add this to the long list of reasons why you should eat your vegetables: They are rich in nitrates. Nigel Benjamin, a pharmacologist at St. Bartholomew's Hospital in London, reported at the meeting that harmless bacteria living on our tongues convert nitrates to nitrites, which in the acid environment of the stomach produce the potent pathogen-fighting chemical nitric oxide (NO). This, he proposes, helps protect the body against harmful microbes taken in with our food. Moreover, by harnessing this natural antimicrobial strategy, Benjamin's team has cured long-standing fungal infections on human skin, prompting him and his colleagues to begin developing a new line of disinfectants and antimicrobial treatments.

    Benjamin was alerted to the antiseptic power of vegetables by his studies of NO in the body. Over the past 20 years, biomedical researchers have discovered that this ubiquitous gas has many critical roles. It is a neurotransmitter, for example, and certain white blood cells churn it out to destroy invading bacteria, fungi, and viruses. But about 5 years ago, Benjamin began to piece together another way in which the body uses NO—one that could explain how early humans were able to survive the continual onslaught of food-borne pathogens in the raw meat they ate.

    The first clue came about 7 years ago, when Benjamin was trying to quantify the amount of NO in the body by measuring nitrate excretion, which was at first thought to result primarily from NO. He realized he first needed to know how much nitrate enters the body as part of the diet and how that nitrate is used. Several years of experiments, both in the test tube and with people, revealed that the digestive tract is quite adept at absorbing nitrate and that at least some of it winds up as nitrite, concentrated up to 10-fold, in saliva. Nitrates and nitrites came under fire in the 1970s as food additives because they can be converted to nitroso compounds, which can be potent carcinogens. But Benjamin reasoned that the body wouldn't concentrate nitrite in saliva to the degree that it does if it didn't also play a beneficial role.

    His group subsequently showed that in the rat mouth, an enzyme called nitrate reductase converts nitrates to nitrites. Curiously, the rats themselves weren't producing the enzyme: The researchers could only find a bacterial form, and germ-free rats, it turned out, had no trace of the enzyme at all. Two years later, the team demonstrated that most of the nitrite is churned out by benign species of Staphylococcus bacteria, primarily from hideaways in the deep clefts at the back of the tongue, where oxygen is relatively scarce. The bacteria use the nitrate as an electron acceptor for anaerobic respiration, then release the resulting nitrite into the saliva. There, Benjamin says, it is swallowed, in “up to a liter of saliva per day” for a person.

    At the meeting, he described the fate of this nitrite once it hits the stomach's acids: By sticking tubes down the throats of volunteers fed a helping of lettuce (a high-nitrate vegetable), the researchers detected a rise in NO concentration from 15 parts per million to 400—a concentration high enough to “kill any bacteria that we have swallowed,” says Henry Trapido-Rosenthal, a molecular biologist at the Bermuda Biological Station for Research. Having a sterile stomach explains “why we and other vertebrates didn't die out from diarrheal disease,” he notes. “There are very important implications for medicine.”

    Indeed, Benjamin and his colleagues are already exploring some of those implications. He and Tony Ormerod of the Aberdeen Royal Infirmary showed that a combination of acid and nitrite creams can clear up persistent skin infections, such as athlete's foot, which is caused by a fungus. The combination also combats viral infections, such as molluscum contagiosum, which causes small bumps on the skin. The results make nitrite products “an intriguing possibility for [treating] fungal infections,” says Esther Leise, who studies NO function at the University of North Carolina, Greensboro.

    Benjamin and his colleagues are hoping that acidified nitrite will find its way into the market, both as a treatment for fungal infections and, in another form, as a new type of disinfectant. “[These products] will be ecologically safe,” much more so than chlorine-based disinfectants, he predicts. As for dietary advice: Benjamin can't recommend adding bacon to meals, as smoking produces a range of nitrogen-based compounds, some of which may be toxic to the body. But he does have this word of advice for lovers: “Eat salad before you kiss.”


    Researchers Cash In on Personalized Species Names

    1. Sabine Steghaus-Kovac*
    1. Sabine Steghaus-Kovac writes from Frankfurt, Germany.

    Looking for that extra-special gift for a loved one who seems to have everything? Help is at hand. In return for a donation to biodiversity research, you can have a previously unknown species of orchid, or mosquito, or sea slug, named after them and recorded in the scientific literature in perpetuity. The scheme for generating the zoological equivalent of vanity plates, dubbed BIOPAT, was registered last week as a nonprofit organization in Germany. Its founders hope it will become a valuable source of funding for the unglamorous fields of systematics and taxonomy, as well as supporting conservation in the new species' home countries.

    Recent research suggests that one-tenth or fewer of the species that exist on Earth today are described in the scientific literature. Although 10,000 new species are described every year, thousands lie nameless in museum drawers. “In our collections we have 500 or 600 new animal species,” says Gerhard Haszprunar of the State Zoological Collection in Munich, one of the institutions behind BIOPAT. The problem, he says, is “getting the money to work on the material.”

    A critter's scientific name—the genus and its individual species name—is usually defined by the taxonomist who first describes the species and often refers to its appearance or where it was found, but can immortalize the collector or whoever supported the research. These traditions are not always followed, however: Recently a marine snail of the genus Bufonaria was named borisbeckeri after the German tennis player, while a Colombian tree frog was granted the moniker Hyla stingi by a fan of the British rock star. Once a name is published together with the species' first description in a scientific journal, it becomes internationally recognized.

    German taxonomists, led by the Federal Agency for Technical Co-operation, decided it was time to cash in on the endless name-giving. Interested amateur naturalists, or even multinational corporations, can browse a photo gallery of unnamed species on the BIOPAT Web site. After spotting their adoptee, they can bestow a Latin name of their choice for a donation starting from $2800 for individuals, or more for corporations. Half of each donation will go to the institution where the species was studied; the rest will be spent on protecting biodiversity in its country of origin.

    Some researchers fear that money may sully the scientific process of defining species. “A researcher might put himself under pressure to describe as new species that may already have been described,” says Rüdiger Krause of the Zoological Museum at Dresden. To reduce such temptations, BIOPAT will not offer an unnamed species unless a researcher first has a description accepted for publication in a peer-reviewed journal or approved by a BIOPAT advisory committee. If the scheme catches on, the organizers hope to enlist more research museums in Germany and abroad. Says BIOPAT president Claus Bätke of the Federal Agency for Technical Co-operation: “We already have some definite orders.”