News this Week

Science  11 Jun 1999:
Vol. 284, Issue 5421, pp. 1742

    NIH Urged to Fund Centers to Merge Computing and Biology

    David Malakoff

    The National Institutes of Health (NIH) should fund a new network of interdisciplinary research centers that would boost the number of computer-savvy biologists, an advisory panel recommended last week. The advice, which drew a positive reaction from NIH chief Harold Varmus, is contained in a report that lays out a road map for the $16 billion agency in biocomputing, an emerging interdisciplinary field that the panel said has been neglected by academia. But it does not spell out how much NIH should invest in the field, nor how the money should be spent.

    The use of computers in biomedical science has grown explosively over the last decade, with researchers relying on the machines to do everything from browse the technical literature to model protein folding. But few biologists have the computing expertise needed to fully tap the flood of new data being generated by gene studies and clinical trials. A typical gene lab, for instance, can produce 100 terabytes of information a year—a database equivalent to 1 million encyclopedias. At the same time, few computer scientists know enough biology to figure out the best ways of sifting valuable nuggets from the data deluge. “You can count on the fingers of one hand” the number of researchers with topflight training in both fields, says geneticist David Botstein of Stanford University in California, a co-chair of the advisory panel.
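The scale comparison above can be checked with back-of-the-envelope arithmetic. This sketch assumes, purely for illustration, that an "encyclopedia" means roughly 100 megabytes of text; the article itself gives only the 100-terabyte and 1-million figures:

```python
# Back-of-the-envelope check of the article's comparison:
# 100 terabytes per year vs. 1 million encyclopedias.
terabytes_per_year = 100
bytes_per_year = terabytes_per_year * 10**12  # 1 TB = 10^12 bytes

encyclopedias = 1_000_000
bytes_per_encyclopedia = bytes_per_year / encyclopedias

# Roughly 100 MB of text per "encyclopedia" -- a plausible size
# for a multivolume reference set stored as plain text.
print(bytes_per_encyclopedia / 10**6, "MB per encyclopedia")
```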

    To help NIH figure out how to attack the problem, Varmus last year appointed a 16-member working group led by Botstein and computer scientist Larry Smarr, director of the National Center for Supercomputing Applications at the University of Illinois, Champaign. After polling researchers working in disciplines from neuroscience to population genetics, the panel came up with four recommendations in a 20-page report* that it hopes will influence the 2001 budget request now being drawn up.

    The panel's cornerstone recommendation is that NIH create between five and 20 biocomputing centers at universities and independent research institutes as part of a “national program of excellence.” The competitively funded centers would range in size from a handful of researchers at a single institution to a multi-institutional consortium with a budget of up to $8 million a year. The funding would flow from a new NIH biocomputing program that would make research grants through one or more of NIH's disease-oriented institutes, with contributions from the host institution.

    A second recommendation calls for NIH to take a more active role in shepherding the growing flock of biomedical databases, which hold everything from gene sequences to drug trial results. Although the agency funds a variety of bioinformatics research efforts, the panel noted, none is dedicated to organizing and curating databases, which researchers believe could yield important insights if indexed and integrated. “The goal is a system of interoperable databases,” the panel wrote.

    Panelists also asked NIH reviewers to be more supportive of requests from individual scientists for funds to hire biocomputing talent, including highly priced programmers. “It is not a pretty picture down in the study sections when you [ask for] three programmers at $85,000 each,” bemoans Botstein. In the future, says the panel, NIH needs to ensure that a larger share of its bread-and-butter R01 grants to individuals “may be used for biomedical computation.”

    The fourth recommendation is for NIH to help build all kinds of computing resources. In particular, Smarr believes that midlevel networks—computing systems more capable than a single desktop machine but less powerful than a supercomputer—would be a good way to test experimental software. The panel warned, however, that more scientists are needed who know how to set up and operate such networks.

    Indeed, Smarr says the panel “kept coming back to people as the limiting factor.” He sees the proposed training and research centers as “watering holes” that will use research funds as a lure to attract both biologists and computer scientists to core problems, such as database organization. He and others also hope that NIH's backing will knock down disciplinary walls and bureaucratic obstacles to producing—and hiring—more biocomputing faculty. Funding the centers “will send a powerful message, both in academe and within the NIH community itself, about the importance of computation,” the panel wrote.

    Some biocomputing researchers say an NIH endorsement would help stanch the flow of academic biocomputing talent to industry. The report is “good news” and “should cause universities to pay attention,” says computational biologist Larry Hunter, a researcher at NIH's National Cancer Institute in Bethesda, Maryland, and president of the International Association for Computational Biologists. “It could help reverse a situation in which the [academic] rewards for doing technology development are not very good,” adds Chris Overton of the University of Pennsylvania, Philadelphia, one of a handful of schools to take steps to create new biological computing programs.

    Because money is a sure-fire attention getter, the academic community is already eager to see how much NIH officials will decide to invest. Although some funds may be forthcoming in the 2000 fiscal year that begins in October, NIH's real response will be contained in its 2001 budget request that emerges early next year after negotiations with the White House.

    • *The Biomedical Science and Technology Initiative, prepared by the Working Group on Biomedical Computing, 3 June.


    Scientists Block NIH Plan to Grant Ph.D.s

    Bruce Agnew*
    *Bruce Agnew is a writer in Bethesda, Maryland.

    National Institutes of Health (NIH) director Harold Varmus would seem to have everything he needs to create a world-class graduate school in biomedical science—about 1150 tenured and tenure-track researchers, the country's largest clinical research center, and scores of experts in the hot new discipline of bioinformatics. Only one element is missing: acceptance of the idea by outside scientists. But that may be a deal-breaker.

    Last week, for the second time in 6 months, opposition from several scientists on Varmus's influential Advisory Committee to the Director forced him to withdraw a plan to establish a small graduate school on the NIH campus. He and Deputy Director for Intramural Research Michael Gottesman will advance the idea again only if they can find a way around “some central crevasses,” Varmus told his advisory panel on 3 June. “Negative votes here count pretty heavily,” he said.

    Gottesman and Varmus were seeking the advisory panel's endorsement of a Ph.D.-granting program in disease-oriented, integrative biology that would enroll 15 students a year for a 5-year course of study. The curriculum is still “in a rather preliminary stage,” Gottesman said, but instruction would be mostly tutorial or in small classes, and the program would emphasize areas of NIH strength in bioinformatics, clinical research, and genomics.

    NIH already has about 700 graduate-level students on campus under a variety of training and outreach programs. But Gottesman said the course preparation and teaching involved in a degree-granting program would “add to the intellectual excitement in our laboratories,” foster more interdisciplinary contact, and help NIH recruit top-level academic scientists who want to keep teaching. “We don't intend to use any extramural funds,” Gottesman pledged. But that didn't mollify the critics. They called the grad school plan inadequately thought out and the wrong move at a moment when universities are already turning out too many life sciences Ph.D.s.

    “We right now in this country have an excess of Ph.D.s trained in the biological sciences,” said molecular biologist Shirley Tilghman of Princeton University. “For the NIH to turn around and start a graduate program sends the wrong symbolic message to the community.” Tilghman has long been concerned about worsening job prospects for newly minted Ph.D.s. She chaired a National Research Council study last year that urged universities to freeze the size of graduate programs and forgo developing new programs “except under rare and special circumstances” (Science, 11 September 1998, p. 1584).

    Philip Needleman, chief scientist of Monsanto Co. in St. Louis, said the NIH plan sounded neither focused nor unique. Cell biologist Marc Kirschner of Harvard Medical School in Boston not only echoed those concerns but also stated bluntly a consideration that usually goes unspoken at NIH advisory council meetings. “It's hard to see NIH proposing a program in these very important areas at the same time that NIH is not supporting, for example, a [training] program that we had at Harvard, which basically had this terrific cohort of graduate students,” Kirschner said.

    Neurobiologist Eric Kandel of Columbia University in New York City chided Kirschner a few minutes later. “I think our function on this committee is to make sure that NIH is as strong as possible, not that Harvard or Columbia is as strong as possible,” said Kandel, who enthusiastically supports an NIH grad school. “I think we all sense that, from an academic point of view, this place could be stronger as a result of a Ph.D. program.”

    On a show-of-hands vote, Varmus won what he called “a partial vote of endorsement.” A majority of the panel voted that he and Gottesman should keep trying to develop a grad school plan, particularly a more detailed curriculum. Tilghman, Needleman, and Kirschner voted “no.”

    Varmus won't try to cram the idea down the throats of extramural scientists. He noted that if U.S. research universities object, Congress would likely raise questions, too. “This is not the issue over which we would want to put the entire NIH at risk,” he said. But he indicated he may make one last try by naming a subgroup of the advisory committee to help refine the proposal. It might not be a bad idea for him to invite Tilghman to join in. At the end of the day, she left a possible opening. Although NIH's plans at this point don't look “terribly different” from other graduate programs, Tilghman said, “I would be extremely enthusiastic about a true Ph.D. in bioinformatics.”


    Slide Into Ice Ages Not Carbon Dioxide's Fault?

    Richard A. Kerr

    As an agent of climate change, the carbon dioxide in the atmosphere gets a lot of respect. It's famous as the force behind the predicted greenhouse warming fueled by human activities. And in research circles, falling levels of carbon dioxide are the presumed culprit behind the recent 100 million years of climatic cooling—a long, bumpy slide that started in the balmy age of the dinosaurs and plunged the world into an ever-deepening chill, culminating in the ice ages of the past 2 million years. But two independent studies now raise questions about whether this big chill really can be blamed on carbon dioxide.

    In this issue (p. 1824), paleoceanographers Paul Pearson of the University of Bristol in the United Kingdom and Martin Palmer of Imperial College, London, report that about 43 million years ago, when the world was perhaps 5°C warmer than today, carbon dioxide levels were not dramatically higher than they are now. An independent study by paleoceanographer Mark Pagani of the University of California, Santa Cruz, and his colleagues in this month's Paleoceanography comes to a similar conclusion for a warm spell about 15 million years ago. Even when climate did shift, Pagani's team says, carbon dioxide levels stayed fairly constant.

    The new studies suggest that “we may have to think harder about what's driving the [climate] system on these long time scales,” says paleoclimatologist Thomas Crowley of Texas A&M University in College Station. “It could be the whole carbon dioxide paradigm is crumbling,” at least when it comes to explaining very long-term climate change. Carbon dioxide is still a powerful driver of climate, Crowley and others say—powerful enough, researchers think, to warm the world in the coming century—but over millions of years, other factors such as changing ocean circulation may have warmed or chilled the planet.

    Lacking any way to sample air directly from tens of millions of years ago, researchers often seek clues to past carbon dioxide levels in ancient marine sediments. One technique uses the carbon isotope composition of the organic matter from tiny algae called phytoplankton, which like all plants tend to incorporate the lighter isotope of carbon, carbon-12, over carbon-13. The phytoplankton can afford to incorporate more light carbon when there is plenty of carbon dioxide in the water—and therefore in the air. Such analyses have tended to support the carbon dioxide-climate link, putting carbon dioxide levels 65 million years ago and for many millions of years thereafter as high as five to six times today's values.

    However, other factors can also bias the carbon isotope ratio, including the species of plankton, their shape and growth rate, and contamination by organic matter from land. So Pagani and his colleagues Michael Arthur and Katherine Freeman at Pennsylvania State University, University Park, refined the technique. They focused on a single type of organic molecule—long-chain ketones called alkenones—that are made exclusively by a type of phytoplankton called coccolithophorids. These plankton lived in nutrient-poor waters and presumably grew at low and constant rates.
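The logic of this proxy can be sketched with a toy calculation. The photosynthetic fractionation εp (the isotopic offset between dissolved CO2 and the algae's organic carbon) grows when dissolved CO2 is more plentiful; one commonly used empirical form is εp = εf − b/[CO2(aq)], which can be inverted to estimate CO2. The constants below (εf and b) are illustrative assumptions, not values from either study:

```python
def co2_from_fractionation(eps_p, eps_f=25.0, b=100.0):
    """Invert the empirical relation eps_p = eps_f - b / [CO2(aq)]
    to estimate dissolved CO2 (micromol/kg) from the measured
    photosynthetic carbon isotope fractionation eps_p (permil).

    eps_f and b are illustrative constants; in practice b varies
    with plankton growth rate and cell geometry, which is exactly
    the bias the alkenone refinement tries to minimize.
    """
    if eps_p >= eps_f:
        raise ValueError("fractionation cannot exceed eps_f in this model")
    return b / (eps_f - eps_p)

# Larger fractionation (more light carbon taken up) implies more CO2:
print(co2_from_fractionation(15.0))  # 10.0 micromol/kg
print(co2_from_fractionation(20.0))  # 20.0 micromol/kg
```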

    With this method, Pagani found that 17 million years ago, when much of the ocean was up to 6°C warmer than today, carbon dioxide was actually lower than 270 parts per million (ppm), its value just before the industrial buildup began. And when the ocean abruptly chilled and ice began piling up on Antarctica 14.5 million years ago, carbon dioxide rose slightly rather than falling. Between 25 million and 10 million years ago—when climate was generally warmer than today—atmospheric carbon dioxide varied between 260 and 190 ppm.

    Pearson and Palmer found a similar decoupling of carbon dioxide and temperature when they monitored ancient carbon dioxide levels using a different set of isotopes: isotopes of boron in the skeletons of plankton called foraminifera. To build their skeletons, forams take up carbonate ions from seawater, but they sometimes incorporate boron instead by mistake. The isotope composition of that interloper boron depends on the relative proportions of borate and boric acid in seawater, which depends in turn on pH. And the ocean's pH depends, among other things, on the amount of carbon dioxide dissolved in the seawater as carbonic acid.
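The chain of dependencies described above can be condensed into the standard boron-isotope pH proxy equation. The constants in this sketch (the seawater boron isotope value, the fractionation factor alpha, and the apparent pKb) are assumed modern-ocean values for illustration, not the calibration Pearson and Palmer used:

```python
import math

def ph_from_boron(d11B_borate, d11B_sw=39.6, alpha=1.027, pKb=8.6):
    """Standard boron-isotope pH proxy: the delta-11B of the borate
    incorporated into foram skeletons reflects seawater pH via

        pH = pKb - log10( -(d11B_sw - d11B_borate) /
                           (d11B_sw - alpha*d11B_borate - 1000*(alpha - 1)) )

    Constants here are illustrative modern-ocean values (assumed).
    """
    num = -(d11B_sw - d11B_borate)
    den = d11B_sw - alpha * d11B_borate - 1000.0 * (alpha - 1.0)
    return pKb - math.log10(num / den)

# Isotopically heavier boron in the skeleton implies higher pH,
# and hence less dissolved CO2:
print(round(ph_from_boron(20.0), 2))  # 8.21
print(round(ph_from_boron(17.0), 2))  # 7.93
```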

    When Pearson and Palmer applied the boron technique to 43-million-year-old marine sediments from the tropical Pacific—a time of such warmth that the waters around Antarctica were perhaps 16°C warmer than now—they found that carbon dioxide levels were somewhere between 180 ppm and 550 ppm, with the most likely value being 385 ppm—well below the earlier reports of six times the present level. Either the climate system is extraordinarily sensitive to small changes in carbon dioxide, Pearson and Palmer conclude, or something other than carbon dioxide drove the 50-million-year cooling.

    Taken together, the two studies suggest that “we need to reconsider the prevailing dogma,” says paleoceanographer Edward Boyle of the Massachusetts Institute of Technology. No one is challenging carbon dioxide's status as a potent agent of climate change; there's little doubt, for example, that carbon dioxide rose and fell in step with climate over the 100,000-year cycle of recent ice ages. But researchers will be looking at other factors to help explain the long-term chill of the past 50 million years.

    The leading alternative is a reorganization of the ocean currents that carry heat to high latitudes. When the ocean was cooling 15 million years ago, shifting continents were probably opening the way for the circumpolar Antarctic current to develop, isolating the Antarctic from tropical heat and tending to push the world into a colder climate, notes Pagani. Other climate-altering circulation shifts might have come when the eastern end of the now-vanished Tethys Sea between Asia and Africa closed around 15 million years ago and when the rise of the Isthmus of Panama separated the Pacific and Atlantic oceans 2 million to 4 million years ago—all times of cooling.

    Of course, it's also possible that the methods behind these two studies are in error, as both techniques still require some assumptions. The boron technique assumes that seawater composition remained roughly constant over millions of years, and the alkenone method assumes that coccolithophorids grew at a steady rate. To check the validity of the methods, says Crowley, the techniques must be applied to a time of known carbon dioxide concentrations, such as the past few hundred thousand years. Pagani is even now marching his carbon analyses up a sediment core toward the present. If the technique proves reliable, carbon dioxide may have to share its role as prime mover of long-term climate change.


    Elusive Interferon α Producers Nailed Down

    Michael Hagmann

    Almost everyone agrees that recycling is a good idea. Indeed, even our immune system seems to have picked up the idea for one of its most valued assets—cells. A paper in this issue of Science suggests that the body uses the same set of cells to perform two immune functions, one when the cells are young, and a second after they mature.

    On page 1835, a team of immunologists led by Yong-Jun Liu at the DNAX Research Institute in Palo Alto, California, reports pinning down the origins of a key component of our immune defenses. They've isolated the hitherto elusive cells, known as natural interferon-producing cells (IPCs), that churn out huge amounts of interferon α (IFN-α). This so-called cytokine has a variety of immune stimulatory effects that help protect cells against viral and bacterial infections, and it also curbs tumor growth. The IPCs turn out to be the immature forms of a special type of dendritic cell (DC), an immune system sentinel that engulfs foreign proteins, or antigens, chops them up, and displays the pieces to other immune cells, the T cells. This kicks off a fierce, specific immune response directed at the triggering antigen.

    The finding links the two branches of our immune system, the innate, more primitive immunity triggered by a wide variety of pathogens and the more sophisticated, adaptive immunity based on antigen-specific cells, which can be tailored against almost any intruder. “The study suggests that this cell type—which had previously been implicated in adaptive immunity—has the potential to also be an early player in the innate part of the immune response. This enriches the capacity of DCs to control immunity,” says dendritic cell expert Ralph Steinman of The Rockefeller University in New York City. And because the IPCs seem to be involved in a variety of illnesses, from AIDS to cancer to autoimmune diseases, “having identified this cell type opens an immense amount of possibilities” to try to treat or control these conditions by manipulating the cells, says immunologist Jacques Banchereau of the Baylor Institute for Immunology Research in Dallas.

    IPCs first appeared on immunologists' radar screens in the early 1980s, when it became clear that only a special, very rare type of white blood cell is capable of producing huge amounts of IFN-α. But immunologists did not identify a good candidate for the job until about 3 years ago, when Gunnar Alm and his colleagues at the Swedish University of Agricultural Sciences in Uppsala showed that IPCs “have all the characteristics of an immature DC,” as Alm puts it. They found, for example, that cells that stain positive for IFN-α also bear characteristic DC surface markers.

    At the same time, Liu and his colleagues isolated an odd cell type from human tonsils and blood, which they couldn't classify. Yet when Liu cultured the cells, they developed into a new type of DC with unique T cell-stimulating properties. He also realized that the cells, which he designated pDC2, bore a striking resemblance to Alm's. So he wondered whether the tonsil DC2 precursors and the natural IPCs were one and the same.

    To test this idea, the researchers purified more pDC2s from human blood, not a trivial endeavor given that there's only one pDC2 in every 1000 white blood cells. Under the electron microscope, Liu saw that the pDC2s have a prominent protein secretion apparatus, suggesting that the cells can produce huge amounts of protein. When the team stimulated the cells in culture with inactivated herpes simplex virus, they found that their purest pDC2 preparation did in fact make up to 1000 times more IFN-α than the same number of unpurified white blood cells. That response shows, says Liu, that the pDC2s are the natural IPCs.

    Many in the field are especially taken by the proposed versatility of the pDC2s and their mature version, the DC2s. “The cell seems to serve two masters at different stages of its [lifetime], which is quite unusual for immune cells. It's almost as if nature doesn't want to have cells just sitting around,” Banchereau says. Liu adds that the shift in responsibilities as the cell ages makes sense. “If a virus invades, you need a quick response; otherwise you may die. And that's what [the pDC2s] do [by producing IFN-α] within only a few hours. After that you'd want to call in the adaptive immune system for help—and that's the job of the DC2s.”

    Still, Paola Ricciardi-Castagnoli, an immunologist at the University of Milano in Italy, points out that no one has shown yet that pDC2s produce the same skyrocketing amounts of IFN-α in the body as they do in cell culture. Also unknown, says Ken Shortman, a developmental immunologist at the Walter and Eliza Hall Institute in Melbourne, Australia, is whether “pDC2s ever turn into mature DCs in the body.”

    If the pDC2s are indeed the long-sought IPCs, their isolation may yield significant medical benefits, as researchers look for ways to stimulate them or rein them in. Boosting IPC activity could be beneficial in AIDS, in which IPC counts drop drastically (as indicated by, among other things, a fall in patients' IFN-α production), and perhaps also in cancer. Conversely, Alm has recently found evidence that IPC activity could contribute to the abnormal immune attacks of autoimmune disorders, such as systemic lupus erythematosus, suggesting that curbing the cells might be valuable there.

    The next challenge, everybody agrees, is to learn more about them. “Now we need to know how to produce these cells in large amounts and then how to modulate their function,” says Banchereau. “This is going to be one of the hot spots of the future,” he predicts.


    Great Smokies Species Census Under Way

    Jocelyn Kaiser

    For most hikers or picnickers, flies are a minor nuisance dealt with by a firm swat. For Brian Wiegmann, however, they bring the kind of delight only a dipterist can feel. When a slender black fly with orange spots alighted on Wiegmann's knee last month in Great Smoky Mountains National Park, he and a fellow fly hunter knew right away they were looking at a flower-pollinating species never seen before. By the end of the Memorial Day weekend, Wiegmann and several colleagues had collected at least five new species. “It was pretty exciting,” says Wiegmann, an entomologist at North Carolina State University in Raleigh.

    This was no casual fly safari: Wiegmann and gang were taking part in the kickoff of the All Taxa Biodiversity Inventory (ATBI). Led by the National Park Service and a nonprofit called Discover Life in America, the ambitious project, now in a 2-year pilot phase to hash out methods, is inviting scientists to tally every species that calls the park home. It's a tremendous undertaking, considering that scientists so far have identified only 9800 of an estimated 100,000 species (excluding bacteria and viruses) in the 225,000-hectare park, which straddles the border of Tennessee and North Carolina.

    Besides being a taxonomist's dream, the project aims to shed light on why some regions have a richer array of life-forms than others and how quickly species are going extinct. “It would be nice to have a chunk [of land] where we know everything that occurs,” says taxonomy group leader Don Wilson, a mammalogist at the Smithsonian Institution. However, accruing such knowledge carries a hefty price tag: Adding 90,000-odd branches to the tree of life could take up to 15 years and $100 million, according to ATBI organizer John Pickering, an entomologist at the University of Georgia, Athens. (Not everyone thinks the cost will be that high.) “There's lots of excitement in the scientific community, but not lots of money,” says Mike Sharkey, an insect systematist at the University of Kentucky, Lexington, and ATBI participant. Sharkey and others admit they don't know if they can raise that kind of money for a species census.

    The original plan, conceived 6 years ago by University of Pennsylvania ecologist Daniel Janzen, was to carry out an ATBI in a swath of rainforest in Guanacaste, Costa Rica. The idea resonated with academics, thanks in part to enthusiasm sparked by INBio—a novel institute, run by the Costa Rican government with support from the pharmaceutical giant Merck, that prospects in the rainforest for candidate drugs. But this incarnation of the ATBI, expected to cost $90 million, fell apart after Costa Rican officials opted for a limited survey (Science, 9 May 1997, p. 893).

    Bowed but not beaten, ATBI adherents revived the idea a couple of years ago, settling on the Great Smokies park as the venue because it's one of the most species-rich temperate areas in the world, and it's much easier and cheaper for U.S. scientists to reach than Central America. Also, in contrast to the cool reception researchers encounter in most parks—where getting a permit to collect even a single species can be an uphill battle—Smokies officials welcomed the opportunity to have waves of scientists bearing down on them. “We have a management team that thinks science is important,” says park biologist Keith Langdon, an ATBI organizer. The park, he says, has pledged to open up to ATBI researchers a $3 million lab it hopes to build in 2001.

    Project scientists are still working out the mechanics of their whole-earth survey. For instance, Langdon's staff has laid out 20 1-hectare plots to help scientists sample the park's various habitats. The project has a Web site logging bugs, salamanders, and other verified park denizens; it will eventually include data on each species' range, behavior, and population dynamics.

    Impressive, maybe, but will the taxonomy community at large get fired up over a species quest in Tennessee? “The Smokies is not as sexy a place” as Costa Rica, admits Wilson, who isn't counting on seeing any new charismatic species, like mammals or birds. “From the standpoint of the scientific community, there's maybe less hoorah.” Nevertheless, organizers do have a bird in hand: $150,000 this year (and perhaps future years) from the Smoky Mountains Natural History Association, as well as some matching funds. How many birds are in the bush is anyone's guess. Organizers plan to submit proposals to the National Science Foundation and other agencies and nonprofit foundations starting later this year, after they can make a more persuasive case based on data from this summer's fieldwork. “I'd say there's a huge number of taxonomists out there” who are interested, Pickering says. “We've got to convince them we've got the organization and the money.”


    An Oversized Star Acts Up

    James Glanz

    CHICAGO—Is it a distress signal or just a boisterous how-do-you-do? A star called Eta Carinae is flashing a mysterious message, and theorists are struggling to make sense of it. Visible to the naked eye in the southern sky, Eta Carinae is one of the biggest stars known to astronomers; shrouded in gas and dust, it is perhaps 100 times the mass of the sun, and so bright that if it gets any brighter it should blow itself apart. But observations announced at a meeting of the American Astronomical Society here last week show that over the last 2 years, it has indeed brightened—by a factor of more than two—while remaining intact. And whereas such a star would be expected to expand and cool when it brightens, Eta Carinae has heated up.

    “Here's a very massive star doing some weird stuff,” says Craig Wheeler, an astrophysicist at the University of Texas, Austin. So erratic is the behavior that some astronomers are speculating that, rather than being on the verge of blowing up because of its own brilliance, the mammoth star may be about to collapse, triggering an even bigger explosion called a supernova.

    Astronomers at the University of Minnesota and the NASA Goddard Space Flight Center in Greenbelt, Maryland, picked up the brightening in four measurements made by the orbiting Hubble Space Telescope's Imaging Spectrograph (STIS) between December of 1997 and February of 1999. The star, which Kris Davidson of Minnesota says “reminds us a little of a geyser,” has acted up before. A tremendous eruption in the 1840s belched up several solar masses of material that formed a dumbbell-shaped cloud around the star, which astronomers call the Homunculus. That event, during which the entire structure was about 10 times as bright as it is now, was followed by a smaller burp in the 1890s and a gradual brightening in this century, probably because the central star is shining through more and more clearly as the Homunculus expands and its veil of gas and dust thins.

    But this time around the star itself has brightened. When the brightening turned up in the STIS spectra, “we started questioning, ‘Is this real or is it an instrumental effect?’” says Goddard collaborator Theodore Gull. The team was reassured when they checked lower resolution images by other telescopes and discovered that ground-based astronomers had missed a smaller, but rapid, overall brightening, says Roberta Humphreys of the University of Minnesota.

    The brightening remains mysterious, however, because the star is thought to be very close to its “Eddington limit,” where light exerts so much outward pressure that gravity is just barely able to hold the star together. So any further brightening should produce an outrush of material. But an expanding burst of gas—although still too small to be seen directly—would cool like gas rushing out of a spray can. The cooling would strengthen the star's infrared signal and turn down the ultraviolet. But the full STIS spectra showed just the opposite pattern.

    “One explanation is that the star got hotter” without changing size, says Humphreys—although theorists don't know how a star could do that. Whatever the cause, astronomers are wondering what comes next. Perhaps Eta Carinae is about to pop off as it did in the 1840s, or perhaps it is about to collapse and blow up as a supernova. Stars of that mass are also the conjectured progenitors of hypernovae—even larger explosions that might produce the cosmic blasts called gamma ray bursts. “It really is a Rosetta stone of some kind,” says Wheeler. “We just don't know of what.”


    Germany Waves a Flag for Science

    Robert Koenig

    German scientists aren't known for blowing their own horns. Compared to their U.S. colleagues, they tend to be a bit shy about publicizing and explaining their research. Now, in response to an apparent deterioration in the German public's regard for science, resulting from conflicts on issues such as genetic engineering, animal rights, and nuclear power, Germany's science establishment is getting behind a major new effort to improve the public's understanding of—and, in theory, its appreciation for—scientific research. The initiative, called “Public Understanding of the Sciences and Humanities” (PUSH), is being financed by an initial $280,000 grant from the Association for the Promotion of German Science, an industry-funded organization based in Essen that is spearheading the program and offering grants for scientists.

    This month, the leaders of Germany's major scientific organizations—including the Max Planck Society, the DFG granting agency, the University Rectors' Conference, the Helmholtz Association of National Research Centers, the Fraunhofer Society applied-research organization, and the Science Council—signed a memorandum supporting the PUSH initiative and took part in a forum in Bonn to publicize the concept. Detlev Ganten, chair of the Helmholtz association and head of Berlin's Max Delbrück Center for Molecular Medicine, says today's German scientists should follow the examples of Albert Einstein, Alexander von Humboldt, and other renowned researchers who helped explain science to the masses. At best, he says, only about 6% of the public now understands what physicists and chemists do. The PUSH program should try to “deepen the understanding of how much science affects all areas of society—and will have even more impact in the future,” he says.

    The German science organizations have pledged to devote more resources to efforts by their scientists to improve dialogue with the public and news media and also to take such efforts into account when they evaluate those researchers. In addition, the memo says “an incentive system is to be developed that will be suitable to offer the prospect of rewards to those scientists who are actively engaged in fostering dialogue with the public.” The Science Promotion association has already posted an application form on its Web site for grants ranging from $11,500 to $35,000 for scientists' programs that would help explain research to students, teachers, churches, local groups, and the news media. The grant recipients will be chosen by a jury, led by Joachim Treusch, chair of the Jülich national research center, and including prominent German science journalists.

    The German initiative parallels similar efforts in the United Kingdom and the United States, which Germans believe have helped connect science and society. In addition to the PUSH grants, Treusch is leading an effort to organize a major science festival in Berlin in 2001, which he says might take some pointers from the annual meetings of the American Association for the Advancement of Science (the publisher of Science) and its British counterpart on focusing attention on science. Says Treusch: “We have the obligation to give German science a major step forward into the new century with this PUSH.”


    Outlook Improves for Research Funding

    1. David Malakoff

    Funding for defense-related research has languished since the Cold War, even as some civilian research budgets have spurted ahead. Now Congress is moving to slow the trend, proposing to erase cuts in military science that were requested by the Clinton Administration. But some administrators and lobbyists worry that the gains may not hold in an especially uncertain budget year.

    In February, the White House submitted a 2000 budget request that shrank the Department of Defense's (DOD's) $4.3 billion basic and applied research accounts by 5%. Although the Clinton budget would raise overall defense spending by a hefty $12.6 billion, it held the Pentagon's basic research account steady at $1.1 billion and trimmed the applied account by more than $230 million, to $2.9 billion. If approved by Congress, the cuts would have pushed DOD research spending to its lowest level in 35 years when adjusted for inflation.

    That prospect greatly worries university administrators. DOD is the third-largest source of academic research funds (after the National Institutes of Health and the National Science Foundation), with more than 350 U.S. schools getting defense dollars. Some disciplines are especially dependent on military support: The Pentagon provides 70% of federal funding for electrical engineering, 60% for computer sciences, and about one-third for math and oceanography, for example.

    In an April response to the threat, 19 university groups, scientific societies, and business groups formed a Coalition for National Security Research. The lobbying effort—coordinated by Liz Baldwin of the Optical Society of America and Peter Leone of the American Association of Engineering Societies, both in Washington, D.C.—bore fruit late last month, as the full Senate and the House Armed Services Committee separately recommended defense spending levels that are friendlier to research. Lawmakers suggested spending $7 million to $15 million more on basic research than the White House request, and they nearly reversed the cut in applied science with a proposed budget of $3.1 billion. A Senate appropriations subcommittee—which actually approves spending—did even better, voting an even smaller cut in applied research and a $35 million boost for basic science.

    Congressional staffers say that lawmakers eager to fund specific initiatives, such as one to develop an antimissile laser and another to combat bioterrorism, fueled the increases. But the concerns raised by university presidents and the coalition also played a role. “We felt their pain,” says one House staffer. Indeed, both Armed Services committees scolded DOD for its paltry request, with the House panel saying “it does not believe DOD has a coherent R&D funding strategy.”

    Although the numbers are preliminary, some coalition members say they are a good omen. “It was heartening to see that the members were concerned enough to up the numbers,” says Leone. But he and others admit they are far from the coalition's goal of a 2% overall R&D boost this year. That reality “is disappointing,” says Greg Schutz of the American Chemical Society in Washington, D.C., who worries that any cut in applied science budgets could threaten some physical chemistry labs.

    Schutz and others also fret that success could be ephemeral, pointing to the rising costs of the Kosovo conflict and mounting pressure to beef up other portions of the defense budget. “The tide has been going in and out on the budget process all year,” he says.


    NIH Ethics Office Tapped for a Promotion

    1. Eliot Marshall

    A government watchdog that monitors the treatment of patients and animals in federally funded research may be about to develop a more powerful bite. A panel recommended last week that the unit, called the Office for Protection from Research Risks (OPRR), be moved up the federal hierarchy. It currently resides in the office of National Institutes of Health (NIH) director Harold Varmus, and the panel urged that it be shifted to the Department of Health and Human Services (HHS). Varmus agreed that this would be appropriate to avoid an apparent conflict between NIH's dual roles as funder and regulator of clinical studies.

    The proposal to give OPRR higher status was discussed on 3 June at a meeting of Varmus's scientific advisory committee and was approved so quickly that some observers felt this was exactly what Varmus wanted. “It looked like a done deal,” says one non-NIH expert on bioethics who has followed the process closely. He thinks NIH may have decided to make a change after media and congressional attention focused on recent lapses in the treatment of human subjects. Last year, for example, witnesses at a congressional hearing blasted OPRR—which is supposed to keep tabs on research funded by 17 federal agencies—and others for lax enforcement of rules designed to protect volunteer research subjects (Science, 19 June 1998, p. 1830).

    Not long after that hearing, Varmus tapped an independent panel to consider the future of OPRR. Leading the six-person team were co-chairs Nancy Neveloff Dubler, director of the bioethics division of the Montefiore Medical Center in the Bronx, New York, and Renee Landers, an attorney at Ropes & Gray in Boston. After interviewing 15 officials and reviewing files, they reached their conclusions in May.

    The panel noted that the size of the clinical research enterprise is growing, as is public concern about the welfare of human subjects. But it warned that OPRR, as the public's guardian, has a problem: Because it is located in a research agency, there is a “perception that OPRR's actions will be biased in favor of research interests and will provide insufficient protection to research subjects.” And this, the panel found, contributes “to the public distrust of the research enterprise.”

    The panel found that such concerns are “not just abstract or hypothetical.” OPRR's recommendations sometimes seem “compromised” by its location deep in the bureaucracy, Landers said in a telephone interview. “It isn't able to speak with a clear voice” on policy initiatives, according to Landers, because NIH's process of collegial review tends to filter out disagreements. This process can muffle signals, preventing HHS higher-ups from hearing an important message, Landers said. At the same time, the panel suggested that NIH might have trouble supervising OPRR, because any attempt to exercise control might be interpreted as improper meddling.

    The report recommends that OPRR be moved to HHS, preferably in the office of the surgeon general or assistant secretary for health. In addition, the report says the OPRR chief should be elevated to the Senior Executive Service, opening the way to recruiting a widely respected national leader. (If the civil service rank were changed, the current director, Gary Ellis, would have to reapply for his job.) And the panel urged that OPRR be given an advisory panel, a larger budget, and a more active role. It currently has a staff of just 28 people. But Landers is careful to add, “It was not our intent” to encourage a “more aggressive, prosecutorial approach.”

    The initial response was muted. One member of Varmus's advisory panel—William Brody, president of The Johns Hopkins University in Baltimore—worried about the “profound implications” of turning OPRR into a “free-ranging organization.” But biomedical and clinical research groups have not objected. Ellis commented that “OPRR feels thoroughly examined,” but did not disagree with the report. Mark Yessian, a staffer in the office of the HHS inspector general who studied this issue last year, said the report “makes a good case” for relocating OPRR but added, “I just hope it isn't seen as 80% of the answer” to improving oversight. Yessian wrote a report last year that found that the network of local institutional review boards (IRBs) that monitor human subject research at universities and other institutions is “in jeopardy.” He thinks the IRB system needs to be revamped as well.

    Varmus, who foresaw “political trouble” for NIH if it retained authority in this area of bioethics, seemed happy to let it go. He said that he would send the new report to HHS Secretary Donna Shalala right away and urge her to implement it.


    Berkeley Crew Bags Element 118

    1. Robert F. Service

    Step aside, element 114; there's a new heavyweight champ. Physicists at the Lawrence Berkeley National Laboratory in California announced earlier this week that they have created two new superheavy elements, tipping the scales at 118 and 116 protons. The new heavyweights come as something of a surprise, as standard theories had suggested that the technique used to create them—fusing two medium-weight nuclei at a relatively low energy—should top out at 112. The team's success suggests that the method may produce weightier champs, with atomic numbers 119 and beyond. “It's a very exciting result,” says Ron Lougheed, a heavy-element physicist at Lawrence Livermore National Laboratory in California. “I suspect it will lead to a flurry of new isotopes in this region.”

    Ever since the early 1940s, when Glenn Seaborg and his Berkeley colleagues created the first handful of artificial elements beyond the 92 that exist in nature, physicists have vied to forge the next heaviest element. The most recent milestone came in January when researchers at the Joint Institute for Nuclear Research in Dubna, Russia, won a race to create the long-sought element 114 (Science, 22 January, p. 474). Element 114 was a special prize because its 30-second lifetime seemed to confirm predictions of an island of stability—a realm of superheavy elements including 114 and its neighbors, whose nuclei have an internal structure that makes them more stable than heavier and lighter isotopes.

    Although the 114 work has yet to be duplicated, the success marked an unexpected renaissance for a previously successful technique known as hot fusion, in which a beam of light isotopes is smashed into a heavier target, such as plutonium. Prior to that success, the technique of choice had been cold fusion, a gentler collision of medium-sized isotopes. Researchers at the Institute for Heavy Ion Research (GSI) in Darmstadt, Germany, have used that technique since the early 1980s to lay claim to the six elements from 107 through 112. Conventional theories suggested that neither technique would be able to form elements as big as 118 without their instantly breaking apart, or fissioning.

    The Berkeley team's big break came at the prodding of Robert Smolańczuk, a visiting theorist from the Soltan Institute for Nuclear Studies in Poland, who suggested that there may still be a little warmth left in cold fusion. His calculations suggested that bombarding a lead target with krypton ions would have reasonable odds of producing a few atoms of 118 after all: The compound nucleus, he found, was less likely to fission than previously thought. “We didn't really believe it,” says Ken Gregorich, who led the 15-member Berkeley team. “But it was one of those experiments where there was little to lose and a big upside. We tried it and were surprised to see something.”

    They saw a lot of somethings. After an accelerator flung the krypton ions into the lead, the impact debris was swept into another machine that separated detritus from atoms of potential interest, which were channeled to a radiation detector. The detector measured a distinct pattern of alpha-particle emissions as the sought-after heavyweight shed pieces of itself in search of a more stable configuration. “During 11 days of experiments, three such alpha-decay chains were observed indicating production of three atoms of element 118,” says Gregorich. As an added bonus, the first alpha decay in each case produced an atom of element 116—also never before seen. And the time course of the decays lent support to the theory of the island of stability around element 114.

    Next up, Gregorich says, his team plans to switch the lead target for one of bismuth atoms, which harbor an extra proton. If they fuse with krypton, the group will have yet another champ at 119.


    No Winners in Patent Shootout

    1. Marcia Barinaga*
    1. With reporting by Robert Koenig in Bern, Switzerland.

    A high-stakes patent trial saw scientists challenge each other's veracity and left reputations potentially tarnished, yet their testimony was not even central to the legal case

    A trial that has made headlines internationally and threatened the reputations of two distinguished scientists ended last week without a verdict. A federal jury deadlocked on a claim by the University of California (UC) that South San Francisco biotech giant Genentech had infringed the university's patent on the gene for human growth hormone. Genentech, whose synthetic version of human growth hormone, Protropin, has racked up sales of more than $2 billion, promptly declared victory. But it was hardly decisive. The nine-member jury upheld the validity of the UC patent, which Genentech had challenged, and split 8 to 1 for UC on infringement. The lone holdout juror saved Genentech from damages that could have been as high as $1.2 billion. At a hearing scheduled for 22 June, UC, which has already invested $20 million and 9 years in the case, is likely to request a new trial.

    The scientists who have been drawn into this bruising fight can hardly relish a rematch. The trial took a heavy personal toll on UC witness Peter Seeburg, a former UC San Francisco (UCSF) postdoc and Genentech employee and now a director at the Max Planck Institute for Medical Research in Heidelberg, Germany. Seeburg testified that he and his Genentech colleagues used research materials he had removed from his former UCSF lab during a clandestine New Year's eve visit. And he said they avoided revealing the use of these materials by misrepresenting some data in a 1979 paper in Nature. His public admission has prompted the Max Planck Society to open an inquiry into these events of 20 years ago, focusing on possible scientific misconduct. Seeburg's former Genentech colleague, David Goeddel—now president of Tularik Inc. of South San Francisco—has also been bruised by the trial. In his own testimony and subsequent public statements, he has vehemently contested Seeburg's version of events and defended the accuracy of the Nature paper.

    The sensational revelations and the spectacle of two highly regarded scientists publicly questioning each other's truthfulness created a drama that eclipsed the underlying legal issues. A bitter irony in this case is that Seeburg's testimony was not even central to UC's arguments, and it appears to have had little influence on the jury. UC wasn't suing Genentech for taking or misusing its research materials. The suit was for patent infringement, and the legal reason that Seeburg's testimony could figure in the case at all lay in a loophole created by the way UC's patent was worded.

    The scientific dispute

    The story began in 1977, when Seeburg was a postdoc with Howard Goodman at UCSF. He was part of a team that pulled off a huge victory in the competitive new field of biotechnology: cloning the cDNA that encodes human growth hormone. UC filed for a patent based on that work, naming Seeburg and Goodman among the co-inventors. In November 1978, Seeburg took a job at Genentech to lead an effort to engineer bacteria to produce human growth hormone. The goal was to create an “expression vector,” a ring of DNA containing the growth hormone cDNA spliced to sequences that would allow it to be expressed in bacteria, turning the organisms into growth hormone factories.

    Seeburg's career move would be routine for a postdoc today, but at the time he was a pioneer, one of the first postdocs to leave academia to join the brand-new biotech industry. He and his new employer quickly found that the rules governing such transfers were far from clear. Shortly after Seeburg joined Genentech, according to evidence in the trial, company president Robert Swanson wrote to Goodman asking for some of the clones Seeburg had developed at UC. No response from Goodman was admitted as evidence in the trial, but UC attorney Emily Evans of the Palo Alto law firm Morrison & Foerster says the request was not granted.

    When that approach failed, Seeburg testified that he went to UCSF with a Genentech colleague, Axel Ullrich, shortly before midnight on New Year's Eve 1978 to retrieve a portion of the DNAs he had worked on. Seeburg bristles at the notion that what has come to be known as the “midnight raid” was a theft. He timed the visit, he says, to avoid an unpleasant confrontation with Goodman, his former adviser, with whom he had had a falling-out. And although he acknowledged in an interview with Science that “in a legal sense,” his action was wrong, he says he felt at the time that he “had a right to take half the material” that he had worked so hard to create and use it for his future research.

    UC took a less charitable view. When university officials learned of the transfer, they fired off a letter to Swanson putting Genentech on notice that the company did not have permission to take the samples or to put them to commercial use. Swanson responded that Seeburg's “entitlement” to continue research with the materials he developed at UC was “in keeping with scientific and university custom of long standing.” UC shot back: “[T]here is no ‘custom,’ ‘long-standing’ or otherwise, which countenances such conduct for private commercial enrichment.” The issue was apparently settled in June 1980, when Genentech agreed to pay UC up to $2 million in royalties for the clones, although UC retained full patent rights.

    Just what Seeburg and Goeddel did with those clones is a matter of bitter dispute. Seeburg testified in April that the Genentech team had trouble isolating a growth hormone cDNA complete enough to use in its expression vector, so he says he used DNA from the UC clone. What's more, he testified, Goeddel was in on the alleged scheme. “We agreed that we would use it but not tell anyone,” he said.

    Goeddel vigorously denies any knowledge that Seeburg used the UC clone. “There was never any agreement,” he testified. “I couldn't believe that he could come up with such a story.” He testified that Genentech's team independently isolated a growth hormone cDNA that he inserted into the expression vector, which subsequently directed bacteria to make the hormone. That widely hailed accomplishment was reported in Nature on 18 October 1979. Genentech received patents for the work and won Food and Drug Administration approval in 1985 for Protropin.

    Seeburg testified that the growth hormone cDNA clone described in the Nature paper, pHGH31, never existed, and the DNA sequence attributed in the paper to pHGH31 came from the UC clone. The battle over the truth of that testimony focused on Goeddel's and other Genentech employees' lab notebooks. Genentech researchers and expert witnesses saw plenty of evidence in the notebooks of independent cloning of the growth hormone cDNA, and isolation and insertion of the right piece into the expression vector. UC experts found the notebook entries sketchy and incomplete, indicating abandoned cloning efforts followed by the sudden appearance of a DNA fragment “prepared by P. Seeburg,” which Goeddel used.

    Genentech conceded that the notebooks contain no record of the determination of the DNA sequence of pHGH31. Goeddel says he recalls analyzing a sequence obtained by another employee, but he told Science he isn't surprised that a piece of raw data is unaccounted for: “I think if you take any paper after 20 years and ask where is all the primary data, it might be hard to come by.”

    Goeddel and the other authors of the Nature paper wrote letters to the editors of Science and Nature, denying Seeburg's allegations and inviting Nature to examine the notebooks, which Genentech has posted on its Web site. In responses published along with the Genentech letters, Seeburg points out that “the scientific results and conclusions” of the Nature paper are “unambiguous and correct,” and what he calls the “technical inaccuracy” is limited to one step in the construction of the expression vector. Seeburg told Science he does not condone “fudging data” and regrets the flaws in the Nature paper, but he considers them a “misdemeanor” rather than fraud. Nature seems to share that view, declining in an editorial to investigate the paper. But the Max Planck Society has taken a more stern stance, initiating a formal inquiry into the affair, to be led by an independent legal expert, Walter Odersky.

    The legal case

    Ironically, given the fallout from Seeburg's testimony and the attention it has received, several patent law experts Science consulted see it as tangential to the legal issues. And some even suggest it shouldn't have been allowed into the trial at all. UC's accusation of patent infringement did not depend on whether Genentech had directly used UC's cDNA, but only on whether the company's Protropin expression vector had DNA sequences that were claimed in UC's patent. The Seeburg story, in fact, was relevant only because of a quirk in the way that patent was written.

    UC's patent covered not only the sequence that encodes growth hormone, but also 48 nucleotides of noncoding sequence that follow it in the UC clone. UC patented this entire sequence as part of a “transfer vector” for introducing DNA into bacteria. To infringe on a patent, a product must contain everything claimed in the patent. Because Seeburg and Goeddel cut off 39 of the 48 noncoding nucleotides when making the expression vector, says UC lead attorney Gerald Dodson of Morrison & Foerster, their vector does not literally infringe UC's patent. If UC had claimed just the coding region, says biotech patent attorney Adriane Antler of the New York patent law firm Pennie & Edmonds, it may have had a case for literal infringement.

    It's unclear why UC's patent included the noncoding sequence. “That was early days on writing applications covering DNA sequences,” notes the patent's author, attorney Lorance Greenlee, now of the patent law firm Greenlee, Winner and Sullivan in Boulder, Colorado. “Nobody really knew what the patent office would accept or not accept.” Greenlee says he probably just wanted to be complete in claiming the whole sequence.

    Whatever the reason for its inclusion, that noncoding sequence meant that UC could only claim infringement under the “doctrine of equivalents,” a provision of patent law that says a product may infringe a patent if it has elements that are equivalent to each element of the patent's claims. To make its case, UC had to show that differences between its patented sequence and that used by Genentech were inconsequential, and that Genentech's expression vector is also a transfer vector and so covered by UC's claim. Genentech countered with expert testimony that its vector was significantly different from UC's and that cutting off part of the noncoding sequence was a substantive change, done to improve expression of the hormone in bacteria.

    UC introduced Seeburg's scientifically explosive testimony to provide an indirect line of support for its legal arguments. Judge Charles Legge allowed the testimony because, as he said in his jury instructions, if UC could show that Genentech copied UC's invention, that would “suggest that the differences between [UC's claims] and the corresponding features in Genentech's [vector] are insubstantial.” As evidence of direct copying, UC lawyers pointed to Seeburg's claim that Genentech used the actual DNA from UC to make its vector.

    Several patent experts consulted by Science say, however, that the legal basis for Legge's reasoning is shaky. Patent attorney Richard Osman, of the Science & Technology Law Group in San Francisco, notes, for example, that in a 1997 decision, the Supreme Court took a dim view of the relevance of intentional copying to the doctrine of equivalents. “The better view, …” the court wrote, “is that intent plays no role in the application of the doctrine of equivalents.” Martin Adelman, director of intellectual property at George Washington University in Washington, D.C., says he believes the judge made an error in admitting testimony apparently intended “to prejudice the jury.”

    But it didn't seem to have that effect. “We didn't put major amounts of weight on [Seeburg's testimony],” juror Don Ladue, a supervisor with Pacific Gas and Electric Co. in San Francisco, told Science, adding that he and the seven other jurors who found for UC “felt the university proved its case without his testimony.” The jury's discussions focused on the patent claims and Genentech's production vector, says Ladue. “If you looked at [UC's claims] versus [Genentech's vector], you could see that under the doctrine of equivalents, it did infringe.”

    The fact that the scientific testimony had little influence on the jury will be scant consolation to Seeburg, Goeddel, and the other researchers whose activities 20 years ago have been publicly dissected and their motives questioned. And now they face the daunting prospect that, if there is a retrial, they may have to go through it all over again.


    From the Bioweapons Trenches, New Tools for Battling Microbes

    1. Joseph Alper*
    1. Joseph Alper is a writer in Louisville, Colorado.

    SEATTLE, WASHINGTON—Some 5700 biotech buffs gathered here from 16 to 20 May for the annual meeting of the Biotechnology Industry Organization. Most sessions focused on business, but one drew big interest from scientists: a showcase of projects funded by the U.S. Defense Advanced Research Projects Agency's antibioterrorism initiative.

    Warp-Speed DNA Sequencing

    Star Trek fans know that a tricorder can sniff out and identify molecules or pathogens in a matter of seconds. With such a device, officials could identify pathogens quickly enough to save lives during a bioterrorist attack, and physicians could rapidly type tuberculosis strains, for example, to determine which combination of drugs would work on which patient. Unfortunately, current techniques for rapid DNA decoding—using electrical separation of nucleotides or mass spectrometry (Science, 27 March 1998, p. 2044)—don't come close. “We need something that's off the wall, an approach to sequencing that's completely different from the way we think of doing this now,” says chemist and gene sequencer Richard Mathies of the University of California, Berkeley.

    Tomorrow's tricorder?

    A novel bacterial ion channel, inserted in a membrane, can quickly decipher DNA sequences.


    Just such a wild idea was aired at the meeting by cell biologist Daniel Branton of Harvard University. He and biophysicist David Deamer of the University of California, Santa Cruz, have built a device that can read nucleotides from single strands of DNA as they pass through a well-studied bacterial ion channel called α-hemolysin.

    The duo embedded the 1.5-nanometer-wide channel in an artificial membrane that splits a buffer-filled chamber. A voltage across the membrane keeps the pore open and pulls DNA—which carries electrical charge—through. Double-stranded DNA is too wide to fit, but the molecule's strands unzip and zip up at various spots all the time. Like a frazzled sneaker lace jammed into an eyelet, single-stranded DNA segments pass through the pore and the rest of the molecule unravels. As each nucleotide crosses the opening, it produces a characteristic dip in current. So far the researchers have tried discerning only long stretches of the same nucleotides, such as 30 adenines followed by 70 cytosines.
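The base-calling idea described above—each nucleotide blocking the pore by a characteristic amount—can be sketched in a few lines. This is an illustrative toy, not the Branton/Deamer analysis; the current levels assigned to each base are hypothetical placeholders, since the article reports only that the dips are distinguishable:

```python
# Toy base caller: match each measured current dip to the nearest of four
# assumed characteristic blockade levels. The numeric levels are invented
# for illustration (expressed as a fraction of the open-pore current).
ASSUMED_LEVELS = {"A": 0.85, "C": 0.70, "G": 0.55, "T": 0.40}

def call_bases(dips):
    """Assign each current dip to the base whose assumed level is closest."""
    calls = []
    for level in dips:
        base = min(ASSUMED_LEVELS, key=lambda b: abs(ASSUMED_LEVELS[b] - level))
        calls.append(base)
    return "".join(calls)

# A homopolymer run like those tried in the experiment (adenines, then cytosines)
trace = [0.86, 0.84, 0.85, 0.71, 0.69, 0.70]
print(call_bases(trace))  # AAACCC
```

In practice the hard part is exactly what the paragraph above notes: producing one clean, resolvable dip per base rather than a blur.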

    Mathies is impressed by the early results. “They are on to something very interesting,” he says. Getting the machine to work for real DNA, however, will require impeding the strands so they pass through a pore more slowly. Right now they whip through at about a base per microsecond—so quickly that several nucleotide signatures blur together. Branton estimates that they will have to be slowed to about a millisecond per base for the signature of each nucleotide to register separately. One strategy is to add short pieces of single-stranded DNA, which bind temporarily to the DNA of interest, to the buffer on the input side. A string of deoxythymines, for example, slowed the transit of a 100-nucleotide stretch of deoxyadenines from 320 microseconds to 4400 microseconds.
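The speed figures quoted above can be checked with simple arithmetic, using only the numbers reported in the text (320 µs unbraked and 4400 µs braked for a 100-nucleotide stretch, against Branton's ~1 ms-per-base target):

```python
# Back-of-the-envelope check of the translocation numbers quoted above.
unbraked_us = 320          # 100-nt stretch of deoxyadenines, microseconds
braked_us = 4400           # same stretch after adding oligo-dT "brakes"
n_bases = 100
target_us_per_base = 1000  # ~1 ms per base, Branton's resolution estimate

per_base_unbraked = unbraked_us / n_bases  # 3.2 µs/base (~"a base per microsecond")
per_base_braked = braked_us / n_bases      # 44 µs/base after hybridization braking
shortfall = target_us_per_base / per_base_braked  # further slowdown still needed

print(per_base_unbraked, per_base_braked, round(shortfall, 1))
```

The hybridization trick thus buys roughly a 14-fold slowdown, but another factor of about 20 would still be needed to reach the millisecond-per-base regime.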

    As they work on DNA braking techniques, Branton and Deamer are each developing ways to insert the α-hemolysin pores into materials more durable than lipid membranes, which could be easier to scale up for commercial use. To gouge narrow holes, Deamer and his Santa Cruz colleagues have turned to a 30-year-old technique called the nucleopore method, which relies on highly energetic fission products from uranium or californium to break bonds in a target material, in this case mica. A brief wash with hydrofluoric acid etches out the damaged tracks, leaving 6-nanometer-wide holes in the mica. “While 6 nanometers is too big for sequencing, we may be able to use such a structure to provide a robust support for the α-hemolysin pore,” said Deamer.

    Branton's Harvard team, meanwhile, has taken a different tack. By modifying a standard lithographic technique, they build up cones of silicon nitride on a scaffold, then cover the cones nearly to their tips with a proprietary silicon-based material. Using a laser to remove the silicon nitride leaves tiny, round pores, which Branton's group has shrunk to about 4.5 nanometers wide. He believes that the 2 nanometers or so needed to sequence DNA is within reach.

    The researchers predict that a single device with 500 pores—snipped from a bacterium or punched into silicon—would be enough to sequence a bacterial genome quickly. The plan, then, would be to use standard restriction enzymes to cut up a target genome (DNA or RNA) into 500 or so large pieces, with software available today quickly assembling the sequence data and identifying the microbe at hand. At an expected throughput of 1000 bases per second per pore, such a device could sequence a viral genome in well under a second and a bacterial genome in seconds—making the tricorder dream a reality.
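    The quoted throughput is easy to check with back-of-envelope arithmetic. The genome sizes below are typical textbook values, not figures from the article, and the sketch assumes all 500 fragments are read fully in parallel, one per pore:

```python
# Back-of-envelope sequencing times for the proposed 500-pore device.
pores = 500
rate = 1000                       # bases per second per pore
device_rate = pores * rate        # 500,000 bases per second in total

# Typical genome sizes in bases (illustrative values, not from the article)
genomes = {"small RNA virus": 10_000, "E. coli-sized bacterium": 4_600_000}
for name, size in genomes.items():
    print(f"{name}: ~{size / device_rate:g} s")
# small RNA virus: ~0.02 s
# E. coli-sized bacterium: ~9.2 s
```

At this aggregate rate a viral genome finishes in a fraction of a second, while a megabase-scale bacterial genome takes on the order of seconds.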

    Lobbing Nanobombs at Pathogens

    “What do you do with an F-16 that's been contaminated by anthrax?” goes an old military joke. The answer: Crash-land it on the enemy, because the severe measures needed to clean up the killer bacterium—and popular bioweapon—would turn the plane into an oversized paperweight. Today, the only practical methods to destroy anthrax spores are to incinerate them or to blast them with bleach and formaldehyde. Neither approach leaves an airplane's electronics—or contaminated personnel—in working order.

    A gentler solution may be a unique emulsion of oil, water, and two common lab detergents that forms nanometer-sized droplets capable of fusing with and destroying not only tiny anthrax spores but also other gram-positive bacteria, gram-negative bacteria, and most viruses. “We've basically created nanometer-sized bombs that attach themselves to and blow up most every pathogen known to man,” says James R. Baker Jr., an immunologist and director of the Center for Biologic Nanotechnology at the University of Michigan. Influenza and Ebola—one of the most lethal viruses known—are dead within minutes of being doused with the mixture.

    Experts say the emulsions could become a valuable addition to a growing arsenal of antimicrobial substances, including classes of peptides such as defensins and magainins, that kill by rupturing cell membranes. Trial and error led Baker's team to two formulations that demonstrate potent antimicrobial action. An emulsion including the detergents Triton X-100 and tributyl phosphate took out gram-positive bacteria and the vast majority of viruses sheathed in protein envelopes. (Naked RNA viruses are not susceptible, because they have no membrane for the emulsions to disrupt.) A second preparation killed a different spectrum of bugs: gram-negative bacteria and fungi. Combining the armaments yielded a potent killing machine. Exposing a variety of fungi, bacteria, and enveloped viruses to a 1000-fold dilution of the double-barrel emulsion for 15 minutes annihilated the life-forms, the researchers concluded from the absence of colonies in suitable growth media.

    To Baker's initial surprise, the emulsions proved effective against bacterial spores—a form of suspended animation in which the bacteria produce a hard protective coat—which tend to resist all but the harshest chemicals. “It appears that the oil acts as a nutrient that tricks the spores to start producing cell membrane, a process that the emulsions disrupt quite easily,” says Baker. In one experiment, he and his colleagues infected skin wounds on mice with spores of Bacillus cereus, a cause of food poisoning and severe infections. One hour later, the wounds were rinsed with either a 10% solution of the two emulsions or with salt water. The wounds in the treated mice healed, while those in the control animals festered.

    Sperm and red blood cells are the only animal cells that Baker's team has found to be susceptible to the emulsions. Other cells are studded with carbohydrates that appear to somehow prevent the emulsion droplets from fusing to the cell membrane. This gentleness is a nice surprise considering that membrane-disrupting antimicrobial peptides have shown unexpected toxicity in animal tissues, says microbiologist Jill Adler Moore, director of the Institute of Cellular and Molecular Biology at California State University in Pomona. The bottom line, says Baker, is that the emulsion mixture is a drug candidate mainly for external uses, such as for treating skin ulcers.

    This expectation will soon be put to the test. The National Institute of Child Health and Human Development in Bethesda, Maryland, is planning a clinical trial to see if the emulsions will work as a vaginal contraceptive cream that wards off sexually transmitted diseases. And the U.S. military intends to try to detoxify contaminated equipment by hosing it down with the emulsions, a procedure that could save the equipment from becoming expensive scrap—or paperweights.


    New Clues to How Neurons Strengthen Their Connections

    1. Marcia Barinaga

    New results point to the AMPA receptor for glutamate as playing a key role in the changes underlying long-term potentiation in brain neurons

    Neurobiologists who study how the brain adapts and learns have long known that synapses—the specialized regions where one neuron receives chemical signals from another—are where the action is. For example, learning seems to be associated with an increase in the strength of those synaptic connections. Now, three teams—two of which report their results in this issue of Science, while the third published in the May issue of Nature Neuroscience—implicate a new player in the biochemical changes underlying a type of synapse strengthening known as long-term potentiation (LTP).

    The neurons that undergo LTP respond to the neurotransmitter glutamate. Their synapses contain two kinds of glutamate receptors, but researchers studying LTP have largely focused on the one known as the NMDA receptor. That's because glutamate binding to this receptor is the first step in LTP. Exactly what happens after that is unknown—and the subject of fervent study and debate. The new work fingers the other, less famous glutamate receptor, the AMPA receptor, as a player in those synapse-strengthening events.

    Previously, neurobiologists had thought that AMPA receptors are present at relatively unchanging levels in the vast majority of synapses on glutamate-sensitive neurons. But that no longer appears to be the case. Two of the teams, led by Roberto Malinow of Cold Spring Harbor Laboratory on New York's Long Island and Robert Malenka at the University of California, San Francisco, show that AMPA receptors move into and out of synapses as synaptic connections strengthen and weaken. The third team, led by Peter Seeburg and Bert Sakmann of the Max Planck Institute for Medical Research in Heidelberg, Germany, provides indirect evidence that the movements are needed for LTP to occur. Taken together, says Richard Huganir, who studies receptors at Johns Hopkins University School of Medicine in Baltimore, the results give “incontrovertible evidence” that “the regulation of AMPA receptors in general is going to be very key” to modulating synapse strength.

    The current findings are also likely to influence a long-standing debate over whether the changes that occur in LTP take place postsynaptically, that is, in the cell that receives the signal, or presynaptically, in the cell that dispenses it. They imply that at least part of the changes are postsynaptic. Ironically, however, the new findings trace back to an experiment done 9 years ago that was long viewed as strong evidence for presynaptic change.

    At that time, the NMDA receptor had already been implicated in LTP, a task for which it is remarkably well suited. Brain neurons usually have thousands of synapses for receiving signals from other neurons. NMDA receptors become activated only if a glutamate signal arrives when the receiving neuron has just been activated by a signal from another source. That gives NMDA receptors the ability to strengthen synapses that receive closely timed signals, a trait thought to be important for learning. Once NMDA receptors have been activated and LTP triggered, the synapse is stronger, meaning it allows more ions to flow into the neuron in response to incoming signals, even if they don't activate NMDA receptors. But exactly how that happens was unknown.
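    The coincidence rule described above, in which a synapse strengthens only when a glutamate signal arrives while the receiving neuron is already active, can be caricatured in a few lines. The time window, weight step, and function here are invented for illustration and are not a biophysical model:

```python
# Toy sketch of the NMDA receptor's coincidence rule: a synapse
# potentiates only when presynaptic glutamate arrives shortly after
# the postsynaptic neuron has fired. Window and step are hypothetical.

COINCIDENCE_WINDOW_MS = 20.0   # hypothetical "recently active" window
LTP_INCREMENT = 0.1            # hypothetical strengthening step

def potentiate(glutamate_times, postsyn_spike_times, weight=1.0):
    """Raise the synaptic weight for each glutamate arrival that falls
    within the window after a postsynaptic spike."""
    for t_glu in glutamate_times:
        if any(0 <= t_glu - t_post <= COINCIDENCE_WINDOW_MS
               for t_post in postsyn_spike_times):
            weight += LTP_INCREMENT   # NMDA receptors open; LTP triggered
    return round(weight, 2)

# Glutamate arrives at 5 ms and 100 ms; the neuron spiked at 0 ms.
# Only the 5-ms signal coincides, so the weight rises once.
print(potentiate([5.0, 100.0], [0.0]))  # 1.1
```

A glutamate signal that arrives outside the window leaves the weight unchanged, which is what gives NMDA receptors their selectivity for closely timed inputs.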

    One possibility is that the presynaptic neuron releases more glutamate into the synapse every time it fires. This could then trigger the AMPA receptors, also presumed to be present in the synapse. In 1990, two teams provided evidence for that model in the form of statistical analyses of the responses of individual synapses in brain slices or cultured neurons in response to low-level stimulation (Science, 29 June 1990, p. 1603). One of the studies, by neurobiologist Richard Tsien at Stanford University School of Medicine and Malinow, who was then a postdoc in his lab, also contained the first hints of another possibility, however.

    This was an analysis of what they called “failures of transmission,” when synapses don't transmit a signal at all. Malinow and Tsien interpreted those failures as cases in which no glutamate was released. The failures dropped dramatically once LTP was triggered, suggesting that the probability of glutamate release had risen.

    But that logic was based on the assumption that every synapse had a steady level of AMPA receptors that would respond to glutamate if it were there. Later, Malinow and others began to question that assumption. If, contrary to expectations, some synapses had only NMDA receptors, they would appear to be “silent,” not responding to glutamate until conditions were right to trigger the NMDA receptors and LTP. The number of silent synapses would drop if LTP caused AMPA receptors to move into those synapses. Indeed, an influx of AMPA receptors would strengthen any glutamate synapse, not just those that had been silent before.

    A handful of labs began recording from neurons of the hippocampus, the brain area where LTP is most often studied, looking for synapses that were silent under normal conditions and responded to glutamate only under conditions that activate NMDA receptors. In 1995, Malinow's and Malenka's labs reported evidence for such synapses, followed in 1996 by a similar paper from Arthur Konnerth and his colleagues at the Universität des Saarlandes in Homburg, Germany.

    But there were alternative explanations for those data, such as the possibility that the amount of glutamate released into the silent synapses was simply too low to trigger AMPA receptors that were there. Then more recently, another line of research appears to have clinched the case for AMPA-less synapses: In the past 2 years, five research groups have used microscopy and differential staining techniques for AMPA and NMDA receptors and found synapses in the hippocampus that have only NMDA receptors.

    Even so, a bigger question remained: Could AMPA receptors move into any synapses—silent or not—quickly enough to account for any part of LTP? The latest work answers that question in the affirmative. “The big advance” made by Malinow's and Malenka's groups in the present papers, says Huganir, “is to show that [AMPA receptor content] can be modulated rapidly.”

    Malenka's group used cultured neurons for its work. LTP can be difficult to induce in cultured neurons, so the team instead studied long-term depression (LTD), a weakening of synapses akin to neuronal forgetting that is also triggered by NMDA receptors and may serve to reverse synaptic strengthening when the conditions for LTP are no longer met. The researchers stimulated the cultured neurons in a way that induces LTD, and 15 minutes later stained the neurons with antibodies to the AMPA receptor and with other antibodies that highlight synapses. As they report in the May Nature Neuroscience, they found that LTD brought about a decrease in the percentage of synapses containing AMPA receptors. “This is a cool way of adjusting synaptic strengths, being able to throw AMPA receptors in there or take them away,” says Malenka, who recently moved to Stanford's medical school.

    Malinow's team, whose report appears on page 1811 of this issue, used a relatively new microscopic technique known as two-photon laser scanning microscopy, which lets researchers peer inside living cells at structures as small as dendritic spines, the little knobs that form the receiving ends of synapses. The team created a gene consisting of a sequence coding for one of the subunits of the AMPA receptor fused to a sequence encoding a fluorescent protein, and used a virus to insert that gene into neurons in cultured rat brain slices. In neurons expressing the gene, the team saw fluorescently labeled AMPA receptors clustered at the base of the dendritic spines—“as if,” says Malinow, “they are waiting for something.”

    LTP seems to be what they are waiting for. Within 15 minutes after stimulation to induce LTP, labeled AMPA receptors flooded into some of the dendritic spines. What's more, closer inspection showed the receptors inserting into the membrane at the tip of the spine, where the synapse is found. Malinow notes that the AMPA receptors don't just move into spines that appear empty of AMPA receptors, but also into those that already contain them. “We … have always suggested that LTP would involve delivery into both types of synapses,” he says.

    Indirect evidence that the movement of AMPA receptors into synapses is in fact necessary for LTP comes from Seeburg, Sakmann, and their Max Planck colleagues. In work described on page 1805, the group bred engineered mice that lack GluR-A (also known as GluR1), one of the four protein subunits that make up the AMPA receptor. The four subunits are interchangeable, so the mutant mice still had AMPA receptors composed of the other three subunits, and these seemed to function normally under non-LTP-inducing conditions. But the team could not induce LTP in the CA1 subset of hippocampal neurons where LTP is commonly studied.

    The researchers argue that the effect can't just be due to the absence of GluR-A, because other types of hippocampal neurons still undergo LTP in the mutant mice. Instead, there appears to be a shortage of AMPA receptors in the CA1 neurons, where the GluR-A subunits normally make up a particularly high percentage of the available subunits, says Seeburg. That, he says, suggests that the neurons suffer from “a need for spare AMPA receptors to make LTP go.”

    These new findings are far from the whole story on AMPA receptors and postsynaptic changes during LTP. Huganir's group at Johns Hopkins, as well as Thomas Soderling and his colleagues at the Vollum Institute in Portland, Oregon, has shown that kinase enzymes activated during LTP add phosphate groups to the GluR1 subunit of the AMPA receptor. This increases the ease with which ions flow through the receptor's channel, a change that should enhance the strength of the synapse. Huganir says his lab has unpublished data that other AMPA subunits are phosphorylated as well. “It is clear that the AMPA receptors are getting highly regulated” at several levels, he says, adding that he suspects phosphorylation may turn out to help with the transport of the receptors to the synapse as well.

    Other recent work suggests that LTP may create striking postsynaptic changes in the form of whole new synapses. Two teams, one led by Malinow and Karel Svoboda of Cold Spring Harbor, and the other by Tobias Bonhoeffer of the Max Planck Institute of Neurobiology in Munich, Germany, recently reported that within 20 minutes after the start of LTP, tiny new structures appear in the postsynaptic neuron that may become new dendritic spines (Science, 19 March, p. 1923, and Nature, 6 May, p. 66). The work is preliminary and the fate of the structures isn't certain, but Bonhoeffer believes they will turn out to be new spines. “Eventually those newborn spines will each have a synapse,” he says, and the movement of AMPA receptors triggered by LTP may fill those new synapses as well.

    Although it remains to be seen how all these pieces will fit together, the case for migrating AMPA receptors playing an active role in modulating synapses is “very compelling,” says Stanford's Tsien. But he and others say that doesn't mean that all of LTP will be accounted for by changes in the postsynaptic neuron. LTP occurs within a minute, and the receptor movements have not been confirmed to occur that quickly. That means, says Tsien, that there are likely to be “other mechanisms taking place to fill in what is happening” during the first moments of the process.

    Unpublished work from his group suggests that one of these is increased glutamate release occurring within the first minute after LTP has been triggered. This result also bolsters the view that there will be presynaptic and postsynaptic contributions to the synapse-strengthening process. “I think what the field is telling us is it is both” pre- and postsynaptic, says Richard Scheller, who studies synapses at Stanford's medical school. But wherever the balance of pre- and postsynaptic mechanisms turns out to be, the idea that AMPA receptors modulate synapse strength is likely here to stay.


    Efforts to Boost Diversity Face Persistent Problems

    1. Jeffrey Mervis

    More groups are going to bat for underrepresented minorities in science and engineering. But can they do better than past efforts to make a difference?

    Time is a precious commodity for Diann Brei, an assistant professor of mechanical engineering at the University of Michigan, Ann Arbor. With her biological clock ticking, the 35-year-old Brei and her husband decided last year to have a second child even though she knew it might cost her a chance at tenure when she comes up for review in 18 months. “The informal rule is that one child before tenure puts you at risk,” says Brei. “I've heard I'm the only woman in the engineering school to have had two children before tenure.” Last month the Alfred P. Sloan Foundation came to the rescue, awarding her its first pretenure faculty fellowship. The money will allow Brei to attend meetings, retain a graduate student whose industrial funding had ended, and resume a full research schedule as quickly as possible.

    Mariana Loya isn't thinking about children or tenure just yet. Instead, the 20-year-old materials engineering major divides her time among classes at the University of Washington, an undergraduate research project on biomaterials, and tutoring inner-city kids. Loya, whose father is Latino and Native American and whose mother is Asian, also talks up the importance of science and math at scores of public appearances as Miss Washington, the state's representative in last fall's Miss America pageant. Such outreach brought her to the attention of the National Academy of Engineering (NAE), which showcased her at its 2-day summit last month on women in engineering. Officials hope that Loya's glamorous image will help attract other minority women into what NAE president William Wulf calls a “pale, male profession.”

    The Sloan program and the NAE summit represent two fresh efforts to tackle the chronic underrepresentation of women and non-Asian minorities in the scientific work force. They are joined by a new federal Commission on the Advancement of Women and Minorities in Science, Engineering and Technology. On 24 June, at the National Science Foundation (NSF), the commission will hold the first of two public hearings to collect information for a report to Congress. NAE's follow-up to its summit, which drew 200 leaders from academia, industry, and government, is a new Committee on Diversity in the Engineering Work Force that will hold a workshop next month to discuss how diversity can boost a company's bottom line.

    These activities are meant to continue chipping away at a problem that, experts say, begins with negative messages in elementary school, continues through undergraduate and graduate programs that erect barriers—financial, academic, and cultural—to all but the best candidates, and persists into the workplace. And the stakes are getting higher all the time, says one commission member. “If we don't solve this problem soon, we'll reach a crisis point in our ability to compete in a global economy,” says George Campbell, president of the National Action Council for Minorities in Engineering (NACME).

    A new NSF report on women, minorities, and persons with disabilities, the ninth in a biennial series, describes the uneven progress to date. In some areas, gains have been substantial. For example, women now receive 12% of the doctoral degrees awarded in engineering—up from a mere 2% in 1978. Even so, Texas A&M University associate dean Karan Watson has calculated that such an output provides only enough women Ph.D.s for each of the 1500 U.S. engineering departments to hire a new female faculty member once every 6 to 12 years.

    And the changes in the percentages of underrepresented minorities in most disciplines have been slight, although Asians have increased their presence significantly in most fields (see chart). Minority women in particular are in short supply as they battle the twin obstacles of gender and race, according to a 1996 analysis by NACME. “In the entire history of the United States,” notes Campbell, a physicist, “only 20 African-American women have received doctorates in physics.”

    No single solution can remedy all the various deficits. Young faculty women like Brei, who already have a Ph.D. and a job, still need a boost, says Sloan's Ted Greenwood, who thought up the pretenure fellowship program after noticing that the increasing number of women with doctoral degrees was not translating into more tenured faculty. The strain of juggling family and work duties seemed like a big part of the problem, he says, and the fellowship, which is open to tenure-track faculty members returning from leave taken for family reasons or for those reentering the work force, “was something that we could do that had a chance to make a real difference.”

    Brei couldn't agree more. “It takes away a big part of the stress of getting back on track,” she says about Sloan's $20,000 grant, which is matched by the university. “It's hard to travel when you're pregnant, and this will let me go to a few conferences and show people that I'm not planning to slow down after the baby.”

    Enticing young people, particularly minorities, to take up science and engineering requires a different tack. Participants at the NAE summit decided that one short-term answer is a public relations campaign. “The image of a scientist or engineer as a geek, or a madman, is not an attractive one for most teenagers,” explains computer scientist Carol Kovac, vice president for services, applications, and solutions at IBM's T. J. Watson Research Center in Hawthorne, New York, and a summit participant. “And none of the scientists they know look like Miss Washington. Even more important is the message that science is a way for them to improve the world.”

    Loya, who became interested in engineering “because I didn't want to be left behind by technology,” delivers the same message to groups of all ages. Yet she illustrates a part of the problem: “I want to go to graduate school, but I'm thinking about an MBA because it's easier to move up the corporate ladder,” she says. “I love research. But I'm a people-oriented person, and I don't know if engineering is the right match for me.”

    Indeed, retaining women and minorities who express interest in scientific careers is a big challenge. “Image is important,” notes Campbell, “but our studies show [financial aid] is the biggest factor in retention rates.” NACME has calculated that it would require $250 million a year in scholarships and other financial support for the nation to produce enough minority engineering graduates each year—20,000 instead of the current 6000—to equal their proportion of the college-age population. Currently, Campbell says, the federal government, nonprofits, and industry spend only a small fraction of that amount to address the problem.

    Once those new scientists and engineers graduate, they will need to find a supportive work environment to keep them from fleeing to greener pastures—something not always in evidence. “I was sitting next to the dean of an engineering school with no foreign faculty and no women,” says Linda Skidmore, executive director of the new federal commission, describing a breakout session at the NAE summit. “He said he couldn't understand the fuss because he had all the good people he needed.” The chair of the new NAE panel, engineer and former utility executive Cordell Reed, hopes to “see if we can make an economic argument for the value of diversity” at a July workshop for some two dozen corporate executives. “Until a few years ago, I was the only one of 23 corporate officers who was African American, and all of us were engineers from the Midwest,” says Reed, who retired from Chicago's Commonwealth Edison in 1997. “That makes for some pretty narrow thinking.”

    But even after opting for greater diversity, some institutions may not know how to achieve it. That's why the new congressional commission plans to compile and disseminate a list of “best practices” and model programs for all sectors—government, academia, and industry. “Each underrepresented group faces different problems, at different points along the way,” explains Elaine Mendoza, chair of the commission and head of a Texas software company she founded. “We hope to identify commonalities and offer suggestions in each area.”

    However, some observers are worried that the commission, conceived as a forum to examine women's issues and appointed by members of Congress and the National Governors Association, may not be sufficiently knowledgeable about, and attuned to, the concerns of other minorities in the scientific workplace. They note with unhappiness that Campbell was the only African American, and the lone man, at the commission's first meeting. (Raul Fernandez, the only other male panel member—and one of two Hispanic business leaders—was absent.) They also worry that the commission may not focus on the areas of greatest need. “Minorities are the biggest problem, by far, in biomedicine,” says Ruth Kirschstein, deputy director of the National Institutes of Health, who briefed the panel at its initial meeting in April. “And they will miss the boat if they look mainly at barriers to women.”

    NAE's Wulf says he's very conscious of the need for sustained, tangible results. He's also frustrated that the corporate engineering community has not been more supportive: NAE was forced to spend $130,000 out of its meager pot of institutional funds to put on the summit, although corporate contributions topped $200,000 for both the meeting and a Web site created last summer.

    Wulf believes that NAE, as the profession's most prestigious body, must play a leading role in any campaign to foster diversity. Only then, he reasons, will programs like Sloan's and the words of spokespersons like Loya take root and trigger permanent changes. But altering the views of such a body will be hard work, he admits. “One of the biggest challenges is the NAE membership, whose average member is 70. It may be like the old saying about how to change a university. You do it one grave at a time.”


    A Second Chance to Make a Difference in the Third World?

    1. Robert Koenig

    Later this month, the World Conference on Science will seek to raise the profile of research worldwide. Third World scientists wonder if it will do much to help them

    Twenty years ago, when hundreds of experts gathered for a global science-policy jamboree in Vienna to help the developing world boost its science and technology, genetic engineering was in its infancy and an embryonic Internet connected a few hundred U.S. computers. This month, a somewhat different cast of characters will gather just 240 kilometers away in Budapest for another international talkfest, the World Conference on Science. In the 2 decades separating the events, the world has changed dramatically: The Iron Curtain is now only a memory, cloning has become commonplace, and the Internet now connects tens of millions of computers across the globe. Yet, despite the contrasts, one aspect remains depressingly familiar: The quality of, and support for, research in the world's developing nations still lags far behind that of industrialized countries.

    In fact, with a few exceptions, that divide may have deepened since 1979. “The research gap is widening for the majority of developing countries, especially the least developed countries,” says Sudanese mathematician Mohamed Hassan, executive director of the Third World Academy of Sciences and president of the African Academy of Sciences. “The frontiers of science have advanced so fast and the research costs risen so rapidly in recent years that it has become difficult for some countries to keep up.”

    According to the United Nations Educational, Scientific and Cultural Organization's (UNESCO's) World Science Report 1998, only about one-tenth of the $470 billion invested in R&D in 1996 was spent by the developing world, where about 80% of the world's population resides. And scientists in North America, Western Europe, and Japan accounted for about 84% of all scientific papers published in 1995. With the exception of a handful of “third-tier” nations whose economies have greatly expanded—including China, Brazil, and South Korea—most Third World nations have fallen behind the North's pace. “Scientific research in the developing world is way behind. Even countries like India have great difficulty in competing,” says C. N. R. Rao, president of the Nehru Center for Advanced Scientific Research in Bangalore, India.

    The Vienna conference was full of good intentions to shrink the North-South science gap. It pledged to set up a fund for science and technology projects in developing nations, and the U.S. delegation even proposed setting up a national aid agency to support Third World research. However, the U.S. Congress torpedoed the agency proposal, and Northern governments in general did not come through with their promised commitments to the fund. Indian plant geneticist M. S. Swaminathan, who headed a UN advisory committee in 1980 to follow up on the action plan, says he was “extremely disappointed at the lack of commitment on the part of the industrialized nations, both in terms of finance and intellect, to achieving the goals set at the Vienna conference.” Swaminathan, who will give a keynote speech at the Budapest conference, adds: “I can only hope that the Budapest plan of action will not meet with a similar fate.”

    On the slide. Publication output between 1990 and 1995 stagnated or slumped in most developing nations.


    The organizers of this year's meeting—UNESCO and the International Council for Science—have learned some lessons from the Vienna conference. They have broadened the agenda and shifted the focus to reflect changes in strategies for fostering science in the Third World. “We want to discuss the role of science in society. We do not want to focus—as did the 1979 conference—mainly on the question of how to help developing countries with science and technology,” says Maurizio Iaccarino, an Italian molecular biologist who is UNESCO's assistant director-general for natural sciences and the chief organizer of the Budapest conference.

    Iaccarino concedes, however, that North-South issues are bound to come up in a gathering of 2500 scientists, research ministry officials, and science managers from nearly 200 nations. “I don't expect major international aid commitments at this year's conference,” says Hassan, who has promoted many collaborative science programs among developing countries. “But I do hope that some industrialized nations might announce more support for R&D efforts in the Third World.” The main focus, however, will be on strengthening the developing world's own research. “Compared to 20 years ago,” says Iaccarino, “developing countries now realize that they need indigenous research. And now there is far more emphasis on ‘South-South’ cooperation within the developing world.”

    A growing number of international development agencies have come to similar conclusions. They now view support for Third World science not just as an end in itself but as a catalyst for economic development. In a major report last fall, for example, the World Bank—the world's biggest development organization—declared that “knowledge is critical for development” and signaled a shift away from its traditional emphasis on public-works projects—such as dams and roads—and toward efforts to help developing nations acquire the knowledge they need to grow. The bank, which had closed its science office 15 years ago, is now moving to help bolster science in developing nations, including plans to help fund “Millennium Institutes”—centers of excellence that will aim to give the best Third World scientists the chance to do world-class research.

    Although it is not prepared to offer any deep-pocket commitments in Budapest, the influential U.S. delegation, led by White House science adviser Neal Lane, seems eager to encourage more scientific cooperation with the developing world. Biochemist Bruce Alberts, president of the U.S. National Academy of Sciences (NAS), which is doing much of the planning, says that “just making everyone feel good about science is not going to be enough.” He wants the U.S. group to help convince research ministers to beef up their own support for science. “It's important to get developing-country governments excited about the opportunities for science in their own countries, to let them know the benefits that science can provide if you support it,” he says.

    Alberts also has two personal goals: trying to connect all scientists through the Internet (see sidebar) and setting up an InterAcademy Center science advisory panel that would be an international version of the NAS's National Research Council, producing high-level science-policy reports. He says that “American science is willing to contribute more if we can make the right kind of links,” but he insists that developing countries should take the initiative: “It has to be a pull; it can't be a push.”

    But it is difficult to predict who will be pulling whom, and which national delegations will be pushing what, at the Budapest conference. The central focus of the conference will be the debate on, and likely approval of, two draft documents: a “Declaration on Science” that calls on governments to make more of a commitment to science and scientists to focus more on society's needs; and a “Science Agenda—Framework for Action” that outlines a raft of actions to help realize the declaration's goals. Although some critics contend that those draft documents are too bland, Iaccarino says they are important to help develop “a new philosophy about the relationship between science and society. This is closely related to science for development.”

    One of the concrete initiatives likely to emerge from the meeting will be a series of new “centers of excellence” to train researchers in the developing world—similar to the World Bank's plans. Iaccarino says he is expecting various countries to announce at least 10 such initiatives to create centers in developing nations that would train scientists in fields such as biotechnology, mathematics, hydrology, and renewable energies: “We want more centers of excellence for the training of scientists from the developing world. And I am confident that several new institutions will be set up.”

    Although the conference's direct impact may be modest, some scientists believe that informal “hallway meetings” and networking in Budapest will catalyze initiatives that could have a long-term impact on international science. Says Alberts: “Formal agreements are signed all the time that turn out to be meaningless. What we really want to do is to make connections with the research ministers and others from developing countries and convince them to foster science.”


    Third World Researchers See Internet as an Entry Ticket to Mainstream Science

    1. Robert Koenig

    In Guadalajara, Mexico, geneticist J. M. Cantu is setting up the Mexico Network of Molecular Biomedicine, with the aim of linking researchers at 100 research institutes and hospitals throughout Mexico via the Internet. At Nigeria's Obafemi Awolowo University, scientists are working on a project—with the help of the Abdus Salam International Centre for Theoretical Physics in Trieste, Italy—to use Internet links to bolster research in the physical sciences. And in southern India, the M. S. Swaminathan Research Foundation is hooking up about two dozen isolated villages to the Internet to provide scientific knowledge on issues such as health and agriculture.

    Although use of the Internet is limited in developing countries, largely because of poor communications, many see it as a key factor in fostering Third World science. The Internet could weld isolated centers into “virtual laboratories” and keep their researchers in science's fast lane through Web access to journals. While Internet access doesn't feature prominently on the agenda of this month's World Conference on Science in Budapest, it seems likely to dominate many hallway discussions. “I would like to see the birth of a global movement in this direction,” says plant geneticist M. S. Swaminathan, founder of the south Indian institute, who will deliver one of the conference's main speeches. Adds Cantu, who also will speak in Budapest: “The Internet is the best hope for globalizing science.”

    Bruce Alberts, president of the U.S. National Academy of Sciences, says he would like to see the world's major scientific organizations develop programs aimed at connecting all scientists to the World Wide Web, and then develop “knowledge resources” that would be made available through the Internet to all scientists. “We'd like to see a focused effort by all these organizations—UNESCO, the International Council for Science, the Third World Academy of Sciences [TWAS], and others—to better connect scientists worldwide to the Internet,” Alberts told Science. “The problem is to convince the government organizations that this would be useful for them.”

    For scientists in developing nations, simply getting access to the Internet at all is often the biggest problem. Some governments do not allow such links, or their telecommunications systems are not geared up for them. In a recent study, Enrique Canessa and two colleagues at the Abdus Salam center found that sub-Saharan Africa is the farthest behind on Internet connections for scientific use. Many African scientists don't even have access to computers, and when they do, congestion on the few available telecommunication links makes Internet access slow and expensive.

    Another study notes, for example, that in Angola, 5 hours of Internet access per month for a year would cost an estimated $1740—more than the average Angolan's annual income. And scientists in Mozambique, Zaire, and Congo currently have no Internet access at all. Even in India, where access is more widespread, D. Balasubramanian, research director of the Hyderabad Eye Research Foundation, complains that some scientists there “wait for almost an hour before getting hooked onto the Net.” He adds: “To the extent that the Internet is available, it has been a great boon. The problem in several countries, however, is the speed.”

    Such problems “reduce drastically the effectiveness of the Internet as a working tool and will delay the creation of South-South virtual laboratories,” say Canessa and his co-authors. However, they conclude that—in regions where the Internet is functioning—virtual labs eventually “could become a valuable device for combating the brain drain from South to North.” In Mexico, Cantu says that “science should get ahead of politics on this issue” and push hard at the world conference for an international program to link scientists in the North and South. Adds Mohamed Hassan, executive director of TWAS: “Every scientist should have the right of access to the Internet.”
