News this Week

Science  29 Sep 2000:
Vol. 289, Issue 5488, pp. 936


    Scientists Weave New-Style Webs to Tame the Information Glut

    1. David Voss

    Physicists collaborating on a new generation of big experiments may drown in a data waterfall unless they find a way to channel the flow. A consortium of 16 universities has just received an $11.9 million federal grant to build a shared computational network, or data grid, that they hope will serve as the right sort of pipeline—and lead to even better science.

    The idea behind data grids is to allow users to tap into a universe of electronic information, regardless of its location or origin. The grids are often compared to the popular music file sharing program Napster, which enables Internet surfers to exchange files. But Napster still relies on a central server to keep track of which music clip is on whose PC. A better comparison is a rival program called Gnutella, which allows users to share any file format in a totally decentralized system.
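The architectural difference between the two programs can be sketched in a few lines of Python. This is purely illustrative (not real Napster or Gnutella code): the first class models a central index that maps every file to the peer holding it, while the second models peers that answer a query by flooding it to their neighbors with a time-to-live counter.

```python
# Napster-style: one server maps every file name to the peer that holds it.
class CentralIndex:
    def __init__(self):
        self.index = {}               # file name -> peer address

    def register(self, peer, files):
        for f in files:
            self.index[f] = peer

    def locate(self, name):
        return self.index.get(name)   # single point of failure

# Gnutella-style: no server; a query floods from peer to peer.
class Peer:
    def __init__(self, name, files):
        self.name, self.files = name, set(files)
        self.neighbors = []

    def search(self, query, ttl=3, seen=None):
        seen = seen or set()
        if self.name in seen:         # don't revisit a peer
            return None
        seen.add(self.name)
        if query in self.files:
            return self.name
        if ttl == 0:                  # query dies after a few hops
            return None
        for n in self.neighbors:
            hit = n.search(query, ttl - 1, seen)
            if hit:
                return hit
        return None
```

In the decentralized version, taking any one peer offline removes only that peer's files from view; in the centralized version, losing the index server stops all lookups, which is the distinction the data-grid designers care about.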

    University researchers want to do the same trick with supercomputers and large data sets. To do so they've created a consortium, funded in part by the National Science Foundation, called the Grid Physics Network or GriPhyN (pronounced “griffin”). “GriPhyN will solve problems more demanding than any individual can solve,” says Ian Foster, a computer scientist at Argonne National Laboratory in Illinois and co-principal investigator of the GriPhyN project. Biologists and medical researchers have also seen the value of peer-to-peer networking (see sidebar) and want to make their data available over grids, too.

    Right now, physicists can share big databases, but it is a nightmarish task. “We've been doing this for a long time, but it requires a lot of special expertise,” says Fabrizio Gagliardi, a CERN physicist heading DataGrid, a European project that will join with GriPhyN. “Right now you have to know the exact locations and access procedures for each computer system.” He compares it to e-mail 15 years ago: “When I was working at Stanford, I had to log in to five different machines just to read my mail at CERN.” Data grids will make global data sharing painless, Gagliardi says.


    GriPhyN is arriving just in time to serve several large physics projects. Initially it will join the Sloan Digital Sky Survey (SDSS), the Laser Interferometer Gravitational-Wave Observatory (LIGO), and two experiments at CERN, called ATLAS and CMS, that will run on the Large Hadron Collider (LHC). Each project offers the type of challenge that GriPhyN hopes to conquer: oceans of data that thousands of collaborators around the world must analyze to pick out painfully small signals from a noisy and cluttered background.

    When the LHC comes online in 2005, for example, the collisions of its subatomic particles will generate a data stream of 5 petabytes every year. One petabyte is roughly equivalent to the capacity of a million personal computer hard drives or a stack of CD-ROMs nearly 2 kilometers high. Particle physicists have always had to deal with reams of data flooding from their detectors, but nothing like LHC. “The current experiments at CERN's Large Electron-Positron collider generate a few terabytes per year,” says David Strickland, a Princeton physicist working at CERN. “The new experiments will create 1000 times more data than that”—data that thousands of collaborators will need to find and use.
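The scale comparisons above are easy to check. The quick arithmetic below assumes era-typical media (650 MB per CD-ROM at 1.2 mm thick, and roughly 1 GB per PC hard drive); those capacities are my assumptions, not figures from the article.

```python
PB = 10**15  # one petabyte, in bytes

# Stack of CD-ROMs holding one petabyte
# (assume 650 MB per disc, 1.2 mm per disc).
cds = PB / (650 * 10**6)
stack_km = cds * 1.2e-3 / 1000
print(round(stack_km, 1))            # ~1.8 km: "nearly 2 kilometers high"

# One petabyte vs. a typical circa-2000 PC drive (assume ~1 GB each).
print(int(PB / 10**9))               # 1000000 drives

# LHC output vs. LEP output: 5 PB/year vs. "a few" (say 5) TB/year.
print(int(5 * PB / (5 * 10**12)))    # a 1000-fold increase
```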

    The researchers also need massive computer muscle to crunch the numbers. In SDSS, for example, an astronomer trying to puzzle through a possible case of gravitational lensing might need to sort through 10 million galactic objects in order to find an effect, using sophisticated statistical wizardry and careful mathematical filtering. “The resources required, for economic or political reasons, just cannot be created at any single location,” says Foster.

    Physicists now spend heaps of time locating data files and getting them processed. In searching for images representing how new particles created in a collision were slamming around the detector chamber, for example, a researcher may have to punch in a horrendous chain of computer commands to translate the raw numbers into a useful picture of the results.

    GriPhyN and DataGrid will also work together with the Particle Physics Data Grid (PPDG) at the California Institute of Technology (Caltech) in Pasadena to provide a connected computational lattice for big physics experiments. Harvey Newman, a physicist at Caltech, is a principal researcher on PPDG and also a senior member of the GriPhyN group. “We'd always planned to study data transfer and file caching in the short term, then build a longer life system,” Newman says. “Then GriPhyN came along.” Farther afield, the European Commission's Information Society Technologies program has just invited DataGrid to apply for €9.8 million (about US$8.6 million) to build research grids in Europe.

    The international culture of physics fosters such grand sharing, but pitfalls may loom. “We're all nervous about this,” says physicist Paul Avery, co-PI of the project at the University of Florida, Gainesville. “My experience in large software projects is that unless you sit on top of this all the time, you do diverge.” Newman was also nagged by doubts initially after hearing people discuss the idea at a meeting. “Some people said, ‘We'll build our national grids and then make them work together.' But this does not work,” Newman says. “Fortunately we haven't built anything yet, so there is a good chance that we'll all build the same thing.”

    Strickland, who is not directly involved in the grid construction, says that the grid builders appear to be taking the right tack by funding software engineering rather than just buying lots of new hardware: “They seem to be throwing the right resources at the problem.” But that alone is no guarantee of success, he cautions. “Obviously, we've got a long way to go.”


    Downloading the Human Brain, With Security

    1. Eliot Marshall

    Neuroscientists collect huge quantities of data on the human brain. But compared with their colleagues in physics, they are traditionally much less likely—for professional and personal reasons—to want to share them (Science, 1 September, p. 1458). Now a group at Rutgers University in Newark, New Jersey, is proposing a two-step, encrypted process for sharing information that would open the door to all legitimate researchers while imposing tough safeguards on its use.

    The Rutgers group is writing software for a Napster-like Web site that would make possible “peer-to-peer” sharing of brain-scan images, but would not contain images itself. Instead, the site would house an index of available data sets and a protocol for accessing them. Researchers willing to share brain images would register with the site and describe what was available and under what conditions, says Benjamin Martin Bly, a cognitive neuroscientist at Rutgers who, along with his boss, cognitive scientist Stephen Hanson, is developing the project. “You might look up language localization [in the brain],” Bly explains. “The server would respond with a list of 100 results,” each describing an experiment and the data, along with the conditions on access imposed by the donor. If the seeker agreed to the conditions, the central server would forward a request to the donor. If the donor also agreed, submitting an encrypted signal, the server would automatically trigger a “handshake” that would download the file.

    Bly says the system will “separate the information about what exists—which can be shared easily—from the actual experimental data.” The operators of the central site would never handle the raw data. This kind of protection, he adds, would reduce concerns about breach of confidentiality—always an issue in clinical studies—and increase donor confidence.

    The protocol would differ from Napster's, Bly points out, by letting donors control the release of data and keeping a record of each handshake. If a person claims to have data of a certain type but refuses to share it, “this protocol makes it immediately obvious.” Bly is designing the software with encouragement from the National Institute of Mental Health and hopes to have a test or “beta” version ready by January.


    Misconduct Alleged in Yanomamo Studies

    1. Charles C. Mann

    E-mail has been ricocheting among anthropologists as they nervously await the publication of a new book that charges some prominent researchers with professional misconduct—and much worse—in their studies of the Yanomamo, a native people in the Brazilian and Venezuelan Amazon. Written by journalist Patrick Tierney, Darkness in El Dorado (W. W. Norton)—an excerpt of which is scheduled for publication in The New Yorker next month—accuses anthropologists of creating a false picture of the Yanomami, manufacturing evidence, and perhaps setting off a fatal measles epidemic. “This is the Watergate of anthropology,” says Leslie Sponsel of the University of Hawaii, Manoa. “If even some of the charges are true, it will be the biggest scandal ever to hit the field.”

    Although few anthropologists have actually read the book, which will not be published until mid-November, it has already stimulated an enormous reaction. The American Anthropological Association (AAA) has promised to hold a session on the book at its upcoming annual meeting. Napoleon Chagnon, a prominent Yanomamo specialist now at the University of California, Santa Barbara, whose research is challenged by Tierney, has already refused to participate in what he calls “a feeding frenzy in which I am the bait.” (Instead, he is consulting libel lawyers.) Meanwhile, other researchers are recruiting statements to defend the late James V. Neel, a University of Michigan geneticist whom Tierney charges with distributing a measles vaccine in 1968 that may have worsened or even caused an epidemic that led to “hundreds, perhaps thousands” of deaths, say those who have read galleys of the book.

    “Yanomamo anthropology has been a battleground for years,” says one anthropologist with extensive experience in the area. “But the scale of these allegations is far beyond anything I've ever heard of before.” The researcher, who requested anonymity for fear of being drawn into litigation, adds, “The prime rule for anthropology is not to harm the people you're working with. … This book is apparently saying that researchers have grotesquely violated those standards for 30 years.”

    The debate over Darkness in El Dorado is the latest, biggest skirmish in a decades-long battle over the Yanomamo. Living in more than 200 small villages near the headwaters of the Orinoco River, the 24,000 Yanomami are among the least Europeanized people on Earth. Although missionaries contacted them in the 1950s, the first long-term anthropological study of the Yanomami was not published until 1968, when Chagnon published Yanomamo: The Fierce People. Based on his University of Michigan dissertation, the book was quickly acclaimed as a classic, selling almost a million copies and becoming fodder for introductory anthropology courses across the globe. Meanwhile, Chagnon entered into collaborations with Neel, who was beginning a long-term study of Yanomamo genetics, and Tim Asch, a documentary filmmaker. (They eventually made 39 films, several of which won awards.) Both collaborations dissolved in the 1970s, partly over Chagnon's belief that his work was not receiving proper credit. Asch died in 1994, Neel early this year.

    Even as Chagnon continued his research, other researchers began to question his description of the Yanomamo as aggressive and “liv[ing] in a state of chronic warfare.” The dispute grew heated in 1988, when Chagnon published an article (Science, 26 February 1988, p. 985) dismissing the common view that groups like the Yanomamo fight over scarce natural resources. Instead, he said, Yanomamo battles are mostly about women. Moreover, the killers—unokai, in the language—end up with dominant social positions that entitle them to more female partners, who provide them with more offspring, suggesting a genetic payoff for violence. At least three books attacked this sociobiological conclusion.

    Among other points, Darkness argues that Chagnon's picture of the Yanomamo is not only wrong, but that some evidence for it was manipulated. Tierney—who spent more than a decade researching the book, including 15 months in the field—alleges that the anthropologist staged many of the fights recorded in his films with Asch. Worse, Tierney claims, some of these phony wars turned into real wars, as Chagnon introduced steel goods that led to deadly violence.

    “There is no credible evidence to support Tierney's fantastic claims. …,” responds Chagnon, who rejected The New Yorker's offer to “submit to an interview.” “Intelligent people base their judgements on evidence. Only believers in conspiracy theories and a large number of cultural anthropologists from the academic left leap to conclusions that are not only not supported by the available scientific evidence but contradicts and thoroughly refutes them.”

    Tierney's investigation of a 1968 measles epidemic has drawn the most attention. On a research trip to the area early that year, Neel, Chagnon, Asch, and the other members of the University of Michigan team vaccinated many Indians with Edmonston B measles vaccine, which was discontinued in 1975 and was already being replaced by vaccines with fewer side effects. Because the epidemic seems to have started at the places the research team vaccinated, Tierney suggests that the vaccine may have contributed to what became a terrible epidemic. Afterward, Neel apparently gave contradictory accounts about the way the epidemic started and did not explain why he'd used an older vaccine than the one used elsewhere in Venezuela.

    In an e-mail to AAA officers that was leaked to the news media last week, Sponsel and Cornell University anthropologist Terence Turner—who are among the few anthropologists who have read the book—even speculate that Neel may have used the risky vaccine to test what they call his “fascistic eugenics” theory that dominant males like unokai could better survive catastrophes and pass on their genes.

    Angered by these allegations, Neel's colleagues are lining up rebuttals. Samuel L. Katz, a measles specialist at Duke University, says the vaccine simply is not deadly, even to people without prior exposure. Doctors have “given hundreds of thousands of doses to malnourished infants in Upper Volta (now Burkina Faso) and Nigeria with no severe consequences,” he argues in an e-mail passed on to Science. “Indeed, in the history of Edmonston B (a licensed U.S. product), I know of only two fatalities—two Boston children with acute leukemia under heavy chemotherapy.”

    The contretemps is not likely to end soon, although it may get better informed. Because Tierney is being kept mum by his publishers until the book appears, he cannot defend it. And some of his critics concede the oddity of attacking a work that they have not read. But even when both sides can fully argue their cases, in Sponsel's view, the debate will last a long time. “There's an incredible amount in the book,” he says. “People are going to be working at it for years to come.”


    Ice Man Warms Up for European Scientists

    1. Richard Stone

    After spending about 45 million hours in a deep freeze, Italy's “Ice Man” was thawed for 4 hours earlier this week in an Italian museum to allow scientists to snip out tiny fragments of bone, teeth, skin, and fat. Scientists hope that turning up the heat on the famous emissary from Neolithic Europe could help solve such lingering puzzles as who his kin were and what caused his death.

    Hacked from a glacier in the Ötztaler Alps in 1991, the 5200-year-old mummy, known as Ötzi, has already provided researchers with a breathtaking view of life in that prehistoric era. He carried a copper ax—a precious object indicating a high social rank, perhaps that of clan chieftain—and wore a waterproof grass cape much like those used by Alpine shepherds as late as the 19th century. Tattoos on his back and legs suggest that he practiced acupuncture—some 2 millennia before the therapy is described in Chinese records (Science, 9 October 1998, p. 242).

    Indeed, Ötzi appears to have had good reason to seek pain relief. A short man who may have lived into his 40s—a ripe old age in the Neolithic—Ötzi had arthritis and his guts were infested with eggs of the whipworm, a parasite that would have caused wrenching pain. Needle marks near acupuncture points for the bladder hint at the possibility of a urinary tract infection as well.

    Scientists studied Ötzi intensively in 1991, but a bitter custody fight between Austria and Italy imposed a 9-year hiatus on invasive research. After precise measurements showed that the mummy had been discovered 93 meters south of the Austrian-Italian border, Italian officials in 1998 installed Ötzi in a refrigerated room with a peephole for viewing in the South Tyrol Museum of Archaeology in Bolzano. Austrian and Italian scientists, who had by then mended fences, began planning new lines of inquiry.

    On 25 September, a team of forensic scientists from the University of Verona, Italy, and the University of Glasgow, U.K., reexamined the body for signs of trauma. Their work in the months to come is aimed at answering one major question, namely, how he died. One hypothesis is that he simply fell asleep, exhausted, and froze. But Ötzi also had a few broken ribs, hinting at an accident. Damaged tissue in Ötzi's brain suggests a third hypothesis—a stroke. An effort to test this idea could get under way next year, says anthropologist Horst Seidler of the University of Vienna, who chaired the committee that selected the current projects. He and a team at Wake Forest University in Winston-Salem, North Carolina, next year will examine the timing of Ötzi's rib injuries.

    Another key project seeks to clarify Ötzi's roots. In 1994, a mitochondrial DNA study showed that his genetic stock most closely matches that of modern central and northern Europeans (Science, 17 June 1994, p. 1775). Two Italian groups hope to extract better DNA samples from bone and narrow Ötzi's ancestry in hopes of learning more about migration patterns in Neolithic Europe. Complementing the DNA studies is an effort to analyze the strontium and lead isotopes in Ötzi's tooth enamel. Comparing the isotopic ratio with samples from 5200-year-old geologic layers in the region can help pinpoint where Ötzi spent his childhood.

    One intriguing project in the offing would look at the process of mummification by comparing Ötzi's soft tissues—particularly fatty acid content—with samples from Juanita and other mummies found in the Andes. Seidler is negotiating a joint study with the discoverer of the Peruvian mummies, Johann Reinhard, and the University of Arequipa. But he worries about the effects of Juanita's current tour of Japan, which involves stops in more than 20 cities. “I fear that all the shows and environmental changes would not be so helpful,” he says.

    Last week's quick analysis of Ice Man evoked no such concerns in Seidler. Monday, he says, “was a great day for my South Tyrolean friends.”


    Structural Biology Gets a $150 Million Boost

    1. Robert F. Service

    Structural biology got a shot in the arm this week. The U.S. National Institute of General Medical Sciences (NIGMS) selected seven centers to be the initial test-beds for structural genomics, a field that aims to work out the structures of large numbers of proteins using robotics and advanced computers. The 5-year, $150 million program is intended to speed up the determination of three-dimensional, atomic-scale maps of proteins, which in turn should accelerate discovery of new drugs by giving pharmaceutical companies a closeup look at the proteins they are trying to target.

    “This is a major undertaking,” says Gaetano Montelione, a structural biologist at Rutgers University in Piscataway, New Jersey, and leader of the Northeast Structural Genomics Consortium. “It's just a starting point for structural genomics. But it's a good start.”


    The program grew out of the widespread recognition that the Human Genome Project and similar gene-sequencing efforts are only the first step to understanding biology and disease. Although genes harbor the cell's storehouse of genetic information, proteins carry out the bulk of cellular chemistry. Genetic sequences determine the order of amino acids in the proteins they code for, but the chainlike protein molecules generally fold into 3D shapes that cannot be predicted. Fortunately, proteins tend to cluster into families that share similar overall 3D shapes, or “folds.” By finding examples of each of these folds, structural genomics researchers hope to identify patterns that will enable computer models to predict the shapes of unknown proteins from their amino acid sequences.

    That's a fairly safe bet, says Andrej Sali, a protein modeling expert at the Rockefeller University in New York City and member of the New York Structural Genomics Research Consortium (NYSGRC). Using the estimated 800 known separate protein folds, Sali and his colleagues have been able to create computer models for at least portions of 200,000 proteins. As such, he says the new structural genomics research effort will help modelers achieve “huge leverage” in understanding novel proteins.

    Officials at the National Institutes of Health (NIH) say they hope the new program will enable them to determine the structure of as many as 10,000 proteins in the next 10 years. That's just a smattering of the more than 1 million proteins thought to be present in nature. Nevertheless, it would mark a surge in the pace of discovery for structural biologists, who have collectively solved the structures for only about 2000 unique proteins in the past 4 decades. It's also expected that the coming bolus of protein structures will reveal a large fraction of the estimated 1000 to 5000 protein folds thought to exist.

    The centers, each a consortium of institutions ranging from universities and national labs to companies, plan to take slightly different paths to obtaining their protein structures. The TB Structural Genomics Consortium, a center headed by Tom Terwilliger of Los Alamos National Laboratory in New Mexico, for example, is planning to focus its structural work on Mycobacterium tuberculosis, the organism that causes tuberculosis, in an effort to spur new treatments for the disease. The NYSGRC, meanwhile, will take a more varied approach. “We're doing proteins from bacteria to man” in an attempt to come up with new clues to conditions ranging from antibiotic resistance to cholesterol metabolism, says Rockefeller's Stephen Burley, who heads the five-institution consortium.

    Each new center is slated to receive about $20 million over 5 years, a number that will vary depending on indirect costs paid to the institutions involved. But more money is in the pipeline. In July, NIGMS released another request for additional structural genomics centers to be funded next year. And when the current program is finished, NIH is expected to select two or three of the current crop of centers and ramp up their funding considerably.


    Astronomers Measure Size of a Giant's Sighs

    1. Dana Mackenzie*
    1. Dana Mackenzie is a writer in Santa Cruz, California.

    To stargazers, Zeta Geminorum makes up the kneecap of one of the twins in the constellation Gemini. To astronomers, it is also one of the brightest Cepheid variables in the sky—giant yellow stars that grow dimmer and brighter over periods of days or weeks. Astronomers have long presumed that the surface layer of a Cepheid variable, called the photosphere, physically expands and contracts to cause this odd behavior. Now, they have caught Zeta Geminorum in the act of swelling and shrinking, making it the first Cepheid that astronomers have actually seen change its size.

    “It's been something that we've always wanted to do,” says graduate student Ben Lane of the California Institute of Technology (Caltech) in Pasadena, part of the five-person team that made the observations. Earlier astronomers inferred the size of the oscillations indirectly, through the well-known phenomenon of the Doppler shift. As a Cepheid variable grows, its surface moves closer to Earth, causing its light to appear bluer; as it shrinks, the surface moves away from Earth and the light is redshifted.

    Seeing the size change directly, however, has been a daunting challenge in precision astronomy. The angular diameter of Zeta Geminorum, as seen from Earth, is only about 1.5 thousandths of an arc second, or 0.0000004 degrees, and the change in its diameter over a 10-day cycle is only one-tenth of that. Picking out such a small change is equivalent to spotting a basketball on the moon—a feat beyond the ability of either the largest Earth-based telescopes or the Hubble Space Telescope.
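The unit conversion behind those figures is a one-liner (3600 arc seconds per degree):

```python
# 1.5 milliarcseconds expressed in degrees.
arcsec_per_deg = 3600
theta_deg = 1.5e-3 / arcsec_per_deg
print(theta_deg)        # ~4.2e-07, i.e. about 0.0000004 degrees

# The diameter change over one 10-day cycle is only a tenth of that.
print(theta_deg / 10)   # ~4.2e-08 degrees
```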

    Astronomers have now gotten around that problem by linking two telescopes into a 110-meter-wide interferometer. The Palomar Testbed Interferometer (PTI) in California has as much angular resolving power as a telescope with a mirror larger than a football field. (No such telescope exists, of course.) That makes the interferometer perfect for detecting small motions in relatively nearby objects, such as the wobble of a star with a large planet orbiting it or the pulsing of a Cepheid variable. Nevertheless, previous attempts at the PTI, as well as at two other large interferometers in Arizona and France, failed to separate the expected motion from the random jitters caused by Earth's atmosphere.

    Lane attributes the Caltech astronomers' success, reported in this week's issue of Nature, to three factors. First, they hiked the interferometer's resolution by retuning the instrument to collect a shorter wavelength of infrared light than it had gathered in previous attempts. Second, the group filtered out atmospheric turbulence with a type of optical fiber that some of the other groups did not have. The third ingredient, Lane says, was “persistence. It took a lot of observing time,” which had to be squeezed in around the higher profile search for extrasolar planets.
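The first of those factors follows from the basic scaling of interferometry: the finest angular scale an interferometer resolves goes as wavelength over baseline, λ/B, so a shorter wavelength on a fixed 110-meter baseline sharpens the instrument. The specific near-infrared wavelengths below (2.2 μm and 1.6 μm) are my illustrative assumptions, not figures from the article.

```python
import math

BASELINE_M = 110.0                            # PTI baseline, from the article
MAS_PER_RAD = 180 / math.pi * 3600 * 1000     # milliarcseconds per radian

def resolution_mas(wavelength_m):
    # Fringe-spacing angular scale of an interferometer: lambda / B.
    return wavelength_m / BASELINE_M * MAS_PER_RAD

# Illustrative near-infrared wavelengths (assumed):
print(round(resolution_mas(2.2e-6), 1))   # 4.1 mas
print(round(resolution_mas(1.6e-6), 1))   # 3.0 mas after retuning shorter
```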

    By last Christmas, Lane already had clear evidence of the star's growth and shrinking, and by this spring he had the most accurate estimate ever of the angular size of the oscillations. Then, by dividing the angular size of the oscillations into their absolute size (as inferred from redshift measurements), Lane calculated the distance of Zeta Geminorum as 1100 light-years from Earth.
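The distance step is small-angle geometry: a feature of linear size s subtending an angle θ (in radians) lies at distance d = s/θ. The sketch below uses the article's 0.15-milliarcsecond diameter change; the matching linear diameter change is an illustrative value I chose so that the result lands near the article's 1100 light-years, not a number from the Nature paper.

```python
# Distance from angular size: d = (linear size change) / (angular change in rad).
ARCSEC_PER_RAD = 206265
M_PER_LY = 9.4607e15                  # meters per light-year

delta_theta_mas = 0.15                # ~1/10 of the 1.5 mas diameter (article)
delta_theta_rad = delta_theta_mas * 1e-3 / ARCSEC_PER_RAD

delta_size_m = 7.6e9                  # assumed linear diameter change (Doppler)

distance_ly = delta_size_m / delta_theta_rad / M_PER_LY
print(round(distance_ly))             # ~1100 light-years
```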

    But the significance of the result extends far deeper into space. “In time, measurements like these will simplify and therefore strengthen astronomers' measurements of the distances of galaxies, and thus the size and age of the universe,” says Jeremy Mould, an astronomer at the Australian National University in Canberra. That is because Cepheid variables are used to calibrate the distances to nearby galaxies, which in turn form a reference for estimating the distance to those farther away. This leads to an estimate for the Hubble constant—the ratio of the recession speed of the galaxies to their distance from Earth—which, finally, constrains the age and fate of the universe.
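That final ratio is Hubble's law, v = H0 × d. As a toy check, with an assumed H0 of 70 km/s per megaparsec (a typical estimate of the era, not a value from the article):

```python
H0 = 70.0                 # km/s per Mpc (assumed)
d_mpc = 100.0             # a galaxy 100 megaparsecs away
v_km_s = H0 * d_mpc
print(v_km_s)             # recedes at 7000.0 km/s

# Conversely, astronomers estimate H0 as the ratio v / d over many galaxies.
print(v_km_s / d_mpc)     # recovers 70.0
```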

    “Because we still have a 10% uncertainty, we're not making a dent in the Hubble constant today,” says Shrinivas Kulkarni, who supervised Lane's research. “The excitement is that the technique does work. As other optical interferometers come online, they will produce a dozen similar measurements with accuracy to a few percent. This is like an initial public offering.” One new interferometer that will probably improve upon the accuracy of the PTI measurements is the Center for High Angular Resolution Astronomy, a 400-meter-wide array of six telescopes on California's Mount Wilson, which will be dedicated on 4 October and is expected to start operations by the end of the year.


    Panel Cites Barriers to Government Service

    1. Jeffrey Mervis

    Why don't more scientists want to work as top officials in Washington?

    The answer, according to a panel of veteran government policy-makers, is a lack of attention to science by incoming Administrations, a slow appointment process, and outdated rules to prevent conflicts of interest. The problem is particularly acute among high-tech industry executives, according to a new report from the National Academies of Sciences and Engineering and the Institute of Medicine, which urges the next president to give industry a bigger place at the policy table. “We don't want to lower the standards,” says Mary Good, dean of engineering at the University of Arkansas and chair of the panel. “But we think that it's fair to ask if the world has changed so much that the rules need to change, too.”

    Industry officials don't disagree that recruitment is a serious issue. But many say that considerations such as salary levels and career prospects are bigger disincentives to government service, and that it's also possible to serve the government without working in Washington full-time. “It's not a career path for most people in Silicon Valley,” says Tim Newell, an aide to science adviser Jack Gibbons during Clinton's first term and currently managing director at E*Offering, an Internet investment banking firm in San Francisco. “The last few years have seen huge growth and unprecedented economic opportunities,” he adds. “Those tremendous opportunities, plus the barriers mentioned in the report, make it harder to attract quality people to Washington.”

    The eight-page report is a follow-up to a 1992 study by the academies and similar exercises by others carried out during an election year. It urges the incoming Administration to include scientists on its transition team and to appoint a presidential science adviser early enough to play a role in screening for other top positions. For example, President Reagan's decision to wait until May 1981 to appoint his first science adviser, George Keyworth, “was a big problem at the start,” notes panelist John McTague, a retired Ford Motor Co. executive and acting presidential science adviser during Reagan's second term. “His first two science budgets were woefully inadequate, not out of malice but out of ignorance.”

    The science adviser is one of 50 science and technology slots, from the director of the National Institutes of Health to the undersecretary for technology in the Commerce Department, that the panel labeled as “most urgent” of rapid appointment. The panel was also concerned that the Clinton Administration included fewer people from industry in its first batch of nominees for top science jobs than did the Reagan and Bush presidencies. (The report did not tally appointments made after the second year in office.) It blames the decline, from 27% in 1982 to 12% in 1994, in large part on the screening process, which it says has grown so cumbersome that it deters potential hires. Indeed, several executives noted that the long delay between initial consideration and confirmation—data from the panel show that a majority of people now wait more than 4 months—is a big disincentive for industrial leaders, who must put their enterprises on hold while awaiting resolution of their job status.

    Part of the problem is the set of rules requiring divestiture of stock, stock options, and other financial stakes that could be seen as posing a conflict of interest. To try to avoid these problems, the panel calls for the creation of a bipartisan panel, involving the White House and Congress, that would examine ways “to reduce unreasonable financial and professional losses” for nominees.

    However, industry officials say a more important barrier than the ethics rules is the fact that a job in Washington may not look as good on the résumé of a rising executive as it might on the CV of a university administrator. The economy also plays a role in determining the pool of applicants, say industry officials. And good times don't last forever, Newell notes. “Just wait until the next recession,” he says. “That could change things in a hurry.”


    A New Look at How Neurons Compute

    1. Marcia Barinaga

    The eyes, considered windows to the soul, may offer views of the brain as well. Researchers seeking a simple system in which to study how neurons perform computations—such as tallying the myriad of incoming signals they receive and concluding whether or not to fire—have for decades focused on the retina, which contains neurons that fire only in response to objects moving in certain directions. By studying how those neurons calculate the direction of movement, they hoped to learn general lessons about how brain neurons compute. But the studies were handicapped because no one knew which retinal neurons do the math. Now, on page 2347, a team led by W. Rowland Taylor of Australian National University in Canberra and David Vaney of the University of Queensland in Brisbane, Australia, reports evidence that the directional computations take place in retinal neurons called ganglion cells.

    “This is really important work,” says Alexander Borst, a neuroscientist at the University of California, Berkeley—especially because it offers researchers a welcome chance to explore how neurons compute in a well-defined system. The Australian work might not be the last word, however. Another group has evidence that the site of computation lies elsewhere—a discrepancy that is likely to spark a flurry of additional studies.

    The first sign that retinal ganglion cells do computations came in 1965 when Horace Barlow of the University of Cambridge in the U.K. and William Levick, a co-author on the current paper, showed that some of the cells respond only to objects moving in a certain direction. The ganglion cells are third or fourth in a chain of neurons triggered when light strikes the retina. Barlow and Levick proposed that neurons somewhere in this path calculate movement direction from the timed interplay of excitatory and inhibitory neural impulses.

    In their scenario, when an object moved in the neuron's preferred direction, excitatory impulses would reach the target neuron first, triggering positively charged sodium ions to flow into the cell—an excitatory current. But when the object moved in the opposite direction, inhibitory and excitatory signals would arrive together. The inhibitory signal would cause chloride ions to enter the cell, their negative charge effectively canceling the excitatory effect. A decade later, Nigel Daw's team at Washington University in St. Louis confirmed that inhibitory impulses are required for directional selectivity, but a key question remained. Do the inhibitory and excitatory impulses converge on the ganglion cells or on earlier cells in the pathway?

    To answer that question, Taylor used a method called patch clamping, which enables researchers to detect electrical changes in a single cell—in this case, ganglion cells in cultured rabbit retinas. Taylor and postdoc Shigang He found, as expected, that movement in a cell's preferred direction caused a greater excitatory current to enter the cells' dendrites, the structures that receive incoming signals. But that didn't pinpoint the site of computation; cells earlier in the pathway might be analyzing motion and delivering a larger excitatory signal to the ganglion cells in response to movement in the preferred direction.

    To find out, the researchers shifted the voltage across the dendrite membrane of individual ganglion cells in a way that would favor inhibitory currents over excitatory ones. They found increased inhibitory currents in response to movement in the nonpreferred (null) direction, suggesting that inhibitory inputs play a role in the ganglion cell's response. Next they flooded the interior of the dendrites with chloride to block the inhibitory inward flow of chloride ions; that change abolished directional selectivity. These results provide “strong evidence” that the computation is going on in the ganglion cell dendrites, says Borst.

    What's more, indirect evidence suggests that ganglion cells are capable of something called shunting inhibition, a phenomenon in which chloride channels are opened by inhibitory signals, but there is no net flow of chloride through them unless an excitatory signal comes along at the same time and drops the voltage across the membrane. This voltage change drives chloride ions through the open channels into the dendrite, where their negative charge electrically nullifies that incoming excitatory signal. In neighboring dendrites the calculation may be different; excitations arriving without inhibition could add up to help make the neuron fire. This model provides a much more complex vision of neuronal computation than does the view in which a neuron simply sums up all the excitatory and inhibitory signals it receives.

    Shunting inhibition has been found in brain neurons, but the computations they perform are not known. Assuming that retinal ganglion cells do in fact calculate direction, researchers can investigate whether shunting inhibition occurs in these cells and, if so, how it contributes to computation, something many have been eager to do in neurons of well understood function, says California Institute of Technology neuroscientist Christof Koch.

    But Lyle Borg-Graham, a neuroscientist at the French research agency CNRS in Gif-sur-Yvette, is not convinced that retinal ganglion cells have computational powers. He reported last July at a meeting in Brugge, Belgium, that his team has evidence that the critical direction-selective computation in turtle retinas occurs earlier in the signaling pathway. “I doubt that the different interpretations may be ascribed to using the turtle as opposed to the rabbit,” says Borg-Graham.

    The reason for the discrepancy is not clear. But when it is resolved, both camps agree, this particular window into the brain may provide quite an exciting view.


    Bones Decision Rattles Researchers

    1. Constance Holden

    The Interior Department has decided to turn over the 9300-year-old remains of Kennewick Man to the five Indian tribes that have laid claim to them. But scientists suing to study the remains, found 4 years ago on the banks of Washington's Columbia River, say they will continue to pursue their case.

    Last March, federal Judge John Jelderks gave the government until September to try to get some DNA out of the bones before deciding whether to allow academic anthropologists to study them (Science, 17 March, p. 1901). Three labs have since failed to obtain any DNA, and thus could not suggest a link to a particular people or culture. This week, however, Interior Secretary Bruce Babbitt said the bones have been studied enough and that they should go to the Indians under the controversial Native American Graves Protection and Repatriation Act (NAGPRA).

    NAGPRA applies to remains that are “native American” and “culturally affiliated” with existing groups. But many scientists say the Kennewick skull bears a greater resemblance to early Pacific rim inhabitants than to modern native Americans. And there is no cultural evidence connecting him to existing tribes: The only artifact accompanying the bones was a projectile point in Kennewick's pelvis. Nonetheless, in a letter to the Army Corps of Engineers, Babbitt said reports by four scientists have persuaded him that “the geographic and oral tradition evidence establishes a reasonable link between these remains and the present-day Indian tribe claimants.” He referred to the “continuity of human occupation” in the area for more than 10,000 years and the fact that oral traditions support a very long residency for the tribes.

    Scientists who want to study the bones aren't happy with Babbitt's decision. It is “absolutely absurd” and “cannot be supported either scientifically or from a legal standpoint,” says Alan Schneider, a Portland, Oregon, lawyer for the scientists. Anthropologist Richard Jantz of the University of Tennessee, Knoxville, one of the plaintiffs, says “I can't imagine how the government can defend its decision in court.” No trial date has been set.


    Funding Hike, With Strings Attached

    1. Annika Nilsson,
    2. Joanna Rose*
    1. Nilsson and Rose are science writers in Stockholm.

    STOCKHOLM—Seeking to boost basic research and improve science education, the Swedish government earlier this month announced plans to pump an additional $100 million into academic research. But the funds come with some strings—most of the money is earmarked for political priorities—and this has prompted grumbling in the academic community.

    The government this year will spend about 60% of its $1.6 billion R&D budget on basic research, most of which is handed out as block grants directly to university faculties. The cash infusion is slated primarily for a new government agency, the Science Council, which will give grants for basic research in natural sciences, technology, medicine, social sciences, and the humanities. The move is viewed as a criticism of the Swedish university system's ability to set its own priorities. “There are signs that the state has lost faith in the universities,” says Boel Flodgren, rector of Lund University. “We lose a lot of influence.”

    Academic research has been squeezed in recent years by rising salary and overhead costs, which, along with demands for matching funds from external financiers, consume a growing portion of the block grants. That has forced scientists to rely more heavily on outside funding sources such as the European Union, which sets its own priorities. The upshot, says Flodgren, is that “research originating in curiosity has gotten the short end.”

    The government shares that sentiment, and in a research policy document presented on 15 September it decided to put more money into a new Science Council, which will start its activities next year. “Our idea is to fund basic research and to protect the freedom of research. Through the Science Council, scientists themselves will decide what to prioritize,” says research and education minister Thomas Östros.

    However, the government has already decreed that much of the $100 million should be spent on new postdoc career opportunities and research in eight areas: social sciences and humanities, biosciences and biotechnology, information technology, educational science, materials science, health care research, environment and sustainable development, and the arts. Other research fields will have to vie for funds from the rest of the budget. The university share, meanwhile, will be channeled into shoring up recently established universities and creating 16 centers for doctoral studies in subjects ranging from genomics to history. Östros describes the new centers as an effort to revitalize doctoral education by drawing together resources from several campuses. “Our goal is to bridge the gap between the universities and colleges,” says Östros. “Universities can count on getting even more money for graduate studies in the coming 10 years.”

    University officials are giving the government plans a lukewarm reception. “They could have left more responsibility to the new Science Council,” says Dan Brändström, chair of the committee responsible for organizing the council. Lars Lönnroth, a literature professor at Göteborg University, sees the proposal as overly pragmatic: “Research is viewed as a tool for solving problems in society, and thus the system favors mediocre researchers satisfied with meeting the demands of others.” And although some of the priority areas have merit, adds Anders Flodström, rector of the Royal School of Technology in Stockholm, “the message of the proposal is clear—we have to look for both money and our freedom in collaborations with the private sector.”


    Group Urges Southwest Migration Corridors

    1. Jocelyn Kaiser

    Conservationists with an ambitious goal of “rewilding” North America have released their vision for an ecologically rich swath of the Southwest, where subtropical and temperate biomes meet. The Sky Islands Wildlands Network is a blueprint for protecting species across 4.2 million hectares of Arizona, New Mexico, and Mexico by encouraging the return of wolves, jaguars, bears, and mountain lions.

    This 220-page report is the first detailed product to emerge from The Wildlands Project, an organization of scientists and activists that advocates restoring big carnivores and linking large wilderness areas across North America (Science, 25 June 1993, p. 1868). Sky Islands, for example, would establish corridors along a historic wolf and jaguar migration route. These connections are at the heart of the plan, a web of wilderness reserves and human “buffer zones.”

    The plan, which would take decades to implement and comes without a price tag, is intended to guide the actions of government agencies and conservation groups, notes Roseann Hanson of the Sky Island Alliance in Tucson, Arizona, another project sponsor. One key element is to enlarge wilderness areas. Some other steps are already under way, says Hanson. Federal biologists are reintroducing the Mexican wolf, and the Clinton Administration's policy to ban new roads in national forests is “a nice bonus.”

    Dozens of conservation groups and a few ranchers have endorsed the plan, which is not as radical as it may sound, asserts the network: Only 5.5% of the targeted lands are private, and 95% are already managed for wildlife. Even so, the network is bracing for resistance from ranchers and off-road vehicle enthusiasts, among others.

    Some biologists have qualms as well, because Sky Islands bucks the trend of focusing on habitat types; rather it assumes that providing habitat for top predators will also protect other species. The Wildlands Project's scientific director, conservation biologist Michael Soule, responds that just because it's “based on a different set of premises” doesn't mean it's not done “according to scientific principles.” The plan will now go out for review by 20 or so biologists. Next on the agenda are regional plans for Maine, the southern Rockies, and linking Yellowstone with the Yukon.


    CERN's Gamble Shows Perils, Rewards of Playing the Odds

    1. Charles Seife

    Many seemingly robust findings in particle physics and astronomy have crumbled while less convincing data have brought glory. CERN is hoping for the latter in its quest for the Higgs boson

    “It would be unprecedented in the history of science,” says Michael Riordan, a particle physicist at the Stanford Linear Accelerator Center (SLAC). “It would be the greatest signal since the discovery of quarks, and they can't chase it down.”

    Riordan is referring to a last-gasp chase by physicists at CERN, the European particle physics laboratory in Switzerland, for solid evidence of a particle known as the Higgs boson. Experiments using CERN's Large Electron-Positron collider (LEP) turned up hints of the elusive Higgs particle just before LEP's scheduled shutdown (Science, 22 September, p. 2014). The tantalizing results won LEP a monthlong stay of execution, but nobody expects CERN to nail down whether the discovery is real or a mirage. “Oh, no—that is quite excluded,” says Peter Igo-Kimenes, a particle physicist in charge of combining data from the four LEP experiments.

    Igo-Kimenes is certain that even an extra month of experiments—about as much overtime as LEP can get without triggering harsh penalty clauses in builders' contracts—will not boost the data across the threshold particle physicists use to separate true discoveries from the chaff of statistical fluctuations: five standard deviations, or five sigma. The LEP data are languishing in the three- to four-sigma range, far short of what is needed to declare a stone-solid discovery.

    The endgame drama at CERN has focused attention on just what it takes to stake a claim to a particle or an event in areas of science where the data are fuzzy, sightings are fleeting, and probabilities rule. Glimpse your new fundamental particle or extrasolar planet at the right moment, with the right degree of confidence, and you win the discoverer's laurels. Otherwise, you are just another precursor or confirmer, fodder for footnotes. Or, if you are really unlucky, your seemingly robust result will turn out to be a product of experimental bias, and you may wind up humiliated in front of your peers.

    Inspiration, perspiration, or …?

    When “seeing” is statistical, credit for a discovery can seem a matter of divine whim.


    To physicists and astronomers, the five-sigma rule is the acid test for judging discoveries and assigning credit. So why do the physicists at LEP persist when they know they can't possibly make the grade? Because they also know that reality is a lot messier than theory. In practice, the five-sigma rule is far from golden. Discoveries that seem statistically unassailable can vanish overnight, while flimsier looking findings have entered the award rosters and the textbooks without cavil. Qualitative factors, such as the reputation of a team of scientists, whether a finding conforms to prevailing theory, and how and why the team announces a discovery, can determine whether it wins the Nobel Prize or languishes as an also-ran.

    Vanishing probabilities

    To a statistician, such vagaries may seem absurd. On the surface, finding a new particle should be little different from figuring out whether a medication is effective or whether a coin is biased. Numerically, a five-sigma result corresponds to less than one chance in 3 million that a sighting is due to chance (see sidebar). Even a much weaker three-sigma result in particle physics means that the scientists are 99.9% sure that their signal didn't appear by accident. By definition, then, a mere one in every 1000 three-sigma results should be wrong. Right?

    Not exactly. “Half of all three-sigma results are wrong,” says John Bahcall, a particle physicist and astrophysicist at Princeton University. “Look at the history.” He's right: Not only do a surprising number of three-sigma results vanish on closer inspection, but an astounding number of five- and six-sigma results have done so, too.

    In the mid-1980s, for example, physicists at the Organization for Heavy Ion Research (GSI) in Darmstadt, Germany, looked well on their way to the Nobel Prize. Two separate experiments had found peaks in their data, hinting at a new particle in the 600- to 700-keV (kiloelectron volt) range. It wasn't predicted by the Standard Model, but the signal was strong—more than six sigma, corresponding to a one-in-a-billion chance of error. Today, the mysterious particle is gone forever. “We have given up the experiments,” sighs GSI physicist Helmut Bokemeyer. “We have not been able to see what we had seen before.”

    There was probably nothing to see in the first place. Experiments at Brookhaven, Argonne, and elsewhere tried finding similar peaks and found nothing, and the GSI result died in a firestorm of controversy (Science, 10 January 1997, p. 148). “That's a fairly sad episode,” says Brookhaven's William Zajc.

    What went wrong? Bokemeyer thinks that the GSI experiments were highly “optimized” to find the peak. In other words, change the experiment ever so slightly, and the peak disappears, which explains why the result is so difficult to reproduce. “Otherwise, we have no idea what it could be,” he says. There are other possible explanations. For instance, the researchers might have begun an experimental run and looked for a growing peak to make sure that the equipment was set up properly. If there was no indication of a bulge in the data, they would change aspects of the experiment and try again.

    Some physicists believe that this habitual restarting of the experiment may have introduced an unintentional bias into the results. Subtle statistical effects like this, or problems with equipment, or a slight error in calculation, or an overlooked source of conflicting data, can throw off statistical calculations in a tremendous way. “It's the systematic errors that kill you,” Bahcall says. Bahcall knows that the perils of failure against the odds stretch far beyond particle physics: Seven years ago he saw it strike on a cosmic scale.

    Vanishing planets

    “It was the thing that one fears more than anything else in one's scientific life, and it was happening,” says Andrew Lyne, an astrophysicist at Jodrell Bank Observatory in Manchester, U.K. “I certainly at the time thought that it was the end of my career.”

    In January 1992, Lyne was celebrating a monumental discovery. He and his team had spotted what appeared to be the first planet circling a foreign star. Their radio telescope had found a pulsar whose clocklike pulses sped up and slowed down in a way that suggested it was being tugged around by an invisible orbiting body. “Indeed, based upon a straightforward statistical analysis, the effect was very highly significant—hundreds of sigmas, a certainty,” Lyne says. “We did all sorts of tests on the data and tried to think of all the possible ways we might be making a mistake.” After finding their procedures sound, the team published their discovery: the first extrasolar planet. “It received a lot of interest, as you can imagine, from the media and others,” he says.

    Bahcall, then president of the American Astronomical Society, called a special session together to discuss the discovery at the society's annual meeting. But then disaster struck. “Ten or 12 days before I was due to give that talk, I discovered the error and the true source for the periodicity,” Lyne says. “It was rather subtle.”

    When timing signals that come from pulsars, astronomers have to correct for Earth's motion around the sun, which introduces a tiny periodic distortion in the signal. To save computer resources, Lyne's group used an approximation of Earth's orbit for the preliminary calculations. For a more detailed analysis, they planned to switch to a more accurate model and redo their work from scratch. Unfortunately, with one of the 200 or so pulsars that they looked at, they forgot to perform the more accurate calculation and based their conclusions on the rough approximation. “The full high-precision analysis was not carried out,” Lyne says. The slight inaccuracy in accounting for Earth's orbit led to a periodic signal that mimicked a planet around the pulsar. Hundreds of sigmas crumbled to dust just before Lyne was to present his findings.

    Lyne gave a presentation anyhow—a retraction. “It was an extremely difficult time,” he says. “It was a large audience of extremely eminent astronomers and scientists.” But at the end of his presentation, the audience broke out into a long, loud round of applause. “Here I was, with the biggest blunder of my life and …” Lyne pauses, gathering himself. “But I think that many people have nearly done such things themselves.”

    Lyne's reputation didn't suffer; other planet hunters weren't quite so lucky. Peter Van de Kamp of Swarthmore College in Pennsylvania will always be known as the one who found the planet around Barnard's Star. It was a planet that made it into the textbooks, even though it didn't exist.

    According to George Gatewood, an astronomer at the Allegheny Observatory in Pittsburgh, Pennsylvania, Van de Kamp was a victim of an equipment change. A lens assembly in the telescope had a color error that shifted redder stars with respect to their bluer counterparts. During the time that Van de Kamp observed Barnard's Star (which is red), Swarthmore replaced the assembly with one that had less color error. Barnard's Star seemed to move compared with the background. Years later, Swarthmore discovered a problem with the new assembly and went back to the old one. The red stars moved back into their original position. It looked as though Barnard's Star had wobbled, pulled by an imaginary planet.

    Vanishing chances

    As in the heavens, so, more subtly, on Earth. Going by statistics, if physicists discovered a new five-sigma particle every day, mistaken sightings ought to turn up about once every 10,000 years. In fact, the history of high-energy physics is littered with five-sigma mirages. One was the “split A2,” an unexpected double peak that, in the 1960s, seemed to signal the existence of two particles where only one was expected. “It was believed by everybody,” Bahcall says. But as scientists made more measurements, the two peaks filled in, and the mysterious second particle vanished. The story replayed itself in the early 1980s, when physicists at Stanford, at DESY in Hamburg, and elsewhere found something that looked remarkably like a Higgs boson at an energy of about 8 giga electron volts (GeV), well short of the 114 GeV where CERN's current Higgs candidate lurks (Science, 31 August 1984, p. 912). The discovery, dubbed the zeta particle, had a five-sigma significance, but it didn't survive for long. “They kept measuring, and it disappeared,” SLAC's Riordan recalls. Physicists on the zeta particle team still suffer from the memory.

    Decades of such reverses have taught experimental physicists that five-sigma rules and one-in-a-million errors are not to be taken literally. “[The statistical analysis] is based upon the assumption that you know everything and that everything is behaving as it should,” says Val Fitch, who won the 1980 Nobel Prize in physics for discovering charge-parity violation in K mesons. “But after everything you think of, there can be things you don't think of. A five-sigma discovery is only five sigma if you properly account for systematics.”


    But if good stats alone can't guarantee a discovery's acceptance, neither do mediocre ones necessarily spell its doom. In fact, many key advances in modern physics have been accepted before passing the five-sigma test. “Neutrino mass is taken seriously, even though it's not five sigma currently,” CERN's Igo-Kimenes points out—partly, Bahcall adds, because the discovery was one that physicists had long expected. “You ask, ‘Does it contradict other things you already know; does it fit in with theory and experiment?'” If an observation seems to fit, scientists need less convincing to accept it, whereas extraordinary claims require extraordinary proof.

    In this regard, CERN's Higgs candidate scores high. Calculations from SLAC show that the Higgs particle should appear at an energy below 140 GeV—right in the range where the LEP team is looking. Other omens are less favorable. To get stronger confirmation of a particle's existence, physicists will often crank up their accelerator's power to see how the effects they are observing change with increasing energies. The Higgs hunters at CERN can't do that, because they are already running LEP near its maximum power to squeeze the most work out of it during its remaining weeks. Furthermore, the hints of success that CERN has reported have come from only two of LEP's four detectors. One of them sees a strong effect (three events that hint at a Higgs particle), a second sees a weak effect (one Higgs candidate), and the others see nothing. The putative Higgs particle is not behaving quite as expected: So far it appears to be sticking to one set of decay paths and ignoring other decay paths that theory says it should be taking. Moreover, LEP has been detecting three times as many of the particles as theory says it should, if the Higgs particle does indeed have the 114-GeV energy that CERN's results suggest.

    Such anomalies raise the stakes for LEP's final month in two ways. On the one hand, they make it appear more likely that the collider is chasing a will-o'-the-wisp. On the other hand, if LEP now confirms those results—if its two other detectors spot Higgs candidates, or if the Higgs candidates start using other decay paths—the CERN sightings become more believable, even with few sigmas to back them up. “People will be very, very excited,” says Sau Lan Wu, a CERN physicist working on one of the four detectors.

    The final reason for CERN to try to beat the odds is that it has nothing to lose. If LEP doesn't find the Higgs particle, then another collider—the Tevatron at the Fermi National Accelerator Laboratory, CERN's archrival in Batavia, Illinois—probably will. By staking a claim to the Higgs particle now—and pegging it to a specific energy, 114 GeV—CERN will claim a share of the glory should Tevatron confirm the discovery, according to Riordan. “They can get a piece of it by writing an ‘evidence-for' paper,” he says. They might even be able to lobby for a further LEP extension and aim for a bona fide discovery, Wu hopes. And if the worst happens and the results disappear, then CERN's Higgs particle will join the ever-growing parade of ghost discoveries and phantom particles vanquished by the progress of science.


    A Greek Letter, Demystified

    1. Charles Seife

    In principle, screening out statistical noise from a particle physics experiment is a lot like determining whether a coin is biased toward heads or tails. Random events such as coin flips and particle detections tend to follow a bell curve distribution. Sigma—the standard deviation—is related to the fatness of the curve and gives a handy way to quantify how far from the center of the distribution your results are.

    With the coin experiment, the center of the curve represents what you'd ideally get with a perfectly fair coin—50 heads and 50 tails, which is zero sigma away from the center. As you rack up a larger and larger surplus of heads (or tails), you move away from the center of the curve. A result of 55 heads and 45 tails is one sigma away from the center; there's about a 16% chance that an unbiased coin will give that result. On the other hand, 60 heads and 40 tails is a two-sigma result, which has only about a 2% chance of happening with an unbiased coin. A run of 65 heads and 35 tails would be pretty damning evidence for bias; it's a three-sigma result, which a fair coin would yield only 0.1% of the time. In a sense, a three-sigma result means that you're 99.9% sure that your coin is biased. At five sigma—the acid test applied to fundamental particles and extrasolar planets—the odds of a fluke dwindle to 1 in 3.5 million.
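    The sidebar's arithmetic can be checked with a few lines of Python. This is an illustrative sketch, not part of the article: it uses the normal approximation to the fair-coin distribution (mean 50, standard deviation 5 for 100 flips), and the function name `sigma_and_tail` is an assumption of this example.

    ```python
    import math

    def sigma_and_tail(heads: int, flips: int = 100) -> tuple[float, float]:
        """For a fair coin, the heads count in `flips` tosses is roughly
        normal with mean flips/2 and standard deviation sqrt(flips)/2.
        Returns (sigma, one-sided probability of a surplus at least that large)."""
        mean = flips / 2
        sd = math.sqrt(flips) / 2            # sqrt(100)/2 = 5 for 100 flips
        sigma = (heads - mean) / sd
        # One-sided tail of the standard normal via the complementary error function.
        tail = 0.5 * math.erfc(sigma / math.sqrt(2))
        return sigma, tail

    for h in (55, 60, 65):
        s, p = sigma_and_tail(h)
        print(f"{h} heads: {s:.0f} sigma, tail probability {p:.4f}")
    ```

    Running it reproduces the sidebar's figures: roughly 16% at one sigma, 2% at two, 0.1% at three; pushing the same formula to five sigma gives a tail near 1 in 3.5 million.
    
    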


    Salted Clouds Pour More Rain on Mexico

    1. Richard A. Kerr

    Water-attracting salt grains wring more water from reluctant rain clouds in Mexican experiment, boosting the beleaguered science of rainmaking

    Skies have been looking bleak for nearly 2 decades for the science of rainmaking—or weather modification, as it's more properly called. That wasn't always the case. In the late 1940s, the young science shook off 100 years of charlatanism, and pioneering researchers tried everything from seeding Missouri clouds to taming hurricanes. By the 1970s, federal funding for weather modification was running upward of $20 million per year. Then the scientific winds shifted. Practically nothing was working in experiments intended to increase precipitation, and scientists didn't understand what was happening in the little work that showed any promise (Science, 6 August 1982, p. 519). Research funding in the United States completely dried up. And, although professional rainmakers still plied their trade—today a dozen programs seek to bring some relief to 18 million hectares of drought-stricken Texas—the science of weather modification has been moribund ever since. Now, a glimmer of hope out of South Africa by way of Mexico has researchers guardedly upbeat again.

    At the 13th International Conference on Clouds and Precipitation held last month in Reno, Nevada, cloud physicist Roelof Bruintjes and his colleagues at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, reported that their 3-year cloud-seeding experiment in drought-prone northern Mexico provides strong support for a new approach pioneered in South Africa in the mid-1990s that apparently wrings more moisture out of continental storm clouds. The method depends on dispersing tiny water-loving salt particles into developing clouds in order to grow raindrops faster and more efficiently. “I'm very impressed,” says cloud physicist Johannes Verlinde of Pennsylvania State University, University Park. “I think it's really promising. The results show pretty conclusively that the seeded storms are producing more rain.” Obstacles remain, notes weather modification researcher Daniel Rosenfeld of The Hebrew University of Jerusalem, including understanding all the effects of seeding, but the Mexican experiment is “an important first step toward getting a powerful cloud-seeding method.”

    Squeezing a cloud.

    Particles from burning flares beneath a Mexican cloud (top) can extract more rain as the cloud grows (bottom).


    The good news on weather modification got its start serendipitously, when a highly instrumented Learjet came upon some “huge” water drops as it flew through the flanks of a large thunderstorm over South Africa. Measuring 4 to 6 millimeters in diameter, such big drops shouldn't have been there, according to all that scientists knew about clouds. On further investigation, South African weather modification researcher Graeme Mather (who died in 1997) realized that the storm was developing over a large paper mill belching tons of particles into clouds overhead. These presumably organic particles turned out to be hygroscopic, able to attract water in vapor form the way salt in a shaker gets damp on a humid day.

    Prompted by this finding, Mather, Deon Terblanche of the South African Weather Bureau in Bethlehem, and their colleagues fashioned a 5-year randomized cloud-seeding experiment. They designed a flare, based on a U.S. Navy fog-producing flare, that would yield hygroscopic salt particles (mostly potassium chloride) averaging about 0.5 micrometer in diameter. That was 1/20 the size of any hygroscopic cloud-seeding agent used previously. Mounting 24 flares on the wing of a plane, they flew just beneath puffy, developing clouds whose updrafts would draw the particles up through the cloud's base. Earlier projects had targeted higher, colder parts of clouds with the intent of forming more ice particles.

    If the flare particles worked the way the pollutant particles seemed to, Mather reasoned, they would increase the efficiency with which a cloud converted water vapor into raindrops. In untreated summer clouds over eastern South Africa where the experiment was conducted, the abundant but nonhygroscopic particles typical of continental air would form the cores of water droplets once the air rose and cooled enough to saturate it with water. But the available water would be spread over a large number of uniformly small droplets that would all fall at about the same pace. As a result, collisions would be infrequent, and few droplets would coalesce into large enough drops to fall as rain before the updraft flushed all the moisture out the top of the cloud.

    Hygroscopic particles, according to theory, would get more rain to form sooner. They would draw water to them, starting droplet formation earlier, and produce a range of droplet sizes, allowing larger, faster falling drops to collide with and coalesce with smaller, slower falling drops. The result would be more raindrops sooner in the half-hour life of a rainy South African cloud and thus more rain on the ground.

    Mather's randomized trial of such hygroscopic cloud-seeding produced positive results, as he and his colleagues reported in 1997, but critical statisticians and weather modification's bleak track record required more evidence. So Bruintjes and his colleagues set out to test whether the same method in different hands and applied in a different place would work as well. It did. In Mexico, for example, among the largest quarter of storm clouds, those randomly chosen for seeding were producing 45% more rain as determined by radar than were nonseeded clouds 30 minutes after seeding began. That difference was statistically significant at the 95% confidence level. “They are exciting results,” says Daniel Breed of the NCAR group. “There's obviously something going on.” Terblanche takes the Mexican results “as a verification of the South African results in a different part of the world. It's confirmation.”
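A randomized comparison like this one can be checked with a permutation test: shuffle the seeded/unseeded labels and ask how often chance alone produces so large a gap. Here is a minimal Python sketch of that logic; the function name is ours and the rain totals are invented for illustration, not the experiment's data:

```python
import random

def permutation_p_value(seeded, unseeded, trials=10_000, seed=0):
    """One-sided permutation test: how often does randomly relabeling
    the storms give a seeded-minus-unseeded mean difference at least
    as large as the one actually observed?"""
    rng = random.Random(seed)
    observed = sum(seeded) / len(seeded) - sum(unseeded) / len(unseeded)
    pooled = list(seeded) + list(unseeded)
    n = len(seeded)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if diff >= observed:
            hits += 1
    return hits / trials

# Invented radar-estimated rain totals, for illustration only.
seeded = [120, 95, 140, 160, 110, 130]
unseeded = [80, 70, 100, 90, 85, 95]
p = permutation_p_value(seeded, unseeded)
print(f"p = {p:.4f}")  # p < 0.05 corresponds to the 95% confidence level
```

With these made-up numbers, relabelings almost never reproduce the observed gap, so the test rejects chance at well beyond the 95% level.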

    Neighbor effect?

    A seeded cloud may make adjacent clouds rain more, too.


    Cloud physicists in and out of the weather modification community are also encouraged, but they and the experimenters themselves still have reservations. For one, the Mexican results may be statistically significant, but funding—all of which had come from the Mexican state of Coahuila, just across the border from Texas—dried up once the northern Mexico drought eased. The funding loss prevented a fourth season of operations that should have strengthened the results. And neither the Mexican nor the South African experimenters measured actual rainfall on the ground, only the strength of the radar reflection from raindrops. Because radar is far more sensitive to the size of raindrops than to their number, a few very large drops could have made it appear that seeding triggered more rain than actually reached the ground. In his own computer modeling of hygroscopic seeding, “we definitely see an increase [of rainfall] on the ground,” says Zev Levin of Tel Aviv University, “but it's not as much as the radar shows. You still need to do measurements on the ground.”
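The size bias arises because the conventional radar reflectivity factor sums the sixth power of drop diameter, while the water a drop carries grows only as the third power. A toy Python comparison (drop sizes chosen for illustration, in relative units) makes the gap concrete:

```python
def reflectivity(diam_mm, count):
    """Radar reflectivity factor contribution: count * D^6 (relative units)."""
    return count * diam_mm ** 6

def water_content(diam_mm, count):
    """Liquid water contribution: count * D^3 (relative units)."""
    return count * diam_mm ** 3

# One 4-mm drop holds exactly as much water as 64 one-mm drops...
assert water_content(4.0, 1) == water_content(1.0, 64)

# ...yet returns 64 times the radar signal, so a few giants inflate the echo.
ratio = reflectivity(4.0, 1) / reflectivity(1.0, 64)
print(ratio)  # 64.0
```

The same rain mass, packaged in larger drops, can thus look like far more rain to a radar than ever reaches the ground.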

    Especially worrisome is that researchers don't fully understand how this seeding works. Although rainfall is enhanced in the 20 to 30 minutes after seeding starts, as the coalescence hypothesis predicts, the most dramatic increases come more than 30 minutes after seeding, and seeded storms rain 30 minutes or more longer than unseeded storms. “It points out there are many things we don't understand about clouds,” says Verlinde. Such mechanistic “black boxes” in earlier weather-modification experiments helped trigger the field's implosion in the 1980s. More field studies and more modeling will be required to sort out the possibilities. They include precipitation-enhanced downdrafts that feed back into updrafts. But whatever the explanation, the current results are bringing researchers the first few drops of hope after a long, dry spell in their field.


    Japan's Whaling Program Carries Heavy Baggage

    1. Dennis Normile*
    1. With reporting by Jeffrey Mervis.

    The Institute of Cetacean Research in Tokyo is the scientific arm of Japan's controversial whaling research program. What has it contributed to the field?

    TOKYO—Science is supposed to be an international enterprise, but when it comes to research that requires killing whales, Japan is pretty much on its own.

    Japan's recent decision to add two species to its scientific haul—which until now has targeted minke whales—has revived the question of whether the real purpose of the program is research, as Japan claims, or keeping the country's whaling industry alive. And although the political and ethical aspects of the debate tend to overwhelm any discussion of the science, the answer seems fairly clear: Although researchers agree that the work is scientifically rigorous, its focus on providing data for managing whales as a sustainable marine resource has yielded results of marginal interest to the mainstream marine mammal community. “I think that they are contributing to a large, existing body of knowledge,” says ecologist Randall Davis of Texas A&M University in Galveston. “But it's not startling new information.”

    The basis for Japan's whaling program is a clause in the 1946 International Whaling Convention that allows taking whales for scientific research. Japan has used the clause, which effectively provides a loophole through the 1982 global moratorium on commercial whaling, to allow its scientists to catch and analyze hundreds of minke whales each year as part of an ongoing study of whale stocks. Its efforts generated a political storm that had stalled over the years—until Japan announced this spring that it planned to extend the hunt to a small number of Bryde's and sperm whales. In July, the International Whaling Commission (IWC) registered its unhappiness, saying that the new plan was seriously flawed.

    However, Japan pushed ahead, and last week President Clinton responded by banning Japanese whaling ships from U.S. waters, a largely symbolic act that nonetheless underscores U.S. concerns about Japan's research program. “We think that they are abusing the rights afforded them under the convention, and we certainly see no good reason to expand the research to Bryde's and sperm whales,” says Mike Tillman, science director at the U.S. government's Southwest Fisheries Science Center in La Jolla, California, and deputy U.S. commissioner to the IWC, which enforces the convention.

    The war of words has largely ignored the body of work by Japanese scientists. “We're proud of the research we do here,” says Seiji Ohsumi, director of the Institute of Cetacean Research (ICR), the scientific arm of Japan's whaling program. “You just can't do this work without taking the whales.” But biologist Greg Donovan, a 20-year employee of the IWC's secretariat in Cambridge, U.K., and editor of its Journal of Cetacean Research and Management, is more circumspect. “I don't think anyone can say there aren't any scientific results coming out of this,” he says. “It is really a value judgment: Do the results justify sacrificing the animals?” And on that question, he says, “there isn't unanimity.”

    The 1946 whaling convention, which 40 nations have pledged to follow, allows countries to issue permits for taking whales for research purposes. The permits must be reviewed by the IWC's scientific committee, a panel of 120 scientists who examine the aims and methodology of the research plan, its likely scientific payoff, the availability of nonlethal alternatives, and the impact of the research on whale stocks. The committee's review is only advisory, however, and countries are free to issue annual permits. Canada, the United States, the Soviet Union, South Africa, and Japan were among several countries that did so before 1982, but in recent years Japan has stood alone.

    Japan has long argued that the moratorium was adopted because of insufficient and misunderstood data. So in 1987—the year the moratorium went into effect—it created the new institute out of an older entity and gave it the job of supplying the IWC's scientific committee with a steady flow of data.

    The institute, with 35 scientists and technicians plus support staff, has an operating budget of $73 million. It recoups slightly more than half that amount from selling the whales it captures, a sore point among environmentalists who see the income as proof of the program's commercial focus. Except for small contributions from the fisheries industry, the government provides the balance. Its research ships and crews were previously engaged in commercial whaling, and are now owned and operated by a subcontractor.

    ICR operates a program in the Antarctic that, since 1987, has taken 400 minke whales a year. A second program, involving the killing of 100 North Pacific minke whales annually, ran from 1994 through 1999. The U.S. government unsuccessfully urged international sanctions in response to the launch of both programs. This year's new initiative allowed for capturing up to 100 minke, 50 Bryde's, and 10 sperm whales in the North Pacific in addition to the minke whales taken in the Antarctic. Crews returned home last week with a take of 88 whales, including 43 Bryde's and five sperm whales.

    At the helm.

    Seiji Ohsumi leads the Japanese institute into controversial waters.


    The research program involves both sighting surveys and capturing whales. The ICR scientists record 100 data points, including size, weight, age, sex, and stomach contents. They also collect tissue samples to check for such things as accumulated heavy metals, which can be an indicator of pollution levels. Donovan says that the Japanese program is one of the primary sources of data for the committee's efforts to model minke whale stocks for a resource management plan under development. An IWC review of the North Pacific program, completed earlier this year, concluded that the data were relevant to the plan but ducked the question of whether they could have been obtained by nonlethal means.

    That's not unusual, says John Bannister, an Australian zoologist who chaired the review panel and who is a former chair of the IWC's scientific committee. When there's no consensus, he says, the reports “reflect differing views but come to no conclusions.” The sticking point for the committee was the value of the additional data gathered from taking whales, such as their age. Bannister says it is generally agreed that the only reliable way to determine age is to study the buildup of protein in the whale's ear, which can only be done “off a carcass.”

    Biologist Steven Katona, president of the College of the Atlantic in Bar Harbor, Maine, says that such data are not a good enough reason to kill the animals. “In my opinion, most people wouldn't sacrifice an animal solely to know how old it is,” he says. But Doug Butterworth, an applied mathematician at the University of Cape Town in South Africa who specializes in fisheries assessment, argues that taking the whales is justified. “Age data is important, because it provides information about the potential growth rate of the population and the levels at which it can be safely harvested.”

    There is similar disagreement over the value of studying stomach contents. “Much of what we need to know about [feeding habits] is known to some extent, and there are other techniques for studying diet,” Katona says. Biopsy darts to retrieve tissue, for example, would provide an indication of caloric intake, although Katona acknowledges that they would not reveal what species were being eaten. Ohsumi insists that the only practical way of studying what whales eat is to study their stomach contents, which requires killing them. He says that whales are probably in competition with humans for fish and that understanding this interaction, a key element of the new program with Bryde's and sperm whales, will be essential for managing all marine resources.

    Tillman scoffs at that rationale, however. “We think that any research should focus on the management of whale stocks, not fisheries,” he says. “Toward that end, there's no good reason to learn the amount of prey being eaten.”

    Butterworth sees “a bit of a con” on both sides of the debate. He agrees with those who say Japan's major objective is to keep its whaling fleet intact. But he notes that Japan wouldn't be doing any research at all, including the noninvasive sighting surveys, if the program didn't recoup part of its expenses from selling the whales. And he accuses opponents of making misleading conservation claims—which ignore the fact that minke whales are relatively plentiful and have never been listed as endangered—when their real objection is based on ethics.

    Robert Brownell, a member of the scientific committee and a colleague of Tillman's, admits that taking a few sperm whales isn't a threat to the stocks. “But there is a question of trust … and where this is all leading,” he says. What scientists fear, he says, is that Japan will use scientific data to justify a return to the bad old days of unregulated commercial whaling.

    Japan has never made a secret of its hope of resuming commercial whaling, Ohsumi says. But he thinks it should be done right. “We don't understand why whaling shouldn't be managed in the same way other commercial marine resources are managed,” he says.


    A Tentative Comeback for Bioremediation

    1. Todd Zwillich*
    1. Todd Zwillich is a free-lance writer based in Washington, D.C.

    After years of relative obscurity, research on pollution-eating bugs is coming of age. But DOE is not about to field-test any genetically modified organisms soon.

    At the dawn of the age of bioengineering, in 1972, General Electric researcher Ananda Chakrabarty applied for a patent on a genetically modified bacterium that could partially degrade crude oil—sparking visions of a brave new world in which toxic wastes would be cleaned up by pollution-gobbling bugs. Researchers quickly jumped on the bandwagon, transferring genes between microbes in the hope of engineering hybrids with a taste for pollution, while a host of “bioremediation” companies sprang up to cash in on the trend. But those hopes were soon dashed. Immobilized by the high costs and technical difficulties of this research, the companies soon went bankrupt. And experimentation retreated from biotech start-ups to government and academic laboratories, where it has remained in relative obscurity.

    Now, some 30 years later, bioremediation is slowly and gingerly staging a comeback. Naturally occurring microbes have been tried at a few sites with some limited success. Since 1998, for example, one group has been successfully cleaning up a carbon tetrachloride spill in Michigan using natural bacteria imported from California. Elsewhere, strains of Pseudomonas bacteria have succeeded in remediating halogenated hydrocarbons like trichloroethylene. And in October, the Department of Energy (DOE) will perform its first-ever field test of bioremediation to clean up one of its heavily polluted sites.

    With one exception, however, the pollution-gobbling bugs released to date have not been genetically altered—and DOE is not going to risk it, either. Public resistance to unleashing recombinant microbes—even in field tests—is too great, says Aristedes Patrinos, associate director of DOE's Office of Biological and Environmental Research. Even so, many scientists in this reemerging field, including some at DOE, believe that genetically modified microbes must eventually be employed if bioremediation is ever to succeed.

    For the new efforts, researchers are taking what William Suk, who directs bioremediation funding for the National Institute of Environmental Health Sciences (NIEHS), calls “a more measured approach” than in the past. Then, he notes, microbiologists keen on engineering pollutant-metabolizing bacteria quickly learned in lab tests that their bugs had trouble competing with native microbes in their target soil. And those that were effective did their jobs much more slowly than expected.

    Now researchers are trying to avoid these problems by taking into account the chemical properties of the soil and the geological characteristics of polluted areas as well as the properties of the pollution-eating microbes. The DOE effort, for example, will use microbes that emerged naturally from the site they will treat, which is contaminated with heavy metals and radionuclides left over from decades of nuclear weapons programs.

    This first field test will occur adjacent to a particularly nasty site at Oak Ridge National Laboratory in Tennessee known as S-3. Now capped by a parking lot, S-3 was once a series of ponds contaminated with radioactive uranium, cesium, and cobalt mixed with mercury and other toxic heavy metals. Without any human prodding, several species of bacteria have adapted to feed on components of the toxic soup that have leached out into the surrounding soil. For instance, these bacteria can transform dangerous metals into less mobile forms that don't dissolve in groundwater. But the natural metabolism of the bacteria is too slow to handle the job, so researchers will add nutrients such as lactate and acetate to the soil in an effort to stimulate the local microorganisms into a toxic feeding frenzy.

    “Our ultimate goal is to harness natural processes to immobilize harmful metals,” says Anna Palmisano, who manages bioremediation projects for DOE's Natural and Accelerated Bioremediation Research (NABIR) program. If the strategy works, NABIR will next transplant natural bioremediating bacteria from other areas to S-3 to see how well they operate in the new environment.

    Conspicuously absent from NABIR's field-testing program are experiments with genetically altered microbes. Although NABIR funds some of this research in outside laboratories, safety concerns, regulatory hurdles, and anticipated negative public reactions are keeping NABIR from considering field-testing recombinant bioremediators “in the near future at all,” says Palmisano.

    But Oak Ridge microbiologist Robert Burlage and others insist that recombinant technology is exactly what is needed. The problem with naturally occurring microbes, he says, is that “some sites are so bad they will kill off a bacterium as soon as it hits.” And no natural bug is equipped to deal with the “witch's brew” of pollutants present at sites like S-3, the way a specially designed microbe could. Deinococcus radiodurans is one example, says Burlage. This “extremophile” is able to thrive under radiation doses of 1.5 Mrads, up to 300 times the fatal dose for humans. But it can't on its own detoxify the other chemicals that often accompany radioactive contamination.

    In January, geneticist Michael J. Daly and colleagues at the Uniformed Services University of the Health Sciences in Bethesda, Maryland, announced in Nature Biotechnology that they had transferred into D. radiodurans a gene from the common lab bacterium Escherichia coli that enables D. radiodurans to resist toxic mercury(II). The result was a microbe that could convert mercury(II) to less toxic elemental mercury, while withstanding high levels of radiation. Daly and colleagues have since added other genes that code for enzymes capable of metabolizing the toxic organic chemical toluene. The researchers wound up with a microbe able to metabolize a heavy metal and an organic toxin in the presence of radiation, at least under lab conditions.

    At Stanford University, in as-yet-unpublished work, environmental engineer Craig Criddle and colleagues have also designed bioremediating microbes fit for a witch's brew. Criddle's team has taken a gene from a carbon tetrachloride-metabolizing bacterium known as Pseudomonas stutzeri strain KC and transferred it into a heavy-metal metabolizer called Shewanella oneidensis. Now, says Criddle, they have a recombinant strain that can both degrade carbon tetrachloride and immobilize heavy metals. But there's a catch: In lab tests, when the strain metabolizes carbon tetrachloride, it leaves behind chloroform—“and that can leave you worse off than you were before,” says Criddle. So that's the next problem his team is tackling, with funding from NIEHS.

    At Michigan State University in East Lansing, James Tiedje is trying a combination approach to degrade polychlorinated biphenyls, or PCBs. He starts with a natural bacterium that can consume PCBs. Then he adds genetically altered strains of two other bacteria, Rhodococcus RHA1 and Burkholderia LB400, both designed to remove chlorine and break the phenyl rings in PCBs. The mop-up effort by the engineered strains “can remove the majority of the remaining PCBs, but not all” in lab tests, says Tiedje about his as-yet-unpublished work.

    In theory, says Tiedje, these PCB-eating bacteria should be ready for field-testing “by the next warm season,” when they would be most effective. But strict regulations on recombinant bugs mean that these and other engineered microbes are unlikely to see the light of day anytime soon. The Environmental Protection Agency must approve any field tests of recombinant organisms. So far, out of 35 recombinant microbes approved for a variety of agricultural and other uses, only one bioremediator—a Pseudomonas species that fluoresces when it contacts naphthalene—has made the grade.

    Suk of NIEHS and Burlage chafe at the sluggish pace with which the field is moving; in particular, they would like DOE and other funding agencies to push harder to bring recombinant bacteria to the field. “There are plenty of toxic waste sites far away from population centers that would be ideal for testing,” asserts Suk. “Those are the sites to do demonstration research. We need to take some chances to restore [toxic sites] faster, better, and cheaper than we are now.”

    But DOE, which has some 3000 sites to clean up, is not budging. Says Patrinos: “If we rush into field-testing of recombinant microbes and it fails, we may be worse off in the long run.”


    Critics Say Rulings Give State U. License to Steal

    1. David Malakoff

    U.S. Supreme Court rulings that give states more protection from patent infringement suits could be a windfall for research universities

    The U.S. patent system is supposed to level the playing field for inventors. But recent Supreme Court decisions may have given states, including research universities, a leg up on the competition by making them immune from suits over patent infringement. Some lawmakers and biomedical executives are pushing Congress to pass legislation closing what they see as a potential multibillion-dollar loophole in the patent laws. But some academics and state officials say that Congress should wait to see if a problem develops before acting.

    At stake is the ability of private software, biotech, and publishing companies—and even poets and musicians—to recover lost profits from state universities, hospitals, and other agencies that have copied or used their work without paying a fee. Critics say the rulings, issued last October, will tempt states to become intellectual property pirates, helping themselves to everything from patented genes to copyrighted textbooks, while at the same time shielding their own increasingly valuable patent portfolios from infringement claims. “It's inequitable … states are now in the enviable position of having their cake and eating it, too,” says Q. Todd Dickinson, head of the U.S. Patent and Trademark Office (PTO). But law professor Peter Menell of the University of California (UC), Berkeley, predicts that the rulings “will have more of a symbolic than substantive impact.” So far, he notes, states have claimed immunity in only a few cases, with mixed results.

    “Bizarre” judgment

    The debate centers on two highly technical constitutional rulings. In the cases, collectively known as Florida Prepaid, a private bank charged that a college savings program run by the state of Florida infringed on a financial patent it had obtained. In narrow 5-4 votes last October, however, the high court upheld the state's claim that it was immune from the federal lawsuit under the 11th Amendment to the U.S. Constitution, which shields states from many kinds of claims. The justices also declared unconstitutional the Patent Protection Act, which Congress had passed in 1990 to overturn an earlier Supreme Court ruling that questioned the long-standing policy of treating state patent holders the same as private entities.

    The decisions sparked fierce criticism. “Truly bizarre,” Charles Fried, President Ronald Reagan's solicitor general and now a professor at Harvard Law School in Cambridge, Massachusetts, wrote in The New York Times. If the decision stands, he and other critics claim, research labs and hospitals could use patented tests without paying royalties. They could also get into the manufacturing business, producing cheap knock-offs of popular biomedical products without fear of paying damages. State officials could even buy a single copy of a software program and copy it, while state university professors could do the same with chemistry textbooks—perhaps while offering the politically popular justification that the rip-off saved money for taxpayers. “The temptation to play Robin Hood may prove irresistible,” patent attorney James Gardener of Portland, Oregon, warned in the January issue of Nature Biotechnology.

    At the same time, states could use the rulings to protect their own patent hoards from court challenge, notes Dickinson. Immunity represents “a potential windfall for the states,” he believes, noting that state universities acquired at least 13,000 patents between 1969 and 1997, accounting for nearly 60% of the patents granted to all institutions of higher education. Overall, PTO statistics suggest that about 2% of all “utility patents”—the most valuable type of patent—have gone to state institutions in recent years (not counting those awarded to labs and agencies of the federal government). The University of California, for instance, last year held nearly 2000 U.S. patents that earned the school nearly $80 million, with just five inventions garnering nearly 70% of the total. State schools will face an overwhelming incentive to claim immunity to protect the income that flows from such valuable patents, the critics say.

    Even some legal scholars who doubt that states will become patent thieves agree that some public universities may wield immunity to avoid paying royalties on previously patented research methods that—intentionally or unintentionally—become embedded in their own science patents. Under pressure to protect every potential source of basic research funding, “state institutions may toe and possibly cross the line,” Menell concludes in a paper to be published in the Loyola Law Review.

    So far, no one can say how many state institutions are already crossing the line. Indeed, statistics are so scarce that Senator Orrin Hatch (R-UT), chair of the Senate Judiciary Committee, has asked the General Accounting Office, Congress's investigative arm, to examine the issue. In the meantime, critics of Florida Prepaid point to a quartet of recent cases that raise potential complications. In one, the state of Texas last year successfully used the Supreme Court rulings to fend off a copyright infringement claim by an artist who accused state officials of stealing his idea for a license plate design. In another, the University of Houston got a federal judge to throw out an academic's claim that the school's press had improperly reprinted her work.

    The results were less clear-cut in two science-related cases involving the University of California. In one, involving lucrative gene-engineering patents held by the biotech giant Genentech and the university, the school won a court ruling that it could claim immunity, but its impact was unclear as the parties settled out of court (Science, 26 November 1999, p. 1655). In the other case, in which New Star Lasers of Roseville, California, tried to invalidate a university patent, a federal district judge rejected the university's claim of immunity. In a withering opinion, Judge William Shubb noted that the school's overseers wished “to take the good without the bad. The court can conceive of no other context in which a litigant may lawfully enjoy all the benefits of a federal property right, while rejecting its limitations.”

    University officials decline to comment directly on the New Star Lasers case, noting that they are discussing a settlement. But Marty Simpson, an attorney in the UC General Counsel's office, says he “strongly disagrees” with the notion that state universities should be treated like any other patent holder. Instead, he believes they should be treated more like the federal government, whose liability is limited. For instance, while a losing company may have to pay triple damages in a traditional infringement case, the federal government's liability is limited only to documentable losses. Given that reality, “it becomes hard for federal legislators to argue that it's somehow shocking and highly impractical for states to be allowed to do the same thing,” Eugene Volokh, a law professor at UC Los Angeles, writes in a recent Southern California Law Review article.

    Risking backlash

    In general, Menell believes state infringement of others' intellectual property rights will be “unintentional, episodic, and relatively rare” due to a broad array of political, economic, and legal factors. States that aggressively infringe, for instance, are likely to face an intense political backlash and lose potential marketing partners. In addition, although they can't win damages in federal court, patent holders can still ask federal judges to order specific state officials to stop infringing and file damage claims in state courts.

    Many industry executives, however, say pursuing such remedies would be cumbersome and expensive. “We don't have the time or money to become—or hire—experts on the property law of 50 states,” says the staff attorney for one small biotechnology company. As a result, several coalitions of patent attorneys and businesses have been urging the PTO and Congress to craft legislation that would nullify the Florida Prepaid decisions. But Dickinson says a “daunting legal landscape” may make it difficult to devise a solution that will survive scrutiny by the Supreme Court.

    One option is to require states to waive their right to immunity in exchange for seeking federal research grants. But skeptics say that approach—modeled on existing laws that force states receiving federal highway funds to strengthen transportation safety rules—would be unwieldy and could undermine government efforts to encourage public-sector researchers to innovate by promising them patent rights. “I would not support legislative action that would penalize our colleges and universities by withholding needed funds simply because state legislatures are unwilling to waive their sovereign immunity,” says Marybeth Peters, head of the U.S. Copyright Office. She also worries that the court could strike down the conditions as coercive and, thus, unconstitutional.

    A more promising approach, say some constitutional scholars, is legislation introduced by Senator Patrick Leahy (D-VT) that would reverse the rulings. Believing “it would be naïve” to rely on the “commercial decency” of state governments to avoid problems, Leahy proposes to allow states to obtain new patents, trademarks, and copyrights only if they renounce immunity and accept an infringement liability scheme similar to the federal government's. Each state “would be given a real choice,” says Peters: “whether it is better to be a player in the system or an outlaw.” The bill stagnated this year, but a Leahy aide predicts quick passage next session. The House Judiciary Committee, which held hearings on the issue this summer, is working on its own version.

    Even that solution, however, could be rejected by some states and voided by an increasingly skeptical Supreme Court majority, observers warn. “The initial ‘fix' failed miserably,” says Gardener, referring to the 1990 Patent Protection Act, “and it is unclear that a second ‘fix' will fare any better.” He would prefer to see a nationwide campaign to convince state legislatures that it is in their long-term business interests to renounce immunity. “Persuading states to waive their sovereign immunity,” he says, “is the only surefire method.”

  18. Taxonomic Revival

    1. Elizabeth Pennisi

    By putting museum collections online and training students to be computer- and molecular biology-savvy, taxonomists hope their field will thrive in the new millennium

    The biodiversity crisis is not just about the perilous state of plants and animals. Accumulated knowledge about each species is also under threat. For several decades, the plight of pandas, whales, woodpeckers, and butterflies has regularly made headlines, while scores of conservation organizations, government agencies, and private foundations have worked to stem the decline. But only recently has attention turned to protecting the other side of biodiversity.

    One “hotspot” of our knowledge of organisms is the drawers and cabinets full of animal hides, bones, bodies, and mounted plant specimens warehoused in natural history museums, herbaria, and what were once the zoology, entomology, and botany departments of universities. Another hotspot is the aging taxonomists and systematists who are retiring, taking with them their in-depth understanding of whole groups of organisms. The preservation of both types of knowledge has finally begun to attract attention from the conservation community, biologists, and—perhaps most important—funding agencies.

    Two decades ago, taxonomy and systematics appeared to be on the way out, pushed aside by the more glamorous discipline of molecular biology. Government funding for this field, widely perceived as stodgy, lapsed in the 1980s, and at universities across the country, taxonomists lost office space, positions, and respect.

    Now this discipline is remaking itself into a more rigorous, hypothesis-driven science. Increasing numbers of systematics researchers have embraced molecular biology techniques and evolutionary principles. Universities are once again hiring them, and in 1995, the U.S. National Science Foundation (NSF) created a program designed to promote this resurgence. With about $4 million a year, Partnerships for Enhancing Expertise in Taxonomy (PEET) is creating a new generation of systematists comfortable with molecular and computer tools as well as with the microscope and collecting kit.

    And just as critical, PEET awardees are also helping to spawn a new field: biodiversity informatics. PEET scientists and others are feverishly putting collections data online. Although daunting, the task is critical, says ecologist Jim Reichman, director of the National Center for Ecological Analysis and Synthesis at the University of California, Santa Barbara, as these data sets provide a baseline for the next generation of biodiversity studies, in large part by providing a historical context. “Our hope is that the bioinformatics will give biodiversity a brighter future,” says Patrick Crist, a conservation biologist with the U.S. Geological Survey in Moscow, Idaho.

    From ledgers to keyboards

    Together, the world's natural history museums house about 3 billion specimens, some accompanied by notes about how these organisms lived, reproduced—even what they ate. Collections can date back centuries, and paleontologic repositories provide a view of life going back millions of years (see sidebar). With these records, researchers can track changes in distribution through time and therefore assess the impact, say, of global change.

    But because these data are spread across many institutions, they have been notoriously hard to compile and use. A decade ago, anyone interested in learning about a particular species had to scan index cards and ledger books in which numerous biologists had recorded what they had collected and where. To see what was contained in other museums, the researcher actually had to visit them. Eventually, say leaders in biodiversity informatics, that will no longer be necessary: Researchers will need only log onto a Web site that gives them inventories of what each museum contains, sortable by species, geographic location, and perhaps even habitat.

    The University of Kansas Natural History Museum and Biodiversity Research Center is at the forefront of this new movement. “It has done a lot in terms of leading the way in the integration and coordination of collections” online, says Terry Gosliner, who studies mollusks at the California Academy of Sciences in San Francisco. Although the Kansas museum is a staid, limestone edifice built in 1901, behind the scenes, PEET scientists and other curators have broken out of the traditional mold, thanks in part to the vision of a new director, Leonard Krishtalka.

    At the museum, a plant physiologist-turned-computer expert named Dave Vieglais, for instance, has amassed a veritable database empire, known as Species Analyst, making possible one-stop shopping for specimens. Now, thanks to Vieglais and his colleagues, instead of paging through ancient ledgers a researcher can type an organism's name into a personal computer to find out not only what's in the Kansas collections but also whether samples of the species exist among the 12 million specimens at several other museums, a number that will soon increase to 38 million (Science, 7 May 1999, p. 888). “It's one of the more exciting projects that is coming along,” notes Stanley Blum, a bioinformaticist at the California Academy of Sciences.

    To pull it off, Krishtalka, Vieglais, and colleagues had to overcome some entrenched views. Historically, museums have tended to be possessive about their collections. “There's a very fierce streak of independence,” notes Kansas informatics expert Jim Beach. Data-sharing is often carefully worked out among researchers studying a particular group of organisms. And often individuals want to wait until they've published all their results before they open their books to colleagues. Complicating matters further, each museum has had its own way of doing things: Some, like Kansas, compiled information in ledgers, others used computer text files, and still others developed spreadsheets.

    To Vieglais fell the task of coming up with a system that could work with all these types of collections. He first tested his software in 1998 by incorporating collections from the Kansas herbarium along with the museum's bird and mammal data into the prototype Species Analyst. It was able to retrieve information, despite differences in the formats of the collections. Soon afterward, he added mammals on file at the University of California, Berkeley, and he's been signing up museums, collection by collection, ever since. He lets the curators in charge of these collections decide how much to put online, so they are comfortable with sharing their hard-earned data.
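The federation problem the article describes—one query spanning ledger transcriptions, text files, and spreadsheets—can be sketched with per-collection adapters that map each native format onto a common record schema. This is only an illustration of the idea; the field names, formats, and `federated_search` function are hypothetical, not Species Analyst's actual design.

```python
# Hypothetical sketch of federated search over heterogeneous collections:
# each source supplies an adapter that normalizes its rows to one schema.

def from_ledger(row):
    # e.g. a transcribed ledger line: "Turdus migratorius | Kansas | 1923"
    name, locality, year = [field.strip() for field in row.split("|")]
    return {"species": name, "locality": locality, "year": int(year)}

def from_spreadsheet(row):
    # e.g. a dict exported from a spreadsheet with its own column names
    return {"species": row["Taxon"], "locality": row["Loc"], "year": row["Yr"]}

def federated_search(species, sources):
    """Query every collection through its adapter; return matching records."""
    hits = []
    for adapter, rows in sources:
        for row in rows:
            record = adapter(row)
            if record["species"] == species:
                hits.append(record)
    return hits
```

Adding a museum then means writing one new adapter, not rebuilding the search layer—which is why curators can join "collection by collection."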

    As an added feature, Species Analyst forwards the geographic information about a given species in each collection to the San Diego Supercomputer Center. There, a program called GARP developed by David Stockwell maps that information and, based on the environmental data available for those sites, predicts the species' environmental niche and its overall distribution. For instance, after the Asian long-horned beetle was discovered in Chicago and a few other sites in the eastern United States in 1998, Vieglais, Kansas bird curator Town Peterson, and their colleagues used their program to determine where it might choose to live in the United States. Their preliminary run was “encouraging,” says Vieglais, in that it shows the program works, “but discouraging in that it identifies most of the eastern United States as potential habitat for this invertebrate.”
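GARP itself evolves prediction rules with a genetic algorithm; as a much simpler stand-in for the same idea, the sketch below builds a "climate envelope": the range of each environmental variable observed at known occurrence sites, with any candidate site falling inside every range flagged as potential habitat. The variable names and values are invented for illustration.

```python
# Envelope-style niche model (a simplified stand-in for GARP's rule sets).

def build_envelope(occurrences):
    """occurrences: list of dicts of environmental variables at known sites."""
    envelope = {}
    for var in occurrences[0]:
        values = [site[var] for site in occurrences]
        envelope[var] = (min(values), max(values))  # observed range per variable
    return envelope

def in_envelope(envelope, site):
    """A site is potential habitat if it falls inside every variable's range."""
    return all(lo <= site[var] <= hi for var, (lo, hi) in envelope.items())
```

Run over a grid of map cells, a test like `in_envelope` is how a broadly tolerant species ends up flagged across "most of the eastern United States."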

    At this point, Species Analyst will provide the distributions of a particular species and report those results back to researchers at their own computers. But Beach is already working on the next generation of bioinformatics software, one that will create an archive of distributions for all the species contained within the Species Analyst fold. This archive will enable researchers to do more computer-based assessments of the organisms under study. With a $2 million NSF grant, Beach and colleagues from California, New Mexico, and Massachusetts will spend the next 2 years creating this archive. Once it is in place, Beach hopes to set up a Web site that will enable users to ask for a list of species that inhabit any place on the globe. In addition, researchers will be able to look for overlap in the habitats of, say, sister species, or of a plant and its insect pests.
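The archive queries described above—list the species at a point on the globe, or check whether two species' habitats overlap—can be sketched with stored ranges. Real distribution data would be polygons or grids; this toy version uses bounding boxes, and every range here is hypothetical.

```python
# Hypothetical distribution archive: each species' range as a bounding box
# (south, west, north, east), in degrees.

ranges = {
    "species_a": (25.0, -105.0, 45.0, -85.0),
    "species_b": (35.0, -95.0, 50.0, -70.0),
}

def species_at(lat, lon):
    """All species whose stored range contains the query point."""
    return sorted(name for name, (s, w, n, e) in ranges.items()
                  if s <= lat <= n and w <= lon <= e)

def ranges_overlap(a, b):
    """True if the two species' bounding boxes intersect at all."""
    s1, w1, n1, e1 = ranges[a]
    s2, w2, n2, e2 = ranges[b]
    return s1 <= n2 and s2 <= n1 and w1 <= e2 and w2 <= e1
```

The same intersection test, applied to a plant and its insect pests, is the kind of overlap question the planned archive would answer.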

    Working on the Web

    As bioinformaticists, Beach and Vieglais appreciate how computers can revive interest in and the use of taxonomic collections. With impetus from their PEET grants, some traditional systematists are learning this same lesson. The PEET awards, started in 1995, come with a key stipulation. To receive one of these 5-year grants, researchers must agree that in addition to classifying little-known species and preparing the next generation of taxonomists, they must develop Web-based resources describing their organisms; the Internet, NSF's board realized, can open the museum drawers and cabinets to a much broader constituency than the few researchers who have access now.

    Entomologist James Stephen Ashe, who runs one PEET team at Kansas, is attempting to sort out a large group of tiny beetles, the Aleocharinae—the type of work he has always done, but with a new twist. “There are so many and they are so small that they have overwhelmed the taxonomic expertise,” Ashe says. Even beetle experts often can't pin down the genus, let alone the species, of these beetles.

    To aid in this endeavor, Ashe, a postdoctoral fellow, and two graduate students have been assessing the evolutionary relationships of the oldest Aleocharinae species. They have already drawn up a list of names and synonyms of these beetles and are working out identification guides to the Aleocharinae in North America and Mexico. And rather than publish these in some obscure journal, they are putting their work online. They are also posting illustrations to help biologists figure out what they've got. To date, the Web site contains 1300 pictures, drawings, and scanning electron micrographs of body parts from 350 genera. It's possible to call up a set of images by the genus name, say, or call up legs from an entire group.

    Kansas marine biologist Daphne Fautin, one of only four sea anemone systematists in the world, also sees illustrations as critical for the proper identification of her group of organisms. She received a PEET grant to build a global database of sea anemones. Early in the project she realized few people had access to the original literature in which many sea anemones were first described. For that reason, her Web page includes historic illustrations from some century-old reports.

    Fautin's goal is to sort out just how many species there are. Sometimes, a species was named more than once by researchers who discovered it on different sides of the globe. That members of the same species can look quite different only complicates the matter. Explains Fautin: “It's a big bookkeeping problem, and also a biodiversity and biogeographical problem.” The 1300 species could really represent only 800.
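The bookkeeping problem Fautin describes—many published names, fewer real species—maps naturally onto a union-find structure: record each synonymy judgment as a merge of two names, then count the distinct groups that remain. This is one possible sketch, not her database's actual design, and the merges in the usage example are illustrative.

```python
# Union-find over taxonomic names: each declared synonymy merges two groups.

class SynonymRegistry:
    def __init__(self):
        self.parent = {}

    def _find(self, name):
        self.parent.setdefault(name, name)
        while self.parent[name] != name:
            self.parent[name] = self.parent[self.parent[name]]  # path halving
            name = self.parent[name]
        return name

    def declare_synonyms(self, a, b):
        """Record a judgment that names a and b denote the same species."""
        self.parent[self._find(a)] = self._find(b)

    def distinct_species(self):
        """Count groups of names, i.e., the current best species tally."""
        return len({self._find(name) for name in list(self.parent)})
```

Run over 1300 names, a tally like `distinct_species()` is what could shrink the count toward 800 as synonymies are confirmed.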

    To figure this out, Fautin tracked down the specimens that were used in the original description of a species. Now she has an inventory of which museums hold which species, a resource that enables researchers at their office computers to easily identify the location of the specimens they are interested in. Unlike the Species Analyst data, her data set already covers all of the relevant museums, but only for sea anemones.

    Early days

    Despite these promising advances, biodiversity bioinformatics programs still have a way to go. Converting collection information into a digital form is tremendously hard—and boring—work. The data on the millions of specimens at each major museum could take tens of person-years to enter by hand into a computer, even if those institutions had the funds to do this work. So while fish collections are in good shape—a database containing the estimated 25,000 fish species was recently completed—many insect collections remain untouched.

    Adding to the tediousness of the chore are entries about where the plant, butterfly, or elephant was found. Saying that a specimen comes from “5 miles [8 km] south of the intersection of Route 5 and Interstate 80” means little to a researcher in a different state, much less another country. For that reason, geographic analysis programs require locations demarked by latitude and longitude. So for the time being, someone must figure out from maps what the corresponding coordinates should be. “Just documenting what we already know is a tremendous challenge,” says Gosliner of California's Academy of Sciences.
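The georeferencing step described above—turning "8 km south of a known landmark" into latitude and longitude—can be sketched as a destination-point calculation on a spherical Earth. The formula is the standard great-circle one; the reference point and distances in the test are invented for illustration.

```python
# Convert an offset from a known reference point into coordinates,
# using a spherical-Earth approximation.
import math

EARTH_RADIUS_KM = 6371.0

def offset_position(lat, lon, km, bearing_deg):
    """Move km from (lat, lon) along a compass bearing (0 = north, 180 = south)."""
    d = km / EARTH_RADIUS_KM                 # angular distance in radians
    b = math.radians(bearing_deg)
    lat1, lon1 = math.radians(lat), math.radians(lon)
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(b))
    lon2 = lon1 + math.atan2(math.sin(b) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

The hard part in practice is not this arithmetic but finding the landmark on a map in the first place, which is why the chore still needs a person.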

    Finally, Fautin and others wonder whether technology will change so that 50 years from now, Web-based taxonomic resources will be inaccessible. She, for one, still plans to publish a print catalog about her sea anemone work. But others, such as Blum, think those worries are unfounded. The future is in the Web, he insists, citing a dozen projects like Species Analyst as just the tip of the electronic taxonomic iceberg. “We need to stop committing the information to paper,” he says. His attitude, and that of an increasing number of systematists, curators, and biologists, is gradually changing the face of systematics, says Beach, and making this “a tremendously exciting time.”

  19. Fossil Databases Move to the Web

    1. Jocelyn Kaiser

    Paleobiologists readily acknowledge that they lag behind disciplines such as molecular biology in sharing data on the Web. But several researchers are working to put existing data sets online. And at least one team hopes to build a sort of GenBank of paleobiology, a Web site where everyone can deposit their fossil finds. Organizers of these efforts face a big hurdle, however: deep divisions within the community over the sharing and quality of data.

    Although a Web search will turn up at least a dozen paleo databases, many consist merely of photos of selected fossil specimens or taxonomic lists. A few broader databases limited to one region, animal or plant group, or time period can be downloaded from the Internet in one huge chunk. What's missing are comprehensive, open-access, interactive Web databases that archive published data on where and when a species lived, information that is critical for analyzing patterns of evolution and extinction.

    Paleobiologists point out that they face the difficult task of integrating species, temporal, spatial, and geochemical data that can quickly become obsolete if new fossils are discovered. “It's a much more complicated endeavor” than even living species databases, says Doug Erwin, paleobiology curator at the Smithsonian's National Museum of Natural History (NMNH) (see main text).

    But would-be Web database builders must also deal with an ambivalence over such repositories that goes back to the most famous one, a marine species database built in the 1970s that the late Jack Sepkoski of the University of Chicago used to overturn many ideas about extinctions and diversification. Besides logging his own fossil finds, “he grazed and browsed in the literature and used it in ways that made some paleontologists uncomfortable,” says Kay Behrensmeyer of NMNH. Such attitudes are still common among vertebrate paleontologists, whose fossils are relatively rare. These views play out both as reluctance by some collectors to share their specimen databases and as long-running disputes over the quality of compilations such as Sepkoski's—whether he used correct taxonomy, for example.

    Despite that baggage, a few broad Web paleodatabases are under construction. They've been spurred by advances in “relational database” software that make it possible to dovetail separate data sets so that, say, shifting the Eocene period by 1 million years doesn't mean having to adjust every entry in a database. Such tools “made a huge difference,” says Charles Marshall of Harvard University.
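The relational trick described above can be made concrete: fossil occurrences reference a geologic interval by name, and the interval's boundary dates live in one row of a separate table, so revising a boundary is a single update rather than a change to every entry. The schema, species, and dates below are illustrative only (`sqlite3` ships with Python's standard library).

```python
# Normalized time intervals: occurrences point at an interval, not at dates.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE intervals (
        name TEXT PRIMARY KEY, start_ma REAL, end_ma REAL);
    CREATE TABLE occurrences (
        species TEXT, interval_name TEXT REFERENCES intervals(name));
    INSERT INTO intervals VALUES ('Eocene', 56.0, 33.9);
    INSERT INTO occurrences VALUES ('Hyracotherium sp.', 'Eocene');
    INSERT INTO occurrences VALUES ('Uintatherium sp.', 'Eocene');
""")

# Shift the base of the Eocene by 1 million years: one UPDATE...
db.execute("UPDATE intervals SET start_ma = 55.0 WHERE name = 'Eocene'")

# ...and every occurrence joined to the interval sees the new date.
rows = db.execute("""
    SELECT o.species, i.start_ma
    FROM occurrences o JOIN intervals i ON o.interval_name = i.name
""").fetchall()
```

With dates denormalized into each occurrence row instead, the same revision would touch every fossil record ever filed under the Eocene.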

    One impressive experiment in cooperative database building is Neogene Marine Biota of Tropical America, which logs marine fossils from the last 25 million years. At this stage, the site, which emphasizes photos, is basically “like a Peterson's field guide” for identifying specimens, says co-curator Ann Budd of the University of Iowa in Iowa City. But her group plans to add data that's now accessible only to contributors so that other users can plot ranges and evolutionary trees.

    Another project just coming online is the Evolution of Terrestrial Ecosystems (ETE) database developed by NMNH and John Damuth of the University of California, Santa Barbara. It covers both animal and plant terrestrial fossils and includes age, species lists, body size, and diet for nearly 4000 localities, largely from the African late Cenozoic. This week, ETE debuted a pilot Web version of the database. Behrensmeyer hopes more researchers will contribute. “Once people see what can be done, I really think they will be willing to provide access” to their data, she says.

    An even more ambitious project is under way at the National Center for Ecological Analysis and Synthesis (NCEAS) in Santa Barbara: The goal is to span all time periods and organisms. Led by Marshall and John Alroy of NCEAS, the Paleobiology Database is starting with marine paleofauna but plans to fold in other data such as Alroy's own North American mammalian databases. “We need to have integrated databases to answer big-picture questions” about evolution, says Alroy. An open database where anyone can enter data via the Web “represents the only rational solution,” he says. Alroy has found 36 collaborators so far but predicts that some private databases “will never be online.”

    Indeed, skeptics of the all-in-one database idea abound. Richard Stucky of the Denver Museum of Nature and Science, who's compiling Cenozoic North American mammal data to expand an older database called FAUNMAP, asserts that he's painstakingly gathered data using “strict criteria” to address specific research problems. Asks Stucky: “Can a central database answer all the questions [researchers are] asking? I say, ‘No.’”

    But others look forward to a day when anyone can troll a central fossil database. Says Marshall, “It seems daft to go into the field and collect a bucketful of fossils and not see it recorded anywhere.”