News this Week

Science  24 Mar 2000:
Vol. 287, Issue 5461, pp. 2126


    Globe's 'Missing Warming' Found in the Ocean

    1. Richard A. Kerr

    Greenhouse skeptics often point to the relatively modest atmospheric warming of the past few decades as evidence of the climatic impotence of greenhouse gases. Climate modelers respond that much of the heat trapped by greenhouse gases should be going into the ocean, delaying but not preventing some of the atmospheric warming. But oceanographers plumbing the ocean depths have been unable to say who was right, because records of deep-ocean temperature have been too spotty to pick out clear trends. Now, on page 2225 of this issue of Science, physical oceanographers rummaging through piles of neglected data report that they have turned up millions of old, deep-ocean temperature measurements, enough to draw up oceanic fever charts that confirm the climate models' predicted ocean warming. “We've shown that a large part of the ‘missing warming’ has occurred in the ocean,” says oceanographer Sydney Levitus, the lead author of the paper. “The whole-Earth system has gone into a relatively warm state.”

    The international data search-and-rescue effort “adds credibility to the belief that most of the warming in the 20th century is anthropogenic,” says climate modeler Jerry D. Mahlman of the National Oceanic and Atmospheric Administration's (NOAA's) Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. It also suggests that past greenhouse gas emissions guarantee more global warming ahead and that the climate system may be more sensitive to greenhouse gases than some had thought.

    How could millions of valuable oceanographic measurements go missing for decades? Oceanographers have never had the orchestrated, worldwide network of routine observations that meteorologists enjoy. Instead, 40 or 50 years ago, ocean temperature profiles made by dropping a temperature sensor down through the sea might end up handwritten on paper, captured in a photograph, or recorded in analog form on magnetic tape. Everything from mold to mice was devouring the data. That's why, under the auspices of the United Nations-sponsored Global Oceanographic Data Archeology and Rescue project, data archaeologists like Levitus have spent the past 7 years seeking out ocean temperature data around the world and digitizing them for archiving on modern media.

    After adding 2 million profiles of ocean temperature to the previously archived 3 million profiles, Levitus and his NOAA colleagues in Silver Spring, Maryland, could see a clear change. Between 1955 and 1995, the world ocean—the Pacific, Atlantic, and Indian basins combined—warmed an average of 0.06°C between the surface and 3000 meters. That's about 20 × 10²² joules of heat added in 40 years, roughly the same amount the oceans of the Southern Hemisphere gain and lose each year with the change of seasons. Half the warming occurred in the upper 300 meters, half below. The warming wasn't steady, though; heat content rose from a low point in the 1950s, peaked in the late '70s, dropped in the '80s, and rose to a higher peak in the '90s. All three ocean basins followed much the same pattern.
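    The heat figure squares with simple back-of-the-envelope physics. A hedged sanity check—using assumed round values for the ocean's mass and the specific heat of seawater, which are my numbers, not the paper's—lands at the same order of magnitude:

```python
# Hedged sanity check of the roughly 2e23 J figure; ocean_mass and
# c_seawater are assumed round numbers, not values from the paper.
ocean_mass = 1.3e21        # kg, approximate mass of the world ocean
c_seawater = 3990          # J/(kg*K), typical specific heat of seawater
delta_T = 0.06             # K, reported average warming, surface to 3000 m

heat = ocean_mass * c_seawater * delta_T
print(f"{heat:.1e} J")     # prints 3.1e+23 J, same order as the reported value
```

    The agreement is loose (within a factor of two), as expected given that the 0.06°C is an average over only the upper 3000 meters rather than the full ocean.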

    These rescued data have oceanographers excited. “I've never seen anything like this before,” says physical oceanographer Peter Rhines of the University of Washington, Seattle. “What surprises me is how much [of the warming] is in the deepwater.” The newly retrieved data “show how active the [deep-ocean] system is,” says oceanographer James Carton of the University of Maryland, College Park, “and how it's a part of the climate system on short time scales.”

    The friskiness of the whole-ocean system came as a surprise as well. “There's striking variability from decade to decade,” says Rhines. That the heat content tends to rise and fall in concert across all three ocean basins, in both the north and the south, is “quite amazing,” he adds. Meteorologists and oceanographers are increasingly recognizing that the atmosphere connects ocean basins (Science, 10 July 1998, p. 157), but as to what could be coordinating global swings in heat content, “I really don't know,” says Rhines.

    The most immediate reward for retrieving so much data from the oceanographers' attic seems to be more confidence in climate models. The increased heat content of the world ocean is roughly what climate models have predicted. “That's another validation of the models,” says climatologist Tom Wigley of the National Center for Atmospheric Research in Boulder, Colorado.

    As the models implied, rising ocean temperatures have delayed part of the surface warming, says climate modeler James Hansen of NASA's Goddard Institute for Space Studies in New York City, but that can't continue indefinitely. Even if rising concentrations of greenhouse gases could be stabilized tomorrow, Hansen says, gases that have already accumulated will push surface temperatures up another half-degree or so.

    The ocean-induced delay in global warming also suggests to some climatologists that future temperature increases will be toward the top end of the models' range of predictions. Mainstream climatologists have long estimated that a doubling of greenhouse gases, expected by the end of the 21st century, would eventually warm the world between 1.5° and 4.5°C. Some greenhouse contrarians have put that number at 1°C or even less. Now, the ocean-warming data “imply that climate sensitivity is not at the low end of the spectrum,” says Hansen. He, Wigley, and some others now lean toward a climate sensitivity of about 3°C or a bit higher. But as climatologist Christopher Folland of the Hadley Center for Climate Prediction in Bracknell, United Kingdom, notes, the considerable variability in ocean heat content from decade to decade means scientists will still be hard pressed to find a precise number for climate sensitivity.

    Getting better numbers for ocean heat content remains a top priority for oceanographers. “There's still a vast amount of data out there that needs digitizing,” says Folland. And for future numbers, an international effort called Argo, now under way, will create an ocean-spanning network of 3000 free-floating instrument packages. Linked by satellites, the Argo drifters will create a “weather map” of the ocean down to 1500 meters. At least future oceanographers won't have to rummage through the data detritus of their predecessors to see what the ocean is up to.


    How a Bland Statement Sent Stocks Sprawling

    1. Eliot Marshall

    Muddled news reports and a volatile stock market turned a presidential statement on genome data last week into a disaster for many biotech companies. Stocks of genetic research companies, after shooting upward early this year, plummeted on 14 March when President Bill Clinton and British Prime Minister Tony Blair issued a bland statement urging all labs to provide “unencumbered access” to raw DNA sequence information (Science, 17 March, p. 1903). Almost immediately, biotech stocks, which were already headed downward, went into a nose dive; some companies lost as much as 20% of their value on paper in a few hours. Within 48 hours many began to stabilize, but remained well below their peak a week later. Industry analysts had trouble interpreting these market gyrations. One biotech expert suggested a simple explanation: Stock buyers “don't understand what they're investing in,” he said, and they can be easily spooked.

    The spark that ignited the panic may have come during an informal briefing given by Clinton's press secretary Joe Lockhart on the morning of 14 March. As The Wall Street Journal reported the next day, Lockhart told a “gaggle” of regulars who cover the president that Clinton and Blair intended to issue a statement in the afternoon about a plan to restrict the patenting of human genes. If this is what Lockhart said—his remarks were off the record—it was not correct. Francis Collins, director of the U.S. National Human Genome Research Institute, says the statement was never meant to describe a new policy. The wording—which had been debated and revised “in many iterations … over many months,” Collins says—simply affirmed support for a 1996 research policy that calls for the immediate release of raw sequence data. Indeed, the Clinton-Blair statement specifically endorsed the patenting of “new gene-based health care products.” But this clear message became tangled in stories of the rivalry between publicly and privately funded genome scientists over who should control human genome data (Science, 10 March, p. 1723). The upshot: Early news reports were confused.

    At 9 a.m., CBS Radio News broadcast that the United States and Britain were aiming to “ban patents on individual genes.” The Associated Press reported that there was a plan to restrict gene patents, but later said that Britain and the United States would begin to “openly share data” on the human genome. (They already do.) The stories became clearer later in the day. Even so, Chuck Ludlam, vice president of the Biotechnology Industry Organization in Washington, D.C., who saw the Clinton-Blair statement as “positive news” for industry, says he found it “unbelievable how wrong the reports were all day.”

    White House spokesperson Jake Siewert later told Science that “we completely dispute” the Journal's account of what caused the muddle. Lockhart, he says, told reporters that the Clinton-Blair announcement “had to do with public access to raw genomic data.” But there was “confusion” during the “back and forth” between Lockhart and the reporters, Siewert concedes. “I don't think Joe got it perfectly right. … And some reporters didn't get it perfectly right.”

    During the confused morning, stocks of companies that are creating private genetic databases—such as Celera Genomics of Rockville, Maryland, and Incyte Pharmaceuticals of Palo Alto, California—began to tumble. Other genome-related stocks began to slide, too. Soon the entire biotech sector slumped, as did the Nasdaq stock exchange index, which tracks high-tech firms. The Nasdaq index bounced back within 48 hours, but dropped again later, as investors remained wary of genomics and biotech companies. A week later, Celera and Incyte stocks, for example, were still 60% below their peak immediately before the statement. Predicts industry analyst Sergio Traversa of Mehta Partners in New York City, “Investors will remain a little bit more careful now,” having been stung so badly.


    Academy Head Touted for Top Political Post

    1. Dennis Normile*
    1. With reporting from Beijing by Li Hui of China Features.

    TOKYO—Last week's presidential election in Taiwan, hailed as a boost for the country's young democracy, may also have a major impact on Academia Sinica, the island's premier collection of research institutes. Its leader, Nobel laureate Lee Yuan-tseh—who publicly backed the winning candidate, Chen Shui-bian, just days before the 18 March election—is being tipped as a possible premier in the new government or as a special envoy to Beijing. But some researchers have criticized Lee's endorsement of Chen for jeopardizing Academia Sinica's political neutrality. And, given Chen's strong stance on Taiwan's independence, some worry that China could react to a government led by the two men by curtailing research collaborations.

    A native of Taiwan, Lee spent nearly 30 years at the University of Chicago and the University of California, Berkeley, before returning in 1994 to head the academy. He lobbied successfully for a dramatic jump in funding and convinced a dozen or so topflight scientists to return home to take up key positions in the academy's 24 institutes. He also introduced peer-review procedures and other reforms that have led to a sharp rise in publications in top international journals. He's made it “a dynamic and vibrant institution,” says Kenneth Wu, a molecular biologist who headed the academy's Institute of Biomedical Sciences for 3 years before returning last year to the University of Texas Health Science Center in Houston.

    At the same time, Lee took on a very public role. Described by the media as “Taiwan's conscience,” he championed social welfare measures, spoke out against governmental corruption, led a drive for educational reform, and chaired a commission to assist the victims of last fall's earthquake.

    Although the academy reports directly to Taiwan's president, Lee had always urged his colleagues to maintain political neutrality. But colleagues say Lee grew increasingly concerned about corruption and decided to throw his weight behind Chen when it became clear that the race was close. On 10 March Chen visited Lee in his Academia Sinica office, and on 13 March Lee announced that “Chen is the only candidate capable of really rooting out the endemic corruption in Taiwan's politics.” The same day, Lee tendered his resignation, which was refused by outgoing President Lee Teng-hui. The Nobelist then announced he would be taking vacation leave until the end of the month.

    Some scientists worry that Lee's political move will strain relations between Academia Sinica and the long-ruling Nationalist Party, which still controls the country's legislature. “The momentum of progress at Academia Sinica is now irrevocably interrupted, and its reputation badly damaged,” says one Chinese-American scientist from a major U.S. research university who has close links with Taiwan. But others say that Lee's departure from Academia Sinica could be equally damaging by depriving it of a strong leader. “It won't be easy to find [a replacement for Lee] with his vision and his international status,” says Lin Sheng-hsien, director of the Institute of Atomic and Molecular Sciences, part of a group that has urged Lee not to step down. Lee could not be reached for comment.

    As for scientific relations with China, China's Communist Party made it clear throughout the campaign that it opposed Chen, who was considered the most pro-independence of the major candidates. Lee himself enjoys close ties with the mainland. He has received honorary degrees from several Chinese universities and is an honorary professor of the Chinese Academy of Sciences' Institute of Chemistry in Beijing. Li Jia-quan, a research fellow at the Institute for Taiwan Studies in Beijing, says that because of Lee's endorsement of Chen, “it would be impossible for the mainland to accept Lee as a negotiator for relations across the Taiwan Strait.” He added, however, that he doesn't think that it will affect contacts between academy scientists and their mainland counterparts.


    New York's Deadly Virus May Stage a Comeback

    1. Martin Enserink

    ATLANTIC CITY—Will it come back? That question has been haunting public health officials in New York City and state since a surprise outbreak of the West Nile virus sickened more than 60 people late last summer and killed seven. No more cases of this rare illness were detected after temperatures started dropping in October, rendering the climate inhospitable to the mosquito that transmits the disease, most likely a subspecies of Culex pipiens. But researchers didn't know whether the virus would survive the winter, either in mosquitoes or their eggs, or in birds, the virus's animal reservoir. Now they do. Two recent observations have shown that the virus is alive and kicking, researchers said at a meeting of the American Mosquito Control Association held here last week—and it may well spread and cause disease come summer.

    A first clue came in early February, when a red-tailed hawk was found dead in Westchester County, a suburb north of New York City. Lab tests at the University of Connecticut in Storrs and the Connecticut Agricultural Experiment Station in New Haven confirmed that the bird died from the West Nile virus. Because there were no mosquitoes around at the time, researchers don't understand how and where the hawk became infected. It may have been bitten by a mosquito in the fall and only succumbed recently, or it may have eaten an infected animal. Another theory is that it picked up the virus farther south and then flew north, perhaps disoriented from the disease. That's a troubling scenario, because it would mean that the virus has already gained a foothold somewhere in the southern United States.

    More troubling news came from researchers from the Centers for Disease Control and Prevention (CDC) and New York City and state, who have been monitoring mosquito populations in sewers, abandoned hangars, swimming pool utility rooms, and other likely winter shelters. They had a hard time finding any mosquitoes at all, said CDC's Roger Nasci. But the whitewashed interior walls of Fort Totten, a historic site in the New York borough of Queens, proved an excellent hunting ground. Early last week, researchers found live West Nile virus in one Culex sample taken there. “This means the virus has survived the winter in a viable form,” says Nasci.

    The results have experts worried. Some entomologists question the region's ability to wage an effective war against mosquitoes this spring and summer—the only way to stop transmission of the virus. With viral epidemics such as AIDS taking a huge toll, insect-borne diseases have been “pretty far back on the burner” in New York and some other states, says John Edman, director of the Center for Vector-Borne Disease Research at the University of California, Davis. He points out that New York state is still hiring personnel for new surveillance and control programs. As a result, it may be difficult to start an aggressive larvae-control campaign in early spring, as experts recommended during a workshop in January, says Edman. “I'm really worried,” adds Yale medical entomologist Durland Fish. “I don't think they'll be able to mount an effective response.”

    But city and state officials dismiss those worries, citing detailed plans of how they will monitor and battle possible outbreaks. “We have been working very diligently,” says a spokesperson for the state Department of Health in Albany. “We will be prepared.”


    X-ray Pulses Chopped to Femtoseconds

    1. Robert F. Service

    Nightclub operators aren't the only ones who like strobe lights. Chemists in recent years have increasingly turned to short pulses of x-rays to illuminate the dance of atoms in molecules as they undergo chemical reactions. Using nanosecond pulses of x-rays at synchrotrons, for example, researchers have made movies of proteins as they bind to molecular dance partners (Science, 27 June 1997, p. 1986). Now a team working at Lawrence Berkeley National Laboratory's Advanced Light Source (ALS) synchrotron in California has gone one better.

    On page 2237, the team reports generating x-ray pulses lasting a mere 300 quadrillionths of a second, or femtoseconds. And the researchers are now building a fast-pulse beamline at the ALS, which is expected to shave those pulses down to 100 femtoseconds. The new pulses aren't bright enough at the moment to shed light on massive molecules such as proteins, but by giving freeze-frame views they may help unveil intimate details of key atoms in molecules in disordered solids and liquids as they undergo reactions. “It's an exciting experiment,” says James Penner-Hahn, a chemist and fast-pulse x-ray expert at the University of Michigan, Ann Arbor, of the new work. By allowing researchers to see atomic close-ups of chemical reactions as they occur, “I think it would open up a whole new class of experiments,” he says.

    X-rays aren't the only type of short-pulsed strobe out there. Chemists track chemical reactions with infrared laser pulses as short as a few femtoseconds. Laser pulses, however, tell researchers only about the chemical components of the compounds they are looking at. X-rays can illuminate the precise structure of atoms in a sample.

    For that reason the Berkeley group—led by physicist Robert Schoenlein—has been working to generate ever shorter x-ray pulses. Four years ago, Schoenlein and his colleagues created the first 300-femtosecond x-ray pulses through a technique called relativistic Thomson scattering (Science, 11 October 1996, p. 236), which uses electrons careening through a linear accelerator to kick pulses of infrared photons up to higher x-ray energies. But the resulting x-ray pulses were too dim to be useful for most experiments.

    This time around, the researchers turned to the ALS, which produces extremely bright bursts of x-ray photons. The catch is that those x-ray bursts last about 30 picoseconds each, over 1000 times too long to illuminate the fastest atomic movements. So the team had to come up with a way to chop out part of each long pulse while retaining enough photons to capture the action.

    They started by manipulating not the synchrotron's x-rays, but the electrons that produce them. Synchrotrons accelerate bunches of electrons to relativistic speeds in a stadium-sized “ring,” actually a 12-sided polygon. Magnets at the vertices of the polygon force the electrons to bend and travel around the ring; the change of direction causes them to shed x-rays, which can be used in experiments.

    The Berkeley researchers wanted to slice off a portion of each electron bunch and use those slices to generate short x-ray pulses. They did so with the help of a femtosecond optical laser and a magnetic device called a wiggler, a gantlet of alternating north and south magnetic poles that force electrons traveling between them to shimmy back and forth, releasing additional x-rays with every wiggle. The researchers rigged their laser to fire light pulses into electron bunches as they passed through a wiggler in the straight section of the polygon. Light rays—like other forms of electromagnetic radiation—travel like waves with peaks and troughs in their electric field as they move. And in strong laser pulses, all the photons travel in lockstep, with their peaks and troughs coinciding.

    This coherent motion creates a strong electric field. Ordinarily, the field would not influence the flight of electrons traveling in the same direction as the laser light. But in the wiggler, when the electrons are forced to move askew, it does. “Those [electrons] sitting in the electric field get an energy kick,” Schoenlein says. That boosts them into a slightly wider orbit as they travel around the ring, making it possible to separate them out. Finally, the synchrotron takes over, forcing the small bunch of higher energy electrons to emit short x-ray pulses, which are steered to the experimental chamber.

    The result was a series of 10,000 x-ray pulses a second, each lasting 300 femtoseconds. A second's worth of pulses contained approximately 10⁵ photons. That's well below the 10¹⁵ to 10¹⁸ photons per second the ALS normally generates. But it's likely to be plenty for use in a technique called extended x-ray absorption fine structure, which reveals the placement and movement of a core atom in a sample relative to its closest neighbors. By using fast pulses to probe a chemical reaction at various stages of completion, researchers can construct fast-action movies of chemistry in the making. Unlike similar movies made with laser pulses in so-called “pump-probe” experiments, a femtosecond x-ray pulse would reveal the position of atoms in the sample. Although the Berkeley team has yet to perform the experiment, it's the next step. Says Penner-Hahn, “This is something that could have a tremendous impact.”
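    Those flux figures imply the sliced femtosecond beam is 10 to 13 orders of magnitude dimmer than the ALS's normal output; a quick check using the round numbers quoted in the text:

```python
# Round numbers quoted in the text
sliced_flux = 1e5                     # photons/s in the 300-fs pulses
normal_lo, normal_hi = 1e15, 1e18     # photons/s the ALS normally delivers

dim_lo = normal_lo / sliced_flux      # 10 orders of magnitude
dim_hi = normal_hi / sliced_flux      # 13 orders of magnitude
print(f"sliced beam is {dim_lo:.0e} to {dim_hi:.0e} times dimmer")
```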


    Hominid Ancestors May Have Knuckle Walked

    1. Erik Stokstad

    As countless cartoons of hunched brutes suggest, the evolution of an upright posture is one of the crucial features that makes hominids distinct from other primates. But exactly how our predecessors got around before they walked upright has been a mystery. Now a new analysis of casts of 3-million- to 4-million-year-old hominid bones suggests that humans evolved from an ancestor that walked on its knuckles, like chimpanzees and gorillas. “For the first time we are able to say early hominids bear the echo of a knuckle-walking ancestry that they shared with gorillas and chimps,” says paleoanthropologist Rick Potts of the Smithsonian Institution.

    Anthropologists Brian Richmond and David Strait of George Washington University (GWU) report in this week's issue of Nature that two species of australopithecines, including the famed “Lucy” skeleton, have features in their wrist joints that resemble those seen in living African apes. Aside from shedding some light on the murky nature of the common ancestor of humans, chimps, and gorillas, the find helps resolve a conflict between anatomical and molecular evidence linking chimps and humans. “This makes things nice and neat,” says Leslie Aiello, a paleoanthropologist at University College London.

    When chimps and gorillas scamper on the ground, they curl their long fingers, plant the second segment of their finger bones on the ground, and shift their weight. A bony ridge near the base of the finger results from the hand bearing weight in this position. Humans don't have this ridge, nor did early hominids like Lucy, a member of the species Australopithecus afarensis, because she, like us, walked upright.

    The lack of evidence for knuckle walking among human ancestors implied that chimps are closer to gorillas than to humans, yet strong molecular data and some subtle anatomical data suggest the opposite. But if chimps and humans are indeed closely related, as the genes suggest, then knuckle walking must have evolved twice, independently, in chimps and gorillas—a somewhat “untidy” theory that dissatisfied many anthropologists.

    Then in 1998, Richmond, a predoctoral fellow at the Smithsonian Institution at the time, read some old papers on the evolution of the primate hand. He noted that early descriptions of knuckle walking in modern chimps and gorillas reported not only the ridges on the finger bones, but also ledges and notches in the wrist joint that help keep the arm rigid. He and Strait walked across the hall to check the collection of casts at the National Museum of Natural History. “I walked over to the cabinet, pulled out Lucy, and—shazam!—she had the morphology that was classic for knuckle walkers,” Richmond recalls. Lucy herself wasn't a knuckle walker, he notes; rather, these wrist traits are a leftover from her knuckle-walking ancestors.

    Teaming up with Strait, a postdoc at GWU, Richmond found that chimps, gorillas, and two early hominids—A. afarensis and A. anamensis—had similar specialized wrist features that are lacking in modern humans and other primates. (Two later hominids—A. africanus and Paranthropus robustus—lacked these specializations.) “These features in the early hominid bones can't be explained except that they are uniquely related to knuckle walking,” says John Fleagle, a paleoanthropologist at the State University of New York, Stony Brook, the institution from which Richmond and Strait received their Ph.D.s. And these common traits imply that the common ancestor of australopithecines, chimps, and gorillas was a knuckle walker. The knuckle-walking traits were lost in hominids—by about 2.5 million to 3.0 million years ago, according to specimens of A. africanus, Richmond says.

    But the finding raises other questions, such as why a climbing creature already adapted for traveling on the ground would evolve the ability to stand on two feet as well. “In some ways, for me, it makes it more difficult to understand the evolution of bipedalism,” Potts says. One idea is that walking upright freed the hands for other uses, such as carrying food, tools, or weapons, says Carol Ward, a paleoanthropologist at the University of Missouri, Columbia. “The big problem is that we don't have a fossil record of the chimp-human-gorilla ancestor,” she notes. “So what you have to do is build an argument based on parsimony and hope for the best.”


    Medalists Gaze Out on a Familiar Future

    1. Jeffrey Mervis

    Who says good scientists need data to voice an opinion? Last week the newest winners of the national medals of science and of technology (Science, 4 February, p. 785) spent an hour speculating on what the world might look like in 2025 and whether “innovation will surpass science fiction.” The spirited discussion among the 15 medalists—part of a daylong series of events that culminated in presentation of the awards by President Clinton—flowed freely around such knotty questions as the impact of technology on the quality of life and what drives human behavior. Not surprisingly, there was no consensus. But although a few scientists declined to venture outside their own discipline, most were happy to extrapolate from today the shape of tomorrow.

    Computer scientist Raymond Kurzweil spoke glowingly of nanobots communicating directly with our neurons to repair damaged tissue, part of a panoply of technological advances that would bring good health and prosperity to all. His sunny view, however, clashed with conservation biologist Jared Diamond's warning about a collection of 25-year “time bombs”—in particular the loss of biodiversity—that must be defused before humanity can prosper. Cellular biologist Lynn Margulis was even gloomier, fretting about how the desire to procreate could lead to unsustainable population levels that would overwhelm the capacity of any technology. In rapid succession, the three scientists traded thrust and parry, conceding nothing.

    The issue of how to monitor where the world was headed proved equally hard to pin down. Economics Nobelist Robert Solow objected to overly optimistic predictions of ever-expanding productivity from computers and electronic communications, saying that the slight gains in recent years have yet to survive a recession. Kurzweil disagreed, saying that traditional economic measures were no match for the new economy, but Solow stuck to his guns. Taking another tack, medicine Nobelist David Baltimore opined that productivity itself was a poor measure of progress and that, for most people, an improved quality of life from modern pharmaceuticals was a more meaningful indicator.

    Moderator Ira Flatow, a science journalist, seemed happy to let participants state their views and take their shots, leaving the audience to draw its own conclusions. But at least one panelist expressed displeasure at how the issues were being framed. “I don't know what life will be like in 2025, and I don't think scientists have much that's useful to say about the topic,” commented physics Nobelist James Cronin after the roundtable ended. “But I can promise you that in 25 years we will know a lot more about the composition of the universe. That's what science can give the world. And I think that's pretty important.”


    New Ion Channel May Yield Clues to Hearing

    1. Marcia Barinaga

    As every biology student learns, the sense of hearing depends on the operation of the hair cells in the inner ear. These cells bear microscopically fine projections, the hairs or cilia, that bend in response to passing sound waves, setting off nerve impulses that the brain recognizes as sounds—a clap of thunder, say, or a hushed whisper. But even though neuroscientists have learned a great deal about hair cells, they have been unable to track down a key element needed for the cells' operation—the ion channel that opens when the hairs bend to produce the electrical signal. Now, working with a seemingly different system, they've made a discovery that may help them get their hands on the elusive channel.

    On page 2229, a team led by Charles Zuker of the University of California (UC), San Diego, reports that it has cloned an intriguing ion channel from the neurons that underlie the sensory bristles of the fruit fly. It is a mechanically sensitive channel—in other words, it responds to mechanical force instead of voltage changes or biochemical modifications. At first blush, the fruit fly bristles, visible with a magnifying glass, appear to be quite different from the microscopic bundles of hair cells within the human ear. But the Zuker team has shown that the neurons beneath the bristles operate much like the hair cells as they convert movement into electrical impulses. That has some researchers thinking that the functioning of the two types of cells may depend on structurally similar ion channels. If so, the new gene could provide a useful probe for fishing out the channel in human hair cells—an accomplishment that could lead to new insights into the causes of hereditary deafness and perhaps ways to correct it.

    “I am really excited about [this] channel,” says Cornelia Bargmann, who studies sensory systems at UC San Francisco. Although there are other candidates for such mechanically sensitive channels, including one discovered by Bargmann's team, she calls Zuker's the “most intriguing candidate right now” because of its possible connection to hair-cell channels, with their “clear medical relevance and interesting biophysics.”

    Hair-cell physiologists have long wanted to see what the hair-cell channel looks like, because their experiments had shown that it has fascinating biophysical properties. By studying the electrical currents passing through the membranes of hair cells as they are stimulated, they learned that hair-cell channels are stunningly fast, opening up within microseconds, compared to the milliseconds needed by biochemically activated channels. They are also exquisitely sensitive to the slightest movement and to direction; they open when the tip of the cell's cilia bundle is deflected by a mere atom's width—akin to bending the tip of the Eiffel Tower by the width of your thumb. If the cilia bundle moves one way, the channel opens; the other way and it shuts. The channels are also able to register tiny cilia movements on top of a larger constant deflection—a trait that lets us discern meaningful sounds from background noise.
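The Eiffel Tower comparison can be checked with rough, assumed figures (a hair bundle a few micrometers tall, an atom about 0.1 nanometer across, a roughly 300-meter tower); this is a back-of-the-envelope sketch, not numbers taken from the study:

```python
# Rescale a one-atom deflection of a hair bundle to Eiffel Tower size.
# All three figures below are rough assumptions for illustration.
BUNDLE_HEIGHT_M = 5e-6      # assumed hair-bundle height (~5 micrometers)
ATOM_WIDTH_M = 1e-10        # rough atomic diameter (~0.1 nanometer)
TOWER_HEIGHT_M = 300.0      # Eiffel Tower, to the nearest round number

# Fractional tip deflection of the bundle, applied to the tower.
fractional_deflection = ATOM_WIDTH_M / BUNDLE_HEIGHT_M   # ~2e-5
tower_tip_sway_m = fractional_deflection * TOWER_HEIGHT_M

print(f"Equivalent tower-tip sway: {tower_tip_sway_m * 1000:.0f} mm")
# A few millimeters -- indeed on the order of a thumb's width.
```

With these assumptions the tower tip moves about 6 millimeters, which is the right order of magnitude for the analogy in the text.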

    Efforts to isolate the channels have been stymied, however, primarily because hair cells are so sparse and contain relatively few channel molecules. So Zuker decided to apply the power of fruit fly genetics to the problem, on the hunch that the flies' bristles might contain channels similar to those in hair cells. In the first phase of the work, begun about 7 years ago, Zuker and then-postdoc Maurice Kernan, now at the State University of New York, Stony Brook, created mutant flies and screened them for those that were defective in their sense of touch. Some of those flies, they reasoned, would have mutations in genes specific to the touch response—including the gene for the touch-sensitive channel itself.

    In a separate phase of the work, postdoc Richard Walker, who arrived in Zuker's lab in 1996, examined whether the bristle system would be a good model for hair cells. It was. Using electrophysiological methods similar to those employed to study hair cells, Walker found that when the fly neurons respond to touch, they share key characteristics of hair cells: fast responses to even the tiniest movements, directional sensitivity, and adaptability to new bristle positions. Hair-cell researcher David Corey of Massachusetts General Hospital in Boston calls the comparison “beautiful.” Walker “repeated the last 20 years of human hair-cell physiology on this bristle system,” he says, “and everything looks the same.”

    Walker then applied the same methods to the bristle neurons of the mutant flies to search for those in which the mutations caused defects in the channel's function—a good indication that the affected gene encodes the channel. He found that a gene called nompC (for no mechanoreceptor potential C) seemed to fit the bill. Mutations in nompC either blocked the opening of the channel in response to bristle movement or, in one case, altered the channel so that it opened but let through less current than normal. To Bargmann, this is the “most convincing” evidence that the NOMPC protein is the mechanically sensitive bristle channel.

    The sequence of the nompC gene supports that view, as it encodes a protein with the general structural features of proteins that form ion channels. The gene sequence also contains a clue to how mechanically sensitive ion channels open. To be tugged open, a channel must be anchored so that pulling on it changes its shape. NOMPC appears to have “a great way of anchoring the channel” to the cell's skeleton, says Corey. This is a set of 29 so-called ankyrin repeats—short amino acid sequences that link up to other proteins.

    Although all these data constitute strong evidence that NOMPC is a mechanically sensitive channel, definitive proof would require putting it into cultured cells and showing that it renders them responsive to touch. That is a tough experiment, because other specialized proteins are likely required for NOMPC function. And even if NOMPC does turn out to be a mechanically sensitive channel in flies, that doesn't necessarily mean that it will be related to the elusive hair-cell channel in vertebrates.

    So far, opinion on that issue is mixed. Neuroscientist Denis Baylor of Stanford Medical School is cautious. “The anatomy [of bristles and hair cells] is so different that I wouldn't be surprised if [the hair-cell channel] is a completely different molecule, not even a relative,” he says. But Corey and fellow hair-cell researcher James Hudspeth of The Rockefeller University in New York City come down on the other side. Given the similarities that Walker found between the hair-cell responses and those of the bristle neurons, “chances are very good” that the two are related, Hudspeth says.

    To find out, his team is now using the Zuker group's cloned gene to look for expression of a similar gene in hair cells from chickens. Researchers will also want to determine, Hudspeth suggests, whether a human version of nompC might turn out to be mutated in any of the many forms of hereditary deafness for which genes have not yet been identified. If either of these searches is successful, then the similarity of bristles to hair cells will indeed have paid off.


    Family of Bitter Taste Receptors Found

    1. Marcia Barinaga

    Our ability to savor the sweetness of a fig or the sour tang of a lemon may seem more like a pleasure than a necessity, but the sense of taste is actually honed for survival. Sweetness, for example, means that a food has high caloric value, while bitterness tells us that it may be poison. For neuroscientists, however, bitter has been a perplexing flavor, because a wide range of unrelated chemicals all taste similarly bitter even though their diverse structures suggest that they must trigger different receptor molecules. The solution to that puzzle may now be at hand—along with other insights into the phenomenon of taste.

    A team led by Nicholas Ryba of the National Institute of Dental and Craniofacial Research and Charles Zuker of the University of California, San Diego, reports in the current issue of Cell that it has identified a huge family of receptors, each of which seems to respond to different bitter-tasting compounds. The researchers have also discovered how those various signals are apparently combined to send just one bitter message to the brain. “This is clearly a major breakthrough for taste research,” says Gary Beauchamp, director of the Monell Chemical Senses Center in Philadelphia. “It all fits together in a very nice story. My only regret is that I didn't make the discovery.”

    For years researchers have struggled to identify receptors for the five different tastes—sweet, bitter, sour, salty, and umami (MSG)—that the taste cells in our taste buds detect. The stumbling block has been a lack of starting material; there is no way to grow taste-bud cells in the lab. So with the exception of a recent discovery of a possible receptor for umami, receptors for the different tastes in vertebrates have not been identified.

    Taking a new tack, Ryba and Zuker (whose team this week also reports the discovery of a receptor for touch sensation; see previous story) decided to let genetics lead the way. Taste researchers have long known that some people can taste a bitter compound known as PROP, while others can't. Last year, Danielle Reed at the University of Pennsylvania and Linda Bartoshuk at Yale narrowed down the chromosomal location of the gene responsible for that difference. Zuker grad student Ken Mueller suspected that this gene might encode a bitter taste receptor and set out to find it.

    Mueller had one clue to guide him. Bitter receptors are known to interact with so-called G proteins, which are involved in intracellular signaling in taste and other responses. So Mueller looked in the vicinity of the PROP-tasting mutation for genes that might encode receptors with the ability to interact with G proteins. He found one, and together with Elliot Adler, a postdoc in Ryba's lab, discovered that it is part of a family of at least 50 genes that cluster at several locations along the human chromosomes. The large number of genes was encouraging, says Zuker, because the team had suspected that many bitter receptors would be required to recognize all the different chemicals that taste bitter. What's more, in mice as in humans, the genes turned out to reside in chromosomal areas known to be involved in bitter perception.

    Next, Mark Hoon, a postdoc in Ryba's lab, isolated the mouse counterparts of the human genes and investigated which taste cells in mice express them. He discovered that taste cells that respond to bitter flavors generally express not just one or two of the receptor genes, but most of them. As a result, each individual cell should be able to detect a wide variety of bitter-tasting compounds. This may explain why the brain can't distinguish among bitter chemicals, because no matter which receptor type is activated, the cell will send the same signal to the brain. As a result, the brain receives “a single channel of information” with the simple message that this food is to be avoided, says Robert Margolskee, a taste researcher at the Mount Sinai School of Medicine in New York City. In addition, Hoon found that the receptors are made in the same taste cells as gustducin, a G protein necessary for the perception of bitter tastes. To Margolskee, whose lab discovered gustducin, that essential association nearly cinched the case.
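The convergence logic Margolskee describes can be sketched in a few lines of Python; the receptor and compound names are hypothetical, and the model only illustrates why the cell's output is the same no matter which receptor fires:

```python
# Toy model of the "single channel of information" idea: a bitter taste
# cell expresses many receptor types, and its message to the brain is
# identical regardless of which receptor a compound triggers.
# Receptor and compound names are made up for illustration.
CELL_RECEPTORS = {"receptor-1", "receptor-2", "receptor-3"}

COMPOUND_TARGET = {                 # which receptor each compound hits
    "bitter-compound-A": "receptor-1",
    "bitter-compound-B": "receptor-3",
}

def cell_signal(compound):
    """Report only 'bitter', never which receptor (or compound) fired."""
    if COMPOUND_TARGET.get(compound) in CELL_RECEPTORS:
        return "bitter"
    return None  # compound hit no receptor on this cell

# Two structurally unrelated compounds yield one indistinguishable message:
print(cell_signal("bitter-compound-A"), cell_signal("bitter-compound-B"))
```

Because the cell collapses every receptor hit into the same output, no downstream reader of that signal can recover which compound was tasted, only that something bitter was present.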

    But definitive proof that the family of genes does in fact encode the bitter receptors came when Jayaram Chandrashekar, a postdoc in Zuker's lab, showed directly that the receptors are activated by bitter-tasting compounds. He did this by separately putting each of 11 receptor genes into cultured cells that were engineered so that triggering the receptor would activate a dye. Chandrashekar then exposed the cells one at a time to several bitter compounds. He found that three receptors responded, each to different compounds. What's more, in a different test, the team showed that the activated receptors bind to gustducin, the first step in sending their bitter signal to the brain. The team was able to go even further to show that a mutation in the receptor molecule that recognizes the bitter chemical cycloheximide makes mice less able to taste that compound.

    These results led Margolskee to conclude that the molecules are “unqualified taste receptors, as opposed to ‘candidate’ receptors.” Those bona fide bitter receptors represent “an extremely powerful tool,” says Catherine Dulac, who studies the chemical senses at Harvard University. They will enable researchers not only to learn more about how the brain encodes taste, but also to develop antidotes for bitter flavors in medicines and foods. And for kids who hate brussels sprouts or taking their medicine, that would be a sweet outcome indeed.


    Iridium's Loss Is Astronomers' Gain

    1. David Malakoff

    A spectacular business flop is evoking sweet sorrow among radio astronomers. The once high-flying Iridium mobile phone company last week pulled the plug on its $5 billion satellite fleet and will eventually send the 68 orbiting craft into fiery death dives in Earth's atmosphere. That means an end to the electronic smog that has clouded sensitive telescopes. “I'm not going to say Iridium deserved it, but they certainly were not good neighbors,” says Willem Baan, director of Holland's Westerbork Observatory. The experience has also steeled astronomers' resolve to protect important frequencies.

    Iridium's globe-girdling constellation was supposed to be the next big thing in communications when it went live in late 1998 (Science, 2 October 1998, p. 34). But radio astronomers weren't thrilled, because the satellites produced static that interfered with the faint cosmic signals they study. In particular, Iridium threatened a 1612-megahertz signal produced by hydroxyl masers, blasts of laserlike radio waves that provide important insights into stellar evolution. After 6 years of tense negotiations, the company agreed to provide some unobstructed listening hours each day to radio telescopes in Europe, the United States, and India, and to fix the problem in newer satellites. That deal is now moot, however, as technical glitches and Iridium's high prices—the phones cost $3000 and calls up to $7 a minute—forced the company to shut down on 17 March.

    The Iridium episode has prompted astronomers “to become much more vigilant” about the interference threat from the growing communications industry, says Baan. In the United States, for instance, a recent government proposal to loosen standards on satellite radio emissions drew angry replies from 50 concerned astronomers, an unprecedented response. And researchers are organizing to protect key bandwidths at an international spectrum-allocation conference to be held in Istanbul in May.

    Meanwhile, Iridium's demise will also lighten the load on some optical astronomers. Solar panels on the satellites produce flashes that amateur sky watchers occasionally mistake for new celestial bodies, says Daniel Green of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. “At least we won't be getting these weekly reports from people saying they've discovered another naked-eye supernova,” he says.


    Can Celera Do It Again?

    1. Robert F. Service

    The upstart genomics firm is set to embark on an ambitious new effort in protein analysis, but it faces a huge challenge and some stiff competition from companies already in the field

    J. Craig Venter and Michael Hunkapiller proved the skeptics wrong once. In 1998, the two teamed up in a bold plan to sequence the human genome by the end of 2001, a full 4 years ahead of the finish date then projected by the publicly funded Human Genome Project. Venter, head of Celera Genomics Corp. in Rockville, Maryland—the firm created to carry out this mission—was the brains behind a novel genome sequencing approach. Hunkapiller, head of the scientific equipment powerhouse PE Biosystems, provided brawn: a suite of the company's experimental high-speed DNA sequencers. Despite strong doubts among genome researchers that Celera could pull it off, the project appears to be just months away from completion.

    Now the pair is preparing to push Celera beyond the genome to conquer the next frontier—the identification of proteins involved in human disease. To do so, Celera is launching a major push into “proteomics,” an effort to identify all the proteins expressed in an organism and then track their ebb and flow. The move, says Venter, is the next logical step in understanding the role of all the genes they've decoded. It is also a critical step in developing novel drugs and tailoring medical care to the genetic makeup of individuals. To pay for the new initiative, Celera raised $944 million in a stock offering earlier this month, much of which will be devoted to proteomics.

    This time, however, Venter and Hunkapiller are tackling a much more complex problem, and they will be facing competition from companies ranging from small start-ups to big pharma that have already started proteome projects of their own. “I don't think they are more credible than anybody else,” says Matthias Mann, chief scientific officer of Protana, a proteomics company in Odense, Denmark.


    But Venter, never one to understate his ambitions, boasts: “We're going to dominate in our own way. We're going to have the biggest facility and the biggest database.” He concedes that the huge amount of work needed to understand proteins and their interactions inside cells guarantees that many academic labs and other companies will also be players in the field. But, he says, “we're building a Celera-scale proteomics facility” capable of identifying up to 1 million proteins a day.

    Plans for the new facility are still coming together, Venter says, but it will likely consist of a fleet of up to 100 machines, including high-speed mass spectrometers for protein analysis, as well as additional protein separation devices. Celera also plans to boost the capacity of its $100 million supercomputer—which currently holds some 50 terabytes of genome data—by a factor of 10 to handle the expected torrent of protein data.

    Venter will have Hunkapiller's help getting his operation up to speed. Last week, PE—the parent company of both Celera and PE Biosystems—announced that it will form a proteomics research center at its PE Biosystems site in Framingham, Massachusetts, to create new high-speed machines. As part of that initiative, PE officials plan to pursue two technologies—one developed by Denis Hochstrasser and colleagues at the University of Geneva in Switzerland, the other by Ruedi Aebersold at the University of Washington, Seattle—that aim to do for protein analysis what the high-speed gene sequencers did for genome work.

    Outside observers say that with PE Biosystems' backing, Celera's move into proteomics is likely to be pivotal for the emerging field. “The genomics company stuck its flag in the arena of proteomics,” says William Rich, president and CEO of Ciphergen, a proteomics company based in Palo Alto, California. “It sends a message to the protein people that the gene people are not going to sit around and wait” for proteomics companies to give them the information they're looking for.

    In fact, the march of genomics companies into proteomics is well under way. Incyte Pharmaceuticals, one of Celera's chief genomics rivals, is 2 years into an extensive partnership with Oxford GlycoSciences (OGS), a proteomics company based in Oxford, England. OGS creates protein profiles of different tissues in both healthy and diseased states, and Incyte incorporates this information into a proteome database that it markets to pharmaceutical companies. Incyte signed up its first subscriber to its proteomics database last fall, giving it “a significant lead over other companies in developing proteomics databases,” asserts Incyte CEO Roy Whitfield. And like Celera, the company is also flush with cash. Incyte recently raised $620 million on the stock market, much of which is intended to bolster its proteomics work, says Whitfield.

    Other companies are pushing into proteomics as well. Virtually every major pharmaceutical company has a proteomics effort under way, says Hanno Langen, who directs proteomics research at Hoffmann-La Roche in Basel, Switzerland. Moreover, small proteomics firms, such as Genomic Solutions of Ann Arbor, Michigan, and Large Scale Proteomics of Rockville, Maryland, are gearing up for initial public stock offerings to raise money for expanded research.

    That makes Celera a late entry into the field, and it has some catching up to do. But it is betting nearly $1 billion that it can close the gap.

    The next step

    This move toward understanding proteins has emerged from the increasing recognition among genomics and pharmaceutical researchers that identifying DNA, or even messenger RNA (mRNA)—the nucleotide messenger that signals cells to produce a particular protein—is not enough. Neither DNA nor mRNA can reveal how much protein is produced inside a cell or what it does once created. Although researchers initially hoped that the presence of a large amount of a particular mRNA meant that copious quantities of the corresponding protein were being produced, “there is significant evidence that there is not necessarily a correlation between mRNA levels and protein levels,” says Philip Andrews, a proteomics researcher at the University of Michigan, Ann Arbor. Other factors complicate the picture as well, he adds. For instance, chemical modifications such as phosphorylation play a key role in controlling protein activity; these modifications cannot be detected by screening nucleotides. “The genome tells you what could theoretically happen” inside the cell, explains Raj Parekh, the chief scientific officer at OGS. “Messenger RNA tells you what might happen, and the proteome tells you what is happening.”

    Figuring out what's happening at the protein level won't be easy even for Celera. “Proteomics is a much more difficult problem than genomics,” says Andrews. Whereas the human genome remains largely unchanged among individuals, he explains, the expression of proteins varies widely. Protein expression changes dramatically from one tissue to another and even within single tissues over time as a person ages. What's more, thousands of chemical modifications occur after proteins are created that alter their enzymatic activity, binding ability, how long they remain active, and so on. Although there may be only some 100,000 human genes, the myriad of modifications may give rise to 10 million to 20 million chemically distinct proteins in a cell, says Andrews.
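The arithmetic behind Andrews's estimate is easy to make explicit; the figures below are simply the article's round numbers:

```python
# Implied arithmetic: if ~100,000 genes give rise to 10-20 million
# chemically distinct protein forms, each gene averages on the order
# of 100-200 modified variants. All figures are the article's estimates.
GENES = 100_000
PROTEIN_FORMS_LOW, PROTEIN_FORMS_HIGH = 10_000_000, 20_000_000

variants_low = PROTEIN_FORMS_LOW // GENES    # average variants per gene, low end
variants_high = PROTEIN_FORMS_HIGH // GENES  # average variants per gene, high end
print(f"~{variants_low}-{variants_high} distinct forms per gene, on average")
```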

    This complexity, Andrews and others say, makes it almost meaningless to consider a human proteome project—akin to the human genome project—to identify all proteins in every tissue. The best researchers can do is try to focus on changes in key proteins, such as those involved in disease and development. For that reason, skeptics argue that even with Celera's deep pockets it will not be able to sweep aside the competition. “I think it will be very hard for any company to be the dominant proteomics company, much more so than in genomics,” says Mann of Protana. Venter agrees—in principle. But he adds that few competitors will be able to match Celera's industrial approach. “We'll be working through every tissue, organ, and cell,” he says.

    Brute force

    In the current proteomics rush, most companies are taking more or less the same brute-force approach to determining which proteins are present in various tissues, a technique called two-dimensional (2D) gel electrophoresis. Researchers start with a protein extract from a tissue of interest and then add it to a sheet of polymer Jell-O. By applying electric fields across the length and width of the sheet, they separate proteins by their electric charge and size. The result is a series of up to several thousand spots, each containing one or more types of proteins.

    Once segregated into separate spots and stained, the proteins are typically cut from the gel one by one, chopped into fragments with an enzyme called trypsin, and dried. Then they are fed into a mass spectrometer that weighs each fragment, forming what amounts to a mass fingerprint of the protein's fragments. From that fingerprint, researchers can work out the likely combination of amino acids making up each fragment and then compare that to a genomics database to identify the corresponding DNA sequence. With the DNA sequence in hand, they can get a clearer identification of the protein. Researchers can then monitor changes in the expression of that protein to see whether it correlates with a disease state or perhaps a drug response.

    All of this takes time. Today's top-of-the-line 2D gel operations, complete with robots to cut apart the gels and computers to analyze the information, can study a couple of thousand proteins a day. As a result, it can still take months to figure out which proteins change their expression in one set of tissues.

    It's here that Venter and Hunkapiller hope to make a difference. PE Biosystems produces mass spectrometers, among other things. And according to Hunkapiller, mass spectrometers now in the prototype stage have the potential to go “orders of magnitude” faster than the current variety, making it possible to analyze hundreds of thousands of proteins per day. That scale of improvement in gene-sequencing machines enabled Celera and others “to ask whole new questions,” says Hunkapiller. “If we can do the same thing in the protein area, it should have the same effect.” By analyzing a large proportion of proteins in cells instead of just a select few, “you will see the things going on that maybe weren't so obvious,” predicts Hunkapiller.

    But this newfound speed will create bottlenecks upstream and require a faster front-end procedure to separate proteins and feed them into the high-speed mass spectrometer. This is where PE is looking to Geneva's Hochstrasser for help. Last fall, Hochstrasser and his colleagues published work on a new laser-based “molecular scanner” that automates the process of moving separated proteins from the gels to the mass spectrometers. The system applies an electric field perpendicular to the gel sheet; this field draws the proteins through two membranes. The first membrane is studded with trypsin, which digests all the proteins in a gel simultaneously as they move past. The fragments are then trapped by a second membrane, which is fed to the mass spectrometer. Finally, a laser marches along the membrane firing a steady stream of pulses in micrometer-sized steps, each of which blasts protein fragments into the mass spectrometer for fingerprinting. Here, too, the speed should reach tens of thousands of proteins per day, says Hochstrasser.

    Working in tandem, the molecular scanner and a high-speed mass spectrometer could be a powerful combination, says Andrews. “The idea is spectacular. But whether it will perform as it needs to remains to be seen,” he cautions. OGS's Parekh, for one, is skeptical. A big problem, he says, is how to quantify the amount of protein in each spot on the gel. This is necessary for identifying which proteins change in a stable way with disease—itself a first step to identifying molecular markers of disease and potential drug targets.

    The time-consuming approach of gel cutting does allow such quantification. But that's typically not the case when the spots are transferred, or blotted, to another membrane, says Parekh. “Some proteins don't blot. Others are lost in the process. So as soon as you blot, you lose the quantity information,” he says.

    PE is betting that Aebersold's technology will help. In the October 1999 issue of Nature Biotechnology, Aebersold and his University of Washington colleagues reported a new approach that uses stable isotopes to quantify numerous proteins in cell extracts with mass spectrometry. The advance is “outstanding,” says Hochstrasser, because it allows mass spectrometers to pin down protein levels from large numbers of proteins at once—a difficult proposition with today's technology. At this stage, it is not clear whether the new technique will work with a molecular scanner and a high-speed mass spectrometer. To find out, Aebersold is already working on joint research projects with PE scientists.

    Even if it takes a while to get the mass spectrometers up to top speed, Celera can still make considerable progress, says Venter, as the company will be using other proteomics tools as well. A key approach, he says, will be to create antibodies to all proteins. These antibodies can then be used to fish out of a sample both targeted proteins and those they interact with. That, Venter says, will help Celera build up a database of how proteins interact with each other in complex biochemical pathways—information that is likely to be valuable to drug companies aiming to intervene in those pathways at specific points.

    Incyte's Whitfield says he is not fazed by Celera's entry into the field. Even if PE and Celera manage to pull all these pieces together and launch a high-speed proteomics effort, other companies will also be developing their own high-speed approaches, he says: “We all understand that faster, cheaper, better is the way to go.” With Celera preparing to enter the field, Whitfield adds, “I'm sure there is going to be great competition.”



    Space Scientists Decry Stricter Export Rules

    1. Andrew Lawler

    Congressional moves to tighten technology regulations could restrict collaborations with non-U.S. scientists

    When Stanford University needed a proton detector for a NASA-funded satellite that would probe Einstein's theory of relativity, it ordered one from a Hungarian scientist living in Ireland. Last summer the detector was shipped to a Lockheed Martin facility near Denver for testing. But company officials, citing new rules on the export of sensitive satellite technology, said the scientist was not welcome without express permission from the State Department—a notoriously costly and time-consuming process. Stanford scrambled successfully to find another company that would allow the designer to test the instrument, and work continues on the payload (Science, 10 March, p. 1726).

    This incident and other episodes—including one in which universities were reluctant to bid on a program for fear of breaking the rules—have left researchers worried about the status of international collaborations. Lockheed's hard line stems from rules drawn up last spring by the State Department, at Congress's insistence, that are designed to prevent sensitive technology from falling into the wrong hands. Researchers fret that a strict interpretation of the rules could have dire consequences, including jail time if, for example, they discuss spacecraft designs with foreign graduate students without prior approval. “This is a terrible problem,” fumes Stanford physicist and engineer Brad Parkinson. “It flies in the face of reason.” But officials at State insist that scientists are exaggerating the threat and that the regulations should not affect research in any dramatic way. “It's an overreaction,” says William Lowell, chief of State's office of defense trade controls. “I'm sure we can address 95% of these problems.”

    The dispute is the latest battle in a long-running war between defenders of national security and scientists who work in sensitive areas. In the early 1980s, for example, scientists fought attempts by the Reagan Administration to regulate the flow of research equipment and data overseas through U.S. arms traffic regulations, which control the import and export of technology, such as satellite technology, with military applications. A 1985 executive order marked a truce by declaring that the problem would be handled by classifying sensitive materials as secret rather than by imposing restrictions on exports.

    But in 1998 Congress told the Administration to take a tougher stance on technology exports, citing the alleged transfer of sensitive satellite technology to China in the course of that country's launching of U.S.-built payloads. As part of the annual authorization of defense programs, it ordered the Administration to shift responsibility from the Commerce Department, which had exercised oversight since the early 1990s, to the State Department. Commerce traditionally seeks ways to promote trade and has a license exemption for technology connected with fundamental research, while the State Department has a reputation for taking a more hawkish position on the transfer of technology.

    The new rules put spacecraft technology, with the exception of information in the public domain, on the department's roster of technologies deemed militarily sensitive. That includes not just hardware but also any technical data on the design, manufacture, use, and repair of an item, as well as the act of providing such data to foreigners. “Under existing rules, [even] speech can be deemed subject to licensing,” says one official at the National Research Council (NRC).

    A small cadre of policy-minded researchers has begun to protest what they see as a grave danger to scientists who have overseas collaborators or who purchase equipment from abroad. “This is chilling the climate” for space research and “putting a burden on a lot of researchers,” warns Claude Canizares, a Massachusetts Institute of Technology physicist and chair of the NRC's space studies board. In a 4 February letter to NRC chair Bruce Alberts, he warned of “serious repercussions in the university and industrial communities” that threaten to undermine international cooperation on space projects. Canizares urged Alberts to organize a workshop on the issue, and on 16 March he briefed NASA Administrator Dan Goldin during a meeting of the NASA Advisory Council, which named Canizares chair of a subcommittee to study the issue.

    State Department managers maintain that the rules are no stricter than when the department was previously in charge of satellite-related licensing and should not hinder researchers. “We don't regulate fundamental and basic research at universities, and there is no intention by State to bring about a change in scientific research,” says Lowell. There are exemptions in the rules for university researchers, he added, giving them more flexibility than industrial scientists and engineers.

    NASA's Robert Tucker, who handles the issue for the space agency, notes that foreign scientists who are full-time employees at a U.S. university do not need licenses to be involved in satellite work. But given the continuing congressional interest in the issue—legislators held several hearings last year—and related judicial decisions, nervous university lawyers and companies such as Lockheed Martin are interpreting the rules strictly in cases like the one involving the Stanford payload, called Gravity Probe B. For example, some university teams were reluctant even to respond to a recent NASA request for proposals for scientific payloads in a small satellite program, Canizares says, out of concern that they would need export licenses to discuss technical details of the payload with foreign-born students. “People are so worried about this, there's a cascading of conservatism,” Canizares told Goldin. Both Canizares and Parkinson, who chairs the NASA Advisory Council, urged Goldin to convey the community's concerns to Administration officials.

    The controversy comes at an awkward time for Goldin, who just 2 months ago announced an initiative to improve and expand NASA's relationship with universities. But NASA officials are loath to risk antagonizing Congress in pressing their case. In February, the agency notified contractors that they are responsible for getting the necessary export licenses for hardware, software, technical data, or technical assistance, as well as for situations in which “the foreign person” has access to data or software. “I would not like to see hearings [involving] NASA-funded researchers who face criminal prosecution,” Goldin told Canizares at the advisory council meeting. But he nevertheless agreed to examine the issue and pledged to work cooperatively with universities.

    The controversy doesn't stop with satellites. Other academic groups are fearful that congressional hostility to many types of international exchanges may spread to work in other areas. For example, Department of Energy (DOE) researchers must navigate much stricter rules than in the past when interacting with foreign colleagues and graduate students, part of the fallout from allegations of lax security at nuclear weapons labs. In response, Rachel Claus, counsel for the DOE-funded Stanford Linear Accelerator Center, has urged universities to stand united and to refuse to attend “U.S. citizens only” meetings held by timid hosts. The Washington-based Council on Government Relations and the Association of American Universities have also begun to raise the issue with Administration officials.

    But most scientists seem to be talking to themselves. Lowell said last week that he has received no complaints from the scientific community about the rules regarding spacecraft. At the same time, he says the department is only a week or so away from revising those regulations based on vocal concerns from industry. “If scientists want to make changes,” he says, “this is the time.”


    Controversial Cancer Therapy Finds Political Support

    1. Jocelyn Kaiser

    Some members of Congress and presidential hopefuls are lobbying the FDA to let a 4-year-old boy receive an unapproved treatment

    For nearly 2 decades, Texas physician Stanislaw Burzynski has battled the medical establishment and federal officials over his controversial treatment for cancer. Many patients who have flocked to the Burzynski clinic outside Houston claim to be cured. But the Food and Drug Administration (FDA) maintains that his drugs, dubbed antineoplastons, have not been shown to be either effective or safe and has tried to shut him down. Burzynski prevailed in a key court battle in 1997 and continues to practice. But under FDA rules, he can only use these drugs in experimental trials monitored by the agency, and only on patients who have exhausted conventional therapies. Now Burzynski's powerful allies in Congress and on the presidential campaign trail have launched a major lobbying campaign and media blitz to overturn that rule. It is the latest saga in the long-running battle over who should control access to unorthodox medical treatments.

    At the center of the furor is a 4-year-old boy with brain cancer, Thomas Navarro, whose parents want him to have access to Burzynski's unapproved treatment. The child's plight has been broadcast on NBC Nightly News and last week was the focus of a six-page spread in People magazine. Representative Dan Burton (R-IN), a longtime Burzynski supporter, has introduced a bill in Congress, named for the boy, that would strip FDA of its power to protect patients from clinical trials where safety is an issue. Some in the medical establishment fear this latest furor could open the floodgates for patients who want access to untested, and possibly dangerous, therapies. Burton's bill, in particular, is drawing heavy criticism. A staffer for Henry Waxman (D-CA), the ranking Democrat on the House Government Reform and Oversight Committee, claims it would undo the FDA's system for protecting patients from the risks of experimental drugs. Thomas Moore, an expert on FDA policy at George Washington University, agrees. “This is like trying to abolish the criminal justice system because you disagreed with one decision made by a judge,” he says.

    The saga began in the early 1970s, when Burzynski, who had moved to Texas after earning medical and biochemistry degrees in Poland, first isolated what he called antineoplastons from blood and urine. Burzynski claims that these compounds, a mixture of peptides that he now produces synthetically, can reprogram cancer cells so that they die. In the late 1970s, Burzynski began treating cancer patients with this home-brewed cocktail, soon attracting a wide following.

    The FDA took him to court in 1983, charging him with selling unapproved drugs across state lines. The legal wrangles dragged on for some 14 years, as various grand juries and U.S. attorneys investigated his practice. The 1983 FDA suit led to a decision that Burzynski could use the drugs only in Texas. In 1996 a U.S. District Court judge ruled that he could treat patients only in FDA-approved clinical trials. A year later, Burzynski was acquitted of charges of illegally shipping the unapproved drugs across state lines. Burzynski now has more than 70 protocols under way, phase II trials designed to test the efficacy of antineoplaston treatment for various tumors. Other researchers have also been evaluating the various compounds in Burzynski's mixture to determine whether any of them work. Evidence so far is inconclusive.

    The Navarros, an Arizona family, found Burzynski on the Internet last October. Their son had just undergone surgery for an aggressive type of brain tumor known as a medulloblastoma. With follow-up radiation and chemotherapy, the survival rate for this type of cancer is at least 70%. But without those treatments, the tumor almost always reappears.

    The Navarros were concerned about the side effects of those treatments. Radiation therapy, in particular, can be neurotoxic in children and can lead to a drop in IQ of up to 20 points. To the Navarros, Burzynski's course of treatment—as he describes it, a nontoxic medicine that is pumped into a patient's veins—promised a cure with no side effects. At the parents' urging, Burzynski asked the FDA to allow him to treat Thomas with antineoplastons. The agency refused, saying that the boy first had to receive standard treatments, which have a high likelihood of success.

    The Navarros enlisted the help of Burton, chair of the House Government Reform Committee and a longtime supporter of alternative medicine and Burzynski himself. Burton wrote the FDA in December asking for “your assistance” in providing Thomas with the antineoplaston treatment and lamenting “personal and institutional bias against antineoplastons, Dr. Burzynski, and other unconventional cancer protocols.” In a 14 January letter in response, FDA associate commissioner for legislation Melinda K. Plaisier said that “it would be unethical and medically inappropriate” to allow Burzynski to treat Thomas in lieu of standard therapy, which has been demonstrated to be beneficial. She cited “the absence of any clinical data to suggest that this [antineoplaston] treatment may be beneficial” and “the fact that it could be harmful.” The harmful effects, say FDA officials, include reports of toxicity from the high sodium content of the drugs. FDA officials say the Navarros are risking their son's life by withholding conventional treatment. “We're standing behind this line. This should not be allowed,” says Dianne Murphy, head of pediatrics at the FDA Center for Drug Evaluation and Research.

    Many oncologists and medical experts back the FDA. Although the Navarros' concerns about the side effects of radiation and chemotherapy are understandable, they say, the risks have been blown out of proportion in the Navarros' minds. “Unfortunately there are [side effects], but not to the extreme it's being painted,” says Archie Bleyer of the M. D. Anderson Cancer Center in Houston. Bleyer chairs the Children's Cancer Group, which includes experts across North America. “Leaving a child to die [without therapy] is worse than taking the risk of neurotoxicity from today's radiotherapy,” says Bleyer, whose center has offered to treat Thomas.

    That argument hasn't swayed the Navarros, whose message of the people versus the government has caught the attention of Republican big guns. In January, presidential candidate Alan Keyes gathered the signatures of then-candidates John McCain, Gary Bauer, Steve Forbes, and Orrin Hatch on a letter asking Health and Human Services Secretary Donna Shalala to “expedite a decision on allowing the medical treatment chosen” by the Navarros. (George W. Bush expressed support but did not sign the letter.) As Science went to press, the Navarros continued to press their case against FDA from a Houston hotel. (Thomas is “fine,” and his last magnetic resonance imaging scan showed no tumor reappearance, says Donna Navarro.)

    Meanwhile, Burton and 18 other sponsors have introduced a bill (HR 3677), the Thomas Navarro FDA Patient Rights Act, that would override the FDA's power to bar patients from participating in a protocol it deems unreasonably risky because an effective therapy already exists. The bill would prevent FDA from using that reason to stop a protocol as long as the patient or patient's parents acknowledged in writing that they were opting for an unapproved treatment. The bill is so sweeping that congressional experts say it is unlikely to go far in its current form. Even so, the publicity the case has generated could provide added fuel for those patients and congressional representatives seeking wider access to unapproved therapies.


    Odd-Shaped Pieces Confuse Puzzle of Galaxy Evolution

    1. Ann Finkbeiner*
    1. Ann Finkbeiner is a science writer in Baltimore, Maryland.

    Astronomers hoped distant images would clarify how galaxies grow and change, but the big picture still hasn't come into focus

    What fossilized shells and bones are to the history of life, galaxies are to the history of the universe. Astronomers have collected galaxies in great variety and number, ranging from newborn to elderly, dim to brilliant, shapeless to orderly, at distances as far back as anyone can see. With large enough collections, cosmologists once hoped, they would see the galaxies falling neatly into a straightforward history leading from clouds of gas in the early universe to the familiar, orderly spirals and ellipticals of today. Evolutionary biologists could have told them it wouldn't be that simple.

    Even a few years ago, scientists had studied only a handful of galactic specimens. But recently, galaxy collections have grown rapidly. Using new detectors that operate at infrared and submillimeter wavelengths, along with the Hubble Space Telescope's (HST's) two Deep Field views of the early universe, cosmologists have turned up exotic fossils: distant ultraviolet balls and nearer little blue messes, all forming stars at rates that range from moderate to vigorous; huge, smudgy, red galaxies forming stars so furiously that they give off a good fraction of all the light in the universe.

    The distant galaxies' properties are still uncertain, their masses and exact shapes mostly a mystery. Cosmologists now don't know whether they are dealing with closely related creatures, or whether they are trying to construct an evolutionary history from cows, rattlesnakes, and squid. “We don't know whether we're looking at different populations or at one population in different ways,” says Simon Lilly of the University of Toronto. But cosmologists are undismayed: “It's a big zoo out there,” says Mark Dickinson of the Space Telescope Science Institute (STScI) in Baltimore, Maryland, but “it's a zoo with some order, and we're trying to piece it together.”

    Ill-fitting pieces

    A generation ago, the pieces seemed to fit. Most cosmologists believed that galaxies were born large and fully formed and have simply ticked along quietly ever since, fading as they aged. In that case, early galaxies should be bigger and brighter than present galaxies, but otherwise they should look much the same. The problems began when astronomers started to see regularly beyond a distance they call z = 1. The “z” is redshift, the extent to which light has been stretched to longer, redder wavelengths by the universe's expansion. Galaxies ride the expansion; the farther away a galaxy is, the faster it is receding from us, and the redder its light becomes. Ultraviolet light turns optical, visible light becomes infrared, infrared light stretches toward the radio spectrum. Because reddened, distant light is older light, astronomers also invoke redshift to measure time. At z = 1 they are looking halfway back to the big bang; at z = 3, 84% of the way back; at z = 5, 91%.
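    The stretching described here follows the standard relation between emitted and observed wavelength, λ_observed = (1 + z) × λ_emitted. A minimal sketch, with the example wavelengths chosen for illustration:

```python
def observed_wavelength(emitted_nm, z):
    """Cosmological redshift stretches every wavelength by a factor of (1 + z)."""
    return emitted_nm * (1 + z)

# Ultraviolet starlight (150 nm) emitted by a z = 3 galaxy arrives in the optical:
print(observed_wavelength(150, 3))  # 600 nm
# Visible light (500 nm) emitted by a z = 1 galaxy arrives in the near-infrared:
print(observed_wavelength(500, 1))  # 1000 nm
```

    This is why, as the article notes, astronomers must observe in our infrared to see a distant galaxy in its own optical range.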

    Ten years ago, z = 1 was as far as astronomers could easily see. But in the 1990s, a new technique opened up legions of increasingly distant galaxies, picked out by a feature in their spectrum called the Lyman break. Hydrogen atoms scattered through space between those galaxies and our telescopes absorb the ultraviolet light in a galaxy's spectrum, producing a sharp drop-off, or break, in certain wavelengths. By looking for this Lyman break, astronomers can pick out the light from galaxies shining when the universe was maybe a billion years old. The redshifts of Lyman break galaxies are scattered between z = 2 and 5.6.

    Charles Steidel and colleagues at the California Institute of Technology in Pasadena, who were among the first to put this technique to work, have now collected nearly 900 Lyman break galaxies. In general, Steidel says, the Lyman breaks are “not very interesting—sort of like little balls, relatively small.” But Steidel's team was looking at optical wavelengths. Corrected for redshift, that light had originated in the ultraviolet—the wavelengths of forming stars. Because stars form in isolated clumps within a galaxy, cosmologists suspected that the ultraviolet might be giving a distorted view of the distant galaxies' appearances. To see the true shapes of any galaxies at such large distances, cosmologists needed to look in the galaxies' own optical range: our infrared.

    One hotbed of distant galaxies is the spectacularly beautiful Hubble Deep Fields, a pair of 10-day exposures of small areas in the northern and southern skies that STScI produced in the late 1990s. In 1999 several teams observed both Deep Field regions with a new infrared camera on the HST, called the Near-Infrared Camera and Multi-Object Spectrometer, or NICMOS.

    One team—Hyron Spinrad and Andrew Bunker of the University of California, Berkeley, and Rodger Thompson of the University of Arizona, Tucson—compared optical and infrared images of the 100 brightest galaxies in Deep Field North. At redshifts around 2 and 3, Spinrad says, the galaxies even in their own optical wavelengths showed a range of peculiar shapes: “automotive accidents, bow shocks, multiple components.” But “things clean up” around z = 1, he says. “There's a trend toward symmetrizing.”

    Dickinson, leader of a team now painstakingly comparing the optical and infrared images of 250 Deep Field North galaxies from redshifts 0.5 to 3, says his preliminary results confirm the trend. “Out to redshifts of 1, we have no trouble finding giants, normal spirals, and ellipticals. But over redshifts of 2, whatever these things are, they don't look normal.” His conclusion: “You're not being fooled by the wavelength you're looking in. They really are disturbed systems.”

    Meanwhile, astronomers observing even longer wavelengths have found a new population of galaxies previously hidden in dust. Dust in galaxies absorbs the ultraviolet and optical starlight, heats up, and reradiates the energy as infrared light. The light of galaxies at high redshifts gets stretched into so-called submillimeter wavelengths, between infrared and radio. In late 1996, several international teams used a camera called SCUBA (for Submillimetre Common User Bolometer Array) on the James Clerk Maxwell Telescope in Hawaii and found a handful of surprising galaxies with redshifts between 1 and 3.

    The submillimeter galaxies aren't numerous—in an area of the Hubble Deep Field with hundreds of galaxies, SCUBA finds just five—but they are outrageously bright. By calculating how much starlight would be necessary to heat dust to the energies they observe, astronomers have concluded that each submillimeter galaxy is forming stars at least several hundred times faster than our galaxy does. Taken all together, says Toronto's Lilly, who is on one of the teams, this small class of galaxies “has produced a significant fraction of all the light in the universe.”

    Dusty picture

    To make sense of all these multifarious galaxies, cosmologists must try to fit them into a larger evolutionary picture. In the most popular scenario, known as hierarchical formation, the first galaxies to condense out of primordial gas clouds are small disks. The disks can merge and form brilliant galactic cores, called spheroids. Some spheroids probably become elliptical galaxies; others pull the ambient gas into disks and become spirals. Nearby galaxies may collide, igniting frenzies of star formation, then settle slowly into ellipticals. Later still, and slowly, smaller gas clouds light up with stars, merge with other gas clouds, or fall into nearby galaxies. In this violent, almost biologically messy process, astronomers would expect to see increasing numbers of merging galaxies with increasing distance.

    But exactly where do the distant observations fit into the scenario? “There,” Steidel says, “you're on thin ice.” For one, although the submillimeter galaxies are certainly bright enough to be the results of mergers, no one knows whether they have the messy shapes of galactic wrecks or the symmetry of spheroids: SCUBA's resolution is so low that they look like big red smudges. Accordingly, matching a submillimeter galaxy to its optical counterpart is an exercise in probability. “At the start, we all had a tendency to pick something [on the optical map] faint and close to the right position, and say, ‘That's it,’” says Ian Smail of the University of Durham in the United Kingdom, a researcher on one of the teams.

    Recent follow-ups with the Keck Telescopes and HST, coupled with high-resolution radio observations, have assigned fairly believable optical counterparts to at least half of the submillimeter galaxies. Those observations suggest that the submillimeter galaxies most closely resemble a small, local class of galaxies called Ultra-Luminous Infrared Galaxies—more elegantly, ULIRGs or ULIGs. ULIGs are ultrabright, ultradusty objects that most likely occur when galactic mergers set off a violent burst of star formation. “To the first order, I reckon all these objects are similar to ULIGs,” Lilly says. “I don't know of any properties that are different.”

    In any case, astronomers suspect that the submillimeter galaxies are transients. “We only see them when they have the awful accidents of merging and turn into spectacular firework shows,” says Len Cowie, an astronomer at the University of Hawaii, Honolulu. Smail says he thinks of them “as an event rather than as a galaxy.”

    Similar disputes arise over Lyman break galaxies. At redshifts between 2 and 4.5, the faint blue blobs could be almost anything. Because Lyman breaks cluster the way today's big galaxies do, STScI's Dickinson says, “we'd sort of like to say that Lyman breaks turn into spheroids.” If so, then some of them, at least, could be the same sorts of objects as the submillimeter galaxies, only viewed at different wavelengths.

    Steidel thinks that is possible. Lyman breaks “have a huge range of luminosities, maybe a factor of 100,” he says. “It's a continuum of dustiness and luminosity, and [SCUBA is] seeing the dustiest and most luminous.” But Cowie thinks another step is necessary to boost the Lyman breaks to high energy. “I don't at any level think the submillimeter objects are just the bright-end flavor of the [Lyman] break galaxy population. It's when Lyman breaks merge to form spheroids that we get the bright submillimeter population.”

    Needed: more pieces

    What's needed to settle the debates is—as usual—more data. Astronomers need to know the galaxies' true shapes and their interior motions, from which they can calculate the galaxies' masses. The Next Generation Space Telescope is designed for just those kinds of observations. New infrared and submillimeter observatories are in various stages of planning; in particular, the Millimeter Array, an interferometer to be constructed in Chile within the next decade, will see much more detail than SCUBA can.

    Until they can make those detailed observations, some astronomers are taking a more impressionistic approach to galactic evolution by measuring how the total rate of star formation throughout the universe has varied over time. To measure this cosmic life force, says Piero Madau of STScI, “you count the galaxies, measure their light, convert the light to a star formation rate, [and] bin it up by redshift space.”
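    The recipe Madau describes amounts to a simple binning exercise. In the sketch below, every galaxy and number is hypothetical, invented only to show the bookkeeping; the star formation rates are assumed to have already been converted from each galaxy's measured light:

```python
# Each entry pairs a galaxy's redshift with its estimated star formation rate
# (arbitrary units). All values here are illustrative, not real measurements.
galaxies = [
    (0.2, 1.5), (0.6, 2.1),   # nearby, relatively quiescent
    (1.2, 6.8), (1.7, 7.4),   # rising toward z ~ 1
    (2.5, 8.9), (3.4, 8.2),   # high and roughly level at early times
]

# "Bin it up by redshift space": total the star formation rate per interval.
bin_edges = [0, 1, 2, 4]
totals = [0.0] * (len(bin_edges) - 1)
for z, sfr in galaxies:
    for i in range(len(totals)):
        if bin_edges[i] <= z < bin_edges[i + 1]:
            totals[i] += sfr

print(totals)  # total star formation rate per redshift bin, lowest redshift first
```

    A real measurement would also correct for survey volume and for galaxies too faint to count, which is where most of the disagreement between wavelengths comes in.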

    In the past 2 years, such measurements have been made in ultraviolet, optical, infrared, and submillimeter wavelengths. All show roughly the same trend: The star formation rate is low at present, rises as you go back in time, then levels off and remains high as far back as anyone can see. In other words, the early universe was alive and jumping. Then, says Princeton cosmologist Jim Peebles, it gradually “ran out of gas and is now just living off its remittances.” The drop-off point: a redshift around 1, about when most galaxies seem to have settled into symmetry.

    Exactly what the cosmic star formation rate implies about galaxy evolution, no one knows. At the least, the rate sets a rough history of the universe as a system, says Lilly, who helped invent the idea. “We've been trying for donkey's years to understand formation and evolution of galaxies,” he says, “floundering around in a morass of Lyman breaks and submillimeter galaxies. If you're interested in the universe as a system, all these little galaxies might be something of a nuisance.”

    Ideas Fly at Gene-Finding Jamboree

    1. Elizabeth Pennisi

    Computer experts and fruit fly geneticists worked side by side in an unusual jamboree to make sense of the new Drosophila genome

    Call it an idea frenzy, a discovery lek, a Woodstock for science nerds. It's that moment every scientist lives for—a time when discoveries come fast and furious, prodded by the collective resourcefulness and creativity of researchers so caught up in their work that eating and sleeping are unwanted interruptions. In November 1999, about 45 bioinformatics experts, protein specialists, and fruit fly biologists experienced just such a moment when they gathered in Rockville, Maryland, to take a first look at the newly sequenced Drosophila genome and, more important, to see whether they could make sense of it. “It was some of the most exciting science I've done in a long time,” raves William Gelbart, a developmental geneticist at Harvard University.

    At stake was the reputation of an upstart sequencing company, Celera Genomics of Rockville, Maryland, and, in some ways, the future of genome sequencing. In 1998, the company's founder and president, J. Craig Venter, shocked the scientific community when he announced that his new company, formed in partnership with PE Corp., intended to sequence the entire 3-billion-base human genome in just 2 years—well ahead of the publicly funded Human Genome Project. What raised the eyebrows and the ire of the sequencing community was Venter's claim that he could accomplish this gargantuan feat with a sequencing strategy previously thought useful only for small microbial genomes (Science, 20 October 1995, p. 397; 3 September 1999, p. 1558). In contrast to the deliberate, chromosome-by-chromosome approach being pursued by the Human Genome Project, Venter planned to break the entire genome into small pieces, sequence them in one fell swoop with a phalanx of very fast and very expensive new PE sequencing machines, and use some of the world's most powerful supercomputers to assemble the sequenced fragments in the correct order.

    As a dry run—and to prove to the world that this so-called shotgun strategy would work—Celera first took on the 180-million-base genome of the fruit fly Drosophila melanogaster. Venter set up a collaboration with the Berkeley Drosophila Genome Project (BDGP) and its European counterpart to help guide the effort and interpret the data, and he set his new sequencers to work on the fly DNA in May 1999 (Science, 5 February 1999, p. 767). By late fall, the sequencing was finished and the computers had assembled the pieces together. That's where the November meeting came in.

    Sequencing and assembly were just the first steps. The tough task was to pinpoint the genes and begin to figure out what they do, a process called “annotation” in the jargon of genomics. Venter, Celera's Mark Adams, and Gerald Rubin, director of the BDGP, hit upon an annotation strategy that was as bold as the sequencing venture that preceded it: Just as Celera had sequenced and assembled the fly genome all at once, they would interpret the entire thing in an intense annotation “jamboree.” They would essentially lock biologists and computer scientists in the same room to get the job done. Fly geneticists were eager to participate, if only so they could get a first look at the long-awaited Drosophila sequence. And Celera sweetened the deal by picking up the tab.

    By all accounts, this slam-dunk approach, which took 11 days, worked even better than expected. The results of this effort were announced in February and are published in the following series of papers in this issue. Although the participants agree that the current descriptions and classifications of Drosophila genes represent a “first pass,” it is still “a pretty good job,” notes J. Michael Cherry, a bioinformaticist at Stanford—especially since the entire process of sequencing and annotation took less than a year. What's more, the workshop yielded a plethora of insights into the fly.

    Although the fly sequence still has about 1000 small gaps, the results provide confidence that shotgun sequencing will work for other complex genomes—indeed, researchers involved in the publicly funded mouse genome project recently decided to adopt the approach (Science, 18 February, p. 1179). And Venter has erased most people's doubts that he will complete the human sequence later this year.

    A shaky start

    This positive outcome seemed far from assured when the fly biologists first arrived at Celera last November. The company's timetable had slipped a few weeks, so the software specialists had barely a week to run the sequence data through the gene-finding programs to identify the beginnings, ends, and coding sections of what some thought would be about 20,000 genes. When Tom Brody, a fruit fly geneticist at the National Institute of Neurological Disorders and Stroke, arrived 3 days into the jamboree, “everything was in [a] shambles,” he recalls. The researchers had found fewer than 4000 genes and had yet to run the computer program that compares selected sequences from other organisms to the entire fly genome to look for matches and thus hints about what genes and proteins have been conserved through time. The visiting scientists were frustrated because they had submitted their favorite sequences beforehand and expected to be able to begin analyzing the results as soon as they arrived. “They really weren't ready for us,” says Brody.

    Within days, however, the group rallied. “It was as close as I've come to being in a small start-up where everyone is working 20 hours a day,” Cherry recalls. “We were really productive, doing stuff no one thought was possible.” Each day, biologists and programmers spent hours at the computer screen, occasionally looking over each other's shoulders and discussing their findings with whoever was in the adjacent cubicle. A gong called them to the conference room for takeout lunches and dinners and for impromptu mid-afternoon seminars to discuss the day's finds.

    As expected, Drosophila genes were at first harder to find than genes in either the nematode Caenorhabditis elegans or yeast—the two largest genomes sequenced until then. Instead of being simple stretches of sequence, Drosophila genes have more interruptions, called introns, in the coding regions. In addition, many genes can be expressed in different ways and thus have several “start” sequences, or alternative splice sites, within them. Finally, there's just more DNA between the genes to contend with than there has been in other organisms sequenced to date. “It's more of a hunt to piece these things together,” says Cherry.

    While some of the bioinformaticists worked on finding the genes, BDGP's Suzanna Lewis and Celera's Mark Yandell and Jennifer Wortman guided the rest of the computer experts' effort to use programs to translate those genes into proteins and check to see whether they matched known proteins in yeast, nematode, or human. One new database of protein domains, InterPro, just developed by Rolf Apweiler of the European Bioinformatics Institute and his colleagues, analyzed the proteins predicted from the fly sequences. Based on similarities with known proteins, InterPro decided whether each was, say, a protease enzyme that chews up other proteins or a membrane protein; it then classified the proteins into one of some 2000 possible families.

    The biologists, many of them experts in particular protein families, pored over the results. They could spot when InterPro and other programs predicted a gene that was actually two genes, or lumped a protein into the wrong family. Feedback to the bioinformaticists led to almost instant improvements in the computer programs. Over the course of the day, the biologists came up with new ways to analyze or portray data. When they went back to their hotels for beer and some sleep, the programmers wrote the new code and ran the analyses, much to the amazement of their slumbering colleagues. “It was incredible,” says Brody. “Every day they had a new way to look at genes and a new way to annotate them.”

    Discovery frenzy

    At one point, the biologists were concerned that the sequence might be incomplete, as the fly seemed to have fewer genes than the nematode. To find out, Cherry checked whether the sequence contained the 2500 genes already known to exist in the fruit fly. It did. Cherry found all but 18 of the previously identified genes in the main scaffolds of the genome, sections where there was a lot of overlapping sequence and few gaps. He found another 12 in pieces of sequence that hadn't yet been fitted into these larger sections. When Cherry reported his findings that afternoon and wrote the six missing genes on the board, two fruit fly veterans stood up and said the first and likely another were known experimental artifacts. With further investigation into the remaining five, “we got down to one we didn't know about,” meaning that they couldn't find it in their data, recalls Celera's Adams. It proved to be contaminating sequence from the polymerase chain reaction (PCR) probe used to isolate the gene, not fruit fly DNA at all. “As far as we know, we're not missing anything,” he adds. The number of genes finally topped out at 13,600.
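
    Cherry's completeness check amounts to asking, for each previously known gene, whether its sequence can be located in the assembled scaffolds, in the not-yet-placed fragments, or nowhere at all. The toy sketch below illustrates that bookkeeping with invented sequences and exact substring matching; a real check would use an alignment tool such as BLAST.

```python
# Toy completeness check: which known genes can be found in the assembly?
# All sequences are invented; real checks use alignment, not exact matching.

scaffolds = ["ATGGCCTTAGGA", "CCGGATATCGTA"]   # assembled sections with few gaps
unplaced = ["TTAACCGGTT"]                      # fragments not yet fitted in

known_genes = {
    "gene_a": "GCCTTA",   # present in a main scaffold
    "gene_b": "AACCGG",   # only in an unplaced fragment
    "gene_c": "TTTTTT",   # found nowhere: a candidate artifact
}

def locate(gene_seq):
    """Classify a known gene as placed, unplaced, or missing."""
    if any(gene_seq in s for s in scaffolds):
        return "scaffold"
    if any(gene_seq in s for s in unplaced):
        return "unplaced"
    return "missing"

report = {name: locate(seq) for name, seq in known_genes.items()}
```

    Genes that land in the "missing" bucket are exactly the ones the jamboree biologists scrutinized, since a miss can mean either an incomplete assembly or, as here, an old experimental artifact.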

    That small number of genes was just one of several surprises. Fly biologists expected more genes because C. elegans has 18,000 or so (Science, 11 December 1998, p. 1972), even though it consists of about 1000 cells whereas the fruit fly has 10 times as many cells. It turns out that these two multicellular organisms also differ in the number of proteins they use to carry out critical functions. Researchers had expected to find larger protein families in the more complex species. Instead, “a handful of families are greatly expanded in C. elegans, but in Drosophila those families are more modest in size,” Adams notes. Certain receptors involved in development and nerve cell signaling are one example. The nematode has 1100 of these, but Drosophila has a mere 160, the big difference being in olfactory receptors. “We expected to find many more,” Brody notes. The fly also has far fewer hormone receptors than either the worm or vertebrates. On the other hand, Drosophila has 199 trypsinlike peptidases—which are involved in signaling in digestion, development, and the immune system—compared to just seven in the worm and one in yeast. Fruit fly biologists will now have the challenge of figuring out why these proteins are so prominent in this organism.

    With discoveries like these popping up almost hourly, any remaining skeptics soon came to appreciate the value of obtaining a complete genome sequence of the fly. Such comparisons among species “give you the potential for so much more insight into how these organisms work,” says David Coates, an expert on proteases at the University of Leeds in the United Kingdom. They can also indicate which model organism is best suited for various studies. The nematode, for instance, seems to be a better model for studying certain kinds of membrane proteins, whereas the fruit fly is probably better for experiments involving certain proteases. “Everyone was very excited when I said that C. elegans wasn't the be-all and end-all,” he adds, alluding to the long-running debate between fly and worm biologists over which organism is superior.

    The fly's merits as a model for studying human biology and disease were bolstered when jamboree participants compared fly genes with all known human genes. As the researchers report on page 2204, the fly has counterparts for 177 of 289 genes known to be involved in human disease, including the tumor suppressor p53 gene, as well as many genes involved with the insulin pathway.

    These comparisons have whetted researchers' appetites for the human genome. But the human genome will be more complicated than that of the fly, and concerns are mounting that the sequencing community will not be ready to annotate it well (see sidebar). “A lot of people have spent a lot of time on sequencing, and they haven't spent a lot of time on annotating,” complains BDGP's Martin Reese. “There's a big gap” (Science, 15 October 1999, p. 447).

    But as critical as annotation is to interpreting a sequence, it is by no means the end. Annotation provides just a “nice approximation of the truth,” says Apweiler. Not until biochemists, physiologists, and other wet-lab researchers have verified those predictions experimentally will anyone know for sure that they are right. Determining such truths will likely occupy biologists for much of the next century.

    Are Sequencers Ready to 'Annotate' the Human Genome?

    1. Elizabeth Pennisi

    Imagine trying to put together a car engine when all you have is a parts list by numbers, no name or description. That's about how the rough draft of a genome looks to a biologist. To be useful, a genome must be annotated—that is, documented to provide at a minimum the putative start, stop, and structure of each gene. Biologists benefit more if information is included about the predicted gene's product and about similarities to other known or predicted genes and proteins. Only then can a researcher begin to piece together how genes and proteins interact to make life possible.

    A bare-bones “parts list” for humans should be available later this spring: A rough-draft sequence is being assembled by a consortium of researchers funded by the U.S. government and Britain's Wellcome Trust, and Celera Genomics Corp. of Rockville, Maryland, has promised to produce its sequence sometime this year. Now, the genomics community is scrambling to figure out the best way to annotate those sequences. Celera will likely rely largely on its in-house team of experts and fast computers, while the public consortium is likely to put together a more dispersed effort.

    The challenge will be even more daunting than the one faced by fruit fly biologists in analyzing the Drosophila genome (see main text). “Annotation of the human genome is intellectually a very hard task, harder still than [was] Drosophila,” says Michael Ashburner, a fruit fly geneticist turned bioinformaticist at the EMBL-European Bioinformatics Institute (EBI) in Cambridge, United Kingdom. For one, the fruit fly genome was completely sequenced when it was analyzed, but the rough draft of the human genome will be full of gaps, as well as sequence fragments that are out of place. As more sequencing is done, the publicly funded draft will evolve until it is “finished” in 2003. Until then, annotators have to learn how to work around these limitations to get the most from the current drafts.

    Consistency is also lacking in the annotation that has been done on human sequence so far. For the fruit fly, researchers had set up a centralized database, called Flybase, long before the sequence was complete. Flybase contains an authoritative gene list that helped in the annotation effort. But there's no single database or list for the human.

    At a January meeting at the National Institutes of Health, 10 scientists—both cell and molecular biologists and bioinformaticists, including one from EBI—met with staff from the National Human Genome Research Institute (NHGRI) and the National Center for Biotechnology Information (NCBI), which maintains the public U.S. sequence database. They agreed that they needed a standardized vocabulary for describing the 80,000 genes expected in the human sequence, as well as their functions. A single international gene index, similar to the one that exists for the fly, is crucial, some pointed out. “We need an index of human genes so we're not in a tower of Babel,” says NHGRI director Francis Collins. EBI and NCBI are now collaborating on this index, as well as on integrating their two annotation strategies.

    But the annotation experts could not decide how and when to bring biologists into the process. In November, a team of sequencers jump-started the annotation process for the fruit fly by holding a 2-week jamboree, during which the bioinformaticists writing the annotation software worked side by side with fly biologists who evaluated the computed results. Some see this as a model for the human annotation, but the question is by no means decided.

    EBI's Tim Hubbard, for instance, is pushing for a distributed annotation system, a plan proposed by Lincoln D. Stein at Cold Spring Harbor Laboratory in New York and his colleagues. In this model, NCBI and EBI would generate a minimally annotated sequence backbone—such as the one now produced jointly by the Sanger Centre and EBI through an effort called Ensembl—to which other researchers could link their findings about a particular gene or protein. If places such as NCBI and EBI set up the computer infrastructure to do this, says Hubbard, a larger proportion of the biological community could get involved than with a jamboree.
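
    The distributed model can be pictured as a shared backbone of minimally annotated gene records that outside groups annotate by reference, rather than by editing the central record. The sketch below is a loose illustration of that idea, not the actual distributed annotation protocol; all identifiers, coordinates, and notes are invented.

```python
# Loose illustration of distributed annotation: a central backbone holds
# minimal gene records; independent labs attach findings by referring to
# backbone IDs instead of modifying the central record.
# All identifiers and positions are invented.

backbone = {
    "geneX": {"chrom": "2L", "start": 1000, "end": 2400},
}

contributed = []  # annotations supplied by outside groups

def annotate(gene_id, source, note):
    """Attach a lab's finding to an existing backbone entry."""
    if gene_id not in backbone:
        raise KeyError(f"unknown backbone entry: {gene_id}")
    contributed.append({"gene": gene_id, "source": source, "note": note})

def view(gene_id):
    """Merge the backbone record with every contributed annotation."""
    record = dict(backbone[gene_id])
    record["notes"] = [c["note"] for c in contributed if c["gene"] == gene_id]
    return record

annotate("geneX", "lab_A", "trypsin-like peptidase domain")
annotate("geneX", "lab_B", "expressed in embryonic gut")
```

    The design choice is that the backbone stays small and authoritative while annotations accumulate around it, which is what lets many groups contribute without a central gatekeeper.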

    Many worry, however, that biologists will be less eager to contribute their specific knowledge and expertise to interpreting human sequence data than were the fly biologists. “With the Drosophila community, there's a certain unity. People were really gung ho,” explains Steve Henikoff, a geneticist at the Fred Hutchinson Cancer Research Center in Seattle. “My impression is we don't have that kind of unity” among human biologists. The rivalries between laboratories can be keener, particularly when there is a lot of money at stake.

    Others say contributing annotation may simply be a low priority for overworked biologists. “If you are back at your own desk, swamped with your own job, these [annotation needs] might not have the same importance,” Stanford bioinformaticist J. Michael Cherry points out. Even though a group process, or jamboree, for the human genome would likely require many more people or many more weeks than did the fly event, “this would be one way for the human [biology] community to really get into it,” he adds.

    Indeed, getting biologists “into it” has proven difficult with other organisms. For example, biologists have not pitched in to annotate the genome of the soil bacterium Pseudomonas as readily as did Drosophila experts. “It's not the computing tools that are the issue, it's getting all those people to work together,” says Maynard Olson of the University of Washington, Seattle. “It's a different way of doing science.”

    How this all unfolds over the next year will determine how useful the rough draft ultimately is, and potentially how expensive it will be for the average biologist to use. If biologists don't step up to the plate, warns Ashburner, then “private companies will be able to sell [their annotation] to the public domain for vast amounts of money.”