News this Week

Science  03 Nov 2000:
Vol. 290, Issue 5493, p. 910



    Librarians Seek to Block Merger of Scientific Publishing Giants

    1. David Malakoff

    Research librarians have asked the U.S. government to block one of the biggest ever science publishing mergers as part of a battle against spiraling subscription prices and the growing concentration of ownership of academic journals. Their target is the European journal giant Reed Elsevier, which last week announced that it will swallow American rival Harcourt General for $4.5 billion, creating a global company with more than 1500 journals, including a substantial fraction of all biomedical titles.

    Company executives say the deal will improve efficiency and benefit consumers by bringing related titles under one roof. But librarians say that their experience with past mergers suggests that this one will drive up journal prices and reduce the flow of scholarly information. The planned union of Reed Elsevier and Harcourt “will have severe repercussions for libraries, researchers, and the public,” predicts Duane Webster, executive director of the Association of Research Libraries in Washington, D.C., which represents 121 of the largest research collections in North America. “This transaction should be prevented,” he wrote to U.S. Department of Justice regulators on 27 October, the day the deal was announced.

    The current confrontation began in June, when Harcourt General—a $2 billion publishing empire based in Chestnut Hill, Massachusetts, that owns nearly 450 scientific and technical journals—announced that it was for sale. Its $700 million science, medical, and technical division includes several prominent presses, such as Academic Press and W. B. Saunders, that publish scores of highly cited titles, from The Journal of Molecular Biology to Icarus, a planetary science journal.

    Attracted by the prospect of adding such thoroughbred titles to its existing $1 billion stable of 1100 journals, Reed Elsevier officials entered the bidding after forming an alliance with another rival, Toronto, Canada-based Thomson Corp. Under the pact, the two companies will carve up Harcourt, with Reed Elsevier getting Harcourt's journal and K-12 textbook divisions, and Thomson buying Harcourt's higher education and professional services businesses for $2 billion. “The strategic fit is excellent,” Reed Elsevier CEO Crispin Davis said in a statement from London. “The combined businesses will have strong positioning across the entire scientific, technical, and medical spectrum.”

    It's not known how U.S. or European antitrust regulators will view the deal. But although Reed Elsevier officials said they expect no serious opposition, Webster is pressing for a complete review. In the past 2 years, he notes, the number of major biomedical publishers has shrunk from 13 to eight. Another megamerger would “enhance the market power of the merging companies and significantly increase” prices, he predicts. Over the last decade, he suggests, consolidation has allowed commercial publishers to increase subscription prices far faster than the rate of inflation, prompting research libraries to trim subscriptions by 6% while spending 170% more on titles. The association wrote a similar letter to Justice officials in September after Harcourt put itself on the auction block.

    Webster's warning is backed up by research by Mark McCabe, an economist at the Georgia Institute of Technology in Atlanta, who analyzed publishing mergers while working for the government. In a recent paper analyzing the impact of two mergers on biomedical journal prices, McCabe found that subscription prices for Elsevier's new titles jumped by an average of 27% within a few years after its purchase of Pergamon Press in 1991. Similarly, Lippincott titles purchased by Kluwer the same year jumped by 30% over the same period. “Harcourt's titles could experience similar double-digit increases” under Reed Elsevier, McCabe predicts, particularly because Harcourt's journals are, on average, lower priced than Elsevier's.

    Commercial publishers dispute McCabe's analysis, however. They say the increases stem from adding pages and color and improving editing and design, as well as rising printing costs and currency fluctuations. Some argue that further concentration in the industry may actually make burgeoning online databases—such as Reed Elsevier's Science Direct—more useful by giving computer users access to hundreds of titles at a time. “More journals under one umbrella can be easier and better for users,” says one executive.

    But McCabe believes regulators “should not let this deal pass without a careful second look.” It appears to fail at least one traditional test of antitrust law, he says, by creating a company that controls more than one-third of a given market—in this case, the market for high-quality biomedical journals. By his count, the new company would own 424, or 34%, of 1240 mainstream biomedical journals tracked by the Institute for Scientific Information (ISI) in Philadelphia, Pennsylvania. Analysis by Science of other ISI data showed that the merger would also give Reed Elsevier 134 of the 500 most cited journals.

    If regulators do find an antitrust problem, Reed Elsevier may be forced to sell some journals, analysts say. But few of those contacted by Science believe that requirement would kill the deal—although European regulators did sink Reed Elsevier's last proposed megamerger, with Dutch giant Wolters Kluwer, in 1998. Any hint of trouble for this merger may not surface for months, however, as analysts predict the regulatory review could continue well into 2001.


    Report Flags Hazards of Risk Assessment

    1. Helen Gavaghan*
    1. Helen Gavaghan writes from Hebden Bridge, West Yorkshire, U.K.

    LONDON—What happens when the premise underlying a scientific risk assessment is wrong and, as a result, the risk is vastly understated? In the case of so-called mad cow disease, or bovine spongiform encephalopathy (BSE), people die, an industry suffers, and a country panics.

    Last week an independent panel issued its report on how the British government has responded to a BSE outbreak over the past 15 years that has claimed 81 human lives and counting, led to the slaughter of 176,000 cattle, and cost the government $7.5 billion. The 16-volume report, written by a three-member panel chaired by senior appellate judge Lord Andrew Phillips, concluded that the practice of feeding cattle with the remains of dead cattle spread BSE “like a chain letter” through the British herd before anyone knew what was happening. It also describes how an incorrect assumption by a scientific panel of how BSE would behave played into a desire to assure the public that the health risks were negligible—with tragic consequences. “At the heart of the BSE story lies the question of how to handle hazard—a known hazard to cattle and an unknown hazard to humans,” it says.

    The panel rejects the original assumption that BSE derived from scrapie, a 200-year-old disease in sheep that is not transmitted to humans, and embraces the current view that BSE and its human variation, called variant Creutzfeldt-Jakob disease (vCJD), may have emerged in the 1970s from a genetic mutation that went unnoticed in a single cow. Although the report concludes that the crisis was unavoidable, it says that the epidemic could have been curbed with the swifter introduction of regulations intended to keep infected meat out of the human food chain. Phillips says a research “supremo” might also have helped to spot gaps in the scientific effort, including proposing “experiments to test the scrapie hypothesis origin.”

    Reactions to the report have been generally favorable. “By and large, the report's grasp of events and what drove people is about right,” says Chris Bostock of the government's Spongiform Encephalopathy Advisory Committee, which will review the report. But some scientists worry that the government may be getting off too lightly. The report “looks exceedingly useful, but it's not aggressive enough,” says Stephen Dealler, a microbiologist at Leeds General Hospital. Dealler is one of several scientists denied access to a clinical collection of brains from slaughtered cows held by the Ministry of Agriculture, Fisheries and Food (MAFF). The report states that there should be open access to such material for researchers. A MAFF spokesperson says that the ministry is preparing a response to the report.

    The Phillips panel, convened in January 1998, was asked by the government to establish the history of the emergence and identification of BSE and vCJD until 20 March 1996—when the British government announced that BSE might be transmissible to humans. The panel was also charged with assessing the adequacy of the government's response, “taking into account the state of knowledge at the time.” The report concludes that the government took the right measures, such as excluding from the human food chain those parts of the carcass thought most likely to pose a risk of spreading infection across species, but that decisions were “not always taken in a timely fashion.” For example, animal-feed traders misinterpreted a 5-week grace period to clear existing stocks of infectious material as an indication that the risk was low and, therefore, continued to sell the stock after the ban went into effect.

    Nor were the best people recruited to give advice in the early days. “There were a number of people not only in this country, but in Switzerland and the U.S., who could have been approached and weren't,” says panel member Malcolm Ferguson-Smith, a professor of clinical genetics at Cambridge University.

    The policies were also undermined by politicians, policy-makers, and scientists playing down the BSE threat to humans. One key element was a 1989 report on the potential BSE risk to humans by a committee headed by Richard Southwood, a professor of zoology at the University of Oxford, that was based on the assumption that BSE was likely to behave like scrapie and not infect humans. Although Southwood's report said that the consequences could be very serious if that assumption were wrong, that message was rarely repeated in public utterances. “Those at the ‘coal face’ were getting the message that there was little risk of BSE spreading to people,” says Ferguson-Smith. “So they were thinking, ‘What does it matter if we chuck a bit from the carcass into the wrong bin and it is processed into human food.’”

    In many ways, the BSE-vCJD picture is as murky today as it was when the British government first struggled to come to grips with the nightmarish outbreak. It's still far from certain, for example, how many people may succumb to vCJD, or even why BSE infects humans in the first place. The panel's report also presents a cautionary tale to the rest of the world: “BSE could have arisen anywhere and spread wherever animal protein is recycled,” says Ferguson-Smith. “Other countries should ponder our experience.”


    China, Denmark Team Up to Tackle the Pig

    1. Li Hui*
    1. Li Hui writes for China Features in Beijing. With reporting by Lone Frank in Denmark.

    BEIJING—China and Denmark have formed a consortium to sequence the pig genome. The project, expected to take several years, is the first to tackle livestock; when completed, the pig would be the fourth mammalian genome sequenced, after the human, mouse, and rat. The partners hope that information from the project will benefit pig-breeding industries in both countries as well as basic science and medicine.

    The project links the Danish Institute of Animal Sciences, the Royal Veterinary and Agricultural University (KVL), and representatives from Denmark's pig industry with the Beijing Genomics Institute (BGI) of the Chinese Academy of Sciences (CAS). Leaders of the four groups struck a deal on 20 October during a visit here by a Danish delegation. The partnership is a “perfect match” between China's powerful sequencing capacity and Denmark's expertise in pig breeding and experience in comparative and functional genomics research, says BGI Director Yang Huanming. “We came here with the intention of signing an agreement, and we are satisfied with the result,” notes KVL pro-rector Torben Greve, who is head of the Danish Pig Genome Consortium.

    The two sides have agreed to split the $15 million cost of the first phase of the project, a 3-year effort to identify valuable genes, develop markers for physical and genetic mapping, and provide research tools for xenotransplantation. A second phase, taking several more years and costing up to $60 million, would aim for a working draft covering 90% of the sequence and 95% of the genes. The pig genome is estimated to contain 3 billion base pairs.

    BGI will do the sequencing and sequence analysis using a supercomputer and more than 100 of the latest capillary sequencing machines. BGI plans to redirect its current roster of 45 machines from work on the international human genome project to the pig project, says BGI deputy director Yu Jun. Denmark will be responsible for developing genetic markers for valuable traits such as disease resistance, growth, and litter size. Its scientists will also build about 100 libraries of cDNA clones containing partial gene sequences that will help the teams identify the full-length genes.

    Each side has agreed to put up $2 million for the initial phase. CAS has already provided the Beijing institute with money. The Danish ministries of research and of food, agriculture, and fisheries are expected to fund work in Denmark, while the National Committee for Pig Production and the Danish Bacon and Meat Council support the BGI sequencing team.

    Researchers had hoped to make the sequencing data available to the scientific community immediately, as has been the case under the so-called Bermuda rules used in the human genome project. But industry contributors asked for release to be delayed several months. “Both sides have agreed to create a balance between the Bermuda rules, which require immediate release within 24 hours, and the present data-release policies by other private sectors to protect commercial applications,” says Yang.

    Even so, the partners say that they remain committed to the concept of sharing. “This project is just like any other international collaboration project, and there will be no new restrictions,” says Orla Gron Pederson of Denmark's national committee. The final terms of data release are still being hashed out, Yang says, along with provisions for scientific procedures, intellectual property rights, and future partners. Yang says that institutes in Singapore and France have expressed an interest in the project but so far lack funding.

    The project marks “an important step” for China toward sequencing the country's resource genomes, notes Wang Guihai, director of the CAS Bureau of Life Sciences. It is the second project of the China Biological Resource Genomes Project, following a decision to sequence China's superhybrid rice (Science, 5 May, p. 795). Danish officials hope to use the knowledge to stimulate work in bioinformatics as well as to strengthen the country's pork industry. A better understanding of pig genomics would also promote the use of transgenic animals as sources of transplant organs, as disease models, and for the production of medical treatments.


    New Site Suggests Anasazi Exodus

    1. Mark Muro*
    1. Mark Muro writes from Tucson, Arizona.

    High in the cliffs of Mesa Verde in southwestern Colorado lie some of the world's most beautiful and mysterious ruins. For decades, scientists have puzzled over the fate of the people who once lived there, the Anasazi. Whereas conventional wisdom has them dying off or leaving slowly, archaeologist Stephen Lekson of the University of Colorado, Boulder, has now proposed a more dramatic and large-scale exodus to the south for at least some Anasazi. That effort, he argues, would have required a higher degree of social cohesion than has been attributed to the Anasazi culture.

    Lekson's work involves pottery and masonry styles from three pueblo ruins in southern New Mexico, up to 470 kilometers from Mesa Verde. “These sites are significantly farther south than the Anasazi are supposed to have gone,” says Lekson, who has been invited to present a paper on his finds at the Society for American Archaeology annual meeting next April. “Meanwhile, the size involved suggests whole villages picked up and moved as units. This is different from the usual picture of just individual families wandering off.” Comments Jefferson Reid, an anthropologist at the University of Arizona in Tucson who has heard Lekson's presentations: “This is a highly plausible idea that can now be evaluated.”

    The traditional view of the Anasazi's disappearance suggests that a killer drought or large-scale political or social stresses set off a slow trickle of émigrés. Rarely are large groups imagined in motion. And rarely are the emigrants said to have moved farther than the areas that became today's pueblos in northern Arizona and New Mexico, ranging from the northern Rio Grande country in the east to the Hopi lands to the west. Lekson, in contrast, found Anasazi-like artifacts 420 kilometers south of Mesa Verde, at Pinnacle Ruin (see map), during work this summer with graduate students Brian Yunker and Curtis Nepstad-Thornberry. He says the far-south pueblo-style ruin, like two others in the region, exhibits key characteristics of Mesa Verdean culture that “stick out like a sore thumb” in their locale.

    Half of the pottery sherds collected at the site look very much like the Mesa Verde black-on-white style, Lekson argues. The neatly coursed masonry and layout of the multistoried room-blocks look more like the massive defensive pueblos of Mesa Verde than like the region's less organized Mogollon culture sites. And an excavated midden shows that Pinnacle's dwellers piled their trash thickly like Mesa Verdeans instead of following the local practice of spreading it thinly around habitations. Such evidence, along with the sheer size of the three southern ruins—which between them may have contained 800 rooms—“pretty strongly” argues that a sizable stream of well organized Anasazis trekked deep into southern New Mexico around 1300, Lekson says.

    Several researchers who have heard Lekson's presentations are attracted to his ideas but say that more data are needed. Archaeologist Harry Shafer of Texas A&M University in College Station, for example, chided Lekson for drawing “premature” conclusions without further excavations and chemical trace analyses of the ceramics. Nevertheless, John Kantner, an archaeologist at Georgia State University in Atlanta, says that “a huge, systematic move could add another element to the picture” of greater mobility.

    Lekson, for his part, says that the small amount of trash at the site suggests that his wayfarers' wanderings did not end in southern New Mexico. Moreover, the oral traditions of several pueblo peoples possibly descended from Anasazi emigrants tell of long, convoluted migrations that wended far to the south, then turned back north. “It could be these folks came here for 100 years, then headed north again,” Lekson says. “Quite a trip, huh?”


    Listeria Enlists Host in Its Attack

    1. Elizabeth Pennisi

    It was just a small innovation—a 27-amino acid addition to a protein some 500 amino acids long. But that change likely made all the difference for a food-borne pathogenic bacterium called Listeria monocytogenes. As described on page 992 by microbiologists Amy Decatur and Daniel Portnoy of the University of California, Berkeley, this innovation enables Listeria, which can cause meningitis and death in people with compromised immune systems, to deploy a toxic protein without killing its host cell. As a result, the microbe remains comfortably ensconced within the cell and can avoid confronting antibodies, the immune system's foot soldiers.

    Many bacterial pathogens are extracellular, frequently doing their dirty work by injecting toxins into cells. But not Listeria. When consumed, say, in a contaminated cheese, it enters the body and hunkers down in a nearby macrophage—even though this type of immune cell usually helps fend off infections.

    As a macrophage first engulfs Listeria, it traps the microbe in a vacuole called a phagosome, supposedly out of harm's way and targeted for eventual destruction by the cell. But once inside, Listeria makes a pore-forming protein called listeriolysin O that tunnels into the phagosome membrane, dissolving it and setting the microbe free within the macrophage, where it can replicate before conquering other cells.

    Microbiologists had long wondered why Listeria's pore-forming protein doesn't bore through the macrophage's outer membrane as well and destroy the host cell. That's how a family of 19 related proteins deployed by extracellular pathogens usually works. Although these bore in from the outside, there seemed to be no reason why listeriolysin O couldn't punch holes in the membrane from within. Indeed, 6 years ago, Portnoy made a Listeria strain in which he replaced the listeriolysin gene with that of one of its relatives, perfringolysin O (PFO), which is used with horrific success by the gangrene-causing Clostridium perfringens. He found that, unlike listeriolysin, the introduced PFO was toxic and destroyed the host cell.

    To figure out what gives listeriolysin its unique abilities, Decatur, who is a postdoc in Portnoy's lab, compared its amino acid sequence with that of PFO. The sequences were quite similar except at the amino end, where listeriolysin turned out to have 27 extra amino acids. Portnoy and Decatur then made a strain of Listeria in which they modified the listeriolysin gene to make a protein lacking this extra bit of sequence. In tissue culture experiments, the altered Listeria was more toxic to macrophages than was its normal counterpart. Next, the Berkeley scientists altered the PFO gene so that its protein would have this 27-amino acid tag, then replaced the listeriolysin gene with the hybrid. With its new appendage, PFO was considerably less toxic; in fact, it acted much like listeriolysin.

    To nail down the identity of this tag, Decatur and Portnoy combed the databases looking for anything resembling this stretch of amino acids. To their delight, they found it is a dead ringer for a sequence often found at the ends of proteins in yeast and multicellular organisms, including humans. In many organisms, these so-called PEST sequences are the starting points for protein-protein interactions, often targeting specific proteins for degradation by other proteins. Finding such a PEST sequence in Listeria was “surprising,” to say the least, notes Nicholas Davis, a molecular biologist at Wayne State University in Detroit, because bacteria normally aren't equipped with the protein-degrading machinery it triggers.

    Instead, the Listeria PEST sequence apparently prompts the protein-degrading machinery of the bacteria's host macrophages to obliterate the pore-forming protein once it has done its job, suggests Patrick Berche, a microbiologist at the Necker Hospital in Paris, whose team has similar, as yet unpublished, results. The PEST-like sequence could be a “very important” adaptation for the parasite, he adds. Listeria must make pore-forming proteins to escape being killed in the phagosome. Most likely, the PEST tag ensures that once freed from the vacuoles, the proteins are destroyed or disabled before they make gaping holes in the cell membrane, killing the cell and wrecking Listeria's temporary safe house. “This mechanism, protein breakdown, is a good solution” to the problem of turning off the protein very quickly before it can damage the cell, says Decatur. By contrast, shutting down its gene might not reduce the amount of listeriolysin in the cell for several hours.

    Work on listeriolysin and its relatives has reinforced the critical role these pore-forming proteins play in a pathogen's toxicity. Increasingly, says Portnoy, understanding pathogenesis is becoming a “question of how these proteins are modified and regulated.” Yale microbiologist Craig Roy agrees: “There are spatial and temporal constraints” that pathogens evolve to get the most out of the host before harming it. And for Listeria, it took just a small innovation to trick its host into helping control one of its more pathogenic proteins.


    A More Cautious NASA Sets Plans for Mars

    1. Andrew Lawler

    Twice burned by mission failures last year, NASA managers last week unveiled a new 15-year blueprint for Mars exploration. The revamped strategy allows for doing more science, but at a slower pace, while delaying a sample return until well into the next decade.

    “Mars has a tendency to surprise us,” NASA space science chief Ed Weiler said dryly at a 26 October press conference in Washington, D.C. The 1999 loss of two craft—the Mars Climate Orbiter and Polar Lander—and the new evidence of recent water on the planet are only the latest surprises. To cope with both scientific and technical uncertainties, a NASA team has developed a two-pronged approach using orbiting spacecraft followed by surface rovers. This strategy will take longer, but agency managers are betting that its flexibility will benefit researchers eager to explore the climate, geology, and possible signs of extinct or existing life on the planet. Although the less aggressive schedule has led to some grumbling within the scientific community, many researchers seem relieved by what they say is a more realistic plan.

    NASA's original plan relied on favorable orbital mechanics to send out fleets of orbiters and rovers every 2 years over an 8-year period. But that schedule left little margin for error, and the twin 1999 failures raised questions about the soundness of the hardware and software to be used in future missions. The new schedule addresses that problem by alternating orbiter and rover missions. The 2001 orbiter called Odyssey will be followed in 2003 by two small rovers that will land independently. Two years later, a sophisticated reconnaissance spacecraft capable of seeing objects as small as a beach ball will be launched.

    The pace of exploration will ratchet up in 2007, with the launch of an Italian communications satellite to provide much-needed data transfer capability. NASA would send a smart lander—perhaps equipped with drills to look for moisture under the surface—and a long-range rover. The plan also calls for the debut that year of a new line of explorers called Scouts that could include balloons or planes.

    In 2009, a joint U.S. and Italian orbiter equipped with radar would provide more accurate mapping of the surface. A spacecraft to bring martian soil and rock samples back to Earth wouldn't be launched until 2011 or 2014—6 to 9 years later than planned. That effort would cost between $1 billion and $2 billion, and likely would be done with substantial cooperation with the French space agency CNES. Samples would be returned 2 or 3 years later.

    The rationale behind the new approach, says NASA Mars exploration program scientist Jim Garvin, is to seek “the most compelling places from above, before moving to the surface.” That approach will allow the agency “to change and adapt over time in response to what we find with each mission,” says Scott Hubbard, NASA's Mars program director. It will also provide more data over the long haul. Most researchers seem to agree. “Under the old plan, the fear was we might not know where to get the samples [from], or that, once we got there, we wouldn't have the technology to choose” the best samples, says Steven Squyres, a Cornell University astronomer who is principal investigator of the planned 2003 rover. “This way, we will know a hell of a lot more first.”

    But that's cold comfort to Carl Agee, chief scientist for astromaterials at NASA's Johnson Space Center in Houston. “We'd like to see a sample return earlier rather than later,” he says. “We think the sample return is the biggest payoff.”

    Some observers think that NASA is being too cautious. “It's a good, but limited, plan,” says Louis Friedman, executive director of the Planetary Society, based in Pasadena, California. He believes that NASA would have an easier time getting sufficient funding if it linked the robotic probes to a future human mission. Weiler disagrees, saying that “the program is not driven by human exploration but by science.”

    Hubbard says NASA intends to spend between $400 million and $450 million annually on the effort during the next 5 years, nearly one-third more than originally planned. In return, he promises, future missions will not only deliver good science but avoid technical snafus through stronger oversight. “This is a program,” he says, “not just a collection of projects.”


    Long-Wavelength Lasers Sniff Out New Uses

    1. Charles Seife

    Just a few weeks after two physicists won the Nobel Prize for figuring out how to make lasers out of semiconductors (Science, 20 October, p. 424), researchers at Lucent Technologies' Bell Labs in Murray Hill, New Jersey, announced that they have made those lasers much more useful. A new technique permits the lasers to shine light in regions of the infrared spectrum previously inaccessible to similar devices. The advance may open the door to cheap devices to sniff explosives and other robotic sensors.

    Because long wavelengths of light are absorbed by different molecules in different ways, a spectrometer that uses these new light sources would be able to detect faint whiffs of chemicals in the air. “There is a potential here to produce robot laser sensors” tunable over the appropriate region of the spectrum, says Richard Zare, a laser chemist at Stanford University. “They don't exist today.”

    The work, by physicist Federico Capasso and his colleagues at Bell Labs, involves quantum cascade lasers, which are made of many layers of semiconducting materials. Each of these layers is pumped full of electrons and “holes”—spaces for the electrons to nestle in. When an electron settles into a hole, it releases light, which, in turn, induces other electrons to duck into holes, which releases more light, and so forth.

    The problem with these lasers is that they can't produce light very deep into the infrared region. They can only generate wavelengths of a few micrometers before running into trouble. For a semiconductor laser to work efficiently, electromagnetic waves must be confined within the “active region” where all the electrons are falling into holes, so the light can induce more electrons to find more holes. Lasers traditionally do this with dielectric waveguides, devices that work rather like fiber-optic cables. Made of materials with different refractive properties sandwiched together, they force light to bounce around inside them.

    Unfortunately, the longer the wavelength, the thicker the waveguides have to be—and the harder it gets to deposit the waveguide layers on the chip. “It's fine and dandy until you get to really long wavelengths,” says Capasso. “If you wanted to make a dielectric waveguide for a 20-micron laser with conventional dielectrics, you would need a prohibitively thick material, 8 to 10 microns thick.”

    To address the confinement problem, Capasso and his colleagues have exploited a property of electrons, called surface plasmons, that reside at the interface between a semiconductor and a conductor. Plasmons are waves of electrons that slosh back and forth when excited by, say, an incoming photon. Roughly speaking, a surface plasmon behaves like light trapped at a conductor-insulator interface. This trapping is exactly what the Bell Labs team wants; the light is trapped without the need for bulky waveguides. “You squeeze the light close to the interface,” says Capasso.

    By building a sandwich of conductors and semiconductors in the chip, the team ensures that the laser light creates a plasmon where the two materials meet. The plasmon waveguide focuses 80% of the light on the laser's active region, compared to about 50% for a traditional waveguide, making the laser much more efficient. The plasmon version is also half the thickness of the traditional 8-micrometer waveguide and still within the capabilities of existing deposition processes.

    Best of all, the new technique permits longer wavelength infrared beams, which can be tuned to make molecules bend, revealing them to a detector as surely as fingerprints pinpoint a criminal. The surface plasmon laser emitted beams of 19-micrometer infrared light—the longest wavelength emitted by a semiconductor laser so far. And Capasso hopes to go even further: “We think we'll be able to do 60 to 80 microns.”

    Zare hopes that the prototype device might lead to many new applications. “This was a gap where we now have neat light sources,” he says. For instance, a laser in that region of the spectrum might be tunable to detect the vibratory motion of various molecules, allowing a robot to detect the molecules' presence. “The applications [include] detecting explosives and chemical and biological agents, looking at disease states, and medical diagnosis,” says Zare. Although the lasers must still be cooled to temperatures too low for widespread use, Zare thinks that room-temperature lasers are possible: “I've got lots of hope.”


    First Upright Vertebrate Lived Fast, Died Young

    1. Erik Stokstad

    Early land vertebrates were a cloddish crew. When amphibians first sloshed ashore, some 360 million years ago, they waddled like soldiers crawling under barbed wire. Even after reptiles began to spread into drier landscapes and diversified, these landlubbers still plodded on all fours using the same sprawled stance. Paleontologists thought that the pace didn't pick up until fleet-footed bipedal dinosaurs appeared in the Late Triassic, about 210 million years ago.

    Now a fossil discovery shows that at least one reptile was dashing around on two legs in the Early Permian, as much as 80 million years before the first dinosaur. On page 969, Robert Reisz of the University of Toronto in Mississauga, Ontario, Canada, and David Berman of the Carnegie Museum of Natural History in Pittsburgh and their colleagues describe Eudibamus cursoris, a 25-centimeter-long herbivore that is the earliest known vertebrate able to run on its hindlimbs. “When I first heard about this fossil, I was just amazed,” says Hans-Dieter Sues of the Royal Ontario Museum in Toronto. “I didn't expect a bipedal creature that far back in time.” The find suggests that bipedalism may be more common than previously thought—but not necessarily a sure route to evolutionary success.

    The 290-million-year-old fossil was discovered by Stuart Sumida of California State University, San Bernardino, in 1993 in a quarry near Gotha, Germany. For about a decade, scientists working with Thomas Martens, a paleontologist at the Museum der Natur Gotha, have been uncovering relatively complete, well-preserved specimens from the quarry. The spot represents an upland environment quite different from the lowland deltas and floodplains in which most Paleozoic fossils have turned up. It took 2 years to prepare the small, delicate specimen. Once the bones were revealed, the group realized that the fossil was unique, too.

    “What's really exciting is that this fossil is the first instance of an animal built for speed,” Reisz says. Eudibamus had hindlimbs that were 64% longer than its forelimbs and 34% longer than its trunk, proportions comparable to those of modern lizards that run on two legs. Feet sporting long digits would have given the animal a substantial stride. “It ran on its toes, especially when it got going,” Berman says. “That's what all fast animals do.”

    Its tail also helped it move quickly, Berman says. Like the tails of modern bipedal lizards (and unlike those of its fellow Permian vertebrates), the tail of Eudibamus makes up more than half the length of the creature's body. Muscles attached to such a hefty appendage could have made Eudibamus's hindlimbs powerful enough for two-legged sprinting. The arrangement also kept the animal's center of gravity close to its hip, a necessary feature for balancing a two-legged gait.

    Eudibamus had also evolved a new kind of knee joint—one that allowed it to run with its feet directly underneath its body. In other vertebrates of the time, the legs jutted outward from the body. That's because one of the paired shinbones (tibia) connected with the underside of the mostly horizontal thigh bone (femur), while the other shinbone (fibula) attached to the end of the femur. By contrast, both shinbones in Eudibamus fit onto the end of the femur, forming a hingelike joint that puts all of the leg in one plane, just as in humans and dinosaurs. The result is an energy-efficient posture that allows the bones, not just muscles, to help support the animal's weight.

    Eudibamus probably wasn't an ideal biped; its limbs might have tended to splay out to the side when it wasn't running. Even so, Reisz and his colleagues speculate, its two-legged posture could have given the animal a crucial edge over four-legged predators. That evolutionary advantage may explain the widespread dispersal of Eudibamus—inferred from fragmentary fossils of its relatives—across the northern continent of Laurasia.

    But bipedalism didn't guarantee Eudibamus a future. “Clearly for this little guy, it didn't make much of a difference,” Sues says. “This was a very short-lived evolutionary lineage, as far as we know.” Michael Caldwell of the University of Alberta in Edmonton suspects that bipedalism may have evolved many times in vertebrate history before dinosaurs, birds, and primates made the innovation an evolutionary success. In giving lug-necked predators a run for their money, Eudibamus may have been just one of any number of creatures darting briefly ahead of their time.


    Digital Music Safeguard May Need Retuning

    1. Charles Seife

    A hacker-professor says he and his graduate students have cracked the four leading methods proposed for thwarting audio pirates. Ed Felten, a computer scientist at Princeton University, says his achievement shows that so-called digital watermarks—identifying signals hidden inside streams of digital data—cannot protect music from illegal copying. But the music industry begs to disagree.

    The charges and countercharges center on a competition sponsored by the Secure Digital Music Initiative (SDMI), a forum of music, technology, and electronics companies that is designing a method to thwart illegal copying of audio files. SDMI champions a protection scheme analogous to the ghostly image of Andrew Jackson that appears next to the Treasury department seal when you hold a new $20 bill up to the light. “A device can scan for a watermark, detect the watermark, and make a decision based upon whether the watermark is there,” says Scott Craver, a graduate student and computer scientist at Princeton. For instance, the watermark might indicate that an audio file may be copied only once, or not at all—orders that audio players and recorders would be constructed to obey. But such instructions would be moot if hackers could wash off the watermark at will.

    SDMI's quest for a secure digital watermark went public in September, when the consortium posted four proposed watermarking schemes and two supplementary technologies on one of its Web sites. An accompanying letter offered $10,000 to anyone who could hack any of the security schemes within 3 weeks. “Attack the proposed technologies,” read the letter. “Crack them.”

    Many computer-security experts flatly refused. Don Marti, the technology editor of Linux Journal, arguing that SDMI's scheme is a unilateral attempt by the music industry to recast intellectual property rights in its favor, called for a boycott of the HackSDMI effort. “I wanted to call people's attention to the legal rights SDMI is planning to take away,” Marti says. Others dismissed the competition as a waste of time. “Challenges and contests are stupid ways of assessing security,” says Bruce Schneier, chief technology officer of Counterpane Internet Security in San Jose, California. “If I challenge people to break into my house and it's not robbed in a week, can I conclude that my house is secure? It's bizarre.” Craver agrees: “A 3-week challenge could not be taken seriously in the cryptographic community.” Nevertheless, Felten, Craver, and others ignored the boycott and attacked the watermarks.

    Last week, Felten and Craver's team declared that it had defeated all four watermarking schemes. “Basically, for each of the technologies, we figured out where in the signal each watermark was put and then washed it out,” Felten says. “For instance, if it's all stored in a narrow frequency band, you can add a bit of noise in that frequency band.” Felten claims that removing the watermarks didn't damage the quality of the music. The SDMI consortium agreed that Felten's sample had no watermark and sounded just fine, at least in a preliminary inspection.
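    A toy illustration of the kind of attack Felten describes: a watermark hidden as a faint tone in one narrow frequency band, and an attacker who estimates that band's content and subtracts it out. This is not any of the actual SDMI schemes; the sample rate, frequencies, amplitudes, and detection threshold are all invented for the sketch.

```python
import math

RATE = 8000        # samples per second
HOST_FREQ = 440.0  # the "music": a plain tone
WM_FREQ = 1200.0   # narrow band where the watermark hides (known to the attacker)
WM_AMP = 0.01      # watermark amplitude, far below the host signal
N = RATE           # one second of audio

def tone(freq, amp, n_samples=N):
    return [amp * math.sin(2 * math.pi * freq * n / RATE)
            for n in range(n_samples)]

def embed(signal):
    """Hide the mark: add a faint carrier at WM_FREQ."""
    wm = tone(WM_FREQ, WM_AMP, len(signal))
    return [s + w for s, w in zip(signal, wm)]

def band_component(signal, freq):
    """Estimate the sine/cosine amplitudes of `signal` at `freq`."""
    n_samples = len(signal)
    a = 2 / n_samples * sum(s * math.sin(2 * math.pi * freq * n / RATE)
                            for n, s in enumerate(signal))
    b = 2 / n_samples * sum(s * math.cos(2 * math.pi * freq * n / RATE)
                            for n, s in enumerate(signal))
    return a, b

def detect(signal):
    """Declare 'marked' if the carrier band holds enough energy."""
    a, b = band_component(signal, WM_FREQ)
    return math.hypot(a, b) > WM_AMP / 2

def wash(signal):
    """The attack: measure the narrow band and subtract it out."""
    a, b = band_component(signal, WM_FREQ)
    return [s
            - a * math.sin(2 * math.pi * WM_FREQ * n / RATE)
            - b * math.cos(2 * math.pi * WM_FREQ * n / RATE)
            for n, s in enumerate(signal)]

music = tone(HOST_FREQ, 0.5)
marked = embed(music)
print(detect(music), detect(marked), detect(wash(marked)))  # → False True False
```

    The same logic is why "a bit of noise in that frequency band" works: once the attacker knows which band carries the mark, anything that scrambles or removes that band defeats the detector while leaving the rest of the audio intact.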

    The result proves that “watermarking technology is not mature enough to do what SDMI wants it to do,” Felten says. But SDMI isn't convinced. “The word we received was that all 153 attacks have failed to meet the criteria,” says David Leibowitz, chair of San Diego-based Verance, which provided one of the four watermarking schemes. SDMI officials say the Princeton team did not submit technical information showing that it had devised a general strategy for defeating watermarks. As Leonardo Chiariglione, SDMI's executive director, explains, “If every bit of new music is a new challenge, if repeatability is not guaranteed, it is not considered a successful attack.”

    Some experts, though, see Felten's attack as a confirmation that copy-protection schemes will never deter any but the most inept would-be pirate. “Digital bits can be copied; it's the natural way, and any procedure that tries to go against the tide will fail,” Schneier says. “Watermarks can't possibly work. Copy protection can't possibly work. Get over it. Accept the inevitable, and figure out how to make money anyway.”

    INDIA

    New Guidelines Promise Stronger Bioethics

    1. Pallava Bagla

    NEW DELHI—The Indian government has issued new guidelines for conducting medical research on humans that would raise standards and tighten oversight at most institutions. The voluntary guidelines, released on 18 October, are also expected to bolster international collaborations by putting Indian practices on a par with standards in the West.

    Although the guidelines will mean more paperwork for an already clogged bureaucracy, most scientists say that they are an important step toward ensuring ethical research. “It is expected that all institutions that carry out any form of biomedical research involving human beings should follow these guidelines,” says Nirmal Kumar Ganguly, director-general of the Indian Council of Medical Research (ICMR) in New Delhi.

    Four years in the making, the new guidelines would create a network of institutional review boards. That in itself would be a major change: An ICMR survey last year of 30 leading research institutions found that most had no ethical committees overseeing experiments involving humans. The few committees that did exist were generally moribund, meeting rarely and having little influence on major research decisions.

    The new guidelines, titled “Ethical Guidelines for Biomedical Research on Human Subjects,” stipulate that each research proposal that involves human testing will be vetted by an institutional ethics committee. Its five to seven members must include a legal expert, a social scientist, a philosopher, and a community representative in addition to researchers. All committee decisions will be made at a “formal meeting” and not “through the circulation of a proposal.” Once cleared, the protocols would receive no further ethical review.

    In addition to enshrining the principles of informed consent and confidentiality, the guidelines call for protecting vulnerable groups, such as the poor and the mentally challenged, from exploitation. They also say that anyone in a trial who has an adverse reaction should receive the “best possible nationally available care.”

    The guidelines were unveiled at a meeting here of the Indo-U.S. Biomedical Research Policy Forum, which seeks to resolve obstacles to collaborative biomedical research between the two countries. Gerald Keusch, director of the Fogarty International Center of the U.S. National Institutes of Health, who attended the meeting, called the guidelines “comprehensive.” He said they “have the same philosophic context” as those that federally funded researchers and their U.S. institutions must follow.

    Absent binding legislation and additional resources, the success of the voluntary guidelines will depend on the response of the scientific community. “There is no way the ICMR can be the policing agency,” says Vasantha Muthuswamy, chief of basic biomedical research at ICMR and secretary of the Central Ethics Committee on Human Research, which formulated the guidelines. And that puts the burden on those who fund the research, as well as those who carry it out. “Now that a strong ethical framework has been put in place, it is up to the grant-giving agencies to ensure that funding is not given in instances where ethical violations are noticed,” says Prakash Narain Tandon, a neurosurgeon and professor emeritus at the All India Institute of Medical Sciences in New Delhi.


    Can the Kyoto Climate Treaty Be Saved From Itself?

    1. Richard A. Kerr

    The climate treaty being hammered out this month at The Hague may be doomed to failure; the key, some say, will be keeping the treaty going now and rethinking its controversial goals later

    Later this month, representatives from 160 countries will convene at The Hague to work out details of one of the boldest attempts at international diplomacy ever: reining in the gusher of gases threatening to warm the planet. Taking their cue from the successful Montreal Protocol for the control of ozone-destroying emissions, governments crafted the outlines of a “big bang” approach to controlling greenhouse gas emissions at a meeting in Kyoto in 1997. Negotiators established strict targets mandating how much industrialized countries would have to reduce their gas emissions by 2008–12. But they left vague exactly how countries could achieve these reductions—for instance, how much they could rely on emissions trading or carbon “sinks” (see p. 922). Those details are now on the table at The Hague, and it's the details, some say, that could make or break the protocol.

    But even before the meeting, there are murmurings that the negotiations are bound to fail. The United States simply won't ratify any treaty that requires such wrenching reductions, numerous observers say. “I don't know anyone who believes the U.S. is going to ratify this agreement” as it stands now, says economist Henry Jacoby of the Massachusetts Institute of Technology (MIT). Others are less pessimistic, but nobody is truly optimistic. “As it is currently configured, U.S. ratification would be really tough,” says economist James Edmonds of the Washington, D.C., office of the Pacific Northwest National Laboratory. And if the United States bails out, the protocol is, if not dead, in very deep trouble. “You don't absolutely have to have the United States,” explains Jacoby. “But without the U.S., all of Europe, Japan, and Russia are needed” to meet the requirement that countries responsible for 55% of greenhouse emissions must ratify the treaty to put it in force. Already, policy wonks on the fringes of the negotiations are scrambling for alternatives. Some think that by tweaking the rules, the negotiators at The Hague can sweeten the deal enough so the United States could eventually sign on. But if it is too sweet, other countries may balk. The United States, for example, would like to buy its way out of many of its obligations through deals reducing emissions beyond its borders.

    Other analysts say that, eventually, the targets themselves will have to be delayed. Still others are planning how to reduce emissions in a post-Kyoto world if the U.S. bails out completely. None of these options would be popular with many European and developing nations, which expect the United States to shoulder its emissions cuts at home.

    The dim prospects for ratification center on how disruptive and how expensive it would be for countries, particularly the United States, to achieve their target reductions. The protocol calls for an average 5% reduction of emissions below their 1990 level. For the United States, the world's biggest emitter, it mandates a 7% reduction below 1990 levels. What with the robust economic expansion of the past decade, the required U.S. reduction amounts to “a 30% reduction beneath business as usual,” notes climate researcher Tom Wigley of the National Center for Atmospheric Research in Boulder, Colorado. “Can you imagine the United States in the next 10 years doing that?”
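    Wigley's "30% beneath business as usual" figure follows from simple compounding. A rough sketch of the arithmetic, assuming (my illustrative number, not one from the article) that U.S. emissions grew about 1.4% a year after 1990:

```python
GROWTH = 0.014   # assumed annual growth in U.S. emissions since 1990
YEARS = 20       # 1990 to roughly the middle of the 2008-12 budget period

bau_2010 = (1 + GROWTH) ** YEARS   # business-as-usual emissions, relative to 1990
target = 1 - 0.07                  # Kyoto target: 7% below the 1990 level

cut_vs_bau = 1 - target / bau_2010
print(f"BAU emissions: {bau_2010:.2f}x the 1990 level")
print(f"Required cut below BAU: {cut_vs_bau:.0%}")
```

    With two decades of steady growth, holding emissions 7% below the 1990 baseline means cutting nearly a third off the trajectory the economy is actually on.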

    Eileen Claussen can't. She is president of the Pew Center on Global Climate Change in Arlington, Virginia, an organization dedicated to reducing greenhouse emissions. Even so, she says, “I think it's going to become clear to a lot of countries—not just the U.S.—that they're not going to meet their targets. It's already clear the U.S. won't meet its target.” Indeed, a Pew Center study of five European countries suggests that only the United Kingdom is on track to meet its Kyoto target, and Germany is perhaps close. Not coincidentally, it's the United Kingdom that vehemently opposes U.S. efforts to buy its way out of substantial emission reductions in its domestic energy sector.

    Costs to the United States are “highly uncertain,” says economist John Weyant of Stanford University. Given the range of assumptions about Kyoto and the economy, says Weyant, “model projections range from relatively low cost—a couple of tenths of a percent of U.S. gross domestic product [per year]—up to 3% to 4%.” For instance, if countries bring online new energy-efficient technologies—everything from light bulbs to hydrogen fuel cells for cars—costs would drop significantly. But major technology changes are unlikely before 2012, Weyant maintains.

    For that reason, U.S. negotiators want to adjust the basic rules, often called the “framework” for the Kyoto Protocol, to allow for maximum flexibility. Emissions trading among nations may enable the most wiggle room. As outlined in the protocol, an industrialized nation that doesn't want to reduce its own emissions could buy a permit from another industrialized nation to emit so many tons of greenhouse gas, presumably at a lower cost. But there's a catch. Trading is already restricted to industrialized countries, and the United Kingdom has floated a proposal that restricts the proportion of a country's reductions—read, the United States—that can be taken this way.

    Another means of adding flexibility is the protocol's Clean Development Mechanism. The CDM would allow an industrialized country to join with a developing country, which under the protocol has no obligation to reduce emissions, in an emission-reducing project in that country. The idea is that the developing country would reap the benefits of a nonpolluting energy source and the industrialized country would get credit for the reduced emissions. But again, the devil is in the details. What projects would qualify? A nonemitting nuclear power plant? An ecologically disruptive hydroelectric dam? Some proposals stipulate that only renewable energy and energy-efficiency projects qualify.

    Claussen, who played a key role in negotiating the protocol while at the State Department, thinks getting the right rules in place is the first step. Basically, she would like to see minimal restrictions on flexible mechanisms such as CDM and on carbon sinks. Then, “after the framework is in place, people may still say, ‘Oh my, we're not going to make it,’ and there will be some adjustment of the targets.”

    Some think Claussen is being overly pessimistic. Daniel Lashof of the Natural Resources Defense Council in Washington, D.C., says, “It looks like the U.S. will get a lot of the flexibility it wants” at The Hague. Even so, the country “should and can get the majority of reductions domestically,” he contends. “What will decrease future emissions is requiring firms to invest in emission reduction now.”

    Environmentalists may not see the necessity of delaying implementation of big emission reductions, but a lot of economists do. “Kyoto is a political compromise designed to get us moving on carbon-emission reductions,” says Weyant. But “studies suggest it's not an optimum path” to the unspoken goal of Kyoto: stable greenhouse gas concentrations a century or two from now. Whereas the environmentally inclined insist that the world must tackle the greenhouse with vigor now, economists like Michael Toman and his colleagues at Resources for the Future (RFF) in Washington, D.C., argue that the world can reach its long-term goal much more cheaply by putting off much—but not all—of the needed emission reductions. This “back-loading” of deep cuts in emissions would be cheaper, Toman argues, because it would allow an orderly replacement of long-lived, fuel-burning equipment and the use of technology not yet available, among other advantages.

    Economists also have alternatives intended to keep costs down and reassure countries that costs won't skyrocket. William Pizer of RFF, for instance, proposes a “safety valve” approach. The costs of emission permits could float until they hit a predetermined ceiling, so governments would know in advance the worst case, or most expensive, scenario. MIT's Jacoby agrees: “You need some sort of safety valve so governments aren't committing to something they can't meet. That's going to take time.” He notes that it took 50 years for the General Agreement on Tariffs and Trade to evolve into the 138-nation World Trade Organization. Kyoto might evolve the same way, he says. “A few countries agree on really narrow things and gradually build up a system over time, in contrast to the ‘big bang’ approach of Kyoto. That way, it doesn't die.”
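    A minimal sketch of Pizer's "safety valve" idea as described above: permit prices float with the market but are capped at a pre-announced ceiling, so a government knows its worst-case compliance cost in advance. The ceiling and the market prices here are invented for illustration.

```python
CEILING = 25.0  # hypothetical price cap, dollars per ton of carbon

def permit_price(market_price):
    """Pay the going market rate, but never more than the ceiling."""
    return min(market_price, CEILING)

for market in (8.0, 25.0, 60.0):
    print(f"market ${market:.0f}/ton -> permit ${permit_price(market):.0f}/ton")
```

    The cap converts an open-ended obligation into a bounded one, which is precisely the reassurance Jacoby says governments need before committing.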


    A Well-Intentioned Cleanup Gets Mixed Reviews

    1. Richard A. Kerr

    Climate researcher James Hansen just wanted to help. By publishing an alternative, and decidedly upbeat, scenario for how greenhouse warming might play out in the next half-century, the director of NASA's Goddard Institute for Space Studies (GISS) in New York City hoped to open new prospects for attacking the problem. Instead, he got a lot of grief. “Some very thoughtful people didn't understand what we were saying,” he said at a recent workshop on his alternative scenario. “The paper has been misconstrued by both ends of the spectrum.”

    Rather than abandoning his position that rising levels of carbon dioxide from the burning of fossil fuels pose a serious threat to society, as some observers supposed, Hansen merely was trying to emphasize that there is more to the greenhouse problem than carbon dioxide. Specifically, controlling many of the components of what's popularly regarded as “pollution”—dirty hazes and throat-searing smog—would also help, perhaps through the use of more renewable energy and inherently clean fuels like natural gas.

    Hansen's proposed scenario, published in the 29 August issue of the Proceedings of the National Academy of Sciences, rests on the observation that the warming effect of carbon dioxide so far seems to have been largely counterbalanced by the cooling effect of pollutant hazes, which reflect solar energy back to space. That cancellation, Hansen and four colleagues from GISS write, points up that there are additional targets for reducing warming in the next 50 years, including such pollutant greenhouse gases as methane from rice paddies, chlorofluorocarbons from air conditioners, and the ozone of smog—as well as dark, soot-laden aerosols from such sources as diesel engines and agricultural burning. Holding these pollutants in check over the next 50 years is plausible, they argue—indeed, much of it is already being done, at least in the United States, under the Clean Air Act and the Montreal Protocol. It is also possible to reduce the growth rate of carbon dioxide in the atmosphere so as to hold the warming from that gas to a modest amount, says Hansen, who reiterates: “We're not de-emphasizing carbon dioxide.” Although resource economist Henry Jacoby of the Massachusetts Institute of Technology doesn't see much new in Hansen's latest proposal, he does see an upside. “The point is, you have to go after everything.”


    Soaking Up Carbon in Forests and Fields

    1. Jocelyn Kaiser

    The climate treaty left open the rules for using managed forests, rangelands, and croplands to help meet Kyoto targets. How should it be done?

    Is it fair for global bookkeepers to let countries subtract carbon sequestered by their farmland and forests from the carbon they spew by burning fossil fuels? If so, how do you measure how many tons of carbon an Iowa cornfield has socked away? Those questions will be high on the agenda as negotiators meet later this month to nail down the details of the Kyoto Protocol (see p. 920). Forests and other land sinks, as they are called, could offset a sizable chunk of the extra CO2 that humans pump into the atmosphere and protect biodiversity as well. But sinks are controversial, both because of uncertainties about how to measure the carbon they absorb and because some countries view sink proposals—particularly the United States'—as a dodge that lets countries avoid cutting fossil fuel emissions.

    The Kyoto Protocol includes land sinks because they're a big part of the global carbon equation. Carbon dioxide taken up by plants and soils through photosynthesis balances a whopping 2.3 of the 7.9 petagrams of the carbon belched into the atmosphere annually by human activity. (Conversely, cutting and burning forests adds 1.6 petagrams.) That's why the Kyoto Protocol stipulates that countries will be credited for planting new forests and docked for cutting down existing ones.
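    The bookkeeping above can be restated in two lines of arithmetic, using the figures quoted in the paragraph (the net-sink subtraction is my own, not the article's):

```python
# Global carbon fluxes, in petagrams of carbon per year, as quoted
fossil_and_other = 7.9   # emitted annually by human activity
land_uptake = 2.3        # absorbed by plants and soils via photosynthesis
deforestation = 1.6      # returned by cutting and burning forests

net_land_sink = land_uptake - deforestation
offset_fraction = land_uptake / fossil_and_other
print(f"Gross land uptake offsets {offset_fraction:.0%} of human emissions")
print(f"Net land sink: {net_land_sink:.1f} Pg C per year")
```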

    Still to be decided, however, is exactly how to define these forests, as well as whether to include other lands managed since 1990 to absorb carbon, for example by sustainably harvesting timber and using no-till methods on farmlands. Carbon sinks are no panacea—forests and fields would absorb less and less carbon as decades pass—but “it could make a heck of a difference” in the short term, says soil scientist Neil Sampson, a consultant in Alexandria, Virginia, who helped write a recent report on sinks from the Intergovernmental Panel on Climate Change (IPCC) (Science, 12 May, p. 942). Letting U.S. farmers make money from sequestering carbon could also win much-needed support for the treaty from Midwestern conservatives in the U.S. Senate.

    But crediting countries for such sinks would require massive surveys. For forests, it's fairly straightforward: Most industrialized countries already track the growth of their forests for timber-harvesting purposes. They typically use a combination of remote sensing, modeling, and on-the-ground measurements, such as carbon analysis of trees, leaf litter, and soil. Even many environmental groups who have some qualms about sinks are fairly comfortable with forest sink accounting, as long as there are provisions to prevent unintended ecological harm, such as mowing down old-growth forest to create tree plantations. “There are some questions about how good the inventory systems are, but in my view they can be overcome,” says Daniel Lashof, a senior scientist with the Natural Resources Defense Council in Washington, D.C.

    With farmlands and rangelands, however, monitoring is more uncertain because no system is in place. For example, the National Resource Inventory at the U.S. Department of Agriculture tracks nitrogen content and soil erosion on farmlands but doesn't routinely measure carbon. Measuring the carbon added by, say, no-till practices could be horrendously difficult, says ecologist Mac Post of Oak Ridge National Laboratory (ORNL) in Tennessee. For one, the amount of carbon absorbed would be tiny—overall, an annual change of 50 grams per 7 kilograms of soil—and it would vary with crop type, weather, and even from furrow to ridge within a field.

    Improving these numbers by sampling each farmer's field just wouldn't be practical: “You'd probably produce more CO2 than you gained,” says biogeochemist Ben Ellert of Agriculture and Agri-Food Canada (AAFC). However, a pilot project in Saskatchewan has convinced some experts that a statistical approach can bring down the costs of measuring carbon uptake. The 3-year project, supported by energy utilities interested in buying carbon credits from farmers, combined statistical sampling with modeling on 150 farms. It concluded that carbon absorbed by changes in land use could be measured for a relatively low 10 to 15 cents per hectare, according to Brian McConkey of AAFC. And better technologies are on the way, says ecologist Keith Paustian of Colorado State University, Fort Collins: A group at Los Alamos National Laboratory in New Mexico, for example, has invented a sensor for detecting carbon just by sticking the tool in the soil, eliminating the need to cart samples to a lab.

    Even if monitoring sinks is doable, a host of policy questions remain. Protecting a forest in one part of a country, for instance, may lead to logging elsewhere. Another concern is the impermanence of projects: A credited forest might eventually be destroyed by a hurricane, for example. One solution laid out by the IPCC sinks report and now endorsed by many groups is to count the carbon going in and out of all of a country's lands, no matter the type, instead of giving credit for specific activities. “Looking at the whole landscape will bring us closer to what the atmosphere is actually seeing,” says biophysicist Darren Goetze, a global change consultant in Ottawa.

    Still, the uncertainties over measurement are one reason why some want to hold off on giving credit for sinks until the second phase of the treaty, after 2012. Including sinks also faces fierce opposition from the European Union, which rejects the idea because it would allow countries to avoid reducing their fossil-fuel emissions.

    Even some sink proponents see the U.S. position as too greedy. It seeks credit for part of the 310 million metric tons of carbon per year that U.S. forests and fields will absorb between 1990 and 2012—even without any new intervention. That adds up to half of the U.S. target emissions cuts. Most other countries, arguing that only deliberately created sinks should count, won't be willing to accept these credits, says geochemist Gregg Marland of ORNL.

    Whether or not countries get credit for their sinks, many scientists look forward to a global effort to monitor the carbon sucked up by the world's green spaces. As Paustian says, “Irrespective of carbon trading, we need to understand the role of the carbon sink” to improve global models and predict how much the world may warm in the future.


    On the Trail of Ebola and Marburg Viruses

    1. Michael Balter*
    1. *Symposium on Marburg and Ebola Viruses, Marburg, Germany, 1–4 October.

    Hemorrhagic fever viruses have played starring roles in books and movies. Now researchers are making headway on understanding these real-life threats

    MARBURG, GERMANY—This quiet town north of Frankfurt does not seem like the kind of place that would have a deadly virus named after it. On a typical afternoon, university students chatter by the fountain in the Marktplatz, where half-timbered Renaissance houses lean this way and that. Nearby, tourists crane their necks to admire the soaring spires of the 13th-century St. Elizabeth's Church, the first Gothic church built in Germany.

    But in August 1967, this peaceful scene was shattered when workers in a commercial lab fell ill with a series of alarming symptoms: fever, diarrhea, vomiting, massive bleeding, shock, and circulatory system collapse. Local virologists quickly traced the outbreak—which also occurred at labs in Frankfurt and Belgrade—to monkeys from Uganda that the three labs were using for polio vaccine preparation and other research. A total of 37 people, including lab workers, medical personnel, and relatives, caught the baffling disease, and a quarter of them died. Three months later, a team of German experts isolated the culprit: a dangerous new virus, shaped like a sinister, snaking rod, which had been transmitted to humans from infected monkeys.

    The Marburg virus disappeared as mysteriously as it appeared, showing up next in 1975 when a single case was recorded in South Africa. But in 1976 a close cousin, the Ebola virus, made its first and fearsome appearance in the Democratic Republic of the Congo (DRC, formerly Zaire), killing 280 people. Since then, Ebola, Marburg, and other dreaded “hemorrhagic fever viruses” have achieved nearly mythical status. Last month, on the eve of the latest Ebola outbreak in Uganda, and in the midst of an ongoing Marburg outbreak in the DRC, about 100 Marburg and Ebola experts met in Marburg to share their latest results. Although many questions remain—including where these viruses hide between epidemics and how they cause such devastating symptoms—the new research raised hopes that treatments and vaccines might one day be a reality. The findings included the creation of a genetically engineered Ebola virus, which should provide a powerful molecular tool to analyze how these viruses cause disease, and promising, though preliminary, results in monkeys with an Ebola vaccine.

    Mysterious reservoir

    Although the deadly Ebola outbreak in Uganda has been grabbing headlines, the DRC is suffering a less publicized outbreak of Marburg virus, described in talks by Jean Muyembe-Tamfum of the National Institute for Biomedical Research in Kinshasa and Stuart Nichol of the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta. The epidemic began in November 1998 in the northern town of Durba. Workers at a gold mine just outside of Durba were the first to succumb. But the area's remoteness and ongoing local warfare prevented experts from CDC and the World Health Organization from arriving until the following May. Although the outbreak peaked in mid-1999, Nichol told those attending the meeting that new cases were still appearing as late as September 2000, at which time 99 people had been infected, with a mortality rate of more than 80%. Just over half of the victims were gold miners, which provided a possible clue to the virus's origin.

    Working with virologist Robert Swanepoel of the National Institute of Virology in Johannesburg, South Africa, Nichol and his colleagues sequenced portions of the Marburg virus's genome. To the surprise of the researchers, the viruses showed an extraordinary genetic diversity—up to 16% difference in their nucleotide sequences—even though they came from what appeared to be a single outbreak. In contrast, the virus strain responsible for a dramatic 1995 Ebola epidemic in Kikwit, DRC, which infected 315 people, showed no genetic diversity at all. From this diversity, the team concluded that the Marburg virus must have been introduced into the Durba population at least seven separate times. This finding suggests that this once rare microbe is making new inroads into populated areas, Nichol and Muyembe-Tamfum said.

    Searching for the animal reservoir for the virus, the team also trapped more than 500 bats in the gold mine. Many scientists suspect that the natural reservoirs for both Marburg and Ebola are animals, such as rodents or monkeys, with which humans come into regular contact (Science, 22 October 1999, p. 654). Bats were a prime suspect because Swanepoel had earlier shown that they could be experimentally infected with Ebola. But so far, that hunch appears to have been wrong. By the time of the meeting, “the vast majority” of the bats had been examined, and none showed signs of infection with Marburg, Nichol said. Although there was “still a glimmer of hope” that some of the remaining bats might harbor the virus, other reservoirs—including possibly arthropods such as insects and spiders—now had to be considered, he concluded.

    Leaky capillaries?

    Equally perplexing is how Marburg and Ebola cause such devastating symptoms as shock and massive bleeding. Previous research has demonstrated that the virus targets many cell types, especially the macrophages of the immune system and liver cells. Less clear is whether the endothelial cells that make up the inner surfaces of blood vessels are directly attacked by Ebola and Marburg. Some—but by no means all—researchers believe that damage to these cells, resulting in an uncontrolled flow of blood from capillaries into surrounding tissues, is responsible for the circulatory system collapse that can lead to rapid death.

    Both CDC pathologist Sherif Zaki and virologist Gary Nabel, director of the National Institutes of Health's (NIH's) Vaccine Research Center in Bethesda, Maryland, argued for a key role for endothelial cells. When Zaki examined autopsy tissues from victims of the Kikwit Ebola outbreak, he found that the capillary endothelium was severely damaged. To explore the cause of this damage, Nabel's team, in collaboration with others at NIH and CDC, genetically engineered cultured human endothelial cells to express the Ebola protein GP, which makes up the virus's outer coat. The results, published in the August issue of Nature Medicine, were dramatic. Within 24 hours, the cells could no longer adhere to each other; they died within a few days. And when the gene coding for GP was introduced directly into blood vessels that had been removed surgically from pigs or humans, the vessels suffered massive endothelial cell loss within 48 hours and became much more permeable to fluids. “Increased endothelial permeability and injury to the microvasculature appear to be central to the pathogenesis” of Ebola and Marburg infections, virologist Brian Mahy of CDC told Science.

    Yet other researchers question the relevance of these findings during a real-life outbreak. Virologist Susan Fisher-Hoch of the Jean Mérieux Laboratory in Lyons, France, argued that Ebola and Marburg victims do not show characteristic signs of leaky capillaries, such as pulmonary edema and swelling of the head and neck. What's more, she added, survivors recover too quickly from even serious shocklike symptoms to have suffered extensive endothelial cell damage. Supporting that view, Thomas Geisbert of the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) at Fort Detrick, Maryland, reported preliminary results from a study of monkeys experimentally infected with Ebola. When the USAMRIID researchers examined the animals at various stages of infection, they saw little endothelial cell damage until the end stages of the disease, when severe symptoms had been present for several days.

    “The question is still open at the moment,” says virologist Heinz Feldmann of the Canadian Science Center for Human and Animal Health in Winnipeg, Canada.

    Designer Ebola

    Sorting out the mechanisms of infection may soon be easier, thanks to an advance described by molecular virologist Viktor Volchkov of Claude Bernard University in Lyons, France: a genetically engineered Ebola that will enable researchers to mutate the virus at will to see which of its genes and proteins are most responsible for its deadly effects. “This is a great tool,” says Fisher-Hoch. USAMRIID virologist Mike Bray agrees: Volchkov's work was at the “top of the list” of important talks at the meeting, he told Science. Last year, Volchkov, in collaboration with colleagues at the Institute for Virology in Marburg, determined the complete nucleotide sequence of the Ebola genome: an 18,959-base-long single strand of RNA. The researchers have now engineered copies of the virus by constructing a DNA molecule with a nucleotide sequence complementary to that of the Ebola genome. When they introduced this complementary DNA into cultured cell lines, along with genes coding for four key Ebola proteins, including the structural protein GP, the cells proceeded to make new Ebola RNA. The result was a lab-created virus that is fully infectious when transferred to other cell lines.

    “We can now answer a lot of questions about virulence and pathogenesis,” says Bray. By altering the sequence of the complementary DNA, Volchkov's team has already created a mutant Ebola with which to explore how the virus modulates its deadly effects. The mutation, in the gene coding for GP—a protein highly toxic to target cells—causes the virus to make many more copies of the protein. Volchkov identified a mechanism that gives the virus some “self-control” over how much GP it produces, so as not to kill off infected cells before it can spread efficiently to noninfected cells. Feldmann sees possible applications in vaccine strategy as well: “If we knew how to attenuate the virus, we could make a genetically engineered vaccine.”

    For years, various teams have been struggling to make vaccines against Marburg and Ebola, with some success in animal models such as guinea pigs and monkeys. At the meeting, Nabel reported that he had made some headway with a DNA vaccine. Using the so-called “prime-boost” vaccine technique, Nabel and his collaborators—including postdoc Nancy Sullivan of NIH and Anthony Sanchez of CDC—injected four monkeys with a prototype vaccine consisting of “naked DNA” complementary to the Ebola GP gene, followed by later injections of the same gene using an adenovirus vector. The vaccinated monkeys, as well as four unvaccinated controls, were then infected with Ebola. Within 7 days, the controls were either dead or dying, while the vaccinated monkeys were still alive and healthy many months later.

    These findings, in press at Nature, drew mixed reviews from the assembled scientists. “I think [the vaccine] worked, because [the challenge virus] killed the controls,” says Bray. Even so, Bray and others cautioned that further experiments are needed before concluding that Nabel's team has a working vaccine. They were especially concerned that Nabel did not specify in his talk the “challenge” dose of Ebola virus used to infect the monkeys—the higher the challenge an animal can withstand, the greater the protection. In earlier work with an experimental vaccine against Marburg, USAMRIID virologists Alan and Connie Schmaljohn had protected monkeys challenged with high doses of that virus. A number of researchers at the meeting told Science privately that they believed Nabel's challenge dose was much lower than that used by the Schmaljohns.

    Nevertheless, “I am cautiously optimistic that [Nabel's vaccine] is a significant step forward,” says Alan Schmaljohn, while also stressing the need for further studies to verify the results. “I will be more comfortable once it is repeated with a higher challenge dose.”

    Although the meeting revealed considerable progress in understanding these deadly viruses, it may be many more years before this research pays off in terms of help for their future victims. Says Mahy: “Ebola and Marburg will continue to cause severe illness and deaths in Africa … until we have an effective drug treatment or vaccine. This should be a high priority.”


    Gamma Ray Bursts May Pack a One-Two Punch

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Unexpected lines in x-ray spectra hint that the universe's most energetic explosions are triggered by a delayed-action fuse

    Scientists have fingered a new suspect in a case of unimaginable cosmic violence. The mystery, which has haunted astrophysicists for 3 decades, is what causes gamma ray bursts—short, intense flashes of high-energy photons that occur about once a day somewhere in the sky. In early 1997, observations by the Italian-Dutch BeppoSAX satellite and follow-up studies with ground-based telescopes traced the flashes to cataclysmic explosions in distant reaches of the observable universe. But what could cause the explosions, which produce more energy in 1 second than the sun will emit in its entire 10-billion-year lifetime?

    Most astrophysicists now agree that the answer is a hypernova, the blast of energy released when a supermassive star collapses into a black hole. A rival hypothesis—that the explosion is triggered when two neutron stars collide—is less popular, although some astrophysicists invoke it to explain very short gamma ray bursts with a distinctive energy distribution (see sidebar).

    Two papers in this issue of Science (pp. 953 and 955), reporting on new x-ray observations of two gamma ray bursts, embrace a modified form of the hypernova model. On its way to becoming a black hole, the authors propose, the supermassive star actually collapses twice. “The classical hypernova model is dead,” says Mario Vietri of the Third University of Rome, a co-author of both papers.

    The new observations were the talk of the day at a recent workshop.* “This is the second most important BeppoSAX discovery” after the first bursts were pinpointed in 1997, says Filippo Frontera of the CNR Institute of Technology and Cosmic Ray Studies (TeSRE) in Bologna, Italy. “It's very exciting,” adds theorist Peter Mészáros of Pennsylvania State University, University Park.

    According to standard stellar evolution theory, a massive star ends its short life in a supernova explosion, blasting most of its mass into space. The star's core implodes into a small, compact neutron star, or, if it is massive enough, into a black hole. Although the details of black hole formation are unknown, astronomers believe that the final stages of the process may release huge amounts of energy, as trillions of tons of superheated gas are instantaneously sucked into the hole at almost the speed of light.

    The hypernova or collapsar model was first proposed by Bohdan Paczyński of Princeton University and independently by Stan Woosley of the University of California, Santa Cruz. It describes how the core of a supermassive, rapidly rotating star collapses all the way into a black hole after it runs out of nuclear fuel, while two powerful jets of matter and energy shoot into space in opposite directions. Internal shocks in the jets and the interaction of the jets with circumstellar material create the gamma ray burst. The model helps to explain why many gamma ray bursts occur in star-forming regions, where massive stars still reside at the end of their relatively short lives. It won additional converts in April 1998, after a gamma ray burst was observed to coincide with a strange supernova (Science, 19 June 1998, p. 1836).

    The new x-ray observations show further evidence of stellar explosions in the redshifted radiation from two bursts. On page 953, a team led by Lorenzo Amati of TeSRE reports BeppoSAX observations of iron-absorption features in the x-ray spectrum of GRB 990705 (the gamma ray burst of 5 July 1999). On page 955 Luigi Piro of the CNR Institute of Space Astrophysics in Rome and his colleagues report observations by NASA's Chandra X-ray Observatory of iron emission in the spectrum of GRB 991216. (Another group, led by Angelo Antonelli of the Astronomical Observatory of Rome, has since detected iron in the x-ray spectrum of a third gamma ray burst, GRB 000214.)

    Astrophysicists believe that all the iron in nature forms in nuclear reactions inside stars and escapes when the stars explode. “You really need a supernova” to produce the large amounts of iron that Chandra detected, Piro says, a finding consistent with the hypernova model. The hitch, Vietri says, is that the observations by Amati's group show that dense, iron-rich material already appears to have traveled millions of kilometers from the center of the explosion by the time the burst takes place. “In the hypernova model, there's no time [for the iron] to cover these large distances,” he says.

    So Vietri, together with Luigi Stella of the Astronomical Observatory of Rome, has proposed a new scenario, the “supranova model.” It assumes that a massive star first explodes as a supernova, shedding its iron into space and leaving a spinning neutron star behind. For a few months or years, the rapid rotation of this stellar remnant keeps it from collapsing into a black hole. Eventually, though, the neutron star slows down, probably because of magnetic braking. Then it implodes, touching off a gamma ray burst.

    Not everyone is convinced. “The ‘supranova model’ is not a model but a wish,” Woosley says. “There are no detailed dynamical calculations to back it up.” Mészáros admits that the iron-absorption features may be difficult to explain in the original hypernova model, but he says that Vietri's model is only “a possibility. The case is not yet proven.”

    Martin Rees of Cambridge University is bothered by what he calls an “unnatural” long delay between the supernova and the gamma ray burst. “Most people would guess that the [final] collapse would happen after minutes, not months,” he says. Rees and Mészáros say they have worked out a simpler explanation for the new x-ray observations, in a paper in press at Astrophysical Journal Letters.

    Some critics even doubt whether the spectral lines themselves are real. Rees says it is curious that Chandra spotted the faint emission features at an energy to which its detector is most sensitive; it would be quite a coincidence, he says, if the spectrum were redshifted into just the right range. And Woosley wonders why the Japanese ASCA x-ray satellite hasn't detected similar spectral features.

    Piro admits that his team needs to do more work, starting with a reanalysis of x-ray spectra of old gamma ray bursts. “We're basically in the same stage as when the first iron lines were observed in [the spectra of] active galactic nuclei,” he says about disputed sightings that finally won acceptance.

    The clues that crack the case may well come from a newcomer to the investigation. In December scientists at the Massachusetts Institute of Technology in Cambridge will start receiving observations from the HETE-2 orbiting gamma ray observatory, launched on 9 October. Its early-warning capability, Rees says, might enable astronomers to study the crucial first hour of a gamma ray burst's afterglow. “It's terribly important to see what the [iron] line does” at that time, he says. “If we discover line changes, that would be a very important clue.”

    • *Gamma ray bursts in the afterglow era, 17–20 October, Rome.


    X-ray Satellites Seek Clues to Bursts

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    Scientists rejoiced last month when the High Energy Transient Explorer 2 (HETE-2), the first satellite dedicated to spotting gamma ray bursts, rocketed successfully into orbit from Kwajalein Missile Range in the Pacific Ocean. The new orbiting observatory bolsters a handful of x-ray satellites whose instruments are trained on the mysterious explosions. But researchers say setbacks to the fleet have left unfortunate gaps in coverage.

    Launched on 9 October, HETE-2—built by the Massachusetts Institute of Technology in Cambridge, in collaboration with NASA and institutes in the United States, France, and Japan—is expected to be the field's workhorse for the next 4 years. It will provide precise positional information for about one burst per week. The data will reach ground-based observatories within seconds, through a continuous satellite link and a dedicated Internet service. Large optical and radio telescopes will then study the burst, which may still be in progress, as well as its afterglow and the distant galaxy in which it occurs.

    Astronomers hope HETE-2 will solve the riddle of short bursts—a distinct group of explosions, each lasting less than a second, that have never been studied in detail. Because short bursts are much briefer than ordinary gamma ray bursts and have a different distribution of energies, many astronomers suspect that they result from a different explosion mechanism. “HETE-2 can give precise positions for these short bursts and enable counterpart identification to determine if they have the same properties as long bursts,” says Neil Gehrels of NASA's Goddard Space Flight Center in Greenbelt, Maryland.

    Because HETE-2 cannot detect x-ray spectra, it can play only a supporting role in such crucial work as scrutinizing gamma ray bursts and their afterglows for traces of iron (see main text). For studies like that, satellites such as NASA's Chandra X-ray Observatory and the Italian-Dutch BeppoSAX are the instruments of choice. What would have been an even better x-ray observatory, the Japanese ASTRO-E satellite, plunged to Earth in February after its launch rocket misfired (Science, 18 February, p. 1178).

    Another potential burst-hunter is out of the lineup by fiat. According to Luigi Piro of the CNR Institute of Space Astrophysics in Rome, the European XMM-Newton x-ray satellite, launched in December 1999, is actually better equipped than Chandra to observe spectral features in gamma ray bursts. Unfortunately, he says, the European Space Agency doesn't accept “target of opportunity proposals” that let researchers respond rapidly to unexpected events such as gamma ray bursts. “They're missing a big opportunity here,” Piro says. “I hope they will change their minds.”

    Meanwhile, BeppoSAX may get a new lease on life. Launched in April 1996, the mission was originally funded for 5 years. But according to Filippo Frontera of the CNR Institute of Technology and Cosmic Ray Studies in Bologna, Italy, a request to extend its life for 1 year has recently been presented to the Italian Space Agency ASI. A funding decision is expected early next year.