News this Week

Science  23 Apr 1999:
Vol. 284, Issue 5414, pp. 562

    Two Former Grad Students Sue Over Alleged Misuse of Ideas

    Eliot Marshall

    Not long ago, it would have been unusual for a student to accuse a senior faculty member of misconduct—and almost unheard of to back up the accusations with a lawsuit. But the culture of academia may be changing. Several years ago a nutritionist sued the University of Alabama, Birmingham, and faculty over alleged misappropriation of her graduate school research (Science, 31 January 1997, p. 610). Now, two former Ph.D. candidates have sued professors, alleging misappropriation of research and charging the universities with complicity.

    Both new cases—one involving Cornell University and the other, Columbia University—revolve around the prickly academic issue of who owns ideas, especially in the unequal pairing of professor and student. The universities say that's not a question for the courts to decide, but the students say that academic grievance procedures failed them. Both cases could come up for review this spring in state courts. Cornell and Columbia, which investigated and dismissed the complaints, say they acted properly and predict the courts will side with them.

    In the most recent suit, filed on 29 March, an education researcher from Cornell, Antonia Demas, claims that a member of her thesis committee misappropriated her ideas on teaching nutrition, then used them to get a research grant. Demas, formerly a nutrition consultant, began working on a Ph.D. in Cornell's department of education in 1991. In her complaint she says that she hoped to validate a method of teaching children about unfamiliar but healthy foods by cooking them in the classroom, then serving the food in school lunches. Demas says she also developed a method of measuring the children's responses.

    Demas claims that in 1993, Cornell nutrition professor David Levitsky “pushed” to be added to her committee of Ph.D. advisers, partly because he hoped to use her study group for obesity research. Demas agreed. She contends that Levitsky never completed the obesity study, but within a year, he began taking credit for her work in lectures and interviews. Eventually, she claims, Levitsky used her research in a grant application, failed to credit her properly, and shut her out of the project. She did, however, receive a Ph.D. from Cornell in 1995. Levitsky, who denies these allegations, declined to comment on grounds that doing so might affect the litigation.

    Demas took her complaints to the Cornell ombudsman in 1995. After a review, the ombudsman issued a ruling that Levitsky should co-author a paper with Demas. But that never happened. Demas, still seeking redress, appealed to the dean of faculty, Peter Stein. Nothing happened, her brief says, until the three original members of her thesis committee—who have taken her side throughout the dispute—personally intervened. Stein then conducted an inquiry. Stein confirms that in May 1996 he found that 22 of the 23 allegations against Levitsky were not covered by the scientific misconduct rules of the Department of Health and Human Services. He dismissed them. He asked that one allegation be investigated further; it was later dismissed. Stein referred other ethical questions to other deans, who imposed no sanctions.

    The main question, Stein now says, is whether a professor can use a student's ideas as the basis of his own research grant. It would be wrong to do so, Stein says, if the student had not published the work or received credit for it. But in this case, Stein says, Demas had published her thesis. Using ideas in the public domain is not misconduct, Stein says, even if it preempts a student from getting a grant. In his report, Stein wrote that “Levitsky's preemption of Demas's ideas (i.e. the concept and the recipes) lies within the boundary of permissible academic entrepreneurial behavior and does not warrant further investigation.”

    Last month, Demas's attorneys filed a 58-page complaint charging Levitsky and Cornell with a litany of misdeeds, including fraud and “breach of fiduciary duty.” Cornell's counsel, James Mingle, says the university will ask the court to dismiss the complaint because it has no merit, and, in any event, this type of academic disagreement “is not actionable.”

    Columbia's attorneys are battling a similar complaint brought by a former Columbia mathematics student, Sheng-Ming Ma, against former mathematics department chair Duong Phong. In a complaint filed in March 1998 in the New York Supreme Court for New York City, Ma claims Phong took a math proof he had done as a thesis project and published it as his own in a paper co-authored with Elias Stein of Princeton University. Phong has denied the allegation in a response to the suit.

    Ma's complaint alleges that Phong assigned Ma a thesis problem on a mathematical topic involving oscillatory integrals and that Ma finished a proof in 1995. Phong first rejected the work as flawed, Ma's brief says, but encouraged the student to continue and even suggested that the two might co-author a paper. Later, in March 1997, Ma says, Phong told him the manuscript was “totally wrong” and urged him to focus on another topic.

    Ma claims that he sent his draft to other mathematicians for advice. One of them, Stein, gave him a “terrible shock,” Ma says: Stein wanted to know why Ma was working on a problem that he and Phong had recently solved and were planning to publish in Acta Mathematica. Stunned at first, Ma says, he soon accused Phong and Stein of plagiarizing his work. Ma acknowledges that his work was not completely original: Phong and Stein had been working on this topic since 1991. But Ma claims that Phong had set this specific problem aside by the time his thesis work began.

    Ma sought help from Columbia administrators and others to stop publication of the paper. But few could understand the text, and nearly all the mathematicians Ma contacted sided with Phong. They told Ma that his thesis work was inadequate, and that he was wrong to claim that the Phong-Stein paper, which Acta Mathematica published in November 1997, was plagiarized.

    At Ma's insistence, graduate school dean Eduardo Macagno looked into the case, reviewing comments by Phong, Stein, two other Columbia mathematicians, and a Harvard mathematician. Macagno concluded that no plagiarism had occurred. The math department told Ma that he would have to apologize to Phong before he would get a new mentor and that without a mentor, he would have to leave. Ma refused, and Columbia dismissed him in 1997. For a time, Ma says, he worked at a Subway sandwich shop. But, with two master's degrees in math from Columbia, he found a computer-related job. He filed suit in 1998. Mathematician Lawrence Alan Shepp of Rutgers University in New Brunswick, New Jersey, is supporting Ma's case but says others must judge whether Ma's work is correct.

    Phong declined to comment. Stein says that Ma's argument is “without merit.” It was Ma who used his professor's ideas, Stein says, not the other way around: Ma's “got it upside down.” He recalls that Phong tried to get Ma to work on a different and difficult subset of the problem the two professors were working on, but that Ma, making no progress, decided to try to duplicate their efforts.

    Columbia is asking to have the suit dismissed because it claims to have made a “diligent, complete, and unbiased” investigation before rejecting the complaint. The university also argues that Ma is trying to involve the court in “purely academic decisions” which New York has “repeatedly held to be beyond judicial review.” Finally, the university suggests that mathematical principles cannot be plagiarized in any case because they “simply cannot be copyrighted.”

    If the case does go to trial, it could create a unique problem: The judge, and possibly a jury, might be asked to rule in a few days on who contributed what to a complex scientific proof—the kind of controversy that can take years to resolve among mathematicians.


    Forecasters Learning to Read a Hurricane's Mind

    Richard A. Kerr

    Hurricane forecasting has come a long way since one sneaked up unannounced on Galveston Island, Texas, in 1900 and killed 8000 people. Nowadays, meteorologists know when a storm is on its way, but predicting just where it will hit land still isn't easy. For most of the past half-century, forecasters have struggled to narrow their predictions of a hurricane's next move, but as recently as the 1970s, guesses of a hurricane's position 24 hours ahead of time were off by an average of more than 200 kilometers. Now hurricane researchers finally have something to celebrate.

    “It's been a pretty exciting 5 years,” says hurricane specialist Russell Elsberry of the Naval Postgraduate School in Monterey, California. Better observations of the streams of winds that carry hurricanes toward land are feeding new computer models for predicting how those winds will shift. And, as recent analyses—including one in last month's Bulletin of the American Meteorological Society—show, these new tools are getting results. “It's quite clear that the [U.S.] National Hurricane Center has been making much improved track forecasts” of future storm movement, says Elsberry. The new forecasting skill means that crowded coasts will have more time to prepare for storms, and warnings can be limited to smaller sections of coast, saving millions of dollars on unnecessary evacuations.

    Hurricane forecasting has spent a long time in the doldrums. In the 35 years after record keeping was begun in 1954, forecasts of a storm's position 24 hours in the future improved by only about 1 kilometer per year, even after satellite images made it easier to track the position, winds, and extent of a hurricane. One problem was that neither satellite images nor the scattered data from weather buoys and ships offered many clues about the stream of air surrounding a storm, which determines its speed and direction.

    “There is no substitute for in situ observations,” says meteorologist Kerry Emanuel of the Massachusetts Institute of Technology. For 15 years, researchers had been collecting those observations by flying aircraft near the storms and releasing instrumented packages called dropwindsondes—a sort of weather balloon in reverse that radios back wind speed and direction, temperature, pressure, and humidity as it falls. But those efforts were sporadic until 1997, when the National Weather Service (NWS) made such observations routine and introduced a new dropwindsonde that tracks itself using the satellite-based Global Positioning System, allowing more precise wind mapping. The NWS also acquired a Gulfstream-IV jet, which could fly higher and faster around storms than the traditional hurricane-hunter aircraft, probing more of the nearby atmosphere.

    In the March Bulletin of the American Meteorological Society, Sim Aberson and James Franklin of the National Oceanic and Atmospheric Administration's (NOAA's) Hurricane Research Division in Miami, Florida, describe the payoff: The 1997 dropwindsonde observations improved storm-track forecasts by 31% at 24 hours, by 32% at 36 hours, and by 12% at 48 hours, they report, compared to computer forecasts made without the observations. The tropics were relatively quiet in 1997, prompting just five missions by the Gulfstream-IV, so “you don't want to make too much of the numbers,” says Franklin. Still, he says, “we're fairly confident '98 will be like '97.”

    Along with better data, forecasters have better tools for interpreting the information. Their primary aid is computer modeling that incorporates the latest observations to create a picture of the storm and its surroundings and calculates how the storm will move and develop. “There has been a quantum increase in the skill of the models,” says Stephen Lord, a deputy director at the NWS's National Centers for Environmental Prediction in Camp Springs, Maryland.

    The prime example has been the hurricane model developed by Yoshio Kurihara, Morris Bender, and Robert Tuleya of NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey. The GFDL model works on two scales. Like standard global atmospheric models, it simulates the atmosphere in broad strokes to capture the river of air, thousands of kilometers across, that sets the hurricane's overall course. But it also zooms in on the hurricane's vortex, using the latest satellite and in situ data to model the storm and the way it interacts with its surroundings in fine detail.

    In tests conducted before it became operational at the National Hurricane Center (NHC) in 1995, the GFDL model outperformed its predecessor, logging average track errors that were about 12%, 24%, and 28% better at 24, 48, and 72 hours, respectively. Since then, “it's been the best performer” of the half-dozen models that NHC forecasters consult before issuing an official forecast, according to James Gross of the NHC.

    Even so, it can be hard to tell whether better data and models are actually improving the official forecasts, because the improved tools are new and forecasters have always had good seasons and bad, depending on the nature of the storms. But meteorologist Colin McAdie of the NHC thinks track forecasts are improving at an accelerating pace. His recent analysis shows that at all forecast times, the predictions improved twice as fast during 1992 to '96, the period when the GFDL model debuted, as they had during the previous 2 decades. The routine dropwindsonde observations that began in 1997 seem to have helped sustain that progress.

    Such improvements should allow the NWS to target its hurricane warnings more precisely. When the weather service issues a hurricane warning, prompting an evacuation, it generally includes a stretch of coast three times longer than the section that eventually suffers high winds, just to be sure—which means that hundreds of kilometers are cleared but suffer little damage. With costs averaging half a million dollars per kilometer of evacuated coast, according to the NWS, not to mention a toll in public goodwill, that's an expensive insurance policy. If the improvements of the '90s can be continued, averting hurricane disasters should be cheaper and less disruptive.
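    As a rough sketch of that arithmetic (the 300-kilometer warned stretch below is a hypothetical figure for illustration; only the 3-to-1 overwarning ratio and the half-million-dollar-per-kilometer cost come from the article):

```python
# Back-of-envelope cost of hurricane overwarning. The 3x overwarning factor and
# the ~$0.5 million per kilometer evacuation cost are the article's figures;
# the 300-km warned stretch is a hypothetical example.

COST_PER_KM = 0.5e6        # dollars per kilometer of evacuated coast (NWS estimate)
OVERWARNING_FACTOR = 3     # warned coastline vs. coastline that sees high winds

warned_km = 300                            # hypothetical warning zone
hit_km = warned_km / OVERWARNING_FACTOR    # coast that actually suffers high winds

wasted_cost = (warned_km - hit_km) * COST_PER_KM
print(f"Coast evacuated needlessly: {warned_km - hit_km:.0f} km")
print(f"Cost of overwarning: ${wasted_cost / 1e6:.0f} million per storm")
# -> 200 km and roughly $100 million in this example, which is why shaving
#    even a fraction off the warned stretch saves millions.
```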


    R&D Takes a Hit, But Don't Count It Out

    Eliot Marshall

    Dividing along party lines, Congress narrowly approved a Republican budget resolution on 15 April that would hold the line on federal spending and, in the process, slash most civilian R&D budgets. The $1.7 trillion budget for fiscal year 2000, which begins 1 October, would channel surplus revenue into tax cuts and the Social Security program while requiring steep reductions in future “discretionary” domestic programs. Over the next 5 years, according to an estimate by the American Association for the Advancement of Science (AAAS, which publishes Science), the cuts would range from 6% for the National Institutes of Health (NIH) to 14% at the National Science Foundation (NSF). But the gloomy resolution comes with a silver lining: There is almost no chance that Congress will stick to its numbers.

    Congressional leaders took great pride in getting the budget resolution approved early, only the second time in 12 years that they have met the deadline of 15 April. But legislators are already planning ways of getting around a measure that presents a politically unpalatable set of fiscal options. The first opportunity may arrive in a few weeks as Congress takes up an emergency bill to pay for current U.S. military operations in Kosovo. This “veto-proof” supplemental spending bill could become a vehicle for other budget-busting military expenditures as well as a means to negotiate increases in education, transportation, and other popular programs.

    How did Congress get itself into such a fix? The problem goes back to a 1997 law that imposed “caps” on specific budget areas. Adopted during a time of deficits, the caps have become a headache now that the government anticipates annual budget surpluses. Last year, Congress and the Clinton Administration retained the caps but circumvented them by labeling many programs as “emergency” measures. That label exempted them from a requirement that any increase be offset by a cut of equal or greater size. As a result, federal outlays officially remained below the caps in 1999. In reality, however, Congress overshot its target by some $20 billion. Analysts say that the same thing is likely to happen in 2000.

    Among Republicans, the most outspoken critics of the budget gimmicks are members who draft the spending bills—the chairs of appropriations committees. Early this month, for example, Senator Ted Stevens (R-AK), chair of the Senate Appropriations Committee, said: “I don't think we can live under these caps.” On 14 April, Representative John Porter (R-IL), chair of the House subcommittee that writes the appropriations bill for NIH, told the National Health Council, a biomedical interest group, that he wanted to duplicate last year's 15% increase for NIH. Porter said that both Democrats and Republicans want to change the rules to allow hefty increases but that neither wants to be the first to propose it. “In the end,” Porter predicted, “the White House and Congress will sit down and quietly raise the caps.”

    Democrats were even more critical. Senator Jay Rockefeller (D-WV), a member of the Senate Commerce subcommittee for science, described the 15 April vote as a setback for research. “Now is not the time to turn our back” on science and technology, Rockefeller said at a Senate hearing on the Administration's R&D budget request for 2000. Representative George Brown (D-CA), ranking member on the House Science Committee, summed up the prevailing skepticism about the fate of the budget resolution in a press release issued last week. The bad news, Brown said, is that the budget resolution “treats R&D very poorly. … The good news is that this budget is almost entirely irrelevant.”


    Black Holes Enter the Middleweights

    Mark Sincell*
    *Mark Sincell is a free-lance science writer in Tucson, Arizona.

    Black holes have seemed to come in only two varieties: “supermassive” ones, which power brilliant galaxies called quasars and weigh millions to billions of times more than the sun, and “stellar mass” black holes, which have about the mass of one large star. But at the meeting of the High Energy Astrophysics Division of the American Astronomical Society in Charleston, South Carolina, last week, two groups reported the discovery of a new class of black holes right in the middle.

    Astronomers believe that stellar mass black holes form when a massive star reaches the end of its life and collapses to a point of infinite density. Supermassive black holes are more mysterious. “No one really knows” where they come from, says astronomer Richard Griffiths of Carnegie Mellon University in Pittsburgh. One theory holds that they form in so-called starburst galaxies, which contain seething cauldrons of young, hot stars that flare up suddenly in the galaxy's core and burn out just as fast, leaving behind a pile of stellar debris, including stellar mass black holes. These may lump together and feed off the remains of other stars, growing into giant black holes.

    To test this hypothesis, astronomers have searched nearby galaxies for the intermediate-size black holes that should form along the way. Black holes are invisible, of course, but the hot, gaseous accretion disks that encircle and feed them are not. The hot gas emits copious x-rays, and its spectrum also has an x-ray “tail,” thought to result as ultraviolet photons from deep inside the disk collide with fast-moving electrons at the surface, gaining energy. The total disk luminosity fluctuates dramatically, but theorists think that the maximum luminosity is proportional to the mass of the central black hole. Earlier searches turned up several x-ray sources bright enough to be intermediate-mass black holes, but these sources did not seem to have the expected tail or the rapid variability.

    Now, two groups have taken a closer look at several of these x-ray sources. Griffiths and his Carnegie Mellon colleague Andrew Ptak pointed the Japanese x-ray satellite ASCA at one source in the starburst galaxy M82. They found a fluctuating x-ray source whose luminosity and variability pattern matches that of a disk around a black hole weighing 460 times the mass of our sun. X-ray astronomers Ed Colbert and Richard Mushotzky of the Goddard Space Flight Center in Greenbelt, Maryland, examined 39 archived galaxy spectra compiled by the x-ray satellite ROSAT and found the telltale x-ray tail in six sources. The high luminosities of another 15 sources suggest that they are also black holes ranging from 100 to 10,000 times the mass of the sun, although they lack the complete spectral fingerprint of an accretion disk.
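    The mass figures rest on the scaling mentioned above, that a disk's maximum luminosity is proportional to the black hole's mass. The standard form of that scaling is the Eddington limit; here is a minimal sketch assuming the textbook Eddington formula (the article does not spell out exactly which calibration the two teams used):

```python
# Eddington limit: the luminosity at which radiation pressure on infalling
# ionized hydrogen balances gravity, roughly 1.26e38 erg/s per solar mass.
# Assumes the textbook scaling; the teams' exact calibration isn't given here.

L_EDD_PER_MSUN = 1.26e38  # erg/s per solar mass

def max_luminosity(mass_msun: float) -> float:
    """Eddington luminosity for a black hole of the given mass (solar masses)."""
    return L_EDD_PER_MSUN * mass_msun

def min_mass(luminosity_erg_s: float) -> float:
    """Minimum black hole mass implied by an observed peak luminosity."""
    return luminosity_erg_s / L_EDD_PER_MSUN

# The M82 source, at ~460 solar masses, could radiate at most about:
print(f"{max_luminosity(460):.1e} erg/s")        # ~5.8e40 erg/s

# Conversely, a source peaking at 1e40 erg/s needs at least ~80 solar masses,
# far more than a stellar-mass black hole, far less than a supermassive one.
print(f"{min_mass(1e40):.0f} solar masses")
```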

    “The two studies complement each other nicely,” says Griffiths. He adds that the studies, which are appearing in this month's Astrophysical Journal and Astrophysical Journal Letters, are “a major clue” that the objects are supermassive black holes in their infancy.

    The observers “have done a very uncertain exercise very carefully,” says astrophysicist Jean-Pierre Lasota of the Meudon Observatory in France. But not everyone agrees that these middle-sized black holes are newly formed from collapsed stars and are on their way to becoming even bigger. Astrophysicist Fred Lamb of the University of Illinois, Urbana-Champaign, for example, thinks it is more likely that both middleweight and supermassive black holes condensed out of primordial material in the early universe.

    Sorting out these possibilities will take some time. “No one ever thought much about” middleweight black holes, Mushotzky points out, “because no one had ever seen one.”


    Starving Black Holes Sound an SOS

    Govert Schilling*
    *Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    An x-ray satellite may have heard the whimpers of dying quasars. The Japanese Advanced Satellite for Cosmology and Astrophysics (ASCA) has picked up feeble high-energy x-rays from six old, nearby galaxies—the distress signals of supermassive black holes starving to death. Or so say American and British astronomers who presented their results last week at the meeting of the High Energy Astrophysics Division of the American Astronomical Society in Charleston, South Carolina.

    The results suggest that the giant black holes powering quasars, brilliant galaxylike objects in the early universe, did not shut down completely as their food supply—interstellar gas—dwindled. Instead, say Tiziana Di Matteo of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, and Steven Allen and Andy Fabian of the Institute of Astronomy in Cambridge, U.K., such black holes continue to emit a whisper of x-rays, generated by a slow trickle of very hot gas. Some other researchers aren't convinced by this picture, which the astronomers also describe in a paper submitted to the Monthly Notices of the Royal Astronomical Society. But if it's correct, it implies that part of a mysterious glow of x-rays that fills the universe might come from starving quasars.

    Astronomers already suspected that the giant elliptical galaxies where ASCA picked up the x-ray signals harbor black holes millions or billions of times more massive than the sun. In five of the six, says Fabian, stars and gas whip around the center at high speeds, apparently in the grip of a powerful gravitational field. But these black holes had seemed quiescent, like those thought to sleep at the centers of our own galaxy and others. The black holes may once have produced the prodigious radio and x-ray emissions that emerge from active galactic nuclei and quasars, but they long ago fell silent.

    Or so astronomers thought. Di Matteo, Allen, and Fabian say that the small quantities of high-energy x-rays that ASCA picked up are just what you would expect of a quasar still being fed by a trickle of gas. Instead of forming the flat, dense disk of infalling material thought to surround the black hole in a quasar or active galaxy, the meager infall should form a bloated, tenuous disk, or torus. According to theoretical models, the ionized hydrogen in such a low-density disk would grow very hot, because hydrogen nuclei, or protons, radiate energy slowly. In a denser disk they can transfer energy to electrons, which radiate millions of times more efficiently, but in a rarefied disk, collisions between the protons and electrons would be infrequent. The superheated gas would slowly leak very high-energy x-rays.

    “It's a plausible model,” says Bram Achterberg of Utrecht University in the Netherlands, “although it's not completely clear that such hot, thick disks can remain dynamically stable over long periods of time.” Julian Krolik of Johns Hopkins University in Baltimore also questions the assumption that heat would be bottled up in the protons. “Laboratory experiments indicate that there are many more mechanisms [for electrons and protons] to exchange energy” than the models allow, he says.

    Fabian concedes that he and his colleagues also can't be sure the faint x-rays really are coming from the cores of the elliptical galaxies; ASCA's positional accuracy of half an arc minute is simply not high enough. “There's a lot of galaxy in half an arc minute,” he says. But he says that NASA's Chandra X-ray Observatory, due to be launched later this year, will “without doubt deny or confirm our model.”

    If it does hold up, such faint x-ray signals could support astronomers' suspicions that droves of supermassive black holes lurk in nearby galaxies. The murmurs of starving black holes could also be a major part of the universe's diffuse x-ray background, say Di Matteo and Allen, although Fabian is not so sure. “Here I disagree with my co-authors,” he says, noting that most astronomers think very distant active galaxies are the source of the pervasive x-rays. “That's the model I believe in for 6 days of the week.”


    Study Sounds Alarm on Yellowstone Grizzlies

    Jocelyn Kaiser

    Drive through Yellowstone National Park on a late spring day, and there's a good chance you'll see some of its thriving black bears—a young adult foraging near a stream, or a mother with cubs clambering up a hill. But odds are you won't spot a grizzly: Only a few hundred of these elusive animals roam the Yellowstone ecosystem. Just as elusive, however, is the answer to whether the grizzly is prospering out of the spotlight. The Interior Department, which runs the park, thinks so, and in June intends to release a strategy for managing the bear after its eventual removal from the threatened species list. Others disagree, emboldened by a new study suggesting that the Yellowstone grizzly is not yet out of the woods and that the government's declaration of victory may be premature.

    Wildlife biologists have dueled for years over how many grizzlies inhabit Yellowstone. An accurate census of the reclusive bears is out of the question, so both sides rely in part on estimates of the population's growth rate to determine whether the grizzly can survive without federal protection. Much of the rancor stems from differing interpretations of data on grizzlies tracked by radio or spotted year-round. Using a new model of population dynamics based on field data, ecological modeler Craig Pease of Vermont Law School and David Mattson, a U.S. Geological Survey (USGS) grizzly biologist, estimate that Yellowstone grizzly numbers grew only about 1% a year from 1975 to 1995—much lower than the 5% annual rise over the last decade claimed by Interior. Their report, in this month's issue of Ecology, also portends harder times for the grizzlies, thanks to poor yields of whitebark pine seeds, a favorite food. Some experts applaud the work. “I'm absolutely convinced [they] have the right answer,” says University of California, Santa Cruz, population biologist Dan Doak.
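    The two growth rates sound close, but compounded over their stated periods they describe very different trajectories. A quick sketch (the starting count of 250 bears is an assumed round number for illustration, not a figure from the study):

```python
# Compound the two disputed annual growth rates over their stated periods.
# The starting count of 250 bears is an assumed round number for illustration.

def project(start: float, annual_rate: float, years: int) -> float:
    """Population after compounding an annual growth rate over a period."""
    return start * (1 + annual_rate) ** years

start = 250

# Pease and Mattson: ~1% per year across the full 1975-1995 window
print(f"1%/yr over 20 yr: {project(start, 0.01, 20):.0f} bears (+22% overall)")

# Interior's claim: ~5% per year over the most recent decade
print(f"5%/yr over 10 yr: {project(start, 0.05, 10):.0f} bears (+63% overall)")
# A 1%/yr population barely grows over two decades, while 5%/yr implies a
# population well on its way to doubling -- hence the stakes in the dispute.
```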

    Interior officials beg to differ. “The population's been going up for some time,” says Chris Servheen of Interior's U.S. Fish and Wildlife Service, who's coordinating an evolving agency plan for managing the grizzly after delisting. Still, he says, to help determine whether the population is growing sustainably, Interior has asked a panel of The Wildlife Society—primarily field biologists and resource managers—to review grizzly data and report back in the next few months.

    In the 1800s, up to 100,000 grizzlies roamed the lower 48 United States, scientists estimate; fewer than 1000, it appears, were left by 1975. For decades, rangers tolerated bears feeding at garbage pits. Enlightened managers stopped the practice in the early 1970s, hoping to reduce maulings and allow the bears to lead a more natural life. But scores of bears, unable to break the habit of looking for handouts or snatching sheep, were killed. In 1975, the government put the remaining grizzlies on the threatened list.

    Pease and Mattson began examining grizzly numbers around 1992, after obtaining Interior monitoring data on 202 radio-collared bears it had tracked since 1975. The duo folded these data into a model of births and deaths that takes into account factors—such as age, sex, and whitebark pine yield—that influence bear survival. They also corrected for a problem they claim was overlooked in previous studies: Bears collared in the backwoods for research are less likely to pose a problem to humans and be shot; thus, any population growth estimate based only on data for these bears is likely to be inflated, Pease says.

    Some scientists dispute this analysis. The model is “way too complex for the available data,” says ecologist Mark Boyce of the University of Wisconsin, Stevens Point, who co-authored studies finding a 5% rise. “There are so many different sources that point to the population increasing, it's almost incomprehensible that these guys could claim that the bears haven't increased.” For example, counts of females with new cubs in 1996 were the highest since 1959. And grizzlies, which stake out large territories, appear to be pushing southward and eastward. “Bears are occupying habitat where they haven't been for the last 40 or 50 years,” says Servheen.

    Pease dismisses the cub counts as “biased and ad hoc.” He speculates that bears may be straying farther from the park because a scarcity of whitebark pine is forcing them to forage at lower elevations. In addition, nobody has explored whether the 1988 fires forced bears to shift their ranges or made them otherwise easier to spot, grizzly modeling pioneer Mark Shaffer of Defenders of Wildlife noted in Science last week (p. 433).

    Even if optimistic population estimates are accurate, the grizzlies may face a hard road. A disease called blister rust is devastating the whitebark pine, Pease and Mattson note. Other grizzly food sources are declining, too. Cutthroat trout, which the bears fish out of streams during spawning, are getting eaten up by lake trout, and park managers are shooting bison and collecting the carcasses (instead of leaving them for bears) to avoid the spread of brucellosis to cattle. “The number of bears is unlikely to grow unless we can close roads and restrict hunting and grazing,” says Pease. Removing the grizzly from the endangered list at this time, Boyce adds, “doesn't make a lot of sense.”

    Servheen says Interior is forging ahead with its management plan but is keeping an open mind on the delisting, pending The Wildlife Society's report. In the meantime, he says, a USGS tracking study could yield a better ballpark number of grizzlies by summer. “If the status is good, we should celebrate that and move on to other problems,” says Shaffer. “If it hasn't recovered, we need to get back to work.”


    Italy's KLOE Sets Sights on CP Violation

    Alexander Hellemans*
    *Alexander Hellemans is a writer in Naples, Italy.

    NAPLES, ITALY—The titans of the particle physics world, the CERN laboratory near Geneva and Fermilab near Chicago, are racing to confirm that matter and antimatter are not always completely equivalent—in technical parlance, they are searching for violation of CP symmetry. But at Frascati, south of Rome, a more modest outfit hopes to rob them of that prize. Last week, this upstart's detector, called KLOE, recorded its first real data. KLOE is purpose-built to look for CP violation in particles produced by DAFNE, Italy's new electron-positron collider at the National Institute for Nuclear Physics (INFN).

    In contrast to its bigger particle-smashing cousins, the Frascati machine aims to make a virtue of its low-energy status by producing events that are cleaner and recording them more completely. To achieve that, KLOE has the world's largest drift chamber—where the tracks of particles are recorded—surrounded by a calorimeter to measure their energy and a huge 6-meter superconducting solenoid, which bends the paths of charged particles. “Essentially it is a very simple detector, but very large and very precise,” says Juliet Lee-Franzini, physics leader at INFN.

    The hunt for a matter-antimatter imbalance was sparked in 1964 by Val Fitch and James Cronin. They were studying the neutral kaon, a short-lived particle that cannot decide whether it is matter or antimatter—it switches continually between the two states. Fitch and Cronin, in collisions at Brookhaven National Laboratory on Long Island, found that for a small fraction of neutral kaons the “mixing” between particle and antiparticle followed a different path, resulting in different decay products. This suggested a breakdown of so-called “charge-parity symmetry” and became known as “indirect” CP violation because the CP violation takes place in the “mixing” and not in the decay itself. In the late 1980s, researchers at CERN detected the first hints of “direct” CP violation, in which some kaons and their antiparticles decayed in different ways. Those hints were strengthened earlier this year when the KTeV group at Fermilab made the first clear observation of direct CP violation in kaons produced by colliding protons (Science, 5 March, p. 1428). And another CERN group, the NA48 collaboration, is now analyzing data in search of CP violation.

    Despite these high-profile efforts, the researchers at Frascati hope to steal a march using what KTeV co-spokesperson Bruce Winstein of the University of Chicago calls “a completely different way of studying the [neutral kaon] system and CP … violation.” Whereas the Fermilab and CERN groups produce kaons by colliding protons with a fixed target, DAFNE speeds electrons and their antiparticles, positrons, to an energy of 510 million electron-volts in two 100-meter-long rings and collides them inside the KLOE detector. They annihilate and produce short-lived entities called phi particles, which is why DAFNE is sometimes called a “phi factory.” The creation of phi particles is normally very rare, but DAFNE is designed to produce them at high rates by using electron and positron beams of very high intensity. The advantage of this relatively low-energy approach is that it produces much less background noise in the detector than higher energy collisions.

    The phi particles decay into kaon-antikaon pairs, and KLOE locks onto any pairs of neutral kaons. Each such pair has two components: a K-short, which decays almost instantaneously into two pions, and a K-long, which can travel for several meters before decaying into three pions. This, explains Lee-Franzini, is why KLOE is so large: It can capture the decay of both varieties. About one in 1000 K-long particles should change spontaneously into a K-short, which in turn produces two pions—an indirect CP violation. But the Frascati team will also look for K-long particles that decay directly into two pions instead of three—a direct CP violation. This is predicted to happen once in every one million events.

    Paolo Franzini of Rome University, KLOE's spokesperson, says it will take some time to record enough events to get a good fix on CP violation. “For a first measurement, which is of the same accuracy as KTeV, we will need 6 to 9 months of collecting data,” he says. To improve on that, “we have to collect at least 500 million events, and so far we have seen five events.” Over the next few months, engineers will fine-tune the detector and adjust the energy of the colliding electrons and positrons to produce the maximum number of phi particles. “Our ultimate aim is to collect 50 billion events,” says Franzini. This would increase the accuracy to 10 times that of the present KTeV result, a level all three groups will try to achieve. “We have just started, the machine is new, the detector is new, and everything is working very promisingly.”
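    Franzini's numbers hang together under simple counting statistics, in which the relative precision of a measurement improves as the square root of the number of events collected. A quick consistency check, taking the article's event counts and decay rates at face value:

```python
# Counting statistics: relative precision scales as 1/sqrt(N). Going from
# 500 million events to 50 billion is 100x the data, hence a 10x sharper
# measurement -- matching Franzini's stated goal. Figures are taken at face
# value from the article.
from math import sqrt

first_run = 500e6   # events for a first, KTeV-comparable measurement
ultimate = 50e9     # KLOE's ultimate target

print(f"Data increase: {ultimate / first_run:.0f}x")           # 100x
print(f"Precision gain: {sqrt(ultimate / first_run):.0f}x")    # 10x

# At the predicted direct-CP rate of one per million events, the ultimate
# sample would contain on the order of:
print(f"Direct CP-violating decays: ~{ultimate * 1e-6:,.0f}")  # ~50,000
```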


    What Future for France's IN2P3?

    Michael Balter

    PARIS—French physicists are nervously awaiting plans for a major shake-up of French research in nuclear and particle physics. An unpublished report, prepared by particle physicist Jean-Jacques Aubert at the University of the Mediterranean in Marseilles at the request of science minister Claude Allègre, is said to recommend some form of merger between the two main bodies responsible for subatomic physics in France: the National Institute of Nuclear and Particle Physics (IN2P3), which is part of the giant CNRS basic research agency; and the Atomic Energy Commission's (CEA's) Department of Astrophysics, Nuclear Physics, Particle Physics, and Associated Instrumentation (DAPNIA). Although this marriage would be consistent with Allègre's long-stated desire to end duplication of research efforts and enhance scientific collaboration, some physicists argue that it would weaken the role of the CNRS and give the CEA too much influence over research priorities.

    Vincent Courtillot, the science ministry's director-general for research, told Science that although no final decisions have been made, a “soft merger” between IN2P3 and DAPNIA is the leading candidate among several proposals that have been discussed. Such a union would create a physics powerhouse: IN2P3 employs about 500 researchers in 18 laboratories throughout France, while DAPNIA's 200 physicists work at accelerators and other facilities across Europe and the United States. Both have their headquarters in Paris. Under the “soft merger” plan, the two organizations would come together under single administrative and scientific councils, but physicists would maintain their current status as either CNRS or CEA researchers.

    Proponents of the merger say that many CNRS and CEA physicists already work closely together and that formalizing this arrangement would strengthen these collaborations and increase efficiency. “The labs often work in common,” says Edouard Brézin, president of CNRS's executive board. “If this common work is concretized with a joint scientific council, it would be a good idea.” Brézin points to the GANIL heavy-ion accelerator in the northern city of Caen—which is jointly run by the CNRS and CEA—as a model for future collaboration “that works extremely well.”

    But many physicists are not so sure. IN2P3 researchers are already upset by Allègre's decision not to name a new IN2P3 director when Claude Detraz left the institute's helm last October to take a position at CERN, the European particle physics center near Geneva. Ministry officials have said they do not want to appoint a replacement for Detraz while IN2P3's future is still being discussed, but last month leading CNRS physicists wrote to French Prime Minister Lionel Jospin to protest that the lack of a director was “paralyz[ing] the activities of our laboratories.”

    And many researchers say they do not believe the proposed merger is necessary. “I do not see a reason to upset everything,” says André Rougé, a particle physicist at the Ecole Polytechnique in Palaiseau, outside Paris. “There are already collaborations and joint labs. If the structures become too complex, it could stifle new initiatives.” Researchers are particularly concerned that a rapprochement between IN2P3 and DAPNIA might be a first step to IN2P3 being swallowed up by the CEA. With responsibility for research into nuclear energy and atomic weapons, the CEA is seen by many scientists as having different priorities from the basic science mission of the CNRS. “We will get lost in the CEA's objectives, even in basic research,” says physicist Harry Bernas of the University of Paris at Orsay, a former director of the IN2P3 laboratory on that campus. Bernas adds that such a development would present a “great danger,” especially in politically sensitive areas such as nuclear waste research, where the “CNRS provides the only independent evaluation” of government policy.

    Courtillot counters, however, that CNRS researchers' fears about the CEA's nuclear priorities are “not warranted,” arguing that DAPNIA has long had a reputation for doing independent fundamental research on its own. The question should be settled sometime in the next month or two, when Allègre is expected to take action on the Aubert report's recommendations. Says Brézin: “France cannot have two research strategies in this domain. There must be one French policy.”


    Discovery of 'Gay Gene' Questioned

    Ingrid Wickelgren

    Six years ago, molecular geneticist Dean Hamer and his colleagues at the National Cancer Institute (NCI) announced to great fanfare that they had found a genetic link to male homosexuality. Their work indicated, they said, that an as yet unidentified gene on the X chromosome influences who develops the trait (Science, 16 July 1993, p. 321). Researchers were excited by the possibility of one day learning the biological basis for sexual orientation but also wary, given that initial reports of genetic linkages for other complex traits, such as manic depression and schizophrenia, had fallen apart under further scrutiny. Now the “gay gene” linkage may be suffering a similar fate.

    On page 665, clinical neurologists George Rice and George Ebers at the University of Western Ontario in London and their colleagues report failing to find a link between male homosexuality and Xq28, the chromosomal segment implicated by the NCI team's study. In addition, unpublished work from a group led by psychiatrist Alan Sanders at the University of Chicago does not provide strong support for a linkage. Taken together, Rice says, all the results “would suggest that if there is a linkage it's so weak that it's not important.” He adds that genetics may still contribute to homosexuality, but researchers should be looking elsewhere for the genes.

    Hamer disagrees that the Xq28 linkage is weak, citing possible problems with how Rice's team selected their study subjects. And other observers say that the jury is still out. Elliot Gershon, a psychiatric geneticist at the University of Chicago, calls the Ontario team's finding “interesting and important” but cautions that more data are needed. “Failure to find linkage in this study does not mean it doesn't exist,” he says.

    That genes may contribute to homosexuality in males became clear in 1991 when psychologist Michael Bailey of Northwestern University in Evanston, Illinois, found that fully 52% of the identical twins of gay men were also gay, compared to just 22% for fraternal twins. Then in 1993, Hamer's team pointed to a place where a putative “gay gene” might reside.

    They homed in on the X chromosome, which males inherit only from their mothers, because they noticed a preponderance of gay relatives on the maternal side of the families of the gay men they studied. When the researchers took a closer look at the X chromosomes of 40 pairs of gay brothers from the families with maternal gay relatives, they saw that the brothers were far more likely to share certain DNA signposts, or markers, on the Xq28 region of the chromosome than would be expected by chance. The team confirmed the linkage in a second study of 33 new families with gay brothers, published in Nature Genetics in 1995. In this X chromosome snippet, the researchers concluded, lay a gene that could nudge males toward homosexuality.
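    The logic of the sharing test is simple: a brother inherits one of his mother's two X chromosomes at random, so without linkage a pair of brothers shares a given maternal Xq28 marker half the time, and the question is whether the observed excess sharing could plausibly be chance. A minimal sketch of such a binomial test; the 33-of-40 sharing count is drawn from published summaries of the 1993 study and is used here only for illustration:

```python
# One-sided binomial test: under the null of no linkage, each pair of brothers
# shares a maternal Xq28 marker with probability 0.5. The 33-of-40 count
# follows published summaries of the 1993 study; treat it as illustrative.
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_pairs, shared = 40, 33
print(f"Expected by chance: {n_pairs * 0.5:.0f} of {n_pairs} pairs")
print(f"P(>= {shared} sharing by chance) = {binom_tail(shared, n_pairs):.1e}")
# -> about 2e-5: far beyond chance, which is why the original report drew such
#    attention -- and why a clean failure to replicate is equally striking.
```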

    Meanwhile, intrigued by the initial report, Rice and Ebers undertook their own study to see if the result would hold up. They recruited families with two or more gay brothers through ads in Canadian gay news magazines. The families responding to the ads included 52 pairs of brothers willing to donate blood, which the researchers examined for the presence of four markers in region Xq28, using methods similar to those employed by Hamer's group.

    But the Ontario team found that gay brothers were no more likely to share the Xq28 markers than would be expected by chance. And although a statistical analysis of the data could not rule out the existence of a gene in this region with a small influence on the trait, it could exclude the possibility of any gene in Xq28 with a major genetic influence, say, doubling a male's chances of being gay. Ebers interprets all these results to mean that the X linkage is all but dead. “What is troubling is that there is no hint or trend in the direction of the initial observation,” he says.

    Hamer, however, thinks that the way the Ontario researchers selected the families would tend to hide the Xq28 contribution. He always said, he points out, that the gene does not influence all cases of male homosexuality but only those that are transmitted maternally. And in contrast to his group, Hamer says, the Ontario team did not select families based on the presence of maternal transmission. “Maybe there was an X chromosomal linkage in some families, but those families weren't analyzed,” Hamer says.

    Ebers says they didn't select their families based on maternal transmission because they found no convincing evidence for such transmission in the family pedigrees. What's more, even after his group removed two families that might wash out an X chromosome effect because there were signs of the trait in females or in the father, the results remained the same. Nor was the effect evident in a study led by Sanders, which he reported last June at a meeting of the American Psychiatric Association. His team had found only a weak hint—that wasn't statistically significant—of an Xq28 linkage among 54 gay brother pairs.

    A much larger study, using, say, 200 gay brother pairs, could probably resolve the issue, researchers say, but funding for such a project has been hard to obtain. So could any successful efforts to pluck out a gene in Xq28, something Hamer's group is pursuing. But the Ontario team doubts that route will pay off. “We're looking for a link on other chromosomes,” Rice says.


    A New Human Ancestor?

    Elizabeth Culotta

    Ethiopian fossils reveal a new branch on the hominid family tree: a small-brained hominid that is a candidate for the ancestor of our lineage

    About two and a half million years ago, on a grassy plain bordering a shallow lake in what is now eastern Ethiopia, a humanlike creature began dismembering an antelope carcass. Nothing remains of the hominid, but the antelope bones show that it wrenched a leg off the carcass, then used a stone tool to slice off the meat and smash the bone. After several tries, it managed to break off both ends of the bone and scrape out the juicy marrow inside.

    At just about the same time, two other hominids died near the lake. One, perhaps 1.4 meters tall, had long legs and a human gait but long, apelike forearms. The other, a male, lay some distance away. His limb bones are gone, but the remains of his skull show he had a small brain, big teeth, and an apelike face.

    These new fossils, described in two papers beginning on page 625 of this issue, give different glimpses of each hominid, and no one can be sure all three belonged to the same species. But even if not, their details are starting to fill in a mysterious chapter of human prehistory. According to the international team that made all three discoveries, the big-toothed skull represents an unusual new species that is the best candidate for the ancestor of our own genus, Homo. Not everyone in the contentious field of paleoanthropology agrees, but the new species, which Ethiopian anthropologist Berhane Asfaw and his colleagues have named Australopithecus garhi (garhi means “surprise” in the language spoken by the local Afar people), is certain to shake up views of the transition from the apelike australopithecines to humankind. And the scored bones from the first hominid's feast are the earliest recorded evidence of hominids butchering animals, bolstering the notion that meat eating was important in human evolution.

    “They've put together a whole package here, so that you can say a fair amount about a time we don't know much about,” says anthropologist F. Clark Howell of the University of California (UC), Berkeley. With its surprising mix of traits—primitive face and unusually big teeth—the new australopithecine doesn't match the profile many researchers expected for a human ancestor at this stage. “It's very exciting,” says paleoanthropologist Alan Walker of Pennsylvania State University in University Park. “Until now it's all been just scraps of teeth and bits of mandible from this time. And this [morphology] is a surprise.”

    But this rare glimpse of a murky period in human evolution raises as many questions as it answers. A. garhi has few traits that definitively link it to Homo, and like other hominids from the same period, it may simply be an evolutionary dead end that brings us only slightly closer to understanding our own ancestors, says paleoanthropologist Bernard Wood of George Washington University in Washington, D.C. The debate is complicated by the fact that paleoanthropologists are deeply divided over who the first humans, or members of Homo, were, and indeed what makes a human. “These are magnificent fossils,” says Wood, but he's not ready to admit A. garhi into the gallery of our ancestors. “At this point it's impossible to tell what's ancestral to what,” he says. “This won't be the last ‘surprise.’”

    Anthropologists have long been itching to know just what East African hominids were doing between 2 million and 3 million years ago, says one of the team's leaders, paleoanthropologist Tim White of UC Berkeley. Decades of fieldwork and analysis have allowed researchers to identify many characters in the human evolutionary story (see diagram), starting with apelike species such as the 4.2-million-year-old A. anamensis. Next in line, known from 3.7 million to 3.0 million years ago, is A. afarensis, best known for the famed “Lucy” skeleton: a meter-tall, small-brained, upright hominid that retained apelike limb proportions and a protruding lower face.

    More than a million years separate Lucy from the first specimens usually considered to be part of our own genus, which appear in East Africa around 2 million years ago and tend to have larger brains and a more human face, although they are highly variable. In the interim, the South African fossil record is diverse and confusing, and the East African record has been sparse. The period includes three species that fall into the “robust” australopithecine group—heavy-jawed hominids with skull crests and large back teeth, perhaps for eating hard roots and tubers—that are not part of our own lineage. More promising for those seeking a human ancestor is A. africanus, known from South Africa starting at around 2.8 million years ago, which has a more humanlike face than the Lucy species.

    But the A. africanus fossils were found half a continent away from the East African cradle of Homo, and some anthropologists have been hoping for a stronger candidate for the root of our lineage. “After the split with the robust lineage, we have very little evidence,” says Walker. That's why White and his team zeroed in on sediments in the desert of Ethiopia's Afar depression. They struck gold with three separate discoveries, all dated securely to 2.5 million years ago by radiometric techniques on an underlying volcanic rock layer.

    One dramatic find came in 1997, when El Niño-driven rains washed away stones and dirt on steep slopes near the village of Bouri. Berkeley graduate student Yohannes Haile-Selassie spotted fragments of the skull—the color and thickness of a coconut shell—on the surface. A closer look revealed teeth poking out of the ground. Much of the rest of the skull had washed down the hill, so the team, which includes 40 members from 13 countries, took the slope apart. They dug tons of material from the hill, then sieved it and picked through it for bone—twice. “It was probably the most difficult fossil recovery we've ever done,” says White. “We spent 7 weeks on that slope.” Although the delicate bones of the middle face were gone for good, the team found many more skull fragments.

    After reconstructing the skull, the researchers were confronted with a face that is apelike in the lower part, with a protruding jaw resembling that of A. afarensis. The large size of the palate and teeth suggests that it is a male, with a small braincase of about 450 cubic centimeters. (A modern human brain is about 1400 cubic centimeters.) It is like no other hominid species and is clearly not a robust form. And in a few dental traits, such as the shape of the premolar and the size ratio of the canine teeth to the molars, A. garhi resembles specimens of early Homo. But its molars are huge—the second molar is 17.7 millimeters across, even larger than the A. robustus average. “Selection was driving bigger teeth in both lineages—that's a big surprise,” says Walker.

    The other dramatic skeletal find had come a year earlier: leg and arm bones of a single ancient hominid individual, found together. The new hominid femur or upper leg bone is relatively long, like that of modern humans. But the forearm is long too, a condition found in apes and other australopithecines but not in humans. The fossils show that human proportions evolved in steps, with the legs lengthening before the forearms shortened, says co-author Owen Lovejoy of Kent State University in Ohio.

    The third major find, at the same stratigraphic level and only a meter away from the skeletal bones, preserves dramatic evidence of hominid behavior: bones of antelopes, horses, and other animals bearing cut marks, suggesting that butchery may be the oldest human profession. One antelope bone, described by a team including archaeologist J. Desmond Clark of UC Berkeley and White, records a failed hammerstone blow, which scratched the bone slightly and caused a bone flake to fly off; a second blow was struck from exactly the same angle. Both ends of the bone were broken off, presumably to get at the marrow.

    Similarly, an antelope jawbone bears three successive curved marks, apparently made as a hominid sliced out the tongue. In cross section under the microscope, these marks show a parallel series of ragged V-shaped striations with rough inner walls—the telltale signature of a stone tool rather than a predator's teeth, says White. Marks on the leg bone of a three-toed horse show that hominids dismembered the animal and filleted the meat from the bone.

    The Bouri sites yielded few of the stone tools the hominids must have used, perhaps because there is no local source of stone. The hominids “must have brought flakes and cobbles in from some distance, so that obviously shows quite a bit of forethought,” says Clark. Tool use by this point is no surprise: At other sites, anthropologists have found tools dated to 2.6 million years ago. But there had been little hard evidence of what the oldest tools were used for. The new find shows that tools enabled hominids to get at “a whole new world of food”—bone marrow, says White.

    Marrow is rich in fat, and few animals other than humans and hyenas can get at it. Anthropologists have theorized that just such a dietary breakthrough allowed the dramatic increase in brain size (Science, 29 May 1998, p. 1345), to perhaps 650 cc or larger, that took place in the Homo lineage by 2 million years ago. Two researchers recently proposed that cooked tubers were the crucial new food source (Science, 26 March, p. 2004), but most others have assumed it was meat. The cut marks present convincing evidence that they were right, says Yale University anthropologist Andrew Hill.

    Whether or not the three finds can be connected, A. garhi, as defined by the new skull, is now a prime candidate as an ancestor of our genus. The species is in the right place—East Africa—and the right time—between the time of A. afarensis and that of early Homo—says White. But making the link to the human lineage isn't easy, in part because the nature of “early Homo” is itself something of a mystery. White notes that some of the early Homo specimens have large teeth, and that in the teeth “there's not much change at all from A. garhi to those specimens of early Homo.”

    The link between A. garhi and Homo would be strengthened, of course, if researchers could show that the humanlike long bones come from A. garhi rather than from some other humanlike hominid. For now White is willing only to “make up a hypothesis to be tested”: A. garhi, a small-brained, big-toothed hominid with humanlike leg proportions, began butchering animals by 2.5 million years ago. Thanks in part to the better diet, brain size rapidly increased to that seen in early Homo, and the trend toward large back teeth reversed—changes that quickly transformed other parts of the skull as well, such as flattening the protruding jaw.

    But some other researchers don't buy that as a likely scenario. There's no reason to expect that every new branch on the hominid tree is our ancestor, says George Washington's Wood. He adds that he is not surprised by A. garhi's mix of humanlike and robust features, because, given that climate was changing, “we should expect a variety of creatures with mixtures of adaptations at this time.” Other researchers note that the dental data linking the species to Homo are weak. “Nothing here aligns garhi closely with Homo,” says paleoanthropologist Fred Grine of the State University of New York, Stony Brook. “It's a possible candidate [for Homo ancestry], but no better than africanus.”

    Some anthropologists also say that there may not have been enough time for evolution to have transformed A. garhi into Homo. The oldest known specimen assigned to Homo, a 2.33-million-year-old palate from Hadar, Ethiopia, is more humanlike than A. garhi, with smaller teeth. That requires either a burst of evolution or some other explanation, such as sexual dimorphism, if A. garhi is to be considered part of our lineage, notes paleoanthropologist Juan Luis Arsuaga of the Universidad Complutense de Madrid in Spain. White says that only further discoveries and analysis will show just where the hominids of that long-vanished plain stand in relation to our own species: “A. garhi isn't the end; it's the first step.”


    As Salmon Stage Disappearing Act, Dams May Too

    1. Richard A. Lovett*
    * Richard A. Lovett is a writer in Portland, Oregon.

    Opponents are squaring off over a controversial proposal to save salmon by breaching four dams on Washington's Snake River

    Each spring, millions of young Chinook salmon in the Snake River have to get past four killers as they make their way to the open sea. They go by the names of Lower Granite, Little Goose, Lower Monumental, and Ice Harbor. These dams, erected in Washington state in the 1960s and 1970s to generate power for the Pacific Northwest, can be just as deadly as any predator: Smolts can get pureed by turbine blades or plunge over spillways to their deaths. Survivors are delayed in the sluggish water behind the dams, a holdup that might cripple their ability to adapt to salt water.

    The Army Corps of Engineers, which is supposed to run the dams while protecting the salmon, has spent years and hundreds of millions of dollars to try to reduce the annual slaughter by capturing smolts and trucking or barging them to the Columbia River, upstream of Portland, where the fish have an unfettered run at the Pacific. But this strategy is failing, experts say: According to tagged-fish studies, less than 0.5% of the barged salmon survive to return a few years later to their spawning grounds. Wild salmon from the Snake River Basin have declined nearly 90% in the last 30 years, and every population has either been driven to extinction or is so threatened it is shielded by the Endangered Species Act. Now, the federal government is considering a drastic, and controversial, solution: tearing down the Snake River dams.

    Endangered species. The dams that may be breached.


    The lead agency for ensuring the salmon's protection under the act, the National Marine Fisheries Service (NMFS), has asked the Army Corps to recommend by year's end a course of action to save the imperiled fish. Their options boil down to four: leave the river be, step up efforts to haul smolts around the dams, modify turbines and spillways, or drain the reservoirs and tear out the dams' earthen portions to allow the Snake River to flow freely. An NMFS report last week gave lukewarm support to the last option—a remedy that could cost up to $1.2 billion—saying it is “more likely than any other” option to help salmon recover.

    The NMFS study comes on the heels of a much stronger statement from 200 scientists, mostly fisheries biologists, who argued in a letter last month to President Clinton that dam breaching is “the surest way” to restore fish populations. “The needs of salmon are clear: If dams stay, salmon go. If dams go, salmon stay,” says Ted Koch, a Fish and Wildlife Service biologist in Idaho who, like many others who signed the letter, says he's not speaking for his employer. Joining their cause are some 300 organizations, ranging from the Center for Marine Conservation to the Federation of Fly Fishers, that have endorsed a dam-breach petition. “Every year we spend millions more on bizarre schemes to try to save these fish, and every year fewer and fewer fish return to spawn,” says Rebecca Wodder, president of American Rivers, which led the petition drive along with the group Taxpayers for Common Sense. “The science is in,” Wodder says. “There is no longer any excuse for delay.”

    Not everyone agrees that the dams should go. But supporters can point to a precedent: Last summer, Secretary of the Interior Bruce Babbitt announced plans to boost Atlantic salmon, sturgeon, and other migratory fish by removing a small hydropower dam on Maine's Kennebec River, a project slated to begin later this year (Science, 8 August 1997, p. 762). However, observers say, punching holes in that dam is child's play compared with breaching the much more massive Snake River dams.

    Debate over the Snake's fate caught fire last December, when a task force formed in 1995 by NMFS and the Bonneville Power Administration, which markets power from the Snake dams, weighed in on how best to save the salmon. Using computer modeling of population dynamics and bringing to bear their own expertise on salmon biology, the 35-person Plan for Analyzing and Testing Hypotheses (PATH) group, primarily fisheries scientists, concluded that restoring free flow to the Snake River stands the best chance of saving the salmon. Barging spreads disease among the fish too readily, they said, and making the dams more fish-friendly would fail to address the critical time element. “These fish are undergoing changes in their kidneys, gearing up for going into salt water,” says PATH member Earl Weber, a fisheries scientist with the Columbia River Intertribal Fish Commission. “Delay is thought to be harmful.” In an analysis of the PATH report released last week, NMFS highlighted the uncertainties of predicting salmon survival but backed PATH's bottom line, noting that delays in undertaking any rescue operation could drive populations to extinction.
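
    The PATH group's life-cycle models are far more elaborate than anything that fits here, but the logic of comparing management options can be sketched in a few lines of Python. The growth multipliers below are invented for illustration; only the qualitative ordering (breaching best, status quo worst) follows the group's conclusion.

    ```python
    # Toy projection of a salmon run under three management options.
    # NOT the PATH model; all rates are hypothetical.

    def project_run(n0, growth_rate, years):
        """Project spawner counts with a constant per-year growth multiplier."""
        counts = [n0]
        for _ in range(years):
            counts.append(counts[-1] * growth_rate)
        return counts

    # Hypothetical per-year growth multipliers.
    scenarios = {
        "status quo (barging)": 0.93,  # run shrinks ~7% per year
        "improved passage":     0.98,  # still declining
        "dam breaching":        1.02,  # slow recovery
    }

    for name, rate in scenarios.items():
        final = project_run(1000, rate, 24)[-1]
        print(f"{name}: 1000 spawners -> {final:.0f} after 24 years")
    ```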

    The Army Corps, in the meantime, has been putting the finishing touches on a $22 million study of the technical requirements and costs of bypassing the dams versus better smolt hauling. The Corps has released a section on breaching that indicates just how complicated a task it would be.

    First, engineers would throw open the turbine intakes to drain the lakes behind the dams. Battalions of earthmovers would then tear away the earthen embankments that are part of each dam, chasing the receding waterline. Next, levees would be built to guide the river through its new meanders, bypassing the dam structures and leaving them high and dry. In a race against time, engineers would have a 4-month window, starting in August, when the Snake's waters are low enough to undertake the tricky operation without triggering catastrophic flooding, says Army Corps civil engineer Steve Tatro. Up to 6 million cubic meters of earth must be removed at each dam in less than 70 days, he says.
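
    The pace Tatro's figures imply is easier to appreciate as a rate; dividing the volume by the window gives:

    ```latex
    \frac{6 \times 10^{6}\ \text{m}^{3}}{70\ \text{days}} \approx 8.6 \times 10^{4}\ \text{m}^{3}\ \text{per day, at each dam}
    ```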

    In the short term, the breaching could harm salmon—particularly those that spawn in the fall. According to Army Corps study manager Greg Graham, about 100 million cubic meters of sediment—half mud, half sand—has settled behind the four dams. Although the mud should erode quickly and get flushed out to sea, the sand will tumble downstream slowly. “Mother Nature is going to take charge and redistribute sediments as she sees fit,” says Graham. Because the project's early stages would kick up so much sediment, Tatro's group has drafted plans to capture salmon heading upstream and truck them around the dams. A more prolonged problem is that after the Snake is channeled around Lower Monumental and Ice Harbor, it could run too swiftly for upstream-bound fish. The Army Corps may stud channels with boulders to create artificial rapids with eddies where fish can rest.

    The Army Corps isn't expected to release its full report—including its favored option—until the end of this year. But opponents are already taking aim at any dam breach. One prominent critic is Senator Slade Gorton (R-WA), who argues that the cost of such an operation outweighs its uncertain benefits. Business and agriculture leaders are also up in arms. For instance, Bruce Lovelin, executive director of the Columbia River Alliance, a Portland-based trade association, predicts that if the dams are retired, utility customers would foot the bill for replacing the lost 1200 megawatts of generating capacity, about 5% of the Pacific Northwest's supply—enough to power Seattle.
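
    As a consistency check on Lovelin's numbers (our arithmetic, not the Corps'), a 1200-megawatt loss that represents 5% of supply implies:

    ```latex
    \text{regional supply} \approx \frac{1200\ \text{MW}}{0.05} = 24{,}000\ \text{MW}
    ```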

    After choosing a strategy, the Army Corps will have to sell it to other agencies, the public, and finally to Congress, which must approve funds to pay for it. Graham says it's too early to bet against the dams. “We see a lot of news articles [saying] the Corps wants to tear out dams,” he says. “We haven't concluded anything.” Many scientists, however, are pushing for strong measures, and fast. “Unless something is done soon,” says Koch, “most of the remaining runs will go extinct.” That would deprive the region of a resource even more valuable, perhaps, than megawatts.


    From Embryos and Fossils, New Clues to Vertebrate Evolution

    1. Elizabeth Pennisi

    LONDON—Nearly 200 paleontologists, developmental biologists, molecular phylogeneticists, and other researchers gathered here on 8 and 9 April for a meeting on “Major Events in Early Vertebrate Evolution.” They heard that studies revealing the molecular programs underlying embryonic development are helping paleontologists better interpret their fossils, while new fossil finds show that organisms once had a wider range of shapes and sizes than thought. Together, these efforts are changing our view of how vertebrates came to be.

    Something Fishy About Fin Evolution

    One of the key unanswered questions about the evolution of fish is how they got their fins, appendages that have helped this group of organisms be so successful. It's a question that concerns landlubbers as well, because fins eventually became limbs as some fish ventured onto drier habitats. Now it appears that fins evolved multiple times in primitive fish.

    The current view holds that both sets of the paired fins of modern fish arose from the same precursor tissue on the belly of an ancestral fish. But new 400-million-year-old fossils imply separate origins for the two sets of paired fins, which are especially important for the successful adaptations of modern fish and are also the precursors of land animals' limbs. “The materials are truly fantastic and eye-opening,” says Xiaobo Yu, a paleontologist at Kean University in Union, New Jersey. “My entire repertoire of existing ideas on fins needs to be reorganized.”

    Two of the fish fossils that are roiling the waters came from a rich deposit located high up in the Mackenzie Mountains of Canada's Northwest Territories, near the Yukon border. Collected by paleontologist Mark Wilson and his colleagues at the University of Alberta in Edmonton, both fossils have elaborate sets of spines and fins not seen before.

    Modern fish have a set of pectoral fins, one on each flank behind the gills, and a set of pelvic fins, located on the belly just before the anus. A theory dating to the late 1800s says that both pairs of fins evolved from flaps of skin extending all along the bottom of the fish's body. Then the pair nearest the front of the fish somehow migrated upward on the body to form the pectoral fins while the backmost pair remained in their original location as the pelvic fins. According to a more recent theory, proposed by Michael Coates of University College London and Martin Cohn of the University of Reading, both in the United Kingdom, pelvic fins were a later invention brought about when the genetic program for the first set of fins somehow got turned on further back along the body.

    But Wilson says the new fossil fish don't fit comfortably with that picture. One fossil, called Kathemacanthus (meaning necklace of spines), has a large pectoral fin and spine lying high on each side of the animal just behind the head and gill slits. Below the fin, a series of spines, suggestive of a necklace, runs down each side of the fish. Kathemacanthus also has a second series of paired spines along its belly that get progressively larger and culminate in what appear to be real pelvic fins. The other species, Brochoadmones, has the series of intermediate pelvic spines but only a single pectoral fin spine.

    Because the pectoral fins are located so high up even in these early fish, Wilson thinks the fins may have first appeared there and not along the belly, as the other views suggest. In addition, he says, “the pectoral [spines] get more finlike as they go up and the pelvic [spines] get more finlike as you go back.” Finding such advanced fin structures in two places suggests to Wilson that the early fish didn't start with just the pectoral fins but with pelvic fins as well, likely relying on different sets of genetic instructions for producing the two types.

    Some of Wilson's colleagues think this indeed may prove to be how fins arose. “[Wilson] may be right that pelvic and pectoral fins arose independently on two different levels of the flank,” says Philippe Janvier, a paleontologist at the Natural History Museum in Paris. But others doubt that it happened that way. Coates, for one, thinks that the spines and “fins” are not true fins, with muscles and supportive internal structures, and so are not telling us much about how true fins evolved.

    Wilson, however, points to another recent fossil find, a jawless fish named Sheilia, discovered by Tiiu Märss and her colleagues at Tallinn Technical University in Estonia. Previous jawless fish fossils had only pectoral fins, but Sheilia also appears to have a set of paired pelvic fins, supporting Wilson's notion that pelvic fins were an early invention. And at the meeting, paleontologist Hans-Peter Schultze from the Museum of Natural History at Humboldt University in Berlin presented a new fish fossil, called Dialipina, that he and his colleagues had collected from Arctic Canada. Dialipina was from the same time period as Wilson's fish and, like them, had pectoral fins located high on the body and not ventrally, as the older theories predicted.

    “We're seeing an upsurge of fish which have character combinations that confound our expectations,” says meeting organizer Per Ahlberg of the Natural History Museum in London. Yu adds that the fossils have “mind-boggling [features] that keep paleontologists and developmental biologists busy during the day and sleepless at night.”

    Just Where Did That Jaw Come From?

    A pelican scooping up a fish, a lion crunching the leg of a gazelle, a parrot fish nibbling on coral—each creature has jaws specialized for its own way of life. For more than a century, biologists have tried to reconstruct how those adaptations evolved by comparing jaw, head, and skull bones in both fossil and living species. But such comparisons can't always establish a link between the jaw components of very different species, so their evolutionary history is often a blank. New findings about how jaws and heads develop could help paleontologists draw the missing connections.

    Developmental biologist Georgy Köntges and his colleagues at Harvard University have found that discrete clusters of cells in an embryonic tissue called the neural crest grow into specific parts of the skull and jaw. By identifying the basic developmental modules of all vertebrate skulls, the work “gives us an explanation for the organizational pattern for skulls,” says Per Ahlberg, a paleontologist at the Natural History Museum in London. It should also make it easier for researchers to do species-to-species comparisons and thus help them gain a better understanding of how one kind of jaw evolved into another—and perhaps how jaws originated in the first place.

    Köntges has been tracing the origins of vertebrate craniofacial components for several years. He began with the chicken, transplanting bits of tissue from the neural crest of quail embryos into the neural crests of early chick embryos. By applying a dye-labeled antibody specific for quail cells, Köntges and his colleagues could then follow the fates of the transplanted neural crest cells. More recently, Köntges and his Harvard colleagues have been tracing the fate of genetically labeled neural crest cells in developing mice.

    Parallel parts. In the diagrams, colors (beige, pink, or purple) denote head structures derived from the same parts of the developing hindbrain; this shared origin helps maintain the proper bone-muscle connections in the head, as seen in the chick (left) and predicted in the lamprey (right).


    In both organisms, he finds that neural crest cells emerge from discrete compartments called rhombomeres in the developing hindbrain. Cells from a particular rhombomere then migrate to help form a specific part of the head, such as one of the gill arches, the bony supports for the gills. In chicks and mice, the cells in these arches then continue developing, forming other features of the head. And along with a patch of bone or cartilage, each rhombomere also forms the connective tissue that attaches the bones to each other and to their muscles.

    During development, the data of Köntges and others show, these modules form a complex mosaic, so that a mature cranial bone can consist of patches of cells from different rhombomeres. But the cells from different rhombomeres never mingle. They “maintain an identity which prevents them from mixing with their neighbors,” Köntges explains. The fidelity of this modular organization “ensures proper connectivity,” he adds. It guarantees that even if evolution exaggerates one part of the jaw or eliminates another, the remaining structures retain the proper muscle, tendon, and nerve connections.

    It could also be a boon to evolutionary biologists. By creating wiring diagrams of the skull and jawbone showing the connections of motor nerves and muscles in a wide range of organisms, researchers should be able to link specific jaw and skull structures to their parent rhombomeres. And that, says Ahlberg, should provide “a framework from which you can interpret the fossils,” identifying structures that have common evolutionary roots.

    Indeed, Köntges has already begun applying this method to studying jaw evolution. For decades, researchers have debated which components in the heads of jawless fishes such as lampreys correspond to various structures in the heads of jawed fishes—a possible clue to how jaws first evolved about 500 million years ago. One theory holds, for example, that jaws are derived from the first of the bony gill arches in jawless fishes.

    But Köntges has taken a fresh look at the jawless lamprey in light of his current findings, and his results cast doubt on that scenario. By tracing the cartilage produced by neural crest cells from the animal's rhombomeres, he finds that the first and second gill arches are fused, so that they look like one continuous structure—something that wasn't supposed to have happened until jawed fish appeared. If the first two arches are already fused in lampreys as they are in jawed fish, then the first gill arch may have never been an independent structure, as current theories of jaw evolution suggest.

    Besides helping paleontologists draw connections between different jaw structures, the new work could also unlock the mystery of how the structures evolved in the first place. “We're getting to the point where we can make the link between gene expression patterns and morphology,” explains Ahlberg. Learning which genes sculpt rhombomeres into mature skulls and jaws could help paleontologists reconstruct the gene changes that created the diversity of modern life-forms. The effort to achieve that, Ahlberg notes, “is one of the most important things happening in modern biology.”


    Neurons and Silicon Get Intimate

    1. Robert F. Service

    Researchers may be a long way from making a cyborg, but they are coaxing neurons to grow in patterns that could form a link between biology and silicon circuits

    You don't have to look very far into the annals of science fiction, be it Star Trek, Star Wars, or the dark thrillers of futurist William Gibson, to find examples of cyborgs and other marriages of computer hardware with biology. Real-world researchers attempting to pull off this union have had only meager success. But at the American Chemical Society meeting in Anaheim, California, last month, a handful of research groups reported closing in on a key enabling technology: simple networks of neurons atop transistors and other microelectronic devices that can communicate with cells and listen in on their chatter.

    These networks, usually just a few cells patterned into a rectangle or other circuitlike configuration, aren't likely to give even a pocket calculator a run for its money anytime soon, although researchers can't help speculating about such possibilities as hybrid computers, prosthetics, and sensors. Says chemist James Hickman of George Washington University in Washington, D.C.: “We're getting to the point now where we can actually think about making devices.” But well before hybrid circuits realize their promise outside the lab, they can serve another crucial purpose, says George Whitesides, a cell patterning pioneer at Harvard University. With such circuits, he says, “you can begin to set up really good tests of fundamental neurophysiology.” In particular, say Whitesides and others, they will allow neuroscientists to explore basic questions such as how the nature of a neuron's connections with its neighbors affects its ability to fire.

    Although neuroscientists have spent decades using needlelike electrodes to eavesdrop on the firing patterns of single neurons, they have been far less successful at monitoring networks of neurons. And even where they have come up with techniques to do more complex monitoring—such as using arrays of electrodes—“it's hard to know what anything means,” says Peter Fromherz, a physical chemist at the Max Planck Institute for Biochemistry in Martinsried, Germany. The network of connections between neighboring neurons is typically so complex, he points out, that it is nearly impossible to decipher what signals any given cell is responding to.

    Building neuronal networks from the ground up, one neuron at a time, and communicating with them via microelectronics offers a way of paring down this complexity. In 1991, Fromherz and his colleagues first reported in Science (31 May, p. 1290) that they were able to grow a leech neuron atop a silicon-based field effect transistor (FET), which monitored the neuron's activity. FETs pass a tiny electrical current through a semiconductor channel between two electrodes. When a small voltage is applied to a third “gate” electrode above the channel, it makes the channel more conductive and thus allows more electrons to flow. Fromherz and his colleagues showed that the cell's firing could alter the gate voltage and thus the current through the transistor. In 1995, they turned the tables, showing that a charge-releasing capacitor could provide a tiny electric jolt tailored to fire a cultured neuron sitting above it.
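
    The readout principle can be caricatured in a few lines of code: the cell's extracellular voltage adds to the gate bias, and the transistor's drain current shifts in response. The device parameters and the 5-millivolt spike below are hypothetical, chosen only to make the arithmetic visible; they are not Fromherz's measured values.

    ```python
    # Minimal sketch of neuron-on-FET readout. All parameters hypothetical.

    K = 2e-4     # transconductance parameter (A/V^2)
    V_TH = 0.6   # threshold voltage (V)
    V_GS0 = 1.5  # quiescent gate bias (V)
    V_DS = 0.2   # drain-source bias (V), small enough for the linear region

    def drain_current(v_gate):
        """Triode-region FET current; zero if the channel is off."""
        if v_gate <= V_TH:
            return 0.0
        return K * ((v_gate - V_TH) * V_DS - 0.5 * V_DS ** 2)

    def spike(t_ms):
        """Crude 1-ms, 5-mV extracellular action potential."""
        return 0.005 if 1.0 <= t_ms < 2.0 else 0.0

    for t in range(4):
        i_d = drain_current(V_GS0 + spike(t))
        print(f"t={t} ms  I_D={i_d * 1e6:.2f} uA")  # current bump marks the spike
    ```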

    Now, researchers are moving to more complex systems: small groups of cells, coaxed into simple circuit patterns with the help of cell-friendly substances laid out on a glass or silicon surface like tape marks on a stage. In the first efforts, groups including Hickman's have patterned collections of cells on simple glass substrates, without microelectronic listening devices built in. At the meeting, Hickman detailed his team's latest success in creating a simple neuronal circuit by using conventional lithographic techniques, akin to those used to pattern computer chips, to lay out a chemical pattern that guided hippocampal neurons from the brains of rats into a simple rectangular circuit.

    The researchers started by coating a glass surface with a molecule-thin layer of DETA, a neuron-friendly organic compound. They then shone ultraviolet light onto it through a thin, stenciled metal mask: Wherever the light hit the surface, it removed the DETA, leaving a residue of hydroxyl groups, while the DETA remained intact in the masked areas. The researchers then added a neuron-repelling, Teflon-like substance, which bound to the hydroxyls. That gave them a rectangular pattern of DETA. They then coated the glass with culture media spiked with neurons and watched as a pair of cells migrated to the DETA and then sent out their tendril-like axons along the cell-friendly material, eventually making connections to one another.

    Finally, Hickman's team showed that the neurons were in synaptic communication: They stimulated one cell with a needlelike electrode, causing it to fire, and then measured a similar firing in the neighboring cell with another sensing electrode. “It shows that we can begin to make simple structures and control how individual neurons connect to each other,” says Hickman. Next up, he says, he hopes to create such neural circuits on actual microelectronics.

    A trio of other groups reported at the meeting that they're already pushing into that territory. In this case the teams—led by Harold Craighead at Cornell University in Ithaca, New York, Andreas Offenhäuser at the Max Planck Institute for Polymer Research in Mainz, Germany, and Bruce Wheeler at the University of Illinois, Urbana—all used a technique known as microcontact printing to pattern their neurons. This printing technique—developed by Whitesides and his team at Harvard—relies on traditional lithography to etch microscopic features into a silicon wafer or another rigid material. The patterned wafer then serves as a mold to cast a rubbery stamp.

    The teams “inked” their stamps in solutions spiked with cell-friendly compounds and stamped them on various substrates, creating patterns that would guide the growth of cells. Craighead and Wheeler, for instance, stamped patterns of compounds such as laminin, a protein found in the extracellular matrix between cells in the body, onto glass or silicon surfaces to position cells over electrodes embedded in the material. Offenhäuser's group, meanwhile, stamped out a gridlike pattern of a laminin fragment called PA-22 to orient his cultured neurons atop silicon FETs.

    The researchers aren't sure whether the neurons in these networks are actually sending signals to each other. Still, there are some early signs of success. Offenhäuser, for example, reported that after his team created patterned cell networks and then let the cells grow in a culture medium for 5 to 10 days, the cells fired spontaneously, which typically occurs only if neurons have made synaptic connections to their neighbors. “This is our hint that we have synaptic connections,” says Offenhäuser. Wheeler says his team has seen similar signs. Now all the teams hope to trigger one cell to fire and watch the impulse propagate to its neighbors.

    If the new hybrid circuits do pan out, their eventual use is anybody's guess. “Ultimately we want to use this for prosthetic devices, sensing, and interfacing with the nervous system,” says Craighead. “But the idea of applying this to a therapeutic use is quite far away,” as are hybrid computers. One of the biggest problems is that neurons cultured by themselves typically die within a month. Still, Hickman points out that researchers at the University of Virginia have shown that when they culture neurons alongside support cells known as glial cells, the neurons survive for more than a year. “This field is just getting started,” says Hickman. “We're at the same spot where they were 50 years ago with the transistor. Nobody at that time could envision making a PC.”


    Bypassing Nervous System Damage With Electronics

    1. Robert F. Service

    Although the effort to marry neurons and microelectronics into hybrid circuits is still in its infancy (see main text), neural prostheses that artificially stimulate the nervous system to partially restore lost vision, hearing, or movement are, paradoxically, much further along. In part, that's because they need only stimulate groups of cells rather than contact individual neurons. Heading the list of successes are cochlear implants, which use implanted electrodes to stimulate auditory nerves and provide rudimentary hearing to the deaf; more than 20,000 people have received them. And the U.S. Food and Drug Administration recently approved related implants to improve bladder control and restore hand grasping abilities to quadriplegics.

    Propelled by these successes as well as by a bevy of new tools coming from advanced microelectronics technology, “the bandwagon is starting to move,” says Richard Norman, a bioengineer at the University of Utah, Salt Lake City. “A large number of groups are starting to work in this area.” Still, he adds, the ultimate goal of making advanced neural prostheses that can fully restore a patient's motion or vision is “a bit of a long shot.” The obvious problem is communicating with the body's complex set of neurons. In just the eye, for example, 1 million nerve fibers carry signals from the light receptors in the retina to the brain. Stimulating all those fibers independently remains, for now, an impossibility.

    Surprisingly, however, much has been accomplished with relatively crude electrical inputs. Cochlear implants, for example, connect at most 22 electrodes to auditory nerves in the cochlea that respond to different sound frequencies. When a tiny microphone outside the ear picks up sound, it passes the sound to a signal processor behind the ear that analyzes it and signals an implanted electrical pulse generator to stimulate the appropriate electrodes in the cochlea. Although this doesn't provide perfect hearing, people with the implants typically pick up enough sound to carry on a conversation.
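
    The sketch below illustrates the filterbank idea behind that processing chain: split the sound into frequency bands, take each band's envelope, and assign one envelope per electrode. The 22-electrode count comes from the text; the sampling rate, band edges, and filter order are assumptions, and real speech processors are considerably more sophisticated.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 16000                    # sampling rate (Hz), assumed
    N_ELECTRODES = 22             # as in the implants described above
    edges = np.logspace(np.log10(200), np.log10(7000), N_ELECTRODES + 1)

    def electrode_drive(sound):
        """One envelope signal per electrode, low to high frequency."""
        drives = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
            band = filtfilt(b, a, sound)       # isolate this frequency band
            drives.append(np.abs(hilbert(band)))  # band envelope
        return np.array(drives)

    # A 440 Hz tone should mainly drive a low-frequency electrode.
    t = np.arange(0, 0.1, 1 / FS)
    tone = np.sin(2 * np.pi * 440 * t)
    print(np.argmax(electrode_drive(tone).mean(axis=1)))  # busiest electrode
    ```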

    Other researchers are now trying to forge related technology to restore sight by stimulating cells in the retina, the optic nerve, or the brain's visual cortex. At Johns Hopkins University School of Medicine in Baltimore, Maryland, for example, Mark Humayun and his team have temporarily implanted a 3-millimeter-wide array of 25 electrodes atop the retina of one eye in each of two elderly patients with retinitis pigmentosa. (This hereditary condition slowly degrades the eye's light sensors, known as rods and cones, eventually leaving patients totally blind.) An external unit sends electrical signals to the electrodes via wires passing through a tiny slit in the eye.

    In an upcoming issue of Vision Research, Humayun and his colleagues describe how the retinal stimulation allowed both patients to perceive complex shapes, such as squares and letters. The team is already working to create more complex arrays and control circuitry that would transmit signals through the skin via radio waves so that the arrays could be permanently implanted in the eye. “There's fundamentally no reason why you can't take a blind person and get them to see coarse features consistent with mobility,” says Humayun.
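
    Why can 25 electrodes convey letters at all? Average-pooling an image onto a 5-by-5 grid, one value per electrode, preserves coarse shape. The grid size below matches the array in the text; everything else in this toy sketch is invented.

    ```python
    import numpy as np

    def pool_to_grid(image, grid=5):
        """Average-pool a binary image to grid x grid on/off 'electrodes'."""
        h, w = image.shape
        bh, bw = h // grid, w // grid
        out = np.zeros((grid, grid))
        for r in range(grid):
            for c in range(grid):
                out[r, c] = image[r*bh:(r+1)*bh, c*bw:(c+1)*bw].mean()
        return out > 0.5  # electrode fires if its patch is mostly "ink"

    letter_L = np.zeros((25, 25))
    letter_L[:, 5:10] = 1    # vertical stroke of the letter L
    letter_L[20:, 5:20] = 1  # horizontal stroke
    for row in pool_to_grid(letter_L):
        print("".join("#" if v else "." for v in row))  # coarse but legible L
    ```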

    Devices that provide rudimentary muscle control have also made strides. P. Hunter Peckham and his colleagues at Case Western Reserve University in Cleveland, Ohio, for example, have shown that a series of eight implanted electrodes that directly excite different muscle groups in the forearm and hand can restore hand gripping movement to quadriplegics. In a recently commercialized version of the device, patients control their hand movements by thrusting their opposite shoulder forward and backward, activating implanted sensors that then relay the information to a signal processor and electrical-pulse generator implanted just below the collarbone.
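
    A hypothetical sketch of that control loop, with the mapping and thresholds invented for illustration: the sensed shoulder position is rescaled into a normalized grasp command for the implanted stimulator.

    ```python
    def grasp_command(shoulder_pos, pos_min=0.1, pos_max=0.9):
        """Map a normalized shoulder position [0..1] to a grasp level [0..1]."""
        x = (shoulder_pos - pos_min) / (pos_max - pos_min)
        return min(1.0, max(0.0, x))  # clamp to the stimulator's usable range

    for pos in (0.0, 0.3, 0.6, 1.0):
        print(pos, "->", round(grasp_command(pos), 2))
    ```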

    Peckham says that he and his colleagues are currently working on adding electrodes so as to restore fine motor control of the hands and arms. The team is also making steady progress with a variety of other neural prostheses, such as one that helps paralyzed patients stand and even walk, as well as an advanced version of a bladder-control device now on the market. Says Peckham, “It's almost unlimited what you can conceive of being able to accomplish.”


    Blowing the Dust Off the French Academy

    1. Michael Balter

    France's premier scientific society risks becoming a relic of the past. Reformers want to expand the membership rapidly, but the proposal may split its ranks

    PARIS—On a recent Monday afternoon, several dozen members of the French Academy of Sciences assembled for the academy's weekly gathering in the Great Meeting Hall of the Institute of France, a palatial 17th century building overlooking the river Seine. The academicians, most well into retirement age, sat on stuffed chairs beneath an ornate chandelier and surrounded by oak-paneled walls laden with marble busts, stone statues, and faded portraits depicting long-departed giants of French culture. Four experts had been invited to address the academy on the subject of the “impact of climatic changes on the evolution of biodiversity.” But as the lights were dimmed for the slide projector, the struggle to stay awake proved to be a losing battle for some academicians. After an hour or so, with the discussion over, the public was excluded so that the academy's Secret Committee could discuss internal matters such as the election of new members and the awarding of scientific prizes.

    France's premier scientific society has been conducting its business in this genteel manner for as long as anybody can remember. But many French scientists don't see the charm of this rarefied, old-world atmosphere. They argue that the academy—which was founded in 1666 during the reign of Louis XIV—has for too long been an elite club, a relic of science past rather than a living body of science present and future. “The Academy of Sciences is the dustiest place in the world,” says one French researcher, a nonmember who prefers to remain anonymous. The academy has the potential to be far from dust-laden, however—especially as it counts virtually all of France's Nobel laureates as well as other highly accomplished figures of French science among its ranks—and if the current leadership gets its way, it could be in for some spring cleaning.

    In January, a reform-minded academician, chemist Guy Ourisson, took office as president of the academy. Ourisson has made clear his desire to remove most vestiges of days past, when, as he put it in his inaugural speech, “the academy was a sort of club for retired Parisian scientists, happy to be able to come together once a week to talk about science for 2 hours after lunch and a little nap.” Ourisson's reform plans have received substantial support from academy members, including France's research minister, geochemist Claude Allègre, who has been prodding the academy from behind the scenes to expand its membership and become more representative of the nation's diverse scientific community. Nevertheless, there are important pockets of resistance to reform, and Ourisson and his allies may still face an uphill battle on some of the more far-reaching changes they are proposing.

    The most fundamental reform, which is still being discussed only informally, would be to end what some have called the academy's “two-caste” structure. The academy is divided into 145 full-fledged members (academicians)—nearly two-thirds of whom are retired—and roughly 200 “correspondents,” a second tier of members-in-waiting. Correspondents must usually wait for the death of an academician to free a slot before they can be elected to full status. Other initiatives include the creation of a new academy of technology—which would raise the stature of researchers working in industry and other applied fields—as well as a concerted effort to improve the academy's series of scientific journals, the Comptes Rendus de l'Académie des Sciences.

    The reformers say that these measures are critical if the academy is to regain its influence in French society, which has diminished greatly in past decades. For example, the government only occasionally calls upon the academy for advice: The French academy produced only five reports on scientific questions in 1998, while its American counterpart, the National Academy of Sciences (NAS), published 189 reports last year. To help boost the academy's relevance, Allègre has asked it to report every 2 years on the state of French research and technology, but the academy's ability to focus scientific firepower on specific questions remains limited.

    So far, the proposed reforms have met with a mixed reaction. The creation of an academy of technology has received broad support from the members, but there is an undercurrent of resistance among some academicians to proposals that would lead to more fundamental changes in the Academy of Sciences—notably, turning correspondents into full voting members. “I have asked the opponents to speak up and be explicit [about their objections],” Ourisson told Science. “So far none have done it, which I regret.” Nevertheless, in Science's discussions with numerous academy members, as well as in a survey of academicians on this issue (see sidebar), some members expressed clear reservations about broadening their ranks, ranging from outright opposition to a belief that correspondents should be transformed into full members only on a case-by-case basis.

    For the reformers, a dramatic increase in the academy's membership is essential if the body is to truly represent French science. Correspondent Gérard Toulouse, a physicist at the Ecole Normale Supérieure (ENS) in Paris who has written articles sharply critical of the academy in the French press, deplores “the idea that the fewer members there are, the more their quality is elevated.” And member Yves Meyer, a mathematician at the ENS in Cachan outside Paris, says that “the danger of the current situation is that so many members of the academy are retired … and reflect a science that has ceased to be active.” Toulouse, Meyer, and other critics compare their own academy unfavorably with its American and British counterparts, the NAS and the Royal Society, which have continually expanded their numbers of active members to keep up with the growth and diversity of their scientific communities.

    The rate of growth of the French Academy has been, at best, glacial. In the mid-1960s, the academy counted about 80 academicians, an increase of roughly 20 members during the previous 150 years. Then a series of reforms led to a slow increase in the membership to its present number of nearly 150. But in the view of many reformers, this number is still woefully inadequate. “If one compares the size of our academy with that of the NAS [nearly 1800 active members], and the respective population of the two countries, we should have between 400 and 500 members,” says academician Moshe Yaniv, a molecular biologist at the Pasteur Institute. And academician Jean Rosa, a retired biologist, says he supports increasing the membership to “a number equivalent to that of the Royal Society,” which currently counts 1150 Fellows in its ranks.
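
    Yaniv's estimate is simple population scaling. Taking roughly 270 million people in the United States and 59 million in France (approximate late-1990s figures, an assumption here), NAS membership scales down to a number near the bottom of his range:

    ```latex
    1800\ \text{members} \times \frac{59}{270} \approx 390
    ```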

    Although some members and correspondents believe that eliminating the difference between these two classes should be the first step toward expanding the ranks—a measure that would immediately boost the membership to about 350—this view is not shared by everyone. “There are some correspondents who deserve to be members, but there are others who are perhaps too young or who are not known outside their own specialty,” says molecular biologist Marianne Grunberg-Manago, a past president of the academy and the only woman ever to hold this post (and one of only five female academicians and 10 female correspondents currently on the academy's rolls). “I think we should open up the academy a little bit, but it must remain very selective.” And physicist Hubert Curien, the academy's vice president and a former French research minister, says that while he supports an expansion in the academy's ranks, it should not be done “too rapidly and too massively.”

    Toulouse, however, argues that the two-tiered structure compromises the independence and integrity of the academy when it has to grapple with controversial scientific or ethical questions. The academy's “greatest merit,” he says, should be to “permit people to express themselves without fear.” The two-tier system, Toulouse says, “encourages conformism and servility. … The correspondents have an excuse to say, ‘I cannot speak up because I am waiting to be a member,’ and then when they get there they are too old and don't have anything to say.”

    Yet both supporters and opponents of reform agree that a significant expansion of the academy will cause a major upheaval in its clublike atmosphere, which has changed very little in the 333 years since its founding. One victim of the changes, for example, would be the cozy Monday afternoon meetings, which Ourisson says would have to be eliminated, in large part because active scientists—particularly those from outside Paris—cannot regularly attend. Meyer admits he would miss these gatherings, which have the “ambiance of an 18th century salon. … If we bring together 1000 people, we will not have this quality of discussion.” But in the end, it seems likely that the majority of members will opt for change rather than risk allowing their academy to fall into irrelevance. Says Toulouse: “It is better to have a larger and more prestigious academy than one that remains small and contemptible.”


    Academy Reform: Members Have Their Say

    1. Michael Balter

    PARIS—In 1973, the Nobel Prize-winning French physicist Alfred Kastler published a scientific paper entitled “Evolution of the average age of members of the Academy of Sciences since the founding of the Academy.” The report, which appeared in the academy's Comptes Rendus (Proceedings), plotted the average age of election into the academy, as well as the average age of death, of all academicians since the organization's founding in 1666. Kastler, now deceased, found that beginning around 1840, the average age at election began to rise precipitously, while the average longevity rose much more slowly. If the trend continued at the same pace, the paper concluded, by the year 2100 the average academy member would be elected only after his or her death and the organization would eventually cease to have any living members.
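
    In symbols, Kastler's extrapolation amounts to intersecting two trend lines: an average age at election rising with slope alpha, and an average age at death rising with a smaller slope beta. A schematic version, with the slopes left abstract since the fitted values are in the original paper:

    ```latex
    a(t) = a_0 + \alpha (t - t_0), \qquad d(t) = d_0 + \beta (t - t_0), \qquad \alpha > \beta
    \;\Longrightarrow\; a(t^{*}) = d(t^{*}) \ \text{at}\ t^{*} = t_0 + \frac{d_0 - a_0}{\alpha - \beta}
    ```

    On Kastler's fitted trends, the crossing point lands around the year 2100, the date after which, on paper, election would come only posthumously.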

    Although Kastler's tongue was firmly in his cheek, serious concerns about the graying of the academy have led its president, chemist Guy Ourisson, and others to propose expanding the number of members. As a first step, the reformers have suggested eliminating the distinction between the 145 full-fledged members and the academy's 205 “correspondents,” a second tier of nonvoting members-in-waiting (see main text). To get an idea how controversial these proposals are likely to be, Science conducted a confidential survey—by e-mail, fax, and letter—of all 145 academicians, asking for their position on these issues.

    Out of 43 members who responded to the survey, about half clearly favored ending the distinction between correspondents and members, although in optional supplementary remarks some thought it should be done gradually rather than all at once (see chart). A slightly higher proportion favored increasing the total number of members, although the respondents differed considerably on how large the organization should be: The ideal size ranged from a proposed 20% increase in the current 145 members to a total number as high as 1000. A few academicians objected to the survey itself, arguing that these issues had not yet been debated within the academy and that airing them publicly would be divisive.

    Although the survey promised confidentiality, a small number of respondents exercised an option to make their opinions publicly known. Biologist Jean Rosa, who was favorable to both propositions, commented that “these modifications seem to me indispensable to include new knowledge in the academy, which is appearing at a rhythm that did not exist at the time of Pasteur or the Curies.” And Nobel-winning physicist Pierre-Gilles de Gennes said that the distinction between members and correspondents is “obsolete.” De Gennes also commented that some areas of science are underrepresented in the academy: “Researchers working on more novel areas have difficulty being accepted.”

    None of the academicians who clearly opposed the reform measures agreed to be quoted. Nevertheless, it seems likely that Ourisson and other reformers within the academy will encounter some opposition as they push forward with their plans to rejuvenate the aging organization. “I am sure there will be resistance,” Ourisson says. “It will probably not be an abrupt change, but a gradual one.”
