News this Week

Science  02 Oct 1998:
Vol. 282, Issue 5386, p. 18

    NIH Embraces Citizens' Council to Cool Debate on Priorities

    1. Bruce Agnew*
    1. Bruce Agnew is a writer in Bethesda, Maryland.

    The hottest ticket in Washington, D.C., right now isn't for a Redskins game—and not just because the city's hapless football team isn't having much of a season. If you belong to a biomedical research pressure group, the most sought-after seat in town may be on a new National Institutes of Health (NIH) advisory council, to be known as the Director's Council of Public Representatives (COPR). The panel, whose outlines began to take shape last week, is meant to ease tensions between NIH and dissatisfied patient-advocacy groups. But it doesn't look as though seats are being reserved for the most disgruntled of the specific-disease lobbies.

    Popular cause.

    Marchers in Washington, D.C., made a pitch last week for more government funds for cancer research.


    Some of these groups complain that funding decisions at NIH are based too much on political concerns and too little on the severity of particular diseases. They took their complaints to Capitol Hill, and Congress last year asked for a review by the Institute of Medicine (IOM). An IOM panel delivered its judgment in July: NIH needs to do a better job of explaining its priorities and talking to the public (Science, 10 July, p. 152).

    NIH director Harold Varmus held a daylong brainstorming session last week with 23 people from outside NIH, heavily weighted with patients or their relatives, to explore how the new council of citizen advisers should function and how its members should be chosen. No sharp blueprint emerged from the 23 September meeting, but Varmus said afterward, “I got some very good advice.” He hopes to name COPR members in time for the council to hold its first meeting next spring.

    COPR alone won't make research-funding controversies go away, of course. Many specific-disease advocacy groups “still believe … that NIH funding priorities do not correspond to the severity of the diseases that they represent,” said Alan Brownstein, president and chief executive officer of the American Liver Foundation, early in the meeting. Staking a claim on this year's expected budget increase—NIH's funding is likely to rise more than 12%, to more than $15 billion, in the new year that began on 1 October—Brownstein said NIH now can “correct” past inequities without harming existing research programs. As if to underline that message, cancer research advocates staged a “March to Conquer Cancer” on the Washington Mall the following weekend, calling for increased spending on cancer research.

    Creation of COPR was one of a dozen recommendations by the IOM committee that studied how NIH sets priorities. In its 8 July report, the panel endorsed NIH's existing criteria for allocating research funding—public health needs, scientific quality, potential for scientific progress, research-portfolio diversification, and infrastructure support. But it said NIH should do a better job of describing the process and called on NIH to “strengthen its analysis and use of health data, such as the burdens and costs of diseases.” This could force NIH to make explicit correlations between particular diseases and research expenditures, an idea that NIH officials found “troubling.”

    At first glance, COPR might appear to be a fifth wheel bolted into the NIH organizational chart alongside the more traditional Advisory Committee to the Director, composed mostly of scientists, physicians, and executives of research institutions. But Varmus may find a layperson-oriented, patient-oriented panel quite useful. For example, he plans to involve the new council in figuring out how to quantify the costs and burdens of various diseases and how to weigh disease burden in deciding research priorities. The issue is now under study in Varmus's Office of Science Policy. “At the very least,” Varmus says, “I would expect COPR to hear about the [policy] office's work and give me advice about the credibility of such analyses and about how they might be used.”

    Panelists at Varmus's 23 September meeting suggested a variety of other roles for COPR, not all of them consistent. Among them: COPR could serve as a way for NIH to promote itself to the public or as a means to call NIH's attention to public concerns that are not being met. It might check whether funding decisions by individual institutes truly reflect NIH's stated criteria, but it should not serve as a court of appeals. Varmus said he hopes COPR will help NIH achieve better accountability “without turning meetings into divisive debate among constituencies that would like bigger shares of the pie.”

    To give the new council credibility, Varmus plans an unusually open process for choosing its members: Selection criteria will be clearly stated, and a panel of outsiders will screen candidates. But he doesn't want “a United Nations” representing every constituency that deals with NIH. Such a group would be “too large, too unwieldy, and frankly, too provincial.” Nor does he want simply to round up “the usual suspects”—prominent advocates for major disease and patient groups.

    The makeup of the planning group invited to last week's meeting may provide a model: It included former patients, relatives of patients, current or former members of institute advisory councils, a representative of a major scientific society, and several patient-advocacy group representatives—but not those who have been most loudly challenging “inequitable” funding decisions, such as the Parkinson's Action Network or the American Diabetes Association.

    The IOM committee didn't include scientists among the groups that it said COPR should represent, but Varmus almost certainly will put a scientist on COPR. Scientists “are one of our major constituencies,” he said last week, and several panelists agreed. “I would like to see the scientific lion lie down with the public lamb,” said Robert Abendroth of the Amyotrophic Lateral Sclerosis Foundation.

    “Or vice versa,” said Varmus.

    KENYA

    Leakey Back as Head of Wildlife Service

    1. David Malakoff

    Politics has again created strange bedfellows in Kenya. Just a week after ousting conservationist David Western as head of the embattled Kenya Wildlife Service (KWS), President Daniel arap Moi has reappointed one of his most prominent critics—anthropologist Richard Leakey—to the job of overseeing some of Africa's best known parks and protecting the country's rich biodiversity. The move comes just 4 years after Moi picked Western to replace Leakey, who resigned from the KWS in 1994 after complaining of political interference by Moi's cronies.

    The latest switch, announced on 24 September, marks yet another twist in a political tale that has captivated and concerned conservationists around the world (Science, 25 September, p. 1931). In May, Moi fired Western, only to rehire him 6 days later following complaints from international donors and conservationists who supported Western's efforts to downsize the KWS and involve people living outside the agency's 53 parks in conservation. Some of Western's supporters charged that the ouster had been engineered by Leakey, who has been critical of Western's community-based wildlife policies and management style. At the time, Western himself ascribed the firing primarily to his opposition to granting mining concessions in the parks.

    When Moi abruptly fired Western again on 17 September, few observers publicly predicted that the president would try to woo one of his leading opponents back into the government. In the past, Moi has reportedly called Leakey a “racist” and “arrogant” and has threatened to have him arrested for sedition. And since January, Leakey has been a member of Parliament representing the Safina party, a small but vocal opposition group. On 24 September, however, Leakey announced he was reclaiming his old job after direct negotiations with Moi assured him that KWS would be insulated from political meddling. “I did due diligence and believe I have the government's commitment,” Leakey told Science. “Obviously, one does not knowingly put his head in a noose,” he commented at an earlier press conference.

    Some observers say Leakey's reinstatement was primarily driven by Moi's increasingly frenetic efforts to shore up his sagging regime and Kenya's shattered economy. In particular, says Gilbert Khadiagala, a Kenyan who teaches African politics at the Johns Hopkins School of Advanced International Studies in Washington, D.C., Moi has sought to regain support among Kenya's powerful Kikuyu ethnic group as political parties begin talks over a new democratic constitution that could sharply curtail his Kanu party's power, and as the government negotiates an aid package with the International Monetary Fund. Last month Moi took one step toward that goal by politically rehabilitating Charles Njonjo, a Kikuyu elder statesman, by naming him chair of the KWS board of directors. Njonjo, whom Moi forced into internal exile in 1984, is Leakey's lifelong friend and mentor. One likely scenario, Khadiagala says, is that Moi brought Leakey into the government to provide Njonjo with a trusted ally and to demonstrate to foreign governments that he is ready to share power. “The appointment makes Moi look like a moderate, not an ethnic leader,” Khadiagala says.

    Kenyan politics aside, Western's supporters are concerned that Leakey—who focused on protecting animals within the parks during his first stint as KWS chief—will undo Western's community conservation programs, which attempt to preserve biodiversity in areas around the parks. “The worry is that Leakey will return to policies that are no longer supported by conservation science,” says University of California, San Diego, biologist David Woodruff. Some donors also fear an abrupt shift. “There is quite a lot of donor concern,” says a knowledgeable source. Funders such as the European Union and Germany, which have pumped millions of dollars into the community projects and other reform efforts, “would like to be assured by the new management that major changes are not going to take place.”

    Leakey, who says he “can't imagine why donors should have any concern about changes,” says his first priority will be to find funds to pay off a $3.5 million deficit, caused largely by declining tourism and the end of some outside grants to KWS. “We simply don't have any incoming money to pay bills and salaries,” he told Science. “We are going to have to cut costs.”


    Tracks of Billion-Year-Old Animals?

    1. Richard A. Kerr

    Could paleontologists have missed a third of the preserved history of animals? That's the implication of a startling claim on page 80. Researchers have grown accustomed to competing claims about when multicellular animals first appeared. In February, new fossil embryos from China pushed the date back tens of millions of years to just before 600 million years ago (Science, 6 February, p. 803), and some molecular biologists sorting through animals' genes have inferred an even earlier origin. Now the new find may extend the fossil record of animals more than 400 million years to 1.1 billion years ago, supporting the oldest molecular estimates of the origins of animals.

    In this issue of Science, an international team of scientists argues that wiggly grooves on the surface of ancient sandstone from central India are the tracks of burrowing, half-centimeter-thick, wormlike animals. “If it's true, it's staggering,” says paleontologist Charles Marshall of the University of California, Los Angeles (UCLA). “It would be the first evidence of macroscopic animals.”

    For now, experts in such trace fossils—most of whom haven't yet seen these specimens—are divided on the claim, torn between the convincing appearance of the tracks and their appearance in rock radiometrically dated to hundreds of millions of years before any other animal traces. “I'm a believer,” says Tony Ekdale of the University of Utah, Salt Lake City, who has seen one specimen. “I find them convincing.” Others are not so sure that these squiggles are traces of life. “I wouldn't be surprised if they turn out to be inorganic,” says Sören Jensen of Cambridge University.

    To the authors of the study—paleontologist Adolf Seilacher of Yale University and the University of Tübingen in Germany, and sedimentologists Pradip Bose of Jadavpur University in Calcutta and Friedrich Pflüger of Yale—the ancient tracings paint a detailed picture of one creature's life 1.1 billion years ago. The wormlike animal, about the thickness of a drinking straw, plowed through the sediment a few millimeters below the floor of a shallow sea, the group suggests. They argue that the creature propelled itself with rhythmic muscle contractions, or peristalsis, leaving open burrows with raised edges like those of modern worms that move by peristalsis. The animal was probably grazing on the decaying base of a thin mat of microbial life on the sea floor, says Pflüger, because the burrows follow the base of a thin veneer of darker sandstone that may be the remains of the mat. (At press time, Seilacher was in the field in Libya.)

    Burrowing by peristalsis suggests to Seilacher and his colleagues that the animal was rather complex. Peristalsis implies a fluid-filled cavity that can be contracted by muscles, and they argue that it also implies the existence of a coelom, a lined cavity between the gut and body wall. Coeloms are common to mollusks, annelid worms, and arthropods but are absent in the simpler flatworms and roundworms. If so, the fossil evidence would support one date offered by some molecular biologists: a 1.2-billion-year age for a major evolutionary split among the coelomate animals, between a group including the annelids and one including the echinoderms.

    Pflüger admits that distinguishing true trace fossils from all manner of sedimentary cracks, wrinkles, and ripples is a tricky business, but says that he is “85% confident” that the features were left by an animal. He points out that the burrows are too irregular to be the type of cracks commonly found in such sediments and too sharply delineated to be wrinkles in the sediment surface. The grooves vary in width, but each has a constant width throughout its length, unlike a crack. “If they were 700 million years old,” says Pflüger, “there would be no reaction [challenging] the paper.” But given the antiquity of the finding, “there will be people contesting it.”

    Indeed there are. “This is not the smoking gun,” says paleontologist and early life expert Bruce Runnegar of UCLA. “It is almost impossible to tell trace fossils from tubular body fossils [of large algae] when they are poorly preserved, as these are. I'd say the jury is out.”

    Paleontologist Mary Droser of UC Riverside is more persuaded, agreeing with Pflüger that “if we found this in the Paleozoic [younger than 544 million years], we would say it is a trace fossil.” But she notes that “there have been a lot of examples [of sedimentary features] that people thought were trace fossils and they were not.” And because no large worm burrowings turn up again in the rock record until about 600 million years ago, “I wonder why we go 400 million years without another one,” she says. Paleontologist Andrew Knoll of Harvard University agrees that “if you see centimeter-scale, coelomate organisms and then don't see them for 400 million years, you have a lot to explain.”

    It's possible that relatively complex animals did appear very early but died out, says Marshall, only to evolve again later. Or perhaps there are older animal fossils that haven't been found yet, and the gap is only apparent. “I'm not sure enough people have looked at the right rocks for the right thing,” he says. “Five or 10 years from now, are the gaps in the record going to be filled in? That will be the proof of the pudding.”


    Mexican Fires Charge Up U.S. Clouds

    1. Robert Irion*
    1. Robert Irion is a science writer in Santa Cruz, California.

    The ancient Greeks believed that lightning bolts sprang from the rage of Zeus in his home on Olympus. Now an odd new discovery suggests that Zeus' moods have a long reach indeed: Last spring, smoke from massive fires in Mexico spawned stronger, more sustained lightning than normal over the Great Plains, thousands of kilometers away. According to lightning records, storms that had absorbed the smoke zapped the ground with three times the usual number of positively charged lightning strokes, which typically last longer than negatively charged ones and can inflict worse damage. Moreover, these positive bolts carried twice as much current as similar flashes produced by smoke-free storms.

    The study, reported on page 77 of this issue, has put a charge into veteran lightning researchers, who think it may hold an important new clue to the mystery of how thunderclouds generate lightning. “It's fascinating that smoky air could modify the [electrical properties] of storm clouds so readily,” says meteorologist Charles Moore of the New Mexico Institute of Mining and Technology in Socorro. Adds atmospheric scientist Andrew Detwiler of the South Dakota School of Mines and Technology in Rapid City: “This is a wake-up call for those who thought they understood thunderstorm electrification.”

    The amped-up storms struck between 8 April and 7 June, when a pall of smoke from drought-related fires in southern Mexico drifted north and dirtied the air from Texas to Canada, especially over the southern Great Plains. Atmospheric scientist Walter Lyons and his colleagues at FMA Research Inc. in Fort Collins, Colorado, noticed unusual behavior in storms over the plains: They triggered huge numbers of “sprites”—ghostly red glows of excited nitrogen molecules often seen high above violent storms.

    The team then checked data from the National Lightning Detection Network, run by Global Atmospherics in Tucson, Arizona, which tracks the location, charge, and strength of most bolts by monitoring the bursts of radio static they produce. The network data revealed that the lightning from smoke-enriched storms was stronger. “There were an abnormal number of positive cloud-to-ground flashes, and the peak currents [of those flashes] doubled,” Lyons says. “That really startled us.”

    Researchers had never seen such widespread outbursts of positively charged lightning, which usually accounts for just 10% of all bolts. A garden-variety negatively charged bolt flickers on and off a few times within its channel like an old neon sign. But a positive lightning stroke releases its charge in a sustained pulse that “looks like an arc welder,” as co-author Earle Williams of the Massachusetts Institute of Technology in Cambridge puts it. Those bolts are more likely to damage electrical systems and spark fires. The Texas hill country had a severe fire season in May and June, Lyons notes, although scientists can't say for sure that supercharged lightning was a factor.

    Nor do they know how the smoke boosted the rate and strength of the positive lightning bolts, Williams says. “Ordinarily we see negative charge close to the ground, but somehow these clouds have lots and lots of positive charge there.” He and Lyons do suspect that the extra positive charge may arise because tiny smoke particles provide more nuclei around which cloud droplets condense. That makes the droplets smaller, which in turn may alter how they acquire electrical charge when they freeze into ice grains and are churned high in the thunderstorms.

    Clearer answers could come from airplane studies of smoky thunderstorms, says Lawrence Radke of the National Center for Atmospheric Research in Boulder, Colorado. “It would be thrilling to have observations in these clouds to see what actually changes,” Radke says. That will take another season of fires. In the meantime, “it's all speculation.”


    Researchers Wary of Red-Green Coalition

    1. Robert Koenig*
    1. Robert Koenig is a writer in Bern, Switzerland.

    German researchers were both buoyed and apprehensive this week following voters' surprisingly strong rejection of the long-running coalition led by Chancellor Helmut Kohl on 27 September. For the past 16 years, even during the tumultuous days of German reunification, scientists had enjoyed a relatively stable research environment under Kohl's Christian Democrat-led government. But early this week things looked certain to change as Social Democrat leader Gerhard Schröder began to form a new government that seemed likely to include the environmental-minded Green party.

    On the positive side, the Social Democrat Party (SPD) has promised significant increases in the federal budget for research and higher education, with emphasis on rebuilding the university system. But some scientists feared the possible effects of a coalition with the Green Party—which attracted 6.7% of the vote—because the Greens have opposed some forms of genetic engineering and nuclear power research.

    Biologist Hubert Markl, president of the Max Planck Society, and biochemist Ernst-Ludwig Winnacker, who heads the basic-research granting agency, the Deutsche Forschungsgemeinschaft (DFG), both told Science that they were generally optimistic that the Social Democrats—especially if they directly control the research ministry—will strongly support basic research, perhaps even beyond the 5% increases currently slated for next year's Max Planck and DFG budgets. But both scientists also expressed concern about possible policy disputes in some areas of research if the Greens become junior partners in the ruling coalition.

    “I would foresee difficult discussions with the Greens relating to both atomic-energy research and the applications of biotechnology to crops and food products,” says Markl. For example, he says, some Green politicians might criticize research at the Max Planck Institute for Plasma Physics on the feasibility of fusion energy. And Winnacker—a veteran of Germany's genetic-engineering battles of the 1980s—says he worries about the Greens' strong criticism of certain aspects of biotech research.

    As Science went to press this week, the SPD was engaged in coalition talks and deciding who would head the government ministries. Edelgard Bulmahn, the SPD's “shadow minister” for research who has been the party's chief parliamentary spokesperson on science issues, told Science that the research and education ministry “is very important to the Social Democrats. We agree on the importance of scientific research for Germany's future.” Bulmahn says an SPD-led coalition would seek a “significant increase” in the research and education budget over the next 5 years and would block any effort by the Greens to further restrict genetic-engineering research: “We would never agree to any major changes” in biotechnology policies, she says, but adds that the Social Democrats might go along with the Greens' idea of sponsoring more research on the potential risks of certain genetic engineering methods.

    Manuel Kiper, a member of parliament who is the Greens' chief science spokesperson, says his party recognizes “the need for strengthened basic research.” While he says the party would like to see greater emphasis on environmental research, Kiper adds that “we reject the idea of a political steering of science.” And he says that, although the Greens “are critical of genetic engineering, we do accept the necessity of certain gene-technology methods, especially in basic medical research.”


    House Report Takes Middle Ground

    1. David Malakoff

    The United States must commit to “stable and substantial” funding for basic research if the country is to prosper in a post-Cold War world. That's the main conclusion of a much-anticipated congressional report* released last week by Speaker Newt Gingrich (R-GA) and members of the House Science Committee. Although some observers predict the report will help politicians to focus on the problems facing the scientific community, critics say its three dozen recommendations offer few fresh insights into such thorny issues as guidelines for participating in international projects and improving science education.

    Key report.

    Local students “unlock” the new report by Rep. Ehlers, left, while Speaker Gingrich and George Brown look on.


    In June 1997, Gingrich asked committee member Representative Vern Ehlers (R-MI), the first research physicist elected to Congress, to take a fresh look at U.S. science policy (Science, 4 July 1997, p. 28). Ehlers's charge was to write a sequel to “Science: The Endless Frontier,” the 1945 report by engineer Vannevar Bush that has guided U.S. science policy for decades. Ehlers pledged to “keep it simple” and to avoid the mistakes of an earlier panel that labored in the mid-1980s on a massive report that was never completed. And he kept his word: “Unlocking Our Future” was delivered on time and at a relatively concise 70 pages.

    However, the tepid reaction of many veteran policy watchers suggests that the report may still have fallen well short of its mark. “It's an excellent and welcome statement on behalf of basic science, but it falls short of breaking new ground,” says Lewis Branscomb, a science policy analyst at Harvard University's Kennedy School of Government in Cambridge, Massachusetts.

    To be sure, the report does not call for a radical overhaul of the country's approach to supporting science. “The message of this report is that, while not exactly broke, America's science policy is nonetheless in need of some important maintenance,” says Representative James Sensenbrenner (R-WI), chair of the House Science Committee. In fact, the panel's ranking Democrat, Representative George Brown (CA), complains that the report's biggest flaw is that “it fails to take on some of the issues I think are most important to the future health of the scientific enterprise,” including the need to support engineering and the social sciences and to ensure that all Americans benefit from research advances.

    One of the report's more novel suggestions is a proposal to revamp the peer-review system to encourage “creative … speculative” studies. “There are no rewards for risky science: It is too important to publish,” the report quotes one postdoc as saying. But its solution—a new granting process that “depends on peer-review but takes into account the speculative nature of the proposed research”—is seen as lacking sufficient detail to be useful. The report “tries to have it both ways,” says one analyst.

    Although many recommendations echo a variety of past reports—expanding public-private research partnerships, improving the use of science in the courts and regulatory agencies, and strengthening science and math education at all levels—others tackle issues fresh on the minds of the science committee. For instance, a call for “a clear set of criteria for U.S. entry into, participation in, and exit from an international scientific project” appears to be rooted in the debate over three recent projects: the Large Hadron Collider now being built at Europe's high-energy physics laboratory in Geneva, the moribund International Thermonuclear Experimental Reactor, and the international space station. The focus on developing such “clear, predictable ground rules” for international projects is welcome as scientists grapple to understand global issues such as climate change, says retired Admiral James Watkins, who headed the Energy Department during the rise and fall of another big-science project, the Superconducting Super Collider.

    Ehlers hopes that the House will adopt a resolution endorsing the report as a first step toward implementing its recommendations. And although Brown and other key Democrats are already voicing their concerns, many lawmakers have backed the report after being lobbied by university administrators in their districts who like its message. “It's not an especially partisan document, so it could spur a very useful debate,” says one House staffer. Most important, he says, Gingrich's support means “it has what most reports lack—a powerful patron.”


    Distant Star's Radiation Jolts Earth's Atmosphere

    1. David Kestenbaum

    On 27 August at about 3:22 a.m. Pacific Daylight Time, a tidal wave of x-ray and gamma ray radiation washed over Earth, turning night to day in the upper atmosphere and shocking some satellite instruments into a self-preserving “safe hold” mode. The burst was reported at a NASA press conference in Washington, D.C., last Tuesday, but it apparently got its start 20,000 years ago and as many light-years away, when a superdense, supermagnetized neutron star suffered a massive “star-quake.” Neutron stars are well-known x-ray sources, but this massive burst is “about the wildest thing in 30 years since we've been monitoring these things,” says astrophysicist Kevin Hurley at the University of California, Berkeley.

    The intensity of the 5-minute pulse was negligible at Earth's surface, but in space, it “was about a tenth of a dental x-ray dose,” Hurley estimates. “It's a hell of a lot of radiation for a source that far away.” An astronaut a tenth of a light-year away would have received a fatal dose in less than a second; near Earth the deluge was enough to trigger the momentary shutdown of equipment on NASA's Rossi X-ray Timing Explorer and on the NEAR and Konus-Wind spacecraft. It was also enough to leave its fingerprints on Earth's atmosphere.

    One of the many phones that rang that August morning when the satellites felt the assault belonged to Umran Inan, a Stanford University physicist. Inan jumped out of bed to check the state of the ionosphere—the ionized layer of the upper atmosphere. He runs the Holographic Array for Ionospheric Lightning research (HAIL), a string of 50 radio antennas located in the backyards of high schools from Wyoming to New Mexico that monitors very low frequency radio broadcasts (with wavelengths tens of kilometers long), which the U.S. Navy uses to communicate with ships and submarines. By tracking the strength and the phase of the waves, HAIL can detect changes in the altitude of the ionosphere. A thicker ionosphere tends to act as a large pillow, weakening radio signals as they bounce between it and Earth's surface.
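    A quick back-of-the-envelope check, added here for clarity (the figures below are illustrative, not from the researchers): wavelength is the speed of light divided by frequency, so the very low frequency band, roughly 3 to 30 kilohertz, corresponds to wavelengths of about 100 kilometers down to 10 kilometers.

```python
# Sanity check on VLF wavelengths: lambda = c / f.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def wavelength_km(freq_hz: float) -> float:
    """Return the free-space wavelength in kilometers for a frequency in hertz."""
    return C / freq_hz / 1000.0

# VLF band edges: 3 kHz gives roughly 100 km, 30 kHz roughly 10 km.
print(f"{wavelength_km(3e3):.0f} km down to {wavelength_km(30e3):.0f} km")
```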

    Inan and colleagues found that the strength of Navy radio signals from Hawaii and Seattle suddenly plummeted at the time the pulse swept over Earth. Those signals had bounced off the nighttime ionosphere, which normally hovers some 85 kilometers above Earth's surface. During the day radiation from the sun ionizes more molecules, substantially thickening the ionosphere. The weakening of the signals during the radiation pulse, Inan says, shows that the ionosphere's inner edge briefly plunged to 60 kilometers, about where it sits during the full force of daylight sun. It's the first time a pulse from outside the solar system has had such a drastic effect on the atmosphere, he says. Budding scientists at participating high schools weren't allowed to share in the excitement, however. “We couldn't tell them because of the news release” scheduled for this week, Inan says.

    The origin of the pulse became clear when researchers noticed that its intensity varied with a 5.16-second cycle—the exact period of the x-ray source SGR 1900+14, located in the constellation of Aquila (the eagle), which had recently been acting up. The relative timing of the wave as it hit each of the satellites also pegged SGR 1900+14 as its origin. The object is thought to be a neutron star, which spews out x-rays from a hot spot as it rotates. “It's an x-ray lighthouse,” says Hurley.

    The surge, however, pointed to an unusually big convulsion on SGR 1900+14. Neutron stars, the dense embers of burned-out stars, inherit the magnetic fields of their parent stars and can concentrate those fields to enormous strengths. The magnetic field of such a “magnetar” would periodically tear apart the star's hard crust of heavy elements, relieving stress as in an earthquake. Particles shot upward in the quake would be accelerated by the magnetic field, producing a strong wave of radiation. To generate a burst of the magnitude that struck Earth, Hurley says, the field would have to be at least 10^14 gauss, about 100 trillion times stronger than Earth's.
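    Hurley's comparison is easy to check. The sketch below assumes a typical value of about 0.5 gauss for Earth's surface field, which the article does not give:

```python
# Ratio of the inferred magnetar field to Earth's surface field.
# Earth's field (~0.5 gauss) is an assumed typical midlatitude value.
magnetar_field = 1e14   # gauss, the lower bound quoted by Hurley
earth_field = 0.5       # gauss
ratio = magnetar_field / earth_field
print(f"{ratio:.0e}")   # on the order of 100 trillion (1e14)
```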

    Could a nearby magnetar threaten the human race? “Yeah, I did that calculation,” Hurley says. To trigger chemical reactions that would destroy the ozone layer, the explosion would have had to occur as close as the comet belts that girdle the solar system. A magnetar hiding there would have been sniffed out long ago, he says.


    Japan Urged to Open Up Planning for Lab

    1. Dennis Normile

    Tokyo—In a country that prizes consensus, a group of university professors is planning to skirt official channels in a bold step to influence plans for a new national laboratory. The researchers fear that plans for the lab, tentatively named the National Informatics Institute, might be overly influenced by one person: electrical engineer Hiroshi Inose, a senior scientist formerly at the University of Tokyo. Meeting informally 2 weeks ago, the group put together a plan to seek government support for a broader review of the field before plans for the institute are finalized. In addition to buying time, the approach is in line with efforts to place all areas of scientific decision-making under closer public scrutiny (Science, 4 September, p. 1435).

    “We may be losing the chance to set up a truly vibrant new national institute,” says Tuneyoshi Kamae, a physicist at the University of Tokyo who has closely followed the planning for the new institute. Adds Hideo Miyahara, a computer scientist at Osaka University, “We are trying to have the opinions of the entire information science community reflected in the plans for this new institute.” But Inose says that the established process provides ample opportunity for those interested in speaking up. “There is always some controversy between different views,” he says, leaving some to feel that their opinions “aren't fully reflected” in the final plans.

    The critics emphasize that they are not criticizing Inose personally but rather are trying to reform a process that places too much authority in the hands of a few senior scientists. Indeed, they readily agree that Inose has earned his place at the top of Japan's scientific establishment. A graduate of the University of Tokyo, he spent several years in the late 1950s at Bell Laboratories in Murray Hill, New Jersey, where he began work on a key digital switching technology now at the heart of nearly all digital telephone switches. He returned to the University of Tokyo's engineering department, eventually becoming dean. Upon retiring from the University of Tokyo in 1987, he became the founding director-general of the National Center for Science Information Systems (NACSIS), which operates database systems and the computer network linking universities and national labs of the Ministry of Education, Science, Sports, and Culture (Monbusho). His long list of honors includes one of Japan's highest—designation as a Person of Cultural Merit—and he serves on numerous governmental advisory committees.

    More voices.

    A. Yonezawa and T. Kamae, above, want Japan to broaden plans for a new institute in which H. Inose, right, has played a major role.


    The new institute is intended to bolster Japan's efforts in information sciences. In May 1997 the Science Council of Japan, an elected body that represents the interests of the scientific community, used a report by a subcommittee as the basis for urging the national government to set up an informatics research institute. Last January, Monbusho received a more detailed analysis of the idea from a subcommittee of its advisory Science Council, which Inose chairs.

    Monbusho then assembled yet another ad hoc committee to make more detailed recommendations. Given the current fiscal crisis, the panel said, it would be better to expand and upgrade the 11-year-old NACSIS rather than to build a new institute. But even before this committee finalized its recommendations, Monbusho had won approval from the Ministry of Finance for eight new research positions at NACSIS as a step toward creating the new institute. Inose filled the slots earlier this year.

    The hiring set off alarms within the community, in part because Inose ignored a pending recommendation from the ad hoc committee that positions be advertised and that a selection committee review the applicants. “The old top-down mechanism is being recreated when what is needed is a new regime in which younger researchers can play an important role,” Kamae says. Computer scientist Akinori Yonezawa of the University of Tokyo, who served on the ad hoc committee, says “the committee had recommended a more open appointment process,” although he concedes that NACSIS followed the letter of the law in its hiring practices.

    Inose argues that there was nothing unusual about the hirings, however. “It is the same process you have at American universities” if they are trying to recruit a particularly prominent scientist, he says.

    In the coming months Monbusho will assemble yet another committee to firm up the research agenda and set staffing policies for the new institute. Critics worry that the committee members will be drawn from a narrow circle of people, many with personal ties to Inose. They also worry that key decisions will be made without outside input, and they expect Inose will be appointed as head of the new institute. Inose says that there are many qualified candidates for the job and that, therefore, his selection is “unlikely.”

    In an attempt to influence those decisions, Yonezawa and his colleagues plan to ask Monbusho to fund yet another study. They want to assemble a panel of 15 to 20 information scientists to study the country's needs in information science. Yonezawa admits it is an indirect approach. But the group hopes it will serve notice to Monbusho that the process is being closely watched, as well as generate suggestions that the official committee will feel bound to consider.

    Kamae worries that this won't be enough. “By the time they finish this new study, all the key decisions will already have been made,” he says. Still, he is encouraged by the growing number of researchers who are willing to challenge the established order. “Scientists of my generation really have a responsibility to speak up and make these practices more democratic,” he says.


    A Gray Day on a Brown Dwarf

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    It's too early for detailed weather forecasts, but two astronomers claim to have detected clouds in the atmosphere of a nearby brown dwarf star. In a paper submitted to the Monthly Notices of the Royal Astronomical Society, Chris Tinney of the Anglo-Australian Observatory in Epping, New South Wales, and Andrew Tolley of Oxford University describe subtle color changes in the faint glow of LP 944-20, a brown dwarf only 60 times as massive as the planet Jupiter. They interpret the variations as evidence that clouds of titanium oxide are sweeping across the disk of this failed star. “Theirs are the first data suggesting the variability we expect for cloudy atmospheres,” says theoretician Adam Burrows of the University of Arizona, Tucson, although he cautions that the variability might be due to star spots or calibration errors instead. But if the color changes are real, they fit with other recent evidence that even brown dwarfs, reclusive though they are, have active private lives.

    The existence of brown dwarfs—stars not massive enough to sustain hydrogen fusion in their cores—has been suspected for decades, but the first bona fide detections of these dim objects came only a few years ago. Now, write Tinney and Tolley in their paper, “this field can finally move beyond the Guinness Book of World Records phase and into a period where real understanding of brown dwarf properties is possible.” One insight appears on page 83 of this issue of Science, where Ralph Neuhäuser of the Max Planck Institute for Extraterrestrial Physics and Fernando Comerón of the European Southern Observatory, both in Garching, Germany, announce that they have picked up x-rays from a very young brown dwarf. The x-rays are probably produced in the outermost layers of the brown dwarf, as a result of strong magnetic activity. The finding suggests that the brown dwarf is rotating very rapidly; otherwise there would be no strong dynamo effect to generate the magnetic field.

    Above the restless surface of a brown dwarf, astronomers expected to find an equally changeable atmosphere. The heat of a normal star would break up most compounds, but the relatively low temperatures around a brown dwarf, below 1500 kelvin, allow many more compounds to form and condense into solid particles. “We expect a very rich cloud physics in brown dwarfs,” says Burrows. According to Jonathan Lunine of the University of Arizona's Lunar and Planetary Laboratory, the clouds could consist of heat-resistant silicates, plus a host of trace compounds, including ones containing sulfur and chlorine. “The chemistry can be complex,” says Lunine. Burrows calls it “a fascinating mess.”

    Tinney and Tolley caught a glimpse of this mess using a novel instrument on the 3.9-meter Anglo-Australian Telescope to look for clouds of titanium oxide, chosen because it absorbs light strongly as a gas. The instrument, called the Taurus Tuneable Filter, enabled them to make accurate brightness measurements of a brown dwarf in two very narrow wavelength bands, one of which coincides with the absorption wavelength of gaseous titanium oxide. If clouds of titanium-bearing condensates were forming in the atmosphere of the brown dwarf, the depletion of gaseous titanium oxide would increase the brightness in this wavelength band relative to the other.

    A faint brown dwarf called DENIS-PJ1228-157 did not show the effect, but observations of the much brighter brown dwarf LP 944-20, made in February and August, both revealed telltale variations of a few percent in the ratio of the two brightnesses, sometimes over just a few hours. The observations say nothing about the actual cloud structure, such as the thickness or extent of the clouds or whether they are scattered randomly or in equatorial bands as in the atmosphere of Jupiter. But Tinney and Tolley calculate, from the brightness variations they observed, that if clouds covered 5% of the visible disk of the star, the cloud tops would appear 400 degrees cooler than the surface of the star.
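    The quoted numbers hang together in a back-of-envelope Planck-function estimate. The sketch below is not Tinney and Tolley's calculation; it assumes a photospheric temperature of about 1500 kelvin and an observing wavelength near the 705-nanometer titanium oxide band, and asks how much a 5% cloud cover with tops 400 degrees cooler would dim the star:

```python
from math import exp

# Physical constants (SI)
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance of a blackbody (arbitrary overall scale)."""
    x = h * c / (wavelength_m * kB * temp_k)
    return 1.0 / (wavelength_m**5 * (exp(x) - 1.0))

# Assumed numbers, not from the article: ~1500 K photosphere,
# observation near the 705 nm TiO band.
wavelength = 705e-9
t_surface = 1500.0
t_cloud = t_surface - 400.0   # cloud tops 400 degrees cooler
coverage = 0.05               # clouds over 5% of the visible disk

bright = planck(wavelength, t_surface)
dim = planck(wavelength, t_cloud)
dimming = coverage * (1.0 - dim / bright)
print(f"fractional dimming: {dimming:.3f}")   # a few percent
```

    Under these assumptions the cooler cloud tops contribute almost nothing at visible wavelengths, so a 5% cover dims the star by roughly 5%—consistent with the few-percent variations reported.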

    Burrows cautions that the case for clouds is not yet conclusive. But he says that as observers continue to cast a weather eye on brown dwarfs, “we should soon know whether this exciting variability is real.”


    RAC Confronts in Utero Gene Therapy Proposals

    1. Jennifer Couzin

    A National Institutes of Health (NIH) advisory committee last week began what could be a long debate over whether to permit the next step in gene therapy: correcting genetic defects in a fetus before birth. Researchers say they may be ready to attempt such an experiment in 2 or 3 years, but fetal gene therapy carries potential new risks and ethical implications—including the possibility that transplanted genes could end up in sperm or egg (germ) cells and be passed on to future generations.

    NIH's Recombinant DNA Advisory Committee (RAC) began to confront those issues at a 2-day meeting on 24 and 25 September when it discussed two “preprotocols” for in utero therapies submitted by W. French Anderson, a geneticist at the University of Southern California in Los Angeles. Anderson was part of an NIH team that performed the first gene therapy experiments on humans 8 years ago. Although Anderson says he needs to do more animal studies before he draws up a solid protocol, he submitted his preliminary proposals to the RAC to force discussion of the risks early on. “It is imperative to do everything possible” to reduce the chance of germ line gene transfer, says Anderson.

    Anderson is hoping to test in utero gene therapy on two potentially fatal diseases: homozygous α-thalassemia, a hemoglobin disorder so severe that it kills the fetus before birth, and a severe immunodeficiency caused by lack of the enzyme adenosine deaminase (ADA). The protocol for treating α-thalassemia involves mixing fetal blood with a retroviral vector carrying a functioning copy of the gene missing or defective in α-thalassemia, which makes the protein α globin, and then transfusing the treated blood back into the fetus. The hope is that the virus will insert the gene into stem cells—the blood cell precursors—which are more prevalent in fetal than adult blood. This procedure might only partially correct the defect, in which case the child could be born with developmental abnormalities, or with transfusion-dependent thalassemia. But because the genetic manipulations would be performed outside the womb, it would pose little risk of the gene entering the fetus's germ line.

    The likelihood of that happening would be much greater, however, with Anderson's proposal for correcting ADA deficiency. The preprotocol calls for injecting a retroviral vector carrying the ADA gene directly into the fetus's peritoneal cavity. Based on studies in sheep, he expects the vector to carry the gene into the rapidly dividing stem cells of the bone marrow, which produce the cells of the immune system. But the vector could also find its way into germ line cells.

    That prospect raised concerns among the 15 regular RAC members and the eight ad hoc participants invited to review the preprotocols. “This is a lightning-rod issue,” says LeRoy Walters, director of the Kennedy Institute of Ethics at Georgetown University in Washington, D.C., and an ad hoc reviewer. The big worry is that the transferred gene could cause deleterious mutations that could be passed to future generations. “Nobody knows quite what's going to happen,” says RAC Chair Claudia Mickelson of the Massachusetts Institute of Technology.

    Anderson acknowledged that the risk of germ line transfer can't be eliminated. But he expects it to be very low—one in a million sperm might be affected, he guesses—and it could have positive effects or none at all. Most committee members seemed convinced. Evelyn Karson, an ad hoc reviewer and director of the Division of Reproductive Genetics at Washington, D.C.'s, Columbia Hospital for Women, noted that relatively few people would receive fetal therapy, and the number of mutations would be far outpaced by those occurring naturally. “Our genes just get picked up and tossed like a big tossed salad every time we undergo reproduction,” she says.

    For now, committee members called for more experiments to assess both the risks of the proposed protocols and their chance of success. As even Anderson concedes, “we do not have data to answer” whether in utero gene therapy will succeed. “There's a whole lot of work that needs to be done,” says committee member Philip Noguchi, director of the Division of Cellular and Gene Therapies at the Food and Drug Administration.

    The committee proposed that long-term studies on in utero gene transfer in sheep and many generations of mice be conducted to supplement the thin and somewhat ambiguous animal data that now exist. It also suggested that different diseases be considered as candidates, such as other immune deficiencies that are even harder to treat than ADA deficiency. Until those experiments are done, RAC members say they are keeping an open mind about in utero gene therapy.


    When Walls Can Talk, Plant Biologists Listen

    1. Evelyn Strauss*
    1. Evelyn Strauss is a science writer in San Francisco.

    Plant cell walls, once thought to be inert boxes, are revealed as powerful signalers that determine the fate of cells during development

    Since the 17th century, when British scientist Robert Hooke trained an early microscope on a piece of tree bark and saw cork, biologists have known that plant cells are surrounded by rigid walls. Today, the glossary of any biology textbook will describe a traditional, narrow role for walls, depicting them as cellulose boxes that passively encase the active, living material of the cell. But over the past few years, a steady stream of research has begun to contradict this humdrum picture.

    Staining the walls.

    The amount of cellulose (red) and pectin (yellow) varies in the cell walls of an Arabidopsis stem.


    The new work shows that far from being inert, cell walls play active roles in determining plant cell fates. Walls contain a gallery of carbohydrates and proteins—whose identities are still largely mysterious—that appear to “talk” to other molecules, both inside and outside the cell. Researchers are trying to track down these messenger molecules and are also exploring just what information they convey. So far, most of the chatter seems to be related to development, as walls direct some cells to become roots while ordering others to develop into shoots and leaves.

    “The last few years have seen a revolution in how we think about cell walls,” says Roger Pennell, a plant biologist at Ceres Inc. in Malibu, California. “They're not static, rigid, impermeable things. They're very dynamic, very porous, very malleable, and they broadcast signals to the cells they contact directly as well as from one cell to another.”

    Indeed, many researchers now liken the wall to the extracellular matrix (ECM) of animal cells, which engages in a constant dialogue with cells, affecting many aspects of their lives, from cell division to differentiation. Like the ECM, the cell wall consists largely of load-bearing fibers tethered together in a watery gel. And the components of both the cell wall and the ECM are originally secreted by cells; then, once positioned in the wall or ECM, the molecules “talk back” to the cells and their neighbors. “The fact that, historically, people called it a wall generated a psychological barrier that probably wasn't based in reality,” says Keith Roberts, a plant cell biologist at the John Innes Centre in Norwich, U.K.

    The new respect given to cell walls was evident at the 8th International Cell Wall Conference in Norwich. The first such meeting 20 years ago mustered only 30 scientists. But this year 400 scientists turned out, and an additional 100 were turned away because of lack of space.

    Determining destiny

    Plant pathologists have long known that fragments of the cell wall can influence cell behavior when plants are fighting off microbes. Then the first inklings that cell walls might regulate normal development in healthy plants appeared in the late 1980s, when plant developmental biologist Ralph Quatrano, currently at Washington University in St. Louis, and colleagues were studying embryos of the brown alga Fucus. The group found that enzymatic removal of the cell wall destroys the ability of the embryo to “remember” where its rhizoid—the alga's version of a root—is supposed to grow. Once the walls were gone, exposure to light caused the rhizoids to regrow in new directions, suggesting that the wall stored signals needed to direct root growth.

    This finding was extended in 1994, when plant biologists Fred Berger, Colin Brownlee, and Alison Taylor of the Marine Biological Association in Plymouth, U.K., made a serendipitous observation while studying two-celled Fucus embryos. At this stage, the fates of both cells are determined—one is destined to form the rhizoid and the other to develop into leaflike fronds. The researchers were studying whether the two cells differed in how they moved ions across the cell membrane, and the cell wall hindered their access to the membrane. So they used a laser to slice a hole in the wall of the rhizoid-forming cell, letting the cell inside ooze out for study. Then they tossed away the remaining cell and empty cell wall. But one day, on a whim, Berger saved the leftovers.

    He saw the intact cell—which had been destined to become fronds—divide and multiply. The cell walls of its descendants eventually touched the empty rhizoid cell wall—and these cells then changed identities, maturing into rhizoid cells. “One day I saw an embryo with a rhizoid,” says Berger, who is currently at the Institut National de Recherche Agronomique in Lyon, France. “The next day there were 20 [such] embryos, and I realized something interesting was going on.” Follow-up experiments confirmed that the empty cell wall could in fact determine the identity of the cells that touched it.

    Next, the researchers studied cells already specialized to form either rhizoids or fronds and found that once liberated from the wall, such cells could give rise to both types of cells. That suggests that the Fucus embryo cells didn't contain the information needed to specify their own destinies—but their cell walls did. “As soon as you remove a cell from its wall, it goes back to the zygotic state—it has the potential to become anything,” says Berger.

    In work on roots reported earlier this year in Current Biology, Berger has extended these findings to the more evolutionarily advanced experimental plant Arabidopsis. During normal Arabidopsis development, epidermal cells—those that form the outer layer of the root—sit above an inner, or cortical, layer of root cells. Not all the epidermal cells grow root hairs, however; only those sitting on the junction between two cortical cells do that. Berger wanted to find out whether this ability was due to a cell's lineage—whether it came from certain root-hair-forming predecessors—or to its position. By studying cases where the plane of cell division goes awry and by moving cells around, he and his colleagues showed that it was in fact a cell's position—touching cortical cell junctions or not—that determined its fate.

    The source of this fate-determining signal turned out to be the cortical cell wall, as the researchers showed by killing selected cortical cells, leaving only their walls. Cells touching the junction between the former cortical cells still became root hairs even though the cortical cells themselves were gone. “The only structure that's left is the actual skeleton of the cortical cells themselves,” says Berger. “The most logical explanation is that the messengers [specifying cell fate] are in place within the [cortical cell] wall.” Roberts agrees: “The cell's fate changes depending on what wall the cell touches. This work makes a very persuasive case that some immobilized component in the wall is what's required.”

    This result shows that the cell wall sends messages that help determine the identity of neighboring cells as well as of the cell it encloses. It also implies that the composition of the cortical cell wall varies, with some as-yet-unknown molecular signal occupying certain patches of the wall—near cortical cell junctions, for example.

    Touchy business.

    As shown in this root cross section, epidermal cells that grow root hairs touch the junction between two cortical cell walls.


    In fact, over the last 10 years, other researchers have been finding just such variations in cell walls through monoclonal antibody studies. Researchers have long known that the wall contains a wide variety of carbohydrates and proteins, some of which are attached to two of the wall's major components, hemicellulose and pectin. Few of these molecules have been identified yet, but the antibodies have shown that wall components vary among different cell types, at different times in development, and even between different locations around a single cell. “It's clear that the plant cell is investing a tremendous amount of metabolic energy in synthesizing components that get put into walls at specific times and places, and it doesn't need to do that if the purpose is to make a box just to hold the plant cell,” says Michael Hahn, a plant biochemist at the Complex Carbohydrate Center at the University of Georgia, Athens.

    Messengers of fate?

    The next step is to try to pin down the identities of these suspected signaling molecules. Researchers have long known of one, the so-called S locus glycoprotein, which helps the female parts of crucifers such as broccoli reject pollen from plants that are too closely related, and they're hot on the trail of others, most of which seem to be important to development.

    Last December in The Plant Cell, for example, Pennell and colleagues reported a cell wall molecule—probably a kind of pectin—identified because it binds to an antibody called JIM8, and linked to differentiation. In culture, undifferentiated cells whose walls contain this JIM8 antigen can give rise to complete embryos, while cells with walls that don't express it remain stuck in an unspecialized state. This is consistent with the idea that the JIM8 antigen helps transform cells into embryos.

    While Pennell works to better characterize this antigen and its function, other researchers are studying a class of cell wall molecules—originally noted for their structural role—that also appears to be important in organ formation. As a plant cell grows, its volume balloons by 10- to 1000-fold, and the wall must grow proportionately, while remaining strong enough to counter up to 5 atmospheres of pressure from inside the cell.

    One class of proteins that helps promote cell wall expansion is the expansins, identified by Daniel Cosgrove, a plant biologist at Pennsylvania State University in University Park, and his colleagues in 1992. The latest research shows that these molecules can also trigger the formation of leaves.

    Last year Cris Kuhlemeier, a molecular geneticist at the University of Bern in Switzerland, and his colleagues found that by applying expansin-soaked beads to the growing tip of a tomato stem, they could elicit the production of leaflike structures. Moreover, other components of the plant recognized the expansin-induced leaf as “real”: After the team induced a leaflike structure to grow in this way, other leaves formed in a pattern that mimics normal growth, appearing at regular intervals around the stem relative to the new “leaf” (Science, 30 May 1997, p. 1415). “By changing the mechanical properties of the cell wall with expansin, you can start a developmental program,” says Kuhlemeier. “That makes the cell wall a decisive element in the whole process of how plants turn from undifferentiated cells into leaves.”

    Kuhlemeier's most recent work suggests that expansin performs this function normally, not just when applied by scientists. In a paper in last month's Plant Cell, the group showed that the expansin is being made at just the right place and time to stimulate cell growth. They studied the messenger RNA (mRNA) that encodes the expansin protein and found that in healthy tomato plants its highest concentrations appear in the exact position where the leaf grows. And the mRNA accumulates before a hint of a leaf appears. “It predicts where the leaf will form,” says Kuhlemeier. “That makes it very plausible that what we showed by external application is really what's going on in vivo.”

    At the moment, however, no one knows exactly what the many molecules in cell walls are doing. Although Berger's experiments reveal what “message” walls are conveying, for example, he has not yet identified the molecular messenger. And although Pennell seems to have his hands on an intriguing molecule, its function is still unclear. “Much of the work is descriptive and correlative rather than showing a clear causative chain of events,” says Roberts.

    One problem, he notes, is that many of the wall molecules are carbohydrates, which are much more difficult to work with than proteins. As a result, he says, “it will take a lot of time, effort, and maybe new [methods] to nail down what the signaling molecules are. People are very excited, though. They've been gingered up to a state of expectation with all these molecules that are specific to particular walls. Then these results of Berger's come and make people think, ‘Maybe it's really true. We better go after these molecules.’” After more than 100 years, cell walls' image has finally gotten a whole lot livelier.


    New Math Speeds the Search for Protein Structures

    1. David Kestenbaum

    An old approach is yielding new recipes for turning x-ray snapshots into 3D maps of proteins in days instead of months

    “Fred's Folly” was a contraption that would have made Alexander Calder proud—a box the size of a refrigerator in which a model of a molecule could be neatly strung on piano wire and springs. Designed by Yale University crystallographer Frederic Richards and widely used in the 1970s, the gadget allowed crystallographers who had mapped the atoms in a giant molecule to actually see what it looked like, by tinkering with a molecular mobile until its shadow matched drawings from the data. Visualizing molecules has gotten a lot easier with computer graphics, but the process of figuring out where the individual atoms go is still a laborious enterprise.

    Flash photography.

    New techniques quickly revealed the structure of the antibiotic vancomycin.


    The recipe hasn't changed much over the past 2 decades. In one classic method, the molecule—a protein, say—is crystallized and probed with x-rays, then garnished with atoms of heavy metals and probed again. Only then can investigators interpret the x-rays to work out how the thousands of atoms are arranged in space. Now radically simpler recipes are in sight, at least for some molecules.

    Two new algorithms, one called Shake-and-Bake and the other—for the moment—Half-Baked, offer a way to look at a single high-resolution x-ray snapshot of a crystal and reconstruct in one gulp how its many atoms are positioned. The basic approach, called direct methods, has been around for decades, but large molecules defeated it in the past. At a recent meeting, however, researchers reported that the new programs are already helping to unravel full-sized protein structures. And instead of taking years, as other methods do, the new algorithms cracked the structures in a matter of days.

    Fittingly, one of the reports comes from Herbert Hauptman, a mathematician who pioneered direct methods and shared the 1986 Nobel Prize in chemistry for the work. Hauptman, president of Hauptman-Woodward Medical Research Institute Inc. in Buffalo, New York, and colleagues have now translated the direct-methods idea into a new mathematical language of sorts and crafted Shake-and-Bake. The program is making short work of test proteins such as lysozyme, which contains about 1000 nonhydrogen atoms. And inspired by Shake-and-Bake, George Sheldrick and colleagues at the University of Göttingen, Germany, have cooked up—although not completed—a similar program called Half-Baked. They have used it to crack a protein with over 2000 atoms and work out several key antibiotic structures.

    “It's absolutely incredible,” says Suzanne Fortier, a crystallographer at Queen's University in Ontario, Canada. “The success of this work is, I believe, a bit of a surprise to everybody.” The work has also rekindled interest in a dying discipline. Instead of attracting a few die-hards for a nuanced mathematical debate, the direct-methods session at this summer's meeting was standing room only.

    To the uninitiated, trying to figure out a molecule's structure from a diffraction pattern looks about as promising as trying to predict the future by staring at tea leaves. When x-rays pass through a crystal and scatter off the electrons of atoms in the regularly arrayed molecules, they emerge as a constellation of bright dots. Inferring the positions of atoms in the molecule from the intensity and location of those dots is extraordinarily difficult because the dots lack a crucial clue: the “phases,” or the relative positions of the crests and troughs, of the x-rays at different dots. Only by knowing the phases can researchers trace the waves back into the crystal to pinpoint the planes of atoms that scattered them.
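The missing-phase problem can be made concrete with a toy one-dimensional "crystal." The sketch below, a loose illustration with invented atom positions rather than real crystallographic data, shows that the detector records only the squared magnitudes of complex structure factors; the phases, which are needed to invert the pattern, never reach the experimenter:

```python
import numpy as np

# Toy 1D "crystal": a few atoms at fractional positions x_j in a unit cell.
# For reflection h, the structure factor is F(h) = sum_j exp(2*pi*i*h*x_j).
# A detector records only the intensity |F(h)|**2, so the phase of F(h)
# (the angle of the complex number) is lost.
atoms = np.array([0.1, 0.35, 0.8])   # fractional coordinates (illustrative)
h = np.arange(1, 6)                  # reflection indices
F = np.exp(2j * np.pi * np.outer(h, atoms)).sum(axis=1)

intensities = np.abs(F) ** 2         # what the experiment measures
phases = np.angle(F)                 # what it does NOT measure

# Sliding every atom by the same offset changes all the phases but leaves
# every intensity untouched, one hint of why the inverse problem is hard.
F_shifted = np.exp(2j * np.pi * np.outer(h, atoms + 0.17)).sum(axis=1)
assert np.allclose(np.abs(F_shifted) ** 2, intensities)
```

Because many different phase assignments are consistent with the same set of intensities, recovering the atomic arrangement requires extra constraints, which is exactly what heavy-atom methods and direct methods supply.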

    Introducing heavy atoms can break this impasse because the atoms change the diffraction pattern, giving away their own positions in the crystal. With the x-rays scattered from these atoms as reference points, crystallographers can bootstrap their way into the rest of the molecule, figuring out one phase in the diffraction pattern, then another, and finally revealing the whole structure. But the process can take years and “umpteen” crystals, says Cornell University crystallographer Ashley Deacon.

    Direct methods offer a faster route by using computers and equations to puzzle out structure from just a single diffraction pattern. Crudely speaking, the algorithms try different arrangements of atoms, simulating the diffraction patterns they would produce in search of an arrangement that reproduces the observed pattern. Because there is an infinite number of possible configurations, the algorithms have to be clever about how they search. “It's a bit like playing golf, except you don't know where the hole is because someone has removed the flag,” Sheldrick explains. “And there are bunkers … pseudosolutions you can get trapped in.”

    As a result, the technique, conceived in the late 1940s, was limited to molecules containing no more than a couple of hundred atoms. “Only once, and with great effort, did we solve a 300-atom structure,” says Hauptman. Proteins, which can contain tens of thousands of atoms, were far out of reach until now.

    The essence of the new algorithms is that they jump between adjusting or “shaking” the phases, which gives a distribution of atoms, and “baking” in the positions that seem most likely. By shaking and baking repeatedly, they converge on the complete structure. The procedure somehow avoids the bunkers and finishes up with the ball in the hole. “The whole theory is pretty obscure,” Sheldrick says. “We don't really understand why it works so well.”

    Sheldrick, Hauptman, and others caution that the new methods only work for molecules that can be coaxed into producing precise diffraction patterns, with a resolution of about an angstrom—about the size of an atom. Making these precision pics typically requires an unusually high-quality crystal—difficult to get for some molecules, especially large ones containing 10,000 atoms or more. “Currently only about 10% [of large molecules] are suitable,” Sheldrick estimates. Making precise diffraction patterns also usually requires the brilliant x-rays available only at the world's few synchrotron accelerators.

    Improved techniques and new synchrotrons could bring more molecules into range of the direct algorithms, he says. But in the meantime, many crystallographers say, the best hope for figuring out the structures of large proteins lies in combining direct methods with another technique called MAD (for multiwavelength anomalous diffraction) phasing, which has already gone a long way to speeding protein structure determination.

    MAD phasing, developed in the early 1990s, works on modified proteins made by inserting the gene for the protein into Escherichia coli bacteria, then typically feeding the bacteria a version of the amino acid methionine whose sulfur atom is replaced by a heftier selenium atom. The bacteria incorporate these selenomethionine amino acids into the protein. When synchrotron x-rays are shot through the crystallized protein, they scatter strongly off the heavy selenium atoms. By studying how the diffraction pattern changes as the wavelength of the x-rays is varied, crystallographers can tease out the positions of the selenium atoms, which then serve as a scaffold for determining the phases, and hence the structure, of the protein. The procedure resembles the heavy-atom approach, but because it riddles the protein with more precise atomic landmarks, MAD phasing can be done with a single crystal.

    Despite MAD's many successes, the technique has bogged down for large molecules, where the positions of many selenium atoms (more than 15 or so) need to be determined. Shake-and-Bake offers a quick way to map large numbers of selenium atoms, Hauptman points out. And for this task, coarse diffraction patterns are often good enough, as the selenium atoms are usually far apart in the molecule. At the meeting this summer, researchers from Cornell University reported using Shake-and-Bake to locate some 65 selenium atoms in a MAD data set, which enabled them to pin down some 25,000 atoms in the structure of a large enzyme that regulates antibiotic uptake in some bacteria. That structure is twice as large as any previously done with MAD, says Cornell's Deacon.

    “The beauty of the method is things can happen very quickly,” he says. “Once you have the protein and crystal, you can collect data and have the structure in a few days.” So far only a few groups have followed suit, however. Says Deacon: “People probably don't realize the magnitude of what can be achieved with this.”

    • * American Crystallographic Association Annual Meeting, 18–23 July, Arlington, Virginia.


    A Bold Plan to Re-Create a Long-Lost Siberian Ecosystem

    1. Richard Stone

    An international team of scientists will test whether bison, horses, and other large grazers can bring back the mammoth steppe

    Cherskii, Russia—Like a frog hopping from lily pad to lily pad, Sergei Zimov strides from one tussock to the next, wobbling for a moment on each sedge knob rooted in the sodden permafrost. Occasionally he misjudges a tussock's firmness and his leg disappears up to the knee into the marsh water. Within minutes, Zimov has reached higher ground and a carpet of mosses and lichens, birch bushes and scattered larches—hallmark features of a mixed tundra-taiga landscape that dominates much of this region above the Arctic Circle. It is a starkly beautiful, wild land, permeated with the fragrance of alpine sage. Zimov, however, wants to see it torn up and populated.

    Zimov is no Soviet-style planner intent on draining the marsh and putting up drab high-rises: He's an ecologist and director of a lonely science outpost in the northeasternmost reaches of Russia. He points to a tangle of birch and willows several dozen meters away, where two of the agents he hopes will carry out his grand scheme are picking their way across a ridge. They are young male Yakutian horses—off-white and pepper-flecked, the color of snow near a Moscow highway. Zimov envisions dozens of these horses, along with moose, reindeer, and a herd of bison imported from Canada, ripping up the moss and shrubs with their hooves and teeth, allowing grasses to move in. Within a few years, he hopes, grazing animals will have supplanted the current ecosystem in a 160-square-kilometer preserve with a grassland resembling one that existed here during the last Ice Age. The idea is to reconstruct a small chunk of the mammoth steppe, a vibrant ecosystem that dominated much of Siberia before vanishing after the Pleistocene epoch ended 11,000 years ago.

    In creating what Zimov calls “Pleistocene Park,” he, two U.S. ecologists—Terry and Mimi Chapin, a husband-and-wife team at the University of Alaska, Fairbanks—and a group of Canadian and Russian wildlife biologists are embarking on an ambitious experiment that aims to test theories about the forces that shaped, maintained, and ultimately vanquished a long-gone ecosystem. Some experts are calling the experiment a watershed in efforts to study lost ecosystems. “It's a very exciting idea whose time has come,” says Paul Martin, who studies Pleistocene extinctions at the University of Arizona, Tucson. The experiment, he predicts, “is going to have a revolutionary effect on how we think about designing nature.”

    The project also marks the first attempt to restock Siberia with bison, a species that went extinct in this region at least 2000 years ago. “It makes sense to reintroduce species that have been recently extirpated by human hunting or habitat encroachment,” says Paul Koch, a specialist on Pleistocene-era mammals at the University of California, Santa Cruz. “It's just planetary hygiene.”

    But Koch and others point out that the project's main goal—restoring the mammoth steppe—could be doomed because some Pleistocene elements are impossible to reproduce: the namesake mammoths, of course, and certain climatic features, such as cooler temperatures and less carbon dioxide in the air. “You still don't have analogs for climate,” says Russell Graham, curator of vertebrate paleontology at the Denver Museum of Natural History.

    Weather is at the crux of the debate over whether Pleistocene Park will succeed. Most experts argue that Siberia in the Pleistocene was much drier than it is today. They point to Pleistocene sediments, which harbor pollen and other remnants of grasses that thrive in dry soil. These grasses, in turn, fed an array of large herbivores, including mammoths, steppe bison, horses, moose, reindeer, and woolly rhinos. Many scientists believe that a sudden and severe climate shift at the end of the Pleistocene—a warm-up of about 5 degrees Celsius in just 20 years and, perhaps, more rain and snow—set in motion a vast and inexorable turnover in which marsh-loving mosses and sedge conquered the circumpolar regions. The shift to low-nutrient mosses, coupled with a new actor—bands of human hunters—drove mammoths, steppe bison, and woolly rhinos to extinction.

    Although Zimov agrees with much of that scenario, he takes issue with one key point. Climate change probably did not alter the balance of power among plants, he says. Zimov, whose staff at the Northeast Scientific Station has spent 2 decades probing Pleistocene sediments, argues that this rich soil appears drier than today's not because the region got less precipitation but because grasses are much better than mosses at sucking water from the soil and releasing it into the air. Indeed, he points out that northeastern Siberia's climate is very dry now: Cherskii, he says, receives on average less than 20 centimeters of precipitation a year.

    Terry Chapin recalls being “intrigued” but “somewhat skeptical” about Zimov's ideas when they first met in 1991. But Zimov, who had developed a computer model that predicted how steppe or moss ecosystems might thrive under various ecological or climatic regimes, made him a believer. In a paper in The American Naturalist (November 1995, p. 765), Zimov, the Chapins, and three colleagues argued that precipitation levels in the Pleistocene could have supported today's mosses just as well as they did the mammoth steppe.

    What kept the steppe covered in grass instead of mosses, they say, was the big grazers. By churning up the ground with their hooves, bison and other heavyweights could have prevented mosses from gaining more than a weak toehold on the landscape. The grazers' dung provided fertilizer for grasses that, in turn, nourished the animals. Aggressive hunting, they argue, decimated the big herbivores, and as they gradually disappeared the ground was disturbed less and grew poorer in nutrients—conditions that could have ushered in mosses and accelerated the herbivores' decline. Zimov points out that mammoth-steppe grasses persist today in areas of Siberia rich in nutrients—along rivers and streams, for instance—or in areas where mosses were disturbed by buildings, roads, fires, and other human activities. “We don't have this ecosystem now for the simple reason that we don't have enough animals,” he says.

    Zimov hatched Pleistocene Park as a way to test this idea. Last spring, he took the first step toward populating the park, buying 32 wild Yakutian horses with funds from the government of the Sakha Republic, which oversees the vast Siberian province called Yakutia, and bringing them about 1000 kilometers east to Cherskii. This breed can put on a thick layer of body fat during the summer and fall that gets the animals through winter. Next will come the bison.


    Wild Yakutian horses (and bison) should tear up mosses and allow steppe to make a comeback.


    Northern Siberia's climate is too harsh for North American plains bison or European bison. But a subspecies called wood bison (Bison bison athabascae), which is thought to be the closest living relative to the extinct steppe bison, should survive in the park. The subspecies was presumed extinct until a small herd was discovered in northern Canada in 1959. Zimov and the Chapins plan to start with 25 to 28 bison—mostly young cows and a few young bulls to ensure “the maximum rate of reproduction,” Zimov says. They would be flown from Canada to Cherskii and acclimatized in a 24-km2 fenced-in section of Pleistocene Park. Within a few years, the growing herd would range freely in an existing national reserve that encompasses Pleistocene Park. Several years later, bison would be allowed to roam in the 500,000 km2 of lowlands between Siberia's Indigirka and Kolyma rivers—a region haunted by the memory of Stalin-era prison camps.

    To cover the estimated $330,000 cost of readying, shipping, and acclimatizing the first batch of bison, Zimov and the Chapins have applied for a grant from the Turner Foundation, a nonprofit organization established by TV magnate and bison lover Ted Turner. If they get their money, the bison could arrive as early as March 1999. Heeding the Field of Dreams mantra, “if you build it, he will come,” Zimov last week finished building a fence around the marsh in which the bison will adjust to their new climate.

    The scientists expect it will take several years for the animals to churn up and fertilize the ground enough for Pleistocene grasses—present in small patches throughout the park—to become widespread. Even now, Zimov says, the park has enough nutritious grasses and sedge meadows to sustain dozens of horses and bison; in case of a hard winter or unforeseen circumstances, however, he has stockpiled enough forage to sustain 150 big grazers for a few years.

    If the bison take to their new environment, Zimov has plans to reintroduce other animals that could make the ecosystem balanced and self-sustaining. For starters, he plans to bring in musk oxen, already restored to central Siberia's remote Taimyr Peninsula. And he would like to bolster the ranks of predators. Already the park is home to a wolf family, but Zimov also hopes to add big cats—such as the Siberian, or Amur, tiger, now threatened with extinction due to habitat loss—that would act as surrogates for extinct Pleistocene lions.

    All that sounds quite radical, but some ecologists believe it may not be radical enough. Koch, for example, predicts that bison and other grazers won't inflict sufficient damage on the mosses. Mammoths and woolly rhinos, he says, were more effective landscapers, clearing snow, rooting up vegetation, and knocking down bushes and small trees. So Koch suggests that it might make sense to introduce their closest living relatives: Asian elephants and white rhinos. He acknowledges, however, that even if these species were able to adapt to the Siberian climate, “my guess is that most contemporary ecologists and conservation biologists would become apoplectic at the thought of releasing exotic living organisms of this size and ecological consequence.”

    Even without elephants and rhinos, Zimov's bold plan might seem a fantasy in the grim realities of Russia's downward spiral. Daniil Berman, an entomologist at the Institute for Biological Problems of the North in Magadan, Russia, says with affection, “Sergei is crazy to build Pleistocene Park in our economic situation.” But he and others marvel at how Zimov has prevailed so far despite the odds. Anticipating the inflation that would make rubles essentially worthless in the early 1990s, Zimov went on a spending spree, buying everything from wood for posts to a heavy-duty tractor for clearing land for a fence. “Zimov has a brilliant speculative mind, and on the other hand he is a man of action,” says Andrei Sher, a Pleistocene expert at the Severtsov Institute of Ecology and Evolution in Moscow.

    Zimov also won important allies in the Sakha government. Aware that Sakha could rely neither on subsidies from Moscow nor on revenue from its abundant but hard-to-extract nonrenewable resources—gold, diamonds, oil, and natural gas—Sakha President Mikhail Nikolaev has embraced wildlife stewardship as a potential source of meat for the local population and, perhaps, tourist dollars. Although it may be difficult logistically for Russians—let alone foreigners—to get to Cherskii, Zimov predicts that adventure tourists could boost the region's fortunes. “I hope the density of animals in the park in 20 years will be the same as in the Serengeti,” he says.

    Even if Zimov doesn't manage to create a Siberian Serengeti, the grand ecosystem experiment he is embarking on is likely to keep researchers busy for decades to come. “Scientists are tired of discussing the greenhouse effect,” Zimov says with a twinkle in his eye. “Now, maybe it will be interesting for them to discuss ecosystem reconstruction.” Sher certainly thinks so: “I am looking forward to the start of this great enterprise.”


    Iridium Accelerates Squeeze on the Spectrum

    1. David Malakoff*
    1. With reporting from Pallava Bagla in New Delhi.

    A fleet of 66 satellites that will transmit near an important frequency for studies of the cosmos is the latest example of commercial assaults on the radio spectrum

    Astronomer Harold Weaver was sound asleep when the phone rang one night in 1963. On the other end of the line was a colleague warning him about strange readings from the University of California's radio telescope in Hat Creek, whose sensitive antennae tracked the natural radio whispers produced by galactic gas clouds. “He thought something was terribly wrong,” the 81-year-old professor emeritus recently recalled from his office on the Berkeley campus. But the equipment was working fine: The unexpectedly strong emissions were evidence of the first known natural maser, an intense blast of laserlike, organized radio waves unleashed by molecules excited by cosmic radiation.

    Today, Weaver's maser—a signal produced by hydroxyl radicals, each composed of a hydrogen and an oxygen atom—is one of radio astronomy's most important beacons. The unusually bold spectral line it produces at 1612 megahertz (MHz) on the radio spectrum has led astronomers to new insights into how stars form and die. In particular, the maser's fluctuating intensity allows astronomers to estimate the temperature, composition, and other attributes of galactic gas clouds, including those that serve as stellar nurseries and those ejected by dying red giant stars. Hydroxyl masers are also one of the field's few reliable cosmic yardsticks: By measuring the time delay of the maser's signal between a red star's far and near sides, researchers can estimate the star's diameter and hence its distance from Earth. “The hydroxyl line is a real workhorse that has allowed some spectacular studies,” says astronomer James Cohen of the Nuffield Radio Astronomy Laboratories at Jodrell Bank, England.

    Soon, however, astronomers may find that hydroxyl workhorse and other important spectral lines much harder to ride. Celestial signals at a growing number of important points along the radio spectrum are being blocked from Earth-bound telescopes by a blanket of electromagnetic smog laid down by communications satellites. This fall, that blanket will grow thicker when the mobile phone company Iridium LLC turns on a $5 billion, globe-girdling fleet of 66 satellites to serve customers who never want to be out of touch. And Iridium isn't the only threat to parts of the radio spectrum that astronomers once had mostly to themselves. Within a few years, at least 100 other communications satellites are due to be launched, and one firm has designed a flotilla of signal-relaying airships. Some of these new platforms will interfere with stellar signals unless government officials intervene, observers say.

    “The demand for radio frequencies is growing explosively, and the spectrum is getting crowded,” says Paul Feldman, a telecommunications lawyer with Fletcher, Heald & Hildreth in Rosslyn, Virginia. “Increasingly, the question is whether radio astronomers can keep their windows of discovery clear of interference from neighboring users.”

    The sound of snowflakes

    Astronomers are vulnerable bystanders in the communications revolution because they operate some of the most sensitive passive radio receivers on Earth. Modern radio telescopes can detect distant energy emissions of less than a trillionth of a watt, signals equivalent to the energy ripple generated by a falling snowflake. As a result, some astronomy dishes are notoriously susceptible to interference, especially from signals traveling from space to Earth. Indeed, astronomers have estimated that a single cellular phone transmitting from the surface of the moon would be perceived on Earth as one of the strongest radio signals in the universe.

    To shield their sensors from unwanted signals, astronomers have waged a 40-year campaign to secure proper access to important frequencies (Science, 8 December 1995, p. 1564). There have been some successes. In 1992, for instance, the International Telecommunication Union (ITU), the United Nations body that allocates the spectrum to different users, awarded radio astronomers primary rights to a chunk of the spectrum around the coveted 1612-MHz hydroxyl band, from 1610 to 1613.8 MHz. In a footnote, the ITU said astronomers would be protected from a specified level of “harmful” interference from mobile satellite services.

    That policy was quickly put to the test when Iridium won the right to transmit messages from space to Earth in an adjacent band, from 1616 to 1626.5 MHz. Despite early assurances that the downlink signals would stay clear of the hydroxyl frequency, engineers with Iridium's parent company, Motorola, soon admitted that radio emissions would stray into the hydroxyl band at levels that would swamp signals from the cosmos. The news stunned those who had fought for the protective ITU footnote.

    The community tried initially to convince Motorola to fix the problem before the satellite system was launched, arguing that the footnote required the company to move to another part of the spectrum or redesign the system, an expensive undertaking. But company officials asserted that Iridium was only required to share the sky with the world's dozen major radio observatories. Because those telescopes spend only a portion of their time studying the hydroxyl line, the company said, the problem could be solved by providing researchers with a few unobstructed hours each day. “Did we agree with the radio astronomers' interpretation of that footnote? No, because you can't be that rigid,” says Iridium's Jack Wengryniuk, one of the company's lead negotiators.

    Then the astronomers tried another tack, winning governmental promises to reject licensing of Iridium unless it made an effort to protect domestic radio observatories. In the United States, this leverage led to a 1994 time-sharing agreement between Iridium and the Charlottesville, Virginia-based National Radio Astronomy Observatory (NRAO), which is funded by the National Science Foundation. The pact required Iridium to provide four observing hours each night—when phone chatter dies down—to the single-dish observatory in Green Bank, West Virginia. Two other multidish arrays, which are less sensitive to Iridium's signals, won less stringent relief. To the disappointment of astronomers outside the United States, the limited NRAO pact became a model for later agreements.

    Even so, it took four more years of negotiations for Iridium to work out a similar agreement with the National Astronomy and Ionosphere Center (NAIC), which runs the world's largest single-dish telescope in Arecibo, Puerto Rico. The pact, signed this past March, provided Arecibo with 8 hours a night of observing time, plus up to 20 hours of daylight telescope “passes” to observe special objects such as comets. Even that agreement has its limitations, however. “It is an annoying solution because it imposes a straitjacket on our schedule,” says Mike Davis, Arecibo's project manager.


    Purpose: Global cellular phone and paging service

    Goal: Operating licenses in 239 nations

    Facilities: 72 satellites (includes 6 spares) orbiting at 780 km; links to 12 ground stations

    Cost: $5 billion

    Start: 1 November 1998

    Owners: Motorola and 18 other companies

    In Europe, astronomers unhappy with the restrictions imposed by the U.S. agreements took a different approach. Under the banner of the European Science Foundation, astronomers from across the continent signed a pact in August that puts off difficult time-sharing decisions for a year, an interval when Iridium traffic is expected to be light. The U.S. time restrictions “are unacceptable to us: We are not willing to give up daytime observations,” says Peter Spoelstra, an astronomer at Westerbork Observatory in the Netherlands.

    The Europeans, however, did extract a promise from Iridium to eliminate the interference problem before launching its next generation of satellites in 2006. But some astronomers doubt the company can meet the technical challenges, even by that deadline, because Iridium's need for light and inexpensive satellites limits the use of heavy filters and rules out some possible solutions. Iridium's Wengryniuk concedes that the pact is “a pretty complicated arrangement that is going to take a lot of work on both sides.”

    The U.S. and European agreements exerted a strong influence on negotiations with India, which operates the Giant Metrewave Radio Telescope near Pune. “First the American astronomers compromised and the Europeans followed, so we had to soften our stand,” says astronomer Govind Swarup, explaining why his National Center for Radio Astronomy agreed in August to a compromise that provides his observatory with six clear hours a night and requires Iridium to pay for filters that remove some of the interference. Swarup also wonders if Iridium can meet the 2006 deadline, and he warns that future battles are possible. “There is no silver bullet,” he says. “If they don't respect the agreement, we will approach the government to cancel their [broadcasting] license.”

    The brutal truth

    Wireless industry analysts say that such a threat lacks credibility in the United States. The Federal Communications Commission is likely “to ignore the protests of radio astronomers and side with the communications provider” if Iridium fails to uphold its agreements, says Larry Swasey of Allied Business Intelligence in Oyster Bay, New York. And astronomers should not expect to win over the many companies lining up to launch new wireless systems, he adds: “The brutal truth is that most of these companies have given little thought to what effect their services will have on astronomy.” Astronomers are most concerned about a proposal to use bandwidths near an important neutral hydrogen line at 1.4 gigahertz (GHz), as well as a malfunctioning television satellite that is interfering with a methanol line at about 10 GHz.

    The next step for astronomers is an ITU allocation conference set for May 2000 in Geneva. The goal is to expand existing allocations where possible and to stake out new claims to key spectrum bands above 75 GHz. That bandwidth has been virtually vacant except for radio astronomers exploiting newly discovered spectral lines, such as one produced by celestial methanol molecules. The conference “may be the last where corrections can be made,” predicts Willem Baan of Westerbork Observatory.

    Other researchers, however, are preparing for the worst. “There is no way we can fight off the commercial interests and protect all [the bands],” says NRAO's Barry Turner. The result, he predicts, will be “a lot more glum radio astronomers.” On the bright side, says Weaver, the hydroxyl line's discoverer, there's at least one region that no company has yet exploited—the moon's far side, which is sheltered from Earth's electromagnetic smog. “Last time I looked,” says Weaver, “the quiet side was still there, waiting for us.”


    Leveling the Playing Field for Scientists With Disabilities

    1. Constance Holden

    With help from new technologies and old-fashioned willpower, researchers with disabilities are making gains in the workplace

    Davis, California—In a lecture to a packed auditorium here, biochemist Larry Hjelmeland is describing the bends and notches that give proteins their shapes. Behind him flash slides depicting fibroin, alpha keratin, and collagen; the cartoons aren't what these substances really look like, of course, but they help the mind develop a picture of how microscopic strings of amino acids link up to form fibrous proteins. Like his students, Hjelmeland, 50, can only imagine the molecular world, invisible to the naked eye. But unlike most people, he is forced to imagine the macroscopic world, too: the tendons, the creases of a smile, all the shapes formed by proteins. That's because Hjelmeland lost his eyesight 15 years ago.


    Blind biochemist Larry Hjelmeland makes silk proteins come to life.


    A generation ago, people who were blind, deaf, or wheelchair-bound faced long odds in climbing the science career ladder. But thanks to federal laws requiring accessible buildings and forbidding discrimination against people with disabilities, social values emphasizing “inclusion,” and—most of all—the computer revolution, it is now possible for someone with almost any kind of impairment to communicate and acquire data. Indeed, a wealth of new technologies, particularly those for blind people (see sidebar), are helping put scientists with disabilities on the same footing as their peers. “Technology has been opening up the world,” says Larry Scadden, head of the Program for Persons With Disabilities at the National Science Foundation (NSF). “The greatest boon,” says Scadden, who is blind, is “independent access to information.”

    Only recently has NSF begun to track the number of people with disabilities who have chosen science as a career. In 1996, the agency estimated that 5% of the 3-million-strong science and engineering workforce—about 175,000 people, including 26,000 Ph.D.s—have “moderate to severe” disabilities. By far the most common are dyslexia and other learning disabilities, followed by speech problems—from a stutter to cerebral palsy. Orthopedic disabilities come next, then vision loss and hearing impairments. Still other people suffer from chronic diseases. Crippled by amyotrophic lateral sclerosis, or Lou Gehrig's disease, for example, astrophysicist Stephen Hawking of Cambridge University is confined to a wheelchair and speaks through a computer.

    Behind the numbers are compelling stories about how individuals have leveraged high-tech advances—and their own determination—to become productive scientists. Hjelmeland was blinded by diabetic retinopathy, a breakdown of blood vessels supplying the retina, over a 9-month period in the early 1980s while a protein chemist at the National Eye Institute (NEI) in Bethesda, Maryland. “I thought it would be partial loss, but in the end I was left with absolutely nothing,” he says. His first reaction, incredibly, was “excitement at facing a new challenge.” That was followed by anger and depression. “It was terrible,” he says. “I had very high expectations for my career at that point. Basically I felt it was all over.”

    Fortunately, Hjelmeland's vision loss occurred just as screen readers—software that translates data on a monitor screen into audio—were becoming available. He says NEI offered to create a post for him as, in his words, “a minor luminary on how blind people use computers.” Instead, he decided to try to make a go of it in academia even though, he says, “I had no sense that I was a salable commodity on the open job market.” He won an NEI grant to study the cell biology of scar tissue formation on the retina and obtained a post as adjunct professor at the University of California (UC), Davis, a conveniently flat campus well known for supporting people with disabilities. “My family thought I was nuts” to turn down a comfy bureaucratic job at NEI, he says—especially as he was developing diabetic kidney disease and knew he would be needing a transplant (which he got in 1996). But his gamble paid off: Within 4 years he had won tenure and had secure funding for his eye disease research.

    Hjelmeland says it took about 5 years to make the transition—psychologically and logistically—to being sightless. The secret, he says, is “to redefine yourself” in light of your limitations “and be born as a new person with a new identity.” On the whole, he considers himself lucky. “My story is a story of privilege. At every turn of the corner, I have received pretty massive support.”

    Geerat Vermeij, on the other hand, never had to redefine himself. Blinded by glaucoma in early childhood, he has earned an international reputation at UC Davis for his work in a discipline that most people would assume requires a sharp eye: evolutionary biology. For Vermeij, however, a sharp eye depends on the eye of the beholder. He has done extensive fieldwork, using his fingers to trace the subtle evolutionary adaptations that show up in snail fossils over millions of years.

    Unlike Hjelmeland, Vermeij relies heavily on Braille, typing up detailed notes from papers read to him by his assistant or by his wife, Edith. Hundreds of tiny drawers in his office contain specimens, with a strip of stiff paper marked in Braille coiled around each. Vermeij also considers himself fortunate: He's doing exactly what he's always wanted to do and admits, he says, to being “surprised at how little problem I have had with [other] scientists” on account of his disability.

    Others have faced greater obstacles. Jim Caldwell, an IBM engineer currently on loan to the St. David's Foundation in Austin, Texas, was badly burned, blinded, and paralyzed from the waist down in 1962 when a can of boat stove fuel being used in a backyard barbecue fire exploded. After the accident, Caldwell says, his doctors at the hospital “said I had no rehab potential.” Prospective employers wrote him off, too. “I couldn't even get into an employer's office to talk about a job,” he says. He finally landed a job as a computer programmer at the Baltimore phone company in 1966.

    Budding scientists get much more help these days with launching a career. Besides NSF's disabilities program, which promotes education for children and career development for adults, the American Association for the Advancement of Science (publisher of Science) has been serving as an information clearinghouse, as a resource for teachers, and as an advocate through its Project on Science, Technology, and Disability, started more than 20 years ago. Such programs have helped change attitudes in the scientific community. Rebecca Jackson, who lost the use of her legs after falling from a balcony in 1979, says that when she went to scientific meetings a decade ago, there were no services to accommodate disabilities. Now, says Jackson, an endocrinologist at Ohio State University in Columbus, “people call and ask if you have any special needs.”

    Whereas ramps into a building are essential to wheelchair-bound scientists, on-ramps to the information highway have been key to keeping people with disabilities in science careers. Take NSF's Scadden, who went through school with the aid of a slate and stylus after losing his sight from a household accident at age 9. Now he has a panoply of technologies at his fingertips, including a computer with a talking screen reader and a machine called “Reading Edge” that scans documents into his computer or reads them with a voice synthesizer. He takes notes with a “Braille Lite,” a portable device with a refreshable display into which he can type information in Braille and retrieve it in Braille or in a digital voice.

    Gadgets are also making life easier for Ken Barner, an assistant professor of engineering at the University of Delaware in Newark. In 1983, during his freshman year, a freak injury from diving into a pile of hay at a fraternity party left him paralyzed from the neck down. Now studying ways to form tactile representations of scientific concepts, Barner has mostly forsaken arduously pecking things out on a keyboard in favor of “Dragon Naturally Speaking,” a computer program that recognizes his speech and turns it into writing.

    Hjelmeland says there's no way he could do his current job without such tools. “To me it was a no-brainer when I lost my eyesight” to go digital, he says. Without computers, he says, “I'd be dead in the water.” Hjelmeland spends about half his day on the computer and the other half talking to people. Fortunately he has a “killer memory,” as he has never learned Braille and relies only on prompts from a tape recorder when he lectures. He doesn't do hands-on work in the lab, but as colleague Claire Gelfman notes, “most professors are more involved in grant writing and mentoring students than doing the actual bench work.” Working with a blind lab chief, she says, “requires you to be more descriptive when presenting data.” However, Gelfman adds, “those of us who work with him do not see him as a disabled person at all.”

    But there's no way Hjelmeland could keep up without help. He prefers to have papers read to him rather than scanned and spoken by machine. “My independence is not as important to me as getting the job done,” he says. Assistant Rosemary Motz guides him to appointments, sets up computer-based video projectors in class, and reads papers to him.

    Others with disabilities, however, are forced to give up long-cherished goals. After losing his eyesight to diabetes while in graduate school at the University of Chicago in 1981, Mark Dubnick had to abandon plans to do bench research as a mammalian cell culture geneticist. He retooled as a computer scientist at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland. But his heart is still in the lab.

    Karen Sadler also reluctantly let go of her dream. Sadler, who is deaf, went back to college at the age of 34 in 1990 to study neuroscience at the University of Pittsburgh. She says she abandoned the idea of having her own lab when she realized how long it was going to take to attain her goal. She was also swamped trying to keep up with course work. In class, she says, she would struggle to take notes while watching a sign-language interpreter, the professor, the blackboard, and maybe an overhead projector. “It is just too much input,” Sadler says. She eventually switched to a Ph.D. program in education, where she is developing materials to educate the deaf community about AIDS and to teach science to deaf people.

    People with disabilities have to know exactly what their needs and limitations are if they want the world to have confidence in their abilities, says Sheryl Burgstahler, who directs a 6-year-old program called DO-IT at the University of Washington, Seattle, which helps high school students with disabilities get into science. Dubnick, for example, says that although he had “excellent support” from his department, it was he who had to figure out how to fulfill his Ph.D. requirements—by obtaining a grant to hire an assistant to perform lab work. “You have to be extremely independent and extremely brass-ballsy to make it,” he says.

    And you have to have faith. Even during the darkest times, says Hjelmeland, “I always thought things would be all right.”


    Opening New Vistas for Blind Scientists

    1. Constance Holden

    Technology offers a wealth of opportunities for scientists with disabilities. Blind researchers, in particular, are benefiting from innovations and gadgetry. Here's a sampling of what's in the works:

    • Sightless math. Math is ordinarily represented in Braille by the Nemeth code, a variation on literary Braille that is tough to follow. So researchers are devising other ways to help blind people absorb mathematics. One project is the Audio System for Technical Readings developed by T. V. Raman, a blind computer scientist who works at Adobe in San Jose, California. Notation is converted to LaTeX, a system that puts symbols in the order in which they are spoken. Then an Audio Formatting Language turns symbols into varied speech and nonspeech sounds. Superscripts, for example, come across in a raised voice.
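    The core idea—walking through notation in spoken order and cueing structure like superscripts with a change of voice—can be sketched in a few lines. This is a hypothetical illustration of the concept only, not Raman's actual system or its algorithm; the function name and the "[raised]" pitch cue are assumptions.

```python
# Illustrative sketch: render a tiny LaTeX-style expression as a
# sequence of spoken tokens, marking superscripts with a raised-voice
# cue (as the Audio System for Technical Readings reportedly does).

def speak_math(expr: str) -> list[str]:
    """Convert e.g. 'x^{2} + y' into a list of spoken tokens."""
    words = {"+": "plus", "-": "minus", "=": "equals"}
    tokens = []
    i = 0
    while i < len(expr):
        ch = expr[i]
        if ch.isspace():
            i += 1
        elif ch == "^" and i + 1 < len(expr) and expr[i + 1] == "{":
            j = expr.index("}", i)            # end of superscript group
            # the pitch cue stands in for speaking in a raised voice
            tokens.append(f"[raised] {expr[i + 2:j]}")
            i = j + 1
        elif ch in words:
            tokens.append(words[ch])
            i += 1
        else:
            tokens.append(ch)
            i += 1
    return tokens

print(speak_math("x^{2} + y"))
# ['x', '[raised] 2', 'plus', 'y']
```

    A real renderer would drive a speech synthesizer with actual pitch changes rather than text tags, but the ordering-plus-cueing structure is the same.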

      Shape of things to come.

      Blind chemist William Skawinski and colleague Carol Venanzi show off prototype of 3D molecule.

    • Virtual reality by feel. Ken Barner of the University of Delaware in Newark is developing what he calls a sense-of-touch, or haptic, environment. He is refining a computer-controlled, thimblelike device that, as you move your finger, presses against your fingertip in patterns that convey location on scientific plots or weather maps, for instance. A speech synthesizer supplies each data point.
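    The interaction loop such a device needs is simple to picture: as the finger moves, look up the nearest data point, press with a force related to its value, and hand the numbers to a speech synthesizer. The sketch below is a guess at that loop for a 1-D plot; it is not Barner's implementation, and the function name, pressure scaling, and spoken format are all assumptions.

```python
# Illustrative sketch: map a finger's x-position on a data plot to a
# haptic pressure level plus a spoken readout of the nearest point.

def haptic_readout(finger_x, points, max_pressure=1.0):
    """Return (pressure, speech) for the data point nearest finger_x.

    points: list of (x, y) pairs, with y assumed normalized to [0, 1]
    so pressure stays within the device's range.
    """
    x, y = min(points, key=lambda p: abs(p[0] - finger_x))
    pressure = max_pressure * y      # stronger press for higher values
    speech = f"x {x}, y {y}"         # string sent to a speech synthesizer
    return pressure, speech

pressure, speech = haptic_readout(2.2, [(1, 0.25), (2, 0.5), (3, 1.0)])
# nearest point is (2, 0.5): pressure 0.5, speech "x 2, y 0.5"
```

    In a real system this lookup would run continuously as the fingertip tracks across the plot, with the thimble's actuator driven by the pressure value.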

    • Molecules in 3D. Blind chemist William Skawinski of the New Jersey Institute of Technology in Newark has devised a technique using laser stereolithography for building three-dimensional (3D) replicas of molecules from computer graphics. A computer directs the construction of an enzyme model, for example, from photosensitive liquid resin that is modeled by an ultraviolet laser and cured in an ultraviolet oven. Scadden calls the feat “really incredible.” Building on this work, Anshuman Razdan of Arizona State University in Tempe is creating a library of 3D models for teaching chemistry and biology.
