News this Week

Science  16 Oct 1998:
Vol. 282, Issue 5388, pp. 386


    Possibly Vast Greenhouse Gas Sponge Ignites Controversy

    1. Jocelyn Kaiser

    As greenhouse warming experts try to predict how much the world's climate may heat up in the next century, they keep bumping up against a mystery: Where does much of the carbon dioxide (CO2) pumped into the air actually end up? Answering this question could have huge ramifications for nations that ratify the climate change treaty signed in Kyoto, Japan, last December: Countries shown to harbor substantial carbon “sinks” could argue that an ability to soak up excess CO2 should offset their emissions.

    If those arguments prevail, it appears that North America may have drawn the winning ticket in the carbon sink sweepstakes. In what is shaping up as one of the most controversial findings yet to emerge in the greenhouse gas debate, a team of researchers on page 442 of this issue of Science presents evidence that North America sops up a whopping 1.7 petagrams of carbon a year—enough to suck up every ton of carbon discharged annually by fossil fuel burning in Canada and the United States. The magnitude of the apparent sink, says team member Jorge Sarmiento of Princeton University, “is going to be a lightning rod for all sorts of criticism.”

    Indeed, critics have already thrown up a fistful of red flags, attacking the study for everything from its methodology to its implications. “There's a huge amount of skepticism about the result,” says ecologist David Schimel of the National Center for Atmospheric Research in Boulder, Colorado, who notes that at least one other group has calculated a much smaller North American sink. Moreover, a second paper in this issue—by a group led by Oliver Phillips of the University of Leeds in the United Kingdom (p. 439)—adds to the uncertainty. It points to a carbon sink in tropical South America so large that it is hard to reconcile with the Sarmiento group's results.

    Disappearing act. Contours show how predicted CO2 levels (in parts per million) would change if there were no terrestrial uptake in North America. Measured levels, however, decline rather than increase from west to east across North America, implying a large carbon sink.


    Especially worrisome, Schimel and others say, is that groups opposed to the Kyoto treaty will seize on the estimate to argue that the United States doesn't need to reduce its emissions to comply with the accord. “We're all really concerned that many people will find it convenient to accept the result,” Schimel says. At the same time, scientists say this sort of calculation is a key step toward honing our understanding of the global carbon cycle. “The authors deserve a lot of credit for sticking their necks out,” says climate modeler Inez Fung of the University of California, Berkeley.

    At the heart of the debate is a simple math problem, resembling a chronic inability to balance one's checkbook, that has bedeviled scientists for nearly 2 decades. The balance sheet looks like this: Less than half of the 7.1 petagrams of carbon produced by human activity each year stays in the atmosphere. Although about 2 petagrams go into the oceans, another 1.1 to 2.2 petagrams appear to vanish into the land, likely taken up by plants during photosynthesis. Figuring out what's going on—whether the extra CO2 is spurring faster tree growth, for example, or carbon is disappearing into soils—is crucial to learning whether reforestation and other actions might help stave off warming (Science, 24 July, p. 504). “If you understood the mechanism, you'd be in a much better position to say whether the sink will continue,” says biogeochemist Richard Houghton of Woods Hole Research Center in Massachusetts.
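    The balance-sheet arithmetic above can be sketched in a few lines of Python. The emission and ocean figures are the article's; the airborne amount (here 3.2 petagrams, consistent with "less than half" of 7.1) is an illustrative assumption, and the land sink is simply the residual:

```python
# Global carbon budget figures quoted in the article (petagrams of carbon per year).
emissions = 7.1      # carbon produced by human activity each year
airborne = 3.2       # illustrative: "less than half" stays in the atmosphere
ocean_uptake = 2.0   # "about 2 petagrams go into the oceans"

# The terrestrial "missing sink" is whatever the budget leaves unaccounted for.
land_sink = emissions - airborne - ocean_uptake
print(f"Implied land sink: {land_sink:.1f} Pg C/yr")
```

    With these numbers the residual is 1.9 petagrams, inside the article's 1.1-to-2.2 range; shifting the assumed airborne amount moves the residual accordingly, which is one reason the land sink is so uncertain.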

    To get at how much carbon the different land masses are absorbing, Sarmiento and his colleagues with the Carbon Modeling Consortium (CMC), based at Princeton, used an approach called inversion modeling. They first gathered data on atmospheric CO2 levels taken from 1988 to 1992 at 63 ocean-sampling stations. Next, they divided the world into three regions—Eurasia, North America, and the rest—then fed the CO2 data into two mathematical models: one that estimates how much carbon the oceans absorb and release, and another that gauges how CO2 is spread across the globe by wind currents. When they fitted their models to the data, they found that, surprisingly, CO2 levels dropped off slightly from west to east across North America—even though fossil fuel emissions should boost levels in the east. That meant there must be a big carbon sink in North America.
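    The inversion approach can be illustrated with a toy calculation. This sketch is not the CMC model: the transport matrix, fluxes, and noise below are all made up, but it shows the core idea of solving for the regional sources and sinks that best reproduce CO2 measurements at the sampling stations:

```python
import numpy as np

# Toy inversion: a transport matrix A maps unknown regional fluxes x
# (Pg C/yr, negative = sink) to CO2 anomalies seen at sampling stations,
# and x is recovered by least squares from noisy "observations".
rng = np.random.default_rng(0)
n_stations, n_regions = 63, 3              # 63 stations, 3 regions, as in the study
A = rng.random((n_stations, n_regions))    # hypothetical transport sensitivities
true_flux = np.array([-0.3, -1.7, 0.4])    # made-up fluxes for the 3 regions
obs = A @ true_flux + rng.normal(0.0, 0.01, n_stations)

estimate, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(np.round(estimate, 2))
```

    In a real inversion the transport sensitivities come from models of ocean uptake and atmospheric circulation, and, as the critics quoted later note, small errors in those models can shift the recovered fluxes substantially.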

    Straining belief among other experts is the sink's estimated magnitude—1.7 petagrams of carbon per year, plus or minus 0.5 petagrams—roughly equaling the continent's fossil fuel carbon emissions of 1.6 petagrams. “It's hard for me to know where that much carbon could be accumulating in North America,” says Houghton. Data from forest inventories suggest the U.S. sink absorbs only 0.2 to 0.3 petagrams of carbon a year. Sarmiento's team suggests that the inventories have missed a lot of forest regrowth on abandoned farmland and formerly logged forests in the east fertilized by CO2 or nitrogen pollution, and that they fail to account for carbon stored in soils and wetlands. But the result also suggests that Eurasia's immense forests are taking up only a fifth as much carbon as U.S. forests. “Ecologically, it seems almost incomprehensible,” Schimel says.

    Several modelers contend that the study is riddled with uncertainties. For one thing, the two models used to gauge carbon flux “could easily be off by just a little bit, and you get a very different conclusion,” says Fung. The results could also be skewed by a dearth of data from the North Atlantic, as the authors note in their paper. For example, the group threw out readings off Sable Island, Nova Scotia, because the data were unreliable, says team member Pieter Tans of the National Oceanic and Atmospheric Administration. Factoring in Sable Island, the sink shrinks by 30%.

    Even if the results do hold up, observers note, the CMC study's time period includes the 1991 Mount Pinatubo eruption, which led to cooler, wetter conditions and a much higher global carbon uptake than usual. “Some of this sink must clearly be … transient,” says Martin Heimann, a modeler at the Max Planck Institute for Biogeochemistry in Jena, Germany. And the findings clash with those from a team led by Peter Rayner of Monash University in Australia, which calculates a North American sink of only 0.6 petagrams of carbon from 1988 to 1992—about one-third the CMC group's estimate. The Australian group's results will be published next year in Tellus.

    The CMC team acknowledges that its results strain credibility. “I have trouble quite believing” the size of the sink, says Tans, adding that “we're pushing the data pretty far.” But, says Sarmiento, “we've really carefully analyzed the data in a lot of different ways.” U.S. Geological Survey geochemist Eric Sundquist agrees: “The paper is a credible and rigorous interpretation of the available data.”

    More and better data, including direct measurements of carbon storage and flux over land, will be needed to narrow the gap between the two studies. Already, this approach has turned up a big surprise: According to the U.K. group's results, undisturbed tropical forests in South America are getting thicker and may account for about 40% of the missing sink, a figure seemingly at odds with the CMC group's inversion results. The study is the first to pool 2 decades of measurements of carbon storage, or biomass, from more than 150 tropical forest plots worldwide. “This illustrates the types of studies that really need to be integrated,” says Sundquist.

    Before this research has time to mature, however, the possibly vast North American carbon sink could be the subject of heated debate in climate treaty implementation talks next month in Buenos Aires, Argentina. If the CMC team's findings are accurate, “the most obvious conclusion” would be that “there's no need for the U.S. and Canada to curb emissions,” says Heimann. Indeed, Steven Crookshank of the American Petroleum Institute says the study “calls into question the scientific basis on which we're making these decisions, when we still don't know if the United States is even emitting any carbon on net.”

    But some observers argue that a large North American sink should not be an excuse to go easy on emission controls. Maturing forests eventually stop storing carbon, so “this part of the missing sink [won't] be with us forever or even much longer,” says atmospheric physicist Michael Oppenheimer of the Environmental Defense Fund in New York City. “The existence of the sink isn't important. What's important is the changes in the sink.”


    California Adopts Controversial Standards

    1. Gretchen Vogel

    Third-graders in California will be taught about the periodic table, and sixth-graders will learn about Earth's “lithospheric plates” under a new set of standards* approved last week by the state Board of Education. The standards—which will be used to revise the state curriculum, set guidelines for textbooks, and develop statewide tests—have been sharply attacked by many science education reformers, who contend that they focus too much on detailed knowledge and too little on concepts. Although the board's action appears to put an end to the controversy, critics are hoping that the winner of next month's gubernatorial race will revive the debate.

    The standards reflect California's first attempt to spell out what students in kindergarten through 12th grade should learn about science. They follow on the heels of mathematics standards that were even more hotly contested before their adoption last December (Science, 29 August 1997, p. 1194). New tests for the state's 5.5 million students are scheduled to be ready in 2000—the same year public school textbooks will have to meet new guidelines. Those guidelines are expected to influence science teaching across the country, as California represents more than 10% of the national textbook market.

    Standard deviation. California's new science standards introduce the periodic table in grade 3, while those developed by the National Academy of Sciences discourage use of the terms “atom” and “molecule” before high school.


    Last Friday's unanimous vote by the board came after a final flurry of lobbying and letter-writing by more than a dozen scientific societies (including the American Association for the Advancement of Science, which publishes Science). Some of these groups offered to help rewrite the final draft to bring it into line with National Science Education Standards issued in 1996 by the National Academy of Sciences (NAS). “It doesn't match the [national] standards in any way,” says NAS President Bruce Alberts. He and others believe that the state standards contain so much factual material that teachers will be forced to skip more in-depth learning activities that would give students a better understanding of the scientific process.

    But others praise the California standards as a challenging but realistic set of expectations for students. “I think they're perfect,” says Michael Morgan, a chemistry teacher and chair of the science department at Francisco Bravo Medical Magnet School in Los Angeles, who helped to draft the document. “The average student with a caring teacher can get through this.”

    At the heart of the debate is the role of the state standards. Should they represent a realistic goal for all students, or should they be a challenge for even the brightest ones? Supporters say the new standards set high expectations and will prepare students for tomorrow's technology-driven economy. The California standards “are not designed to be a description of basic literacy,” says biologist Stan Metzenberg of California State University in Northridge, one of the lead consultants on the writing committee. “It's obviously much more than what you might expect every student to leave high school with.” But the standards will provide a basis for tests that will allow school districts and parents to gauge how well students are doing, he adds.

    In contrast, detractors fear that the quantity of material required by the standards will drive students away from science by making it unappealing. “These standards are so chock-full of factoids,” says American Physical Society President Andrew Sessler, “the only way you can get them across is by rote learning.” Critics also complain that abstract concepts are introduced too early (see figure). “When you start teaching first- and third-graders about abstract things like atoms and molecules,” says Alberts, “what we actually do is not have kids understand anything.”

    The state board is expected in the next few weeks to form a committee that will draft a set of curriculum frameworks based on the standards. But opponents are clinging to one last hope. “My hope is that the next governor takes care of this” by commissioning a major overhaul of the standards, says Alberts. Such a decision, say political observers, might well set a new standard for controversy.


    Tight Budget Could Shut Down MIT Accelerator

    1. David Malakoff

    Unless the U.S. government finds more money for medium-energy physics research, the Siberian Snake may never slither into the Massachusetts Institute of Technology's (MIT's) Bates Linear Accelerator. Last week, a government advisory panel recommended that the MIT accelerator be shut down in 2 years and that other nuclear physics experiments elsewhere be abandoned if the Department of Energy (DOE) and the National Science Foundation do not boost funding for the field. DOE officials plan to use the report * to convince the Administration to do exactly that in its upcoming 2000 budget. If they succeed, physicists at the suburban Boston laboratory will be able to complete studies that require installation of the snake, a ring of magnets that organizes the accelerator's beam of electrons.

    The Bates facility, which has operated since 1968, has been a training ground for many medium-energy physicists, who explore the properties of the atomic nucleus, including the forces that bind it together. However, 2 years ago DOE opened the $600 million Continuous Electron Beam Accelerator Facility at the Thomas Jefferson National Accelerator Facility in Newport News, Virginia, a larger facility that provides researchers with higher energy electron beams.

    Despite the increased capacity, DOE funding for the $116 million program has failed to keep pace with inflation over the last few years and has fallen at least 10% below levels suggested in a 1996 plan. “The budget pressure has been building—not everything can continue under a flat [funding] scenario,” says James Symon, a physicist at the Lawrence Berkeley National Laboratory in California. In June, Symon was appointed head of an 11-member panel of the Nuclear Science Advisory Committee (NSAC) charged with recommending how scarce DOE funds should be spent.

    Symon and others hope their report will help the department win a 10% to 15% increase for its entire $323 million nuclear physics program in the Administration's 2000 budget request to Congress now being assembled. A 6-year boost would enable Bates's 85 employees to finish three experiments scheduled through 2004 and plan for “an orderly shutdown without disrupting a lot of people's careers,” says physicist Konrad Gelbke of Michigan State University in East Lansing, who chairs the NSAC. New funds would also preserve experiments planned for the Alternating Gradient Synchrotron at Brookhaven National Laboratory in New York, which produces hadron particle beams prized by physicists. The experiments are imperiled by the loss of funding from DOE's high-energy physics program at the end of 1999.

    The squeeze on Bates is caused in part by the growing demands of the Jefferson accelerator, with 423 full-time staff and more than 1000 users. Jefferson officials have estimated their $70 million annual budget needs to be raised by at least $3 million to avoid cutting back on experiment time. “The hard decision is whether you need to have two electron machines,” says Hamish Robertson, a panel member from the University of Washington, Seattle.

    Funds saved by canceling the hadron experiments and closing Bates—which has a budget of about $8 million a year—would also be used to preserve smaller grants to university-based researchers. DOE spends about $15 million on 40 projects at 32 universities. The panel noted that the projects, most of which have budgets of less than $500,000, help maintain “a balanced scientific program.”

    At MIT, however, officials are bullish that their accelerator will survive. “The quality of our science is not the issue,” says Bates director Richard Milner, noting that the Symon report gave “excellent” and “outstanding” ratings to the facility's experiments. Such arguments have already won over one key DOE official. Acting nuclear science chief Dennis Kovar says the report “is going to help us make the case that added funds would realize real benefits to research.”


    Sea Otter Declines Blamed on Hungry Killers

    1. Jocelyn Kaiser

    Sea otters spend much of their days playing, drifting with the tide, and filling their bellies with the soft meat of shellfish and sea urchins—a lazy lifestyle that many of us might envy. But ecologists know that sea otters off the Alaskan coast, at least, play a pivotal role in marine ecosystems: By dining on sea urchins, the animals help preserve kelp forests that feed a range of species, from barnacles to bald eagles. Now, however, this “poster child of marine near-shore ecology,” as Robert Paine of the University of Washington, Seattle, calls the sea otter, appears to be fighting for its survival.

    On page 473, a team led by James Estes of the Biological Resources Division of the U.S. Geological Survey and the University of California, Santa Cruz (UCSC), has documented a 90% crash in sea otter populations in western Alaska's Aleutian Islands since 1990, with devastating effects on kelp forests. The reason for the crash, Estes believes, is that killer whales, never before known to eat sea otters, appear to be snacking on the creatures, apparently because their usual food source—seals and sea lions—is declining. “This reflects real desperation for the orca. They're eating popcorn instead of steaks,” says ecologist Paul Dayton of the Scripps Institution of Oceanography in La Jolla, California.

    Experts call the study a vivid example of an ecological cascade operating on a vast scale. “It's a heroic effort, and it's a terrific find,” says Paine. The research has policy implications, too: Estes asserts that the chain of events leading to the otter's decline may have been triggered by a boom in commercial fishing in the Bering Sea. “It raises the possibility that overfishing can have a wide array of effects on species that we wouldn't expect to be impacted,” he says.

    Once hunted to the brink of extinction by fur traders, Alaska's sea otters resurged in the 20th century. But near some Aleutian islands, the otters were slow to rebound. In the 1970s, Estes and others found that off these islands, sea urchins had mowed down Pacific kelp beds, depriving fish of vital habitat and leaving the sea floor barren. Around islands with healthy otter populations, however, the kelp and its associated species flourished.

    Despite the patchiness of their recovery, otter populations seemed healthy overall through the 1980s. Beginning around 1990, however, Estes's group and others noticed that the animals were becoming scarcer. Along a necklace of the Aleutians spanning 800 kilometers, Estes's group estimates that otter numbers have plummeted from about 53,000 in the 1970s to 6000 last year. The ecological consequences have been severe, the team reports: On Adak Island, for example, where otters now number about 300, sea urchins are booming and kelp density is down 12-fold.

    A clue to this puzzling decline appeared in 1991, when researchers witnessed for the first time a killer whale eating an otter. The whales had been thought to shun otters because the animals provide few calories compared to larger, fat-laden harbor seals and Steller sea lions. Since that initial shocker, ecologists have documented 12 cases of orcas eating otters, often swallowing them whole or first crashing down on the otters, perhaps to stun them.

    Estes says he “was really skeptical” at first that the orca attacks could explain the otter declines. But if otters were dying of disease or starvation, their carcasses should be washing up on beaches—and they are not. A series of observations persuaded his group that orcas are the likely culprit. Estes and his colleagues figured that, given the thousands of days they have logged watching otters and the six attacks they've seen, the probability that attacks had occurred, unwitnessed, before 1991 was near zero. They also tagged with radio transmitters 17 otters in a lagoon that orcas can't reach and 37 otters in an open bay; over 2 years, deaths were much higher in the bay. “That provided a very startling contrast,” Estes says. Finally, the team calculated that the number of observed killer whale attacks on otters, extrapolated to the general population, could account entirely for the observed declines. Remarkably, as few as four whales could be decimating otters along 3300 kilometers of shoreline between the Kiska and Seguam islands. Says ecologist Mary Power of the University of California, Berkeley, “It's just mind-blowing that as few as four whales could cause an ecosystem effect over such a huge part of the Earth.”
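    The scale of that extrapolation can be checked with back-of-the-envelope arithmetic on the article's own figures. The 7-year decline window below is an assumption (the drop began around 1990 and the low count is from last year):

```python
# Rough check of the predation extrapolation (article figures; 7-year window assumed).
otters_1970s = 53_000
otters_last_year = 6_000
years = 7
whales = 4                 # "as few as four whales"

eaten = otters_1970s - otters_last_year
otters_per_whale_per_day = eaten / years / whales / 365
print(f"{otters_per_whale_per_day:.1f} otters per whale per day")
```

    A handful of otters per whale per day is a plausible intake for an animal the size of an orca, which is why so few whales could, in principle, drive such a large decline.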

    The researchers don't know exactly what is prompting the whales to eat otters, but they suspect it's related to a plunge in sea lion and seal populations in the western North Pacific since the 1970s. The reason for those declines is itself controversial, although one possibility is that intensified trawler fishing in the Bering Sea has sharply curtailed or altered the food supply for sea lions and seals. But as a National Research Council report noted in 1996, changes in fish populations could also be related to warmer ocean temperatures stemming from a shift in deep currents since the mid-1970s, as well as the local extinction of baleen whales, which has allowed one fish—pollock, which are low in fat—to flourish. Andrew Trites of the University of British Columbia, who has done studies of the sea lion declines sponsored by the fishing industry, says he's scrutinized the timing of the declines and the fisheries buildup and “I don't see a connection.” Dayton tends to agree that overfishing “is too simplistic. … The cause of the killer whale shift is probably very complicated.”

    Estes, however, believes overfishing is the most likely suspect—and he warns that the lesson applies far beyond Alaska. Fisheries are collapsing around the world, he notes, and as with the otters, if scientists looked more closely they might find the effects “very widely manifested in coastal ecosystems. We were lucky just to have been sitting on this and seen it when it happened. But very likely, they're the sorts of things we should be worrying about elsewhere.”


    Deep Chill Triggers Record Ozone Hole

    1. Richard A. Kerr

    In theory, the ozone hole that reopens each year over Antarctica should gradually heal as international regulations choke off the flow of ozone-destroying chlorine compounds into the stratosphere. But little about atmospheric chemistry is that simple, as this year's Antarctic ozone hole testifies. It is almost as severe as any seen before, and it stretches over an area larger than North America, a new record. Unprecedented stratospheric cold is driving the extreme ozone destruction, say researchers. Some of the high-altitude chill, they add, may be a counterintuitive effect of the accumulating greenhouse gases that seem to be warming the lower atmosphere.

    This year's Antarctic ozone hole is a whopper in every sense. Seen from a National Oceanic and Atmospheric Administration (NOAA) satellite, the area of depleted ozone extends over about 26 million square kilometers, the largest observed since annual holes first appeared in the late 1970s. Measured by balloon-borne instruments ascending from the South Pole, the layer of total ozone destruction extends from an altitude of 15 kilometers to 21 kilometers. That's higher than ever seen before, says ozone researcher David Hofmann of NOAA in Boulder, Colorado. And by 5 October, the total amount of ozone over the South Pole had dropped to 92 Dobson units, Hofmann says; only in 1993 was the ozone hole deeper, when the catalytic effect of debris from the 1991 eruption of Mount Pinatubo in the Philippines helped drive ozone down to 88 Dobson units. (Normally there are about 280 Dobson units of ozone over the pole.)
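    The depth of this year's hole relative to a normal ozone column follows directly from the Dobson-unit figures above:

```python
# South Pole ozone column, in Dobson units, from the figures quoted above.
normal, this_year, record_1993 = 280, 92, 88

# Fraction of the normal column destroyed this year.
depletion = 1 - this_year / normal
print(f"{depletion:.0%} of the normal ozone column destroyed")
```

    That works out to roughly two-thirds of the column gone; only the post-Pinatubo 1993 hole, at 88 Dobson units, was deeper.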

    The deep chill that gripped the Antarctic stratosphere this past austral winter is to blame, say Hofmann and other scientists. Every winter, it gets cold enough there—below −78°C—to form the icy stratospheric clouds that catalytically accelerate the destruction of ozone by the chlorine from chlorofluorocarbons (CFCs). This year, the area cold enough to form polar stratospheric clouds “is larger than anything we've seen to date” for the same time of year, says meteorologist Melvyn Gelman of NOAA's Climate Prediction Center in Camp Springs, Maryland. “There's much less heat being pumped up into the stratosphere than usual,” he says.

    No one knows just why, but an underlying cooling trend in the stratosphere—induced by, of all things, greenhouse gases—is probably aggravating the situation, researchers say. Although greenhouse gases warm the lower atmosphere, they cool the stratosphere by radiating heat to space, creating an “icehouse effect.” Recent computer modeling has suggested that greenhouse cooling might greatly worsen the nascent ozone hole over the Arctic (Science, 10 April, p. 202). And a new modeling study, published in the 1 October Geophysical Research Letters by M. Dameris of the German space agency DLR in Oberpfaffenhofen and colleagues, points to effects on Antarctic ozone, too. By 2015, their model says, ozone at lower latitudes will begin recovering as CFC controls take effect, but the chilling effect of greenhouse gases will have kept the Antarctic ozone hole as severe as ever.


    Seeing the Universe's Red Dawn

    1. Ann Finkbeiner*
    1. Ann Finkbeiner is a science writer in Baltimore.

    Hidden in a corner of the nondescript patch of sky called the Hubble Deep Field, astronomers have found what may be the farthest and oldest galaxies ever seen. So distant that the expansion of the universe has stretched their light all the way into the infrared region of the spectrum, they may have formed just a few hundred million years after the universe itself. If the universe was born 13 billion years ago, they are probably 12.3 billion years old.

    The discovery, announced last week at a NASA press conference, is a follow-up to the original Deep Field exposure by the Hubble Space Telescope (HST) in 1996. In an exposure lasting 10 days, the HST soaked up light from that patch of sky, revealing a swarm of blue, silver, and gold galaxies 11.7 billion years old. Those galaxies originally shone brightly in ultraviolet light because of the hot young stars populating them, but the expansion of the universe has “reddened” the ultraviolet into visible wavelengths. Even more distant galaxies, reddened all the way to the infrared, would have eluded the original Deep Field observation.

    Last January, Rodger Thompson of the Steward Observatory at the University of Arizona, Tucson, and his team went looking for those galaxies by aiming HST's infrared camera, called NICMOS, at one-eighth of the Deep Field for 36 hours. In a corner of the Deep Field that held more than 300 galaxies in visible light, NICMOS found 100 more. The light from most of those appeared to have been reddened by dust, not great distance, but the light of 10 of the dimmest ones seemed to have been stretched all the way from ultraviolet to infrared, giving them redshifts of 5.0 to 7.0. That would make them the oldest, farthest objects known. “What we see may be the first stage of galaxies in formation,” said Alan Dressler, an astronomer at the Observatories of the Carnegie Institution in Pasadena.

    “Next, we have to sort them out and find out what they are,” said Thompson—“how similar these are to everyday galaxies.” For now, these 10 ancient objects are too dim for anyone to see their shapes, estimate how quickly they're forming stars, pinpoint their distance, or even decide whether they're small galaxies or pieces of galaxies. One of them, Thompson thinks, looks a little like an edge-on spiral galaxy and another like a small elliptical. But their identities won't be certain until they're observed with HST's successor, the Next Generation Space Telescope, to be launched in 2007.

    NICMOS did reveal details about other galaxies—ones that were seen in the original Deep Field, where they looked, says Thompson, like blue “jumbled-up bunches of things.” These jumbles, which some astronomers had speculated might be pieces of galaxies in the process of merging, turned out to be brilliant knots of new stars forming among older, redder stars of fully formed spiral galaxies. “The [early] universe was better organized than we thought,” says Dressler. Next, theorists have to figure out how the universe managed to organize itself into galaxies in only a few hundred million years.


    1999 Budget: One Step Forward, Two Back

    1. Elizabeth Pennisi

    Last week was a bittersweet moment for agricultural researchers in the United States, as Congress finally agreed on a 1999 budget for the U.S. Department of Agriculture (USDA). The good news is that the bill provides a 23% jump in funding for the department's centerpiece competitive research program, the National Research Initiative (NRI). But the bad news is that Congress killed funding for a major new research initiative and axed a smaller research program focused on rural communities to help pay for NRI's increase. “Every little bit helps,” says Louis Sherman, a plant molecular biologist at Purdue University in West Lafayette, Indiana. “But we're disappointed that the funding for the [new research] initiative was not appropriated.”

    Last June, Congress raised the hopes of plant and animal scientists by approving a $600 million, 5-year program that would support agricultural genomics, nutrition, food safety, biotechnology, and natural resources management (Science, 3 April, p. 23). The bill's sponsors and researchers saw the Initiative for Future Agriculture and Food Systems as a way to revolutionize agricultural research with large grants to multi-institution collaborations attacking major problems. Researchers also welcomed a move by the Senate to double the research component of the department's $100-million-a-year Fund for Rural America, now in its second year. That program supports work on animal waste management, alternative fuels, and other subjects important to rural communities.

    But when it came time to pay for these ventures, Congress balked. A panel of conferees from the House and Senate cut both research programs and shifted some of the funds into other activities. Instead of the $120 million that the Senate had recommended for the futures initiative, the conferees upped NRI's budget from $97 million to $119 million. The Agricultural Research Service, which supports USDA scientists, received an additional $26 million, to $782 million, and the $222 million in formula funds allotted to states was increased by 7% instead of the Administration's 3% request. But Terry Nipp, a lobbyist for directors of agricultural experiment stations and extension programs, says that “[the additions] in no way get close to the money we lost. We're deeply disappointed.”

    The increase for NRI is, at least, welcome news for a program whose budget has been stuck at its initial level of $100 million a year for 7 years—even though Congress itself had once agreed that it should grow to $500 million. That steady state has prevented NRI from making the type of larger, multidisciplinary awards now standard for many cutting-edge research projects, or from supporting work that uses the latest molecular techniques in plant and animal science.

    Even NRI's new budget falls well short of President Clinton's request for $130 million, however, and the increase is not spread evenly across the program. The funding bill doubles NRI's spending for food safety research, to $16 million, and adds $4 million for plant genome studies and $5 million for animal genome studies. These efforts would have been part of the new foods initiative, but it's logical for NRI to fund them, says Sally Rockey, a deputy NRI administrator, because it is already supporting projects in these fields. But some things are lost in the trade-off, says Nipp. NRI-funded projects do not have the education and extension components that were to be part of the new initiative, and traditionally NRI awards are small, single-institution grants, he notes. “The NRI is not structured to do some of the things that the initiative was supposed to accomplish.”

    Although the bill's ultimate fate is uncertain—Clinton vetoed it in an attempt to win more emergency aid for farmers—most observers expect its provisions to be retained in a catch-all budget bill passed before Congress adjourns for the year. But supporters of the new initiative are not yet ready to throw in the towel. When Congress considers the agriculture budget next year, says Rockey, “we would hope we can resurrect it.”


    Male Mating Blocks New Cuckoo Species

    1. Virginia Morell

    The common cuckoo thrives by hoodwinking other birds, and it has mystified biologists as well. Cuckoos lay their eggs in the nests of many other bird species, but individual females specialize in nests of just a single species, leaving eggs that match those of the host bird to fool it into tending them. In spite of this specialization, the cuckoo itself remains a single species. Now, a study on page 471 reports that the reason lies not in the struggle between host and parasite, but in another ancient battle—between male and female.

    Genetic analyses of cuckoos reveal that even as they specialize on different hosts, the different reproductive strategies of males and females prevent speciation. “They're at odds with each other,” explains co-author Karen Marchetti, an evolutionary biologist at the University of California (UC), San Diego. Although it's in the interest of the female to mimic a particular host as closely as possible, Marchetti and her colleagues found that the male mates with females that parasitize many different hosts, spreading his genes around. “And that prevents the development of new species,” says Marchetti.

    “They've confirmed what we suspected but has been hard to show,” says Bruce Lyon, an evolutionary biologist at UC Santa Cruz. “These are hard-won data,” he says, noting that the secretive behavior of the cuckoo has hindered earlier efforts to study its reproductive patterns. But the jury is still out on some of the team's theories—for example, that cuckoos pass egg traits such as color, pattern, and size only from mother to daughter, enabling them to produce a variety of egg patterns in spite of the males' homogenizing effect. “It's tantalizing, but it raises more questions than it answers,” says John Eadie, a behavioral ecologist at UC Davis.

    To solve the mystery of how cuckoos specialize without speciating, Marchetti and her co-authors studied the mating patterns of the Japanese common cuckoo. This bird lays its eggs in the nests of three other species, then leaves, letting the hosts perform all the chick-rearing chores. There's always the risk that the host species will learn to recognize foreign eggs and push them out of the nest. “That puts selective pressure on the female cuckoos to lay eggs that match their host's,” explains Marchetti. Such evolutionary pressure is beginning to lead to the formation of “host races,” with eggs specialized for a particular host species—a process that is more advanced in the European cuckoo, in which females lay very distinct eggs for each host. These races would seem to be poised to develop into different species—but the cuckoos don't go that far. “That's the mystery,” says Marchetti.

    The Japanese cuckoo has been carefully studied in the field by one of the authors, ornithologist Hiroshi Nakamura of Shinshu University in Nishinagano, Japan. He noted that just 30 years ago, the cuckoo added the third host, the azure-winged magpie, to its surrogate parent list. To see how this new specialization affected the bird's genetics, Nakamura collected blood samples from 83 adult male and 79 female cuckoos. He also sampled 136 chicks, recording which of the three host nest types the chicks were found in.

    Marchetti then used this material to determine each chick's parentage. Family tree in hand, she could then see where each female's chicks grew up. It quickly became clear, as researchers had guessed from field observations but never shown, that females are typically faithful to their host species, laying eggs in the kind of nest in which they were born. If an egg gets laid in the wrong nest by mistake, as happens about 5% of the time, and if the naïve host rears the chick, the cuckoo can immediately be set on a new evolutionary path, leading to the formation of host races—and potentially a new species.

    But when researchers looked at the chicks' paternity, they found that the males willingly mate with any female, regardless of which host she is attached to. That behavior should block the development of any new species. “It's a conflict between the sexes,” says Marchetti. “The males want to maximize the number of their offspring … [while] the female is under pressure to produce the best matching egg, one the host won't reject.”

    Because the father's genetic contribution cannot foster specialized eggs, the team speculates that egg-mimicry traits are passed only from mother to daughter. That would increase the chances of the females laying well-matched eggs—even though the male mating habits are working against them. But Eadie and others note that this is still a theory, and the team has yet to muster evidence on how egg mimicry comes about and is maintained. “That's still a big, black box,” says Eadie. “This paper has opened the door,” says Paul Harvey, an evolutionary biologist at Oxford University. “We're going to see a lot more” now that genetic techniques can be applied to the cuckoo's sneaky reproductive habits.


    Recovered SOHO Passes Health Check

    1. Alexander Hellemans*
    1. Alexander Hellemans is a writer in Naples, Italy.

    Solar astronomers are breathing a sigh of relief this week. After a couple of months out of contact with Earth, spinning out of control, and exposed to extremes of temperature, the hugely successful Solar and Heliospheric Observatory (SOHO) appears to have come through its ordeal unscathed. As Science went to press, seven of the 12 instruments on board had been switched on successfully and recommissioning is complete for four of them, reports Bernhard Fleck, the European Space Agency (ESA) Project Scientist for SOHO at NASA's Goddard Space Flight Center in Greenbelt, Maryland. (SOHO is a joint project between NASA and ESA.) “We haven't observed any adverse effects due to the thermal stress so far,” Fleck says. He is still surprised at how well the recovery has gone. “It is a miracle,” he says.

    When SOHO spun out of control in June, following a series of ground control errors (Science, 11 September, p. 1585), astronomers feared they would lose a unique vantage point in space to watch the sun as it reaches its 11-year maximum in solar activity around 2001. “The loss would have been a major setback,” says Jørgen Christensen-Dalsgaard of Århus University in Denmark, a SOHO co-investigator.

    Ground engineers reestablished contact with the spacecraft in August and painstakingly worked to slow its spin and recharge its batteries. On 25 September, they succeeded in taking the spacecraft out of its “safehold” mode—a safety position that keeps its arrays pointing toward the sun—and restored SOHO's high-precision pointing mode, in which its attitude is maintained by spinning reaction wheels and controlled by a fine-pointing sun sensor and a star tracker.

    On 5 October, the first instrument was powered up—the Scanning Ultraviolet Spectrometer. The Variability of Irradiance and Luminosity Oscillations instrument was switched on the next day. “All the data we have seen up to now look very good,” says principal investigator Claus Fröhlich of the Physical Meteorological Observatory of the Davos World Radiation Center in Switzerland.

    The interruption of data taking was a setback for researchers studying the long-term oscillations of the sun, however. “We are still searching for the g-modes, the gravity modes, and the longer the time series, the better the signal-to-noise [ratio],” says Fröhlich. Although time has been lost, astronomers who spoke with Science are unanimous in praise for the Goddard recovery team, which has been toiling 7 days a week since June. “It is a major success … following a major failure,” says Christensen-Dalsgaard.


    Planet Hunters Become Weight Watchers

    1. Govert Schilling*
    1. Govert Schilling is an astronomy writer in Utrecht, the Netherlands.

    It's official: The invisible object tugging on a star in the constellation Cancer is a planet. Although it is 3 years since astronomers first detected the telltale wobble in a nearby star, indicating an unseen companion, they could not rule out the possibility that it was a dim star or brown dwarf instead of a planet. The same doubt has accompanied every one of the dozen or so planet candidates detected since then. But by imaging a flattened disk of dust particles surrounding one of those candidate planetary systems, two Arizona astronomers were able to deduce, for the first time, the mass of a suspected planet. The object is, they calculate, less than twice the mass of Jupiter—far too light to be a star or a brown dwarf.

    Astronomers can detect invisible, low-mass companions around other stars by finding tiny periodic changes in the line-of-sight velocity of the parent star as the companion tugs it back and forth. But it is impossible to deduce the true mass of such objects from the stellar wobbles, because astronomers do not know the orientation of the companions' orbits with respect to the line of sight. The same radial-velocity variations in a star could be caused by a relatively low-mass planet in an edge-on orbit, or a heftier object—a brown dwarf, say—in an orbit at a steep angle to the line of sight.

    Telltale tilt. The inclination of 55 Cancri's disk reveals its planet's mass.

    Last year, Carsten Dominik of Leiden University in the Netherlands and colleagues announced that, using the European Space Agency's Infrared Space Observatory, they had detected an excess of far-infrared radiation from the star 55 Cancri, hinting at the existence of a disk of dust and debris orbiting it (Science, 5 December 1997, p. 1707). Hearing Dominik's announcement, David Trilling and Robert Brown of the University of Arizona, Tucson, realized they had a rare opportunity. The star was also known to have a possible exoplanet companion, discovered in 1996 by Geoff Marcy of San Francisco State University and Paul Butler of the Anglo-Australian Observatory in New South Wales. And Trilling and Brown had scheduled an observation of 55 Cancri the following week, using an instrument called CoCo (for cold coronagraph) at NASA's 3-meter Infrared Telescope Facility (IRTF) at Mauna Kea, Hawaii. CoCo is designed to observe cool objects such as clouds of dust. By imaging the disk around 55 Cancri, Trilling and Brown hoped to learn its orientation, and hence the orientation of the planet's orbit.

    A week later, Trilling and Brown blocked out the star's own infrared glare to reveal a flattened disk of cool dust extending to at least 6 billion kilometers from the star. The infrared image implied that the disk is tilted 27 degrees from the plane of the sky. Assuming that the planet around 55 Cancri (which is much closer to the star) orbits in the same plane, its mass could be calculated from the wobbles Marcy and Butler originally measured. This week, Trilling and Brown announced their results, including an estimated mass of 1.9 Jupiters, at the American Astronomical Society's Division of Planetary Sciences meeting in Madison, Wisconsin; a paper is due to appear in Nature next week.
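The arithmetic behind that estimate is simple enough to check. A minimal sketch, assuming the planet's orbit shares the disk's reported 27-degree tilt, and taking a minimum mass of roughly 0.85 Jupiter masses from the wobble fit (an illustrative figure, not quoted in the article):

```python
import math

# Radial-velocity wobbles yield only the "minimum mass" m*sin(i), because
# the orbital inclination i is unknown. If the planet shares the dust
# disk's plane, the disk's measured tilt fixes i and hence the true mass.
def true_mass(m_sin_i_mjup, inclination_deg):
    """True companion mass (in Jupiter masses) from the RV minimum mass."""
    return m_sin_i_mjup / math.sin(math.radians(inclination_deg))

# 0.85 Jupiter masses is an assumed minimum mass for illustration;
# 27 degrees is the disk tilt reported by Trilling and Brown.
print(round(true_mass(0.85, 27.0), 1))  # prints 1.9
```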

    Trilling admits that he cannot prove the crucial assumption—that the disk and the planet orbit in the same plane. “But it would be a very strange dynamical system” if they were not in the same plane, he says. Dominik agrees. “All theories I know about the formation of planets around [sunlike] stars start with a disk around the star, and the planets are formed in that disk. I think it is very reasonable to assume that both are in the same plane.”

    Astronomers now hope to learn whether the 55 Cancri system is one of a kind, or whether other exoplanetary systems have disks that may betray their mass. Omens are good—after all, our sun has its own disk, the distant band of comets called the Kuiper belt. A couple of weeks ago, Trilling and Brown used CoCo again to observe a number of other stars thought to be accompanied by planets. Although he does not want to disclose any results yet, Trilling says that they found one or two more dust disks. “We expect that most systems with planets also have disks,” he says.

    Details of masses would tell us if other planetary systems are like our own; hence, other astronomers are on the lookout. Jane Greaves of the Joint Astronomy Center in Hawaii, who discovered a disk around the nearby star Epsilon Eridani (Science, 10 July, p. 152), says, “We haven't yet tried to image dust around [known exoplanet stars], but we're planning a project for early next year.” It surely won't be long before other exoplanets are put on the dust-disk weighing scales.


    Microchip Arrays Put DNA on the Spot

    1. Robert F. Service


    Researchers are finding new uses for microchip technology. Soon, DNA sequencers, chemical plants, and satellite propulsion systems will all come in credit-card-sized packages.

    DNA chips, which identify DNA by binding it to samples on a substrate, let researchers tune in to the symphony of gene expression. They look set to revolutionize drug discovery and diagnostics, too.

    Last January, a new kind of microchip saved Patrick Baeuerle from going down a multimillion-dollar, dead-end street. Baeuerle, then the head of drug discovery at Tularik, a South San Francisco-based biotechnology company, and his colleagues had just synthesized a new drug compound that, in cell cultures, drastically reduced levels of low-density lipoprotein, which has been linked to hardening of the arteries. The next step was to learn how the compound worked, a puzzle that can take years to unravel.

    Looking for a shortcut, Baeuerle opted to try to find out which genes a cell switches on in response to the compound. He and his team turned to researchers at Synteni, another Bay Area start-up firm, which makes DNA chips. These chips carry arrays of different snippets of DNA that serve as probes for detecting DNA fragments with a complementary nucleotide sequence.

    When Synteni researchers used their chips on fluorescent-labeled DNA from cells exposed to either the new Tularik drug or a related drug already on the market, the pattern of fluorescence showed that the new drug had caused a completely different cellular response. “It dramatically changed the profile of gene expression,” says Baeuerle, who just left Tularik to head up research at a biotech start-up in his native Germany.

    Unfortunately, the change wasn't for the better. The pattern of genes turned on by the new drug candidate strongly resembled that from a completely different class of compounds that had also looked promising but proved to be toxic. “It killed the prospects for [our] compound,” says Baeuerle. Although the result was a disappointment, the DNA tests likely saved Tularik millions of dollars by helping it weed out an unsuitable drug candidate early on, rather than later in animal or human tests.

    Such experiences underscore the promise of what many are now calling the microchip of the 21st century. These 2- or 3-centimeter-wide slices of either silicon or glass, bearing anything from hundreds to hundreds of thousands of immobilized snippets of DNA, have the unique ability to track the expression of many (if not all) of a cell's genes at once, allowing researchers to witness for the first time the behavior of thousands of genes in concert. Moreover, tracking cells' responses to drugs is far from the only application of these chips. Genetic diagnostics companies are turning to DNA arrays hoping that unique gene-expression patterns can pinpoint the onset of diseases from cancer and Alzheimer's to osteoporosis and heart disease.

    Elsewhere, researchers hope that arrays will help them gauge the success of HIV drug treatment, tailor medications to patients with specific genetic makeups, and sequence genes. And that's just for starters. The drug companies and biotech firms pursuing the technology are hoping that DNA chips will prove to be a primary research tool in a genetic-medicine revolution. They expect that understanding the genes active in disease will spawn a new generation of therapeutic drugs that treat underlying causes rather than symptoms.

    “In the past, we compared the activity of single genes,” says Wei Zhang, an oncologist at the M. D. Anderson Cancer Center in Houston, Texas. “With the new technology, we can analyze a huge number of genes at the same time. That provides hope for a new era of diagnostics and therapeutics.” Jeffrey Trent, who heads DNA array research at the National Human Genome Research Institute (NHGRI) in Bethesda, Maryland, agrees. “It's a remarkably different approach to genetics,” he says. “[It] allows us to track pathways instead of individual genes.” Because of that advantage, the use of the new arrays “is just exploding in all kinds of directions,” says Francis Collins, NHGRI director. “The limits will not be found anytime soon.”

    That promise has touched off a race to capitalize on DNA chips. Surveys by brokerage houses and market research firms indicate a nearly immediate annual market for the chips of about $1 billion, with plenty of room to grow. Not surprisingly, in recent years, about a dozen companies have jumped into the DNA chip-making business, each vying to become the Intel of genomics. Affymetrix of Santa Clara, California, an array pioneer, netted nearly $100 million in its June 1996 initial public stock sale. Even after the market's recent downturn, its outstanding stock is now worth more than $575 million, although the company has yet to make a profit. Other array companies are reporting similarly brisk business.

    Despite this flurry of interest, only a handful of actual products exists. Development has been hampered by a host of technical challenges, such as difficulties in distinguishing sometimes weak fluorescent signals from background noise. But the darkest cloud looming over this burgeoning business, according to chip company officials, is the threat of patent battles over key aspects of the technology and over the genes that make up the arrays (see sidebar). “There's still a lot of confusion about who owns what pieces of array technology,” says Michael Albin, the vice president of science and technology with Perkin-Elmer's Applied Biosystems Division in Palo Alto, California. Still, Albin and others are confident that if and when these battles are worked out, gene chips will take the drug discovery and diagnostics markets by storm.

    Array of options

    DNA arrays owe much of their current research and financial promise to the international Human Genome Project. Although sequencing the entire 3 billion nucleotides that make up a person's 23 pairs of chromosomes is a huge task, it is only the first step to making use of the genome. Equally important is linking each gene to its role in the cell—a field of research dubbed functional genomics. This task is daunting, too. In a typical cell, tens of thousands of genes wink on and off to help the cell churn out proteins involved in everything from metabolism to defense. Researchers can track the behavior of genes either alone or in small handfuls, but they have had no way to watch the dance of all the genes at once.

    A key breakthrough came in a 1991 Science paper by Stephen Fodor and colleagues at a drug-discovery company called Affymax (Science, 15 February 1991, p. 767). Fodor's team came up with a scheme to use the same lithographic production techniques employed in computer-chip manufacturing to synthesize a checkerboard array of either short protein fragments called peptides or short DNA fragments called oligonucleotides—each of which ended up with a unique chemical signature. The researchers were looking for a way to generate a large number of compounds quickly; these could then be tested either as drugs, in the case of peptides, or for gene identification, with oligos.

    To make their oligo arrays, for example, they started with a silicon surface coated with linker molecules that bind the four DNA building blocks, adenine (A), cytosine (C), guanine (G), and thymine (T). Initially, the linkers are capped with a “blocking” compound that is removed by exposure to light. The researchers shone light onto the chip through a mask so that only certain areas of the chip became exposed. They then incubated the chip with one of the four bases, binding it to the exposed areas, then reapplied the block. By repeating this process with different masks and different bases, they could build up an array of different oligonucleotides. With just 32 such cycles, they could create more than 65,000 different oligos, each eight bases long.
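The counting behind those figures can be verified in a few lines. A sketch of the combinatorics of light-directed synthesis, using the base set and oligo length described above:

```python
# Each photolithographic cycle appends one of the four bases to whichever
# spots the mask leaves exposed. Cycling through all four bases extends
# every growing oligo by one position, so building 8-mers takes
# 4 bases x 8 positions = 32 cycles, yet yields 4**8 distinct sequences.
bases = "ACGT"
oligo_length = 8

cycles = len(bases) * oligo_length             # synthesis steps needed
distinct_oligos = len(bases) ** oligo_length   # sequences built in parallel

print(cycles, distinct_oligos)  # prints: 32 65536
```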

    Just like chromosomal DNA, each oligo was capable of binding to other stretches of DNA that had complementary sequences, in which G's on one segment were matched with C's on the other and A's with T's. Hence, the array could be used as a sensor: The researchers could isolate the RNA molecules that signal gene expression from tissues, chemically convert them to DNA, and label them with a fluorescent tag. After floating these tagged strands across an array of oligos, allowing complementary sequences to bind, and washing away the unbound strands, they could detect the strands that had bound by exciting the fluorescent tags with a laser. And because they knew the sequence of each oligo on their chip, the position of the fluorescent spot told them the sequence of the gene fragment that had bound there. In 1993, Affymax spun the idea into a new company—Affymetrix—and gene chips were born.
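A toy sketch of that readout logic: a tagged fragment hybridizes only to the spot carrying its reverse complement, so the spot's known position and sequence identify the fragment. The array contents below are hypothetical examples:

```python
# Map each base to its Watson-Crick partner (G-C, A-T).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """The sequence a fragment will hybridize to on the array."""
    return seq.translate(COMPLEMENT)[::-1]

# Hypothetical miniature array: each spot's known oligo -> grid position.
array = {"GATTACAG": (0, 0), "CCGGAATT": (0, 1), "TTTTCCCC": (1, 0)}

def locate(tagged_fragment):
    """Return the (row, col) of the spot the fragment binds, or None."""
    return array.get(reverse_complement(tagged_fragment))

print(locate("CTGTAATC"))  # binds the GATTACAG spot: prints (0, 0)
```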

    Today, Fodor and his Affymetrix colleagues have developed more than 20 different DNA arrays for research purposes. They also offer commercial arrays where the oligos fastened to the chip are chosen specifically to scan for mutations in the HIV genome and the p53 tumor-suppressor gene, which has been implicated in up to half of all human cancers. A third chip, called Cytochrome P450, looks for variations in a set of genes involved in the metabolism of important therapeutic drugs such as beta blockers, prescribed for heart disease, and certain antidepressants.

    Affymetrix remains the best known DNA chip-maker in the business but is by no means alone. Just a couple of miles up Silicon Valley on Highway 101, researchers at Hyseq Inc. in Sunnyvale have developed their own oligo-based scheme for sequencing genes. The Hyseq scheme does not involve labeling the unknown DNA with a fluorescent tag; instead, the DNA is mixed with a tagged oligo of known sequence and washed over the array. Wherever an array oligo and the tagged oligo bind side by side on the unknown DNA, a fluorescent spot appears, revealing part of the unknown sequence. This process is repeated with different labeled oligos, and finally a computer works out what the sequence of the DNA must be to account for all the partial-sequence information. Last year, Hyseq teamed up with gene-sequencing powerhouse Perkin-Elmer to market its chips. The first is expected to be available to researchers within months.
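That final computational step, in which software reconciles all the partial-sequence information, can be illustrated with a toy reconstruction. This sketch assumes an idealized spectrum in which every k-mer occurs exactly once and chains uniquely by overlap, which real hybridization data do not guarantee:

```python
def reconstruct(spectrum):
    """Chain k-mers into one sequence via their (k-1)-base overlaps.

    Assumes each k-mer occurs once and every overlap is unambiguous.
    """
    k = len(next(iter(spectrum)))
    suffixes = {kmer[1:] for kmer in spectrum}
    # The starting k-mer is the one whose prefix is no other k-mer's suffix.
    start = next(km for km in spectrum if km[:-1] not in suffixes)
    seq, remaining = start, set(spectrum) - {start}
    while remaining:
        # Extend with the k-mer whose prefix overlaps the current tail.
        nxt = next(km for km in remaining if km[:-1] == seq[-(k - 1):])
        seq += nxt[-1]
        remaining.remove(nxt)
    return seq

# Spectrum of overlapping 4-mers from a hypothetical unknown sequence:
print(reconstruct({"GATT", "ATTA", "TTAC", "TACA"}))  # prints GATTACA
```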

    Synteni was bought out last winter by Incyte Pharmaceuticals, a genomics company, which now offers glass chips that lay out an array of gene fragments 500 to 5000 base-pairs long. To use them, researchers isolate messenger RNA from normal tissue as well as tissue affected by disease or exposed to a drug. These two sets of RNAs are labeled with different-colored fluorescent tags and applied to the chip simultaneously. Scanning for the two colors then gives researchers an instant snapshot of how gene expression differs between normal cells and those affected by diseases or drugs.

    Meanwhile, researchers at San Diego-based Nanogen are putting the finishing touches on chips that apply a controlled electric field to maneuver the DNA fragments around on the chip, looking for a match. The upshot, says Nanogen's Michael Heller, is that fragments find their complementary oligo more quickly, and detection takes just minutes—rather than the hours needed with ordinary chips, which let the DNA fragments diffuse randomly. And an altogether different approach is being taken by the 2-year-old start-up Clinical Micro Sensors (CMS), of Pasadena, California. Researchers there have designed a system that uses electrical signals, rather than fluorescence patterns, to indicate the position of DNA binding to oligos on the array. CMS builds its arrays on a grid of electrodes rather than a passive chip; when DNA binds to its matching oligos, a separate probe molecule, carrying iron, also binds to the complex—an addition that can be detected by the electrodes.

    Data flood

    Just where all this is going depends on whom you talk to. CMS President Jon Kayyem argues that the big market will be in diagnostics, and not just for diseases that can't be diagnosed today. To take one of his favorite examples, when parents bring in a child with a sore throat, doctors typically are limited to taking a throat swab and sending the culture to a lab for testing. The culture can take days, so the doctor often winds up prescribing antibiotics or other drugs without knowing if they will do any good. An instant diagnostic scan that reveals not only the type of infection but the precise strains would be a vast improvement.

    But Patrick Klyne, director of genomics at Millennium Predictive Medicine (MPM) in Cambridge, Massachusetts, says such tests have a long way to go before hitting the market. Not only would they have to wind their way through clinical trials and regulatory approval, they would have to come down in cost, too. Affymetrix chips can run anywhere between $45 and $850, not to mention the scanners and fluidics stations that go with them, which can cost more than $100,000. “To be viable [for diagnostics], the cost needs to come down to about $5,” says Stanley Abromowitz of the National Institute of Standards and Technology. CMS's Kayyem says that his company's electronic detection scheme has a shot at making low-cost readers. But for now, the company has only a prototype device.

    That's why Klyne and others argue that the initial breakthrough market will consist of genomics and pharmaceutical companies, which will use DNA arrays as a research tool to sift through the complex patterns of gene expression in cells and pinpoint particular genes that are turned on in disease. That's the approach being taken by MPM and its rival, diaDexus of Santa Clara, a joint venture between the big drug firm SmithKline Beecham and Incyte. DiaDexus, for example, has already used its arrays to show that prostate cancer cells crank out a protein called PLA2, while the same gene remains dormant in healthy cells. MPM researchers, meanwhile, have shown that melanoma cancer cells turn up production of a protein called melastatin. Both companies hope to turn these insights into new and improved diagnostic screens that would rely not on arrays themselves but on conventional and cheap techniques such as enzyme assays.

    Other basic research with arrays is also beginning to pay off. In 1996, Collins and colleagues at NHGRI used Affymetrix chips to detect mutations in the familial breast cancer gene BRCA1 in subjects at risk for the disease. Upstairs from Collins's lab, Jeffrey Trent and his colleagues are gauging, with their own array system, how radiation treatment affects gene expression in cancer cells.

    What's certain is that these studies are just a taste of what is to come. Already, researchers with access to DNA arrays find themselves with an enviable problem: too much information. “We are drowning in cool data here,” says Stanford array pioneer Pat Brown, whose team has made more than 7 million measurements of the expression of individual genes under different conditions. “More than 99% of the data we have is unpublished. It's so easy to think of an interesting experiment to do using this approach [that] we haven't been able to find the time to publish it all.”


    Will Patent Fights Hold DNA Chips Hostage?

    1. Robert F. Service

    The pioneering technology of DNA chips, which can identify genes by getting them to bind onto a large array of sample sequences fixed to a surface, is sparking a modern-day gold rush, with companies big and small competing frantically to stake their claims (see main text). But legal scuffles could bring that rush to a halt. The disputes—over patents on both the “hardware” (the chip technologies themselves) and the “software” (the actual genes that dot the arrays)—haven't dampened enthusiasm yet. But just about everyone involved expects the legal situation to heat up if and when companies begin to earn profits. “It's really at a critical juncture right now and has the potential to limit access and availability of the technology,” says Jeffrey Trent, who heads a DNA array project at the National Human Genome Research Institute in Bethesda, Maryland.

    On the hardware side, leading chip producer Affymetrix has filed suit against two companies, Incyte Pharmaceuticals Inc. and Synteni Inc., both of Palo Alto, California, and Affymetrix is trading lawsuits with another chipmaker, Hyseq Inc., of Sunnyvale. Affymetrix claims that Synteni, which Incyte recently acquired, is infringing on an Affymetrix technology for making dense arrays, containing more than 1000 gene fragments per square centimeter. Synteni/Incyte officials counter that the suit has no merit, as they use their own proprietary technology to immobilize much larger gene fragments than the short oligonucleotides arrayed by Affymetrix.

    Hyseq, meanwhile, maintains that Affymetrix is infringing on its array technology known as sequencing by hybridization, which uses DNA arrays in combination with separate oligonucleotide probes of known sequence either to identify or to completely sequence genes of interest. Last month, Affymetrix countered these claims with a suit of its own. But all these companies could soon be feeling some heat from University of Oxford molecular biologist Edwin Southern, who in December was awarded a broad-ranging U.S. patent on a basic technology for laying down short snippets of DNA in arrays. Although Southern says he has yet to try to enforce his patent with companies such as Hyseq and Affymetrix, he adds, “We will be talking to them.”

    Indeed, just about every conceivable wrinkle in array technology, including the schemes for synthesizing the arrays, attaching fluorescent tags to nucleotides, and detecting the fluorescence when the tags bind, is tangled in patents. “It's complicated. Everyone will step on somebody's turf,” says Uwe Müller, who heads technology development for Vysis Inc., an arraymaking company in Downers Grove, Illinois. “As soon as you go out the door, you're slapped with three lawsuits.” Adds Jay Flatley, chief executive of Molecular Dynamics, another arraymaker: “I think it's going to be quite a number of years before all this is worked out.”

    The picture is even more complicated on the “software” side. Patent offices around the globe already allow the patenting of newly discovered genes, as long as a known function can be ascribed. Once genes are patented, DNA chip companies may be forced to obtain licenses before using portions of them on their arrays. “If people have legitimate claims, then we will respect them,” says Affymetrix vice president Rob Lipshutz.

    The scale of the licensing problem can be overwhelming, however. The array on a 2.5-centimeter chip can contain some 40,000 sequences. “If each spot on the array involves a gene that's patented, they have to get licenses for each spot,” says Trent. It's a “nightmare” of a problem, says Müller. “If everyone wants a percentage, you're going to run out of profits [really] fast.” In addition, patent holders could conceivably withhold licenses in the hope of capitalizing on commercial possibilities themselves or offer an exclusive license to one chip company, which would effectively freeze others out of the business.
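    Müller's warning about stacked royalties can be made concrete with a toy calculation (the royalty rate below is invented for illustration, not taken from the article): if each patent holder takes even a sliver of revenue, the margin left over shrinks geometrically with the number of licensors.

```python
# Hypothetical illustration of royalty "stacking" on a DNA array.
# The 0.1% per-licensor rate is invented; only the 40,000-spot
# figure comes from the article.

def remaining_margin(royalty_rate: float, num_licensors: int) -> float:
    """Fraction of revenue left after each licensor takes a cut of what remains."""
    return (1.0 - royalty_rate) ** num_licensors

# Even a tiny 0.1% royalty per patented gene erodes margins fast
# when thousands of spots are independently licensed.
for n in (100, 1000, 5000, 40000):
    print(n, remaining_margin(0.001, n))
```

    At a 0.1% cut per licensor, roughly a third of the margin survives 1000 licenses, and essentially nothing survives 40,000, which is the "run out of profits really fast" scenario Müller describes.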

    So why is anybody still in the game? Müller and others believe that a number of forces will conspire to head off the worst logjams. First, Müller notes that much DNA sequence is now in publicly available databases, giving chip companies a source of sequences for which they won't have to pay fees. Also, a report on biochips last year by the brokerage firm Lehman Brothers noted that the distribution of patents among many companies means that “no one company has enough of a patent position to completely block its competitors.” Hyseq President Lewis Gruber—a patent attorney himself—adds that this form of parity has brought about cooperation in other research-intensive industries such as microelectronics, where companies regularly agree to swap access to one another's technologies. As for the current squabbles within the array community, Gruber says, “It seems daunting, but that sort of problem has been worked out before.”


    Coming Soon: The Pocket DNA Sequencer

    1. Robert F. Service


    Researchers are finding new uses for microchip technology. Soon, DNA sequencers, chemical plants, and satellite propulsion systems will all come in credit-card-sized packages.

    Microfluidics, chips that process tiny volumes of fluids rather than electronic signals, aim to put a whole lab in the palm of your hand

    In May, a new private venture declared its aim to sequence nearly the entire human genome in 3 years for as little as $300 million. The plan beat the U.S. government's timetable by 4 years, at a tenth of the cost, and encouraged the government to move up its own genome deadlines. Leaders of the new venture, headed by genomics pioneer Craig Venter and funded by instrument maker Perkin-Elmer, hailed miniaturization—small, automated DNA sequencers—as the key. But the miniaturization behind this project is only a first step in the downsizing of the analytical laboratory.

    The Venter project will replace conventional manually controlled DNA sequencers with machines that perform the same task nonstop, inside hair-thin capillaries the length of a knitting needle. But researchers at a handful of universities and companies hope to shrink sequencing equipment much further—all the way down to postage stamp-sized microchips etched with a maze of tiny channels and reaction chambers. Because these chips can be mass-produced with a technology similar to that used for silicon-based computer chips, they stand to push down drastically the price of DNA sequencing and, if used in quantity, speed up such sequencing too. And DNA sequencers are just one of the labs on a chip now in gestation.

    A new breed of chipmaking companies is working to shrink to pocket size all types of chemistry equipment, including high-pressure liquid chromatography assays, high-throughput drug-screening systems, portable environmental screening equipment, biological weapons detectors, and even chemical production plants (see sidebar on next page). Harking back to the microelectronics revolution, researchers refer to these chips as “microfluidics” and expect them to have some of the same impact as the earlier development. “What will happen to laboratory equipment in the future is the same thing that happened to mainframe computers,” says Wally Parce, research director for Caliper Technologies, a microfluidics company in Palo Alto, California.

    Just as electronics miniaturization has led to computer-controlled home appliances and children's toys, Parce and others believe that the miniaturization of chemical equipment will lead to a host of as-yet-undreamed-of applications. “I even have folks at [NASA's Jet Propulsion Laboratory] who want to send [our microsystems] to Mars,” says Rolfe Anderson of the biochip start-up Affymetrix in Santa Clara, California.

    At present, however, these labs on a chip owe most of their appeal to their potential for doing the same job as existing equipment at a much lower cost. Current DNA sequencing, for example, requires a half-dozen tabletop machines to separate DNA from a tissue sample, select the desired fragment for analysis, amplify it, and sequence its component nucleotides—all at a high cost in technicians' salaries and in reagents. With microfluidics, both of those costs come down considerably. “The entire budget just falls off the scale,” says geneticist David Burke of the University of Michigan, Ann Arbor. On page 484 of this issue, he and his colleagues describe a DNA analysis chip that uses just nanoliter volumes of reagents (about 100-fold less than what current sequencers use) and integrates the functions of all the different tabletop machines in a single device. These chip-based devices can't yet do the actual base-by-base sequencing of the larger machines. But then again, notes Burke, the field is just getting started.

    In any case, the potential for microfluidics to work quickly and save money is beginning to draw a crowd. “It's a field that's moving very fast right now,” says Barry Karger, a microfluidics expert at Northeastern University in Boston. Spurred by market forecasts ranging from $1 billion to $19 billion for the new devices, a bevy of start-up companies—including Caliper, Aclara BioSciences, Orchid Biocomputer, and Affymetrix—has jumped into the ring. Even established chip- and instrument-makers are getting involved, including Perkin-Elmer, 3M, Motorola, Packard Instruments, and Hewlett-Packard. “It's going to be a tremendous market out there in 4 or 5 years,” says Ron Nelson, who heads research at Motorola in Phoenix, Arizona.

    Small is beautiful

    Microfluidics don't look much different from the integrated circuits at the heart of computers. In fact, most are made in a similar fashion, starting with sheets of very thin glass, silicon, or plastic. Photolithography machines carve out a series of complex, narrow channels that carry fluid to larger openings where individual reactions take place. The product of one reaction is then pumped further down the chip, mixed with more reagents, and allowed to react again.

    The effort to make labs on a chip has been under way for several years (Science, 7 April 1995, p. 26). Past efforts showed it was possible to etch tiny systems of channels and valves on chips to control the flow of liquids and run reactions. But a number of researchers remained skeptical that the complexity of many chemical processes could ever be handled by such micromachines. Sequencing DNA, for example, requires several steps, all needing different equipment and chemical reagents and reaction conditions. “Early on, we were asking whether all this integration was possible,” says Jed Harrison, an analytical chemist and chipmaker at the University of Alberta in Edmonton.

    Today, the picture has changed, says Harrison. “Not only is it possible, but it's being done.” At Affymetrix, Anderson and his colleagues jumped into the field 3 years ago in an effort to prepare nucleic acid samples for the company's microarray tests, in which unknown DNA or RNA strands are identified by allowing them to bind onto chip-bound DNA fragments (see p. 396). At a meeting last year, Anderson reported one of the most complex microfluidics instruments to date. At its heart is a plastic cartridge smaller than a credit card, which, when plugged into a workstation that provides the needed reagents, carries out seven different processes to extract DNA from blood; amplify, prepare, and dilute it; then send it to the DNA array.

    To use the system, researchers start by mixing a blood sample with a salt compound that breaks open the cells and releases the DNA. This mixture is then injected into a storage chamber on the chip, and its computer-controlled system takes over from there. The chamber contains a tiny glass wall that uses a charge interaction to bind the sample's nucleic acids, while the rest of the sample is ejected. Then three separate samples of an ethanol and water mixture wash into the chamber to rinse the DNA. Another buffer solution is then piped in to release the DNA from the glass and carry it to a neighboring chamber, where it's combined with a cocktail of enzymes and other reagents piped in from an adjacent storage alcove. This mixture is then sent on to the next chamber, where the genetic material is amplified. During this step, a tiny heater pressed against the back of the chamber provides the cycles of hot and cold needed to carry out the reaction. Additional steps then convert the DNA to RNA, slap on numerous fluorescent tags, chop up the RNA, and combine it with a buffer, before sending it on to the DNA array.
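    The sequence of chambers described above behaves like a fixed pipeline: each stage transforms the sample and hands it to the next. As a rough sketch (this is schematic pseudologic, not Affymetrix's actual control software; the stage names, state fields, and amplification factor are all invented for illustration), the stages can be modeled as functions applied in order to a sample state:

```python
# Schematic model of the chip's sample-prep sequence. All names and
# numbers are illustrative; the real device moves real fluids.

def lyse(s):    s["dna_bound"] = True;  return s   # salt lysis; DNA binds to glass wall
def rinse(s):   s["rinsed"] = 3;        return s   # three ethanol/water washes
def elute(s):   s["dna_bound"] = False; return s   # buffer releases DNA to next chamber
def amplify(s): s["copies"] *= 10**6;   return s   # thermocycled amplification step
def label(s):   s["tagged"] = True;     return s   # transcribe to RNA, tag, fragment

PIPELINE = [lyse, rinse, elute, amplify, label]

def run_chip(sample):
    """Apply each on-chip stage in order, as the cartridge does."""
    for stage in PIPELINE:
        sample = stage(sample)
    return sample

result = run_chip({"copies": 1, "dna_bound": False, "tagged": False})
print(result)
```

    The point of the sketch is the architecture: a single cartridge chains what used to be a half-dozen tabletop machines, so integrating a new step means adding one stage to the sequence.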

    The key to the system, says Anderson, is a strategy for moving fluids around on a chip with pressure-backed air bubbles and strategically located membranes that allow air bubbles to pass through but halt liquids in their tracks. “We're able to move fluids around at will, position them in different chambers, and control the temperatures of different reaction chambers independently. When you have all that, you can do lots and lots of things.” The Affymetrix chip is “very impressive,” Harrison says. “If it works, it will be a major sales item, because it will eliminate all the different fluid-handling steps needed just to prepare a sample for analysis.”

    When speed counts

    Piling as much complexity as possible into one device isn't the only way to go. Other companies are churning out simpler chips that perform just one or two reactions very quickly, and often in parallel. Here, the driving force is coming from the drug industry, which is facing the challenge of screening the vast numbers of drug candidates made by combinatorial chemistry, a scheme that links a handful of chemical building blocks together in all possible combinations.

    Several microfluidics companies and pharmaceutical firms are working on chip-based drug-screening systems capable of analyzing thousands of drug candidates at once. Caliper, for example, is developing glass and plastic chips that use electric fields to steer different drug compounds along a drug-testing assembly line, in which drugs are combined with their targets and allowed to react, and the results monitored. “For the most part, we are trying to take the conventional assays used in the drug industry and put them on a chip,” says Caliper's Parce.

    They are certainly not alone. Aurora Biosciences of San Diego, California, has teamed up with SmithKline Beecham and other big pharmaceutical companies to create its own fluidic devices, capable of screening 3456 compounds at once in tiny microwells. And other companies, such as Aclara BioSciences of Hayward, California, and PerSeptive Biosystems of Framingham, Massachusetts, are also working on related systems.

    Meanwhile, researchers at Orchid Biocomputer in Princeton, New Jersey, are aiming to take a somewhat different path: Rather than carrying out a series of complex reactions on a small number of samples (like Affymetrix) or thousands of simple reactions (like Caliper), they're looking to run many reactions of moderate complexity in parallel. One Orchid project, for example, aims to synthesize numerous analogs of a drug compound all at once—in effect, combinatorial chemistry on a chip.

    Orchid's chips look something like chemical factories shrunk down to credit card size, complete with internal piping, conduits, and reaction chambers. Electric fields and pressure send reagents through a series of channels laid out in rows and columns. Near the intersections, valves then open and close to carry these reagents down to a reaction chamber located on another level of the chip. Inside each chamber, the reagents react on the surface of a tiny plastic bead to create the first building block of a newly forming drug candidate. Leftover reagents are piped away, and new reagents are sent into the reaction chambers to continue the process. At the end of the synthesis, the newly minted drug candidates are cleaved from the beads and tested off-chip for activity. Already, the company has built 2.5-centimeter-square chips containing 144 minireactors to synthesize compounds simultaneously. And Dale Pfost, Orchid's chair and chief executive, says the company is currently working on 10,000-chamber systems for its partner, SmithKline Beecham.

    Which—if any—of these miniaturization strategies will succeed commercially is still uncertain. Remaining challenges include loading tiny samples onto chips and ensuring that they contain a representative mixture of compounds in a starting sample. But microfluidics are improving rapidly, say Harrison and others, because they are building on the foundations of the stunningly successful microelectronics industry. The huge accumulated expertise in etching tiny patterns in ceramics and mass-producing chips gives these little labs a big advantage that competing technologies just don't have.


    Miniaturization Puts Chemical Plants Where You Want Them

    1. Robert F. Service

    Just before midnight on 2 December 1984, the Indian city of Bhopal became the site of the biggest industrial accident in world history. A cloud of toxic methylisocyanate (MIC) gas escaped from a pressurized tank at a Union Carbide chemical factory. More than 2000 people died immediately and thousands more later on, while tens of thousands continue to suffer breathing problems and other illnesses as a result.

    MIC, a highly reactive compound, was stockpiled at the plant for use in creating the pesticide carbaryl. Shipping and storing hazardous intermediate compounds are routine in the chemical industry, because not every ingredient in a process can be made on site. But physicist Jim Ryley and his colleagues hope to do away with this practice.

    Ryley's team, at DuPont's central research facility in Wilmington, Delaware, is developing tiny microchip-based chemical reactors that could be installed wherever a chemical such as MIC is needed, eliminating the need for stockpiles. Much like other “microfluidic” chips (see main text), these tiny reactors are typically just centimeters in size and etched with a series of micrometer-sized channels, valves, and chambers. Because they are often made with the same microfabrication techniques as those used in the computer-chip industry, they have the potential to be both cheap and small, making it easy to scale production up or down as needed. Portability, says Ryley, “lets you make your hazardous chemicals at their point of use.”

    At the time of the Bhopal disaster, no one had dreamed that you could carry out complex chemical synthesis in a device the size of a compact disc. But nearly a decade later, in 1993, the DuPont team members took advantage of microelectronics technology and the chemical inertness of silicon to make one of the first chemical plants on a chip. They demonstrated that they could synthesize MIC in a three-layer microreactor assembled from a trio of the standard silicon wafers (100 millimeters in diameter) used to make computer chips. The precursor gases oxygen and monomethylformamide were mixed together in channels etched in the top layer, before passing down to the middle layer; there, they were preheated to about 300 degrees Celsius with a heat-exchange liquid that carried waste heat from the final-stage reaction. Last, in the bottom layer, the hot gases hit a silver-based catalyst and reacted to create MIC. Such a reactor, they calculated, could churn out 18,000 kilograms of MIC a year.

    The DuPont team has produced chips to synthesize other compounds as well, such as hydrogen cyanide, a toxic intermediate compound used in drug synthesis. And although the field of microchemical reactors is still in its early stages, several other groups are also exploring the technique to create everything from specialty compounds used in drug synthesis to fuels.

    At the Pacific Northwest National Laboratory in Richland, Washington, for example, Robert Wegeng and his colleagues have already built miniature heat exchangers and chemical separations devices; among other projects, they are currently working to build a miniaturized fuel reformer to extract hydrogen from gasoline to power automobile fuel cells. And at the Institute of Microtechnology in Mainz (IMM), Germany, Wolfgang Ehrfeld and his colleagues have focused on devices that can oxidize numerous compounds, such as converting alcohol-based precursors to aldehydes, widely used in food flavorings and perfumes.

    For those reactions, the microreactors have advantages that go beyond low cost and portability, says IMM microreactor expert Claudia Gärtner. Oxidation reactions generate such large amounts of heat that industrial reactors are often run well below their optimum temperature, to prevent the development of hot spots that cause unwanted side reactions and to stop the main reaction from spinning out of control. “Microreactors have a large surface-to-volume ratio,” says Gärtner. So heat that builds up is quickly transferred to the walls of the chip, where it can be conducted away.
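    Gärtner's point about surface-to-volume ratio follows from simple geometry: shrinking a reactor's characteristic length grows its area per unit volume in inverse proportion. A minimal sketch, using a cube as the stand-in shape and illustrative sizes not taken from the article:

```python
# Why microreactors shed heat so well: for a cube of side L,
# surface area scales as L**2 but volume as L**3, so the
# surface-to-volume ratio grows as 1/L as the reactor shrinks.
# The two sizes below are illustrative, not from the article.

def surface_to_volume(side_m: float) -> float:
    """Surface-to-volume ratio (1/m) of a cube with the given side length."""
    return (6 * side_m**2) / (side_m**3)   # simplifies to 6 / side_m

industrial = surface_to_volume(1.0)        # a roughly meter-scale vessel
micro = surface_to_volume(100e-6)          # a ~100-micrometer channel

# the microchannel offers vastly more wall area per unit of reacting volume
print(micro / industrial)
```

    Shrinking the length scale by a factor of 10,000 buys 10,000 times more wall area per unit volume, which is why heat from an exothermic oxidation reaches the chip walls before hot spots can form.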

    Still, Ryley cautions that microchips have their limits when it comes to chemical synthesis. One problem, he says, is control. Running a single large-scale reactor is fairly straightforward: You need only one set of control valves, circuitry, and sensors to monitor the reaction. But when you string 10 or 100 microreactors together, “you have one very large control problem,” says Ryley. Chip-based reactors also don't make sense for producing commodity chemicals—such as adipic acid, involved in making nylon—that are used by the trainload.

    “Clearly you're not going to replace all large reactors,” says Klavs Jensen, a microreactor expert at the Massachusetts Institute of Technology. Perhaps not. But the chemical industry is sure to find niche applications for these factories on a chip, and preventing future Bhopals may turn out to be one of them.


    Fomenting a Revolution, in Miniature

    1. Ivan Amato*
    1. Ivan Amato is a correspondent for National Public Radio and the author of Stuff, a book about advances in materials science.



    A novelty a decade ago, microscopic machines are making gains in the marketplace and may be poised to become the darlings of Silicon Valley

    With the passion of an evangelist, Karen Markus crisscrosses the globe to let everyone from university professors to corporate bigwigs in on her good news. MEMS, machines invisible to the naked eye, are primed to shake up the world of microelectronics, she says. Now is the time to jump on the bandwagon—or risk getting left behind. According to Markus, director of the MEMS program at MCNC, a publicly funded technology incubator in Research Triangle Park, North Carolina, MEMS—a.k.a. micro-electromechanical systems—“are going to be everywhere.”

    Ranging from simple levers and rotors to complex accelerometers and locking systems for nuclear weapons, MEMS are fashioned largely from silicon with techniques adapted from the microchip industry. Think of MEMS as microelectronic chips that have taken an evolutionary deflection toward a new sub-Lilliputian species that not only can think like Pentium chips but also can sense the world and act upon it. “MEMS are enablers. They'll be all over, like plastic. They're viral. They will infiltrate everything,” Markus says.

    That message may be a bit unsettling to the uninitiated, but it resonates among specialists who have watched MEMS blossom in the last decade. The public was treated to its first glimpse of these devices in the late 1980s, when stunning pictures of tiny rotating gears and motors no larger than dust specks began appearing in the likes of The New York Times and Business Week. At the time, MEMS were more promise than reality; their moving parts tended to seize up in seconds or they curled like wood shavings into intriguing—but useless—microscrap. In the past few years, however, scientists say they have solved many of these bugaboos. Now, they claim, the ability to miniaturize mechanisms could well define the next technological age the way microelectronics has defined the present one. “There is a strong and growing consensus that [MEMS] will provide a new design technology having an impact on society to rival that of integrated circuits,” says MEMS elder Richard S. Muller, co-director of the University of California's Berkeley Sensor and Actuator Center (BSAC).

    The virtues of these tiny machines are many. MEMS are fast and generally cheap to mass-produce, at least after an R&D shop has hammered out a working design. And they boast startling mechanical sophistication in packages no bigger than a standard computer chip: Scientists have already moved MEMS into various stages of conception and development for making laboratories on chips, data-storage technologies, cell-manipulating gadgets, propulsion systems for microsatellites, locking mechanisms for nuclear weapons, and many other applications.

    The wee machines have already caught the eyes of systems engineers, the technological arbiters who decide which components to put in new devices. Since 1993, for example, car air-bag systems have employed MEMS-based accelerometers. The MEMS part, made by Analog Devices Inc. of Wilmington, Massachusetts, and other firms, is a tiny chunk of silicon suspended in a cavity. Jutting from it are dozens of bristles, like centipede legs, that are interwoven with bristles extending from the cavity walls. The tiniest movements jiggle the chunk and change the interweaving, altering the structure's ability to store charge. A violent jarring results in voltage perturbations that trigger the system to deploy the air bag.
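    The deployment decision amounts to thresholding a capacitance change. A toy model captures the logic (the sensitivity and threshold values are invented, and a real readout is analog circuitry rather than code):

```python
# Toy model of a MEMS air-bag accelerometer: sudden deceleration
# displaces the suspended silicon mass, changing the interwoven-bristle
# capacitance; the system fires when the change crosses a threshold.
# Sensitivity and threshold values are invented for the sketch.

def capacitance_change(accel_g: float, sensitivity: float = 0.02) -> float:
    """Fractional capacitance change for a given acceleration (toy linear model)."""
    return sensitivity * accel_g

def should_deploy(accel_g: float, threshold: float = 0.8) -> bool:
    """Deploy only on a jolt violent enough to exceed the threshold."""
    return abs(capacitance_change(accel_g)) > threshold

print(should_deploy(2))    # hard braking: below threshold, no deployment
print(should_deploy(50))   # crash-level jolt: deploy
```

    The design choice worth noting is that the sensor and the decision live on the same tiny chip, which is what made MEMS attractive to air-bag systems engineers in the first place.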

    As MEMS carve a niche in today's markets, the ideas pipeline is surging, with an estimated 10,000 scientists and engineers—not to mention an army of product developers and marketers—keeping the valves open. Roger Grace, an engineer and consultant in Silicon Valley who has been promoting the emerging field for years, estimates that about 600 university, government, and private labs are working on MEMS devices worldwide. Fueling this enterprise is ample government support—the Defense Advanced Research Projects Agency (DARPA) alone spends $60 million a year on MEMS. Millions of dollars of venture capital are pouring into the field, and the resulting start-ups are getting snatched up by larger companies. Graduate students have been clamoring to get into MEMS programs, then starting up their own firms even before getting their diplomas. “There is a gold rush atmosphere here,” says Roger Howe, MEMS pioneer and BSAC co-director.

    Lurking in the shadows of this mountain of promise, however, are profound concerns that nag even the field's biggest boosters. “There are phenomenal ethical dilemmas in the making,” says Markus, arising from the power of MEMS devices to see, hear, feel, and taste—as well as their ability to record and transmit observations. Insiders whisper about “smart dust,” or MEMS particles equipped with sensors, processors, and communications elements that could monitor people and places and report back what they sense, putting to shame the tools of today's spy masters. “The incredible surveillance ability means that privacy could become a scarce commodity,” at least if the technology is put to sinister uses, Markus warns.

    Such issues have yet to be addressed substantially by the community, which for now is preoccupied with maintaining—and building—its momentum. Adherents say MEMS is at a place now where integrated circuits were in 1972: on the launch pad and ready for takeoff. Says Howe, “It's a fantastic time.”

    The report heard 'round the world

    The field of MEMS emerged from several lines of inquiry that began to converge in the 1960s and 1970s. Nurturing the infant field were iconoclasts who challenged the direction in which microelectronics was heading. Among them was Kurt Petersen, who recently co-founded Cepheid, a Silicon Valley company with a mission to create chip-sized analytical laboratories from MEMS and other microtechnologies (see p. 399). In 1975, Petersen arrived at IBM's Research Laboratory in San Jose, California, with an electrical engineering degree from the Massachusetts Institute of Technology, virtually guaranteeing him a front seat in the microelectronics revolution.

    But it wasn't electronic devices that intrigued Petersen, whose background was in integrated circuits. “I was fascinated when I saw them using this process for making ink-jet nozzles” for printers, he recalls. Using a combination of two techniques—photolithography-based tools for engraving microscopic circuitry in a silicon wafer, and acid etching, which eats patterns in a wafer according to the orientation of its silicon crystals—the IBM researchers were able to sculpt miniature structures, including precisely shaped nozzles that directed hot puffs of ink onto paper with enough precision to match the performance of a metal typeface.

    The nozzles, each smaller than a pinhead, were amazing, all right. But it wasn't the flawless performance of well-machined nozzles that kindled Petersen's imagination. “I was looking at the ones with mistakes in them,” he says. Under a microscope, he saw beams, bridges, and other minuscule structures. To Petersen, these aberrations offered a glimpse into a hidden world of possibilities. “I got excited about the whole issue of making mechanical devices on silicon,” he says.

    Pleased by Petersen's spark, his bosses gave him his own lab to test silicon's micromechanical mettle. First Petersen scoured the literature for kindred spirits. “I found there was a whole technology out there, but none of the people involved knew of the others,” he recalls. Figuring he was onto something, he spread the word about silicon's untapped riches to IBM colleagues in a confidential internal report in 1981.

    Next, Petersen took a watershed step: He published a version of the internal report in the May 1982 Proceedings of the Institute of Electrical and Electronics Engineers. The first sentence said it all: “In the same way that silicon has revolutionized the way we think about electronics, this versatile material is now in the process of altering conventional perceptions of miniature mechanical devices and components.” That message circled the globe and in short order had turned silicon from a celebrity of the microelectronics revolution into a darling of the budding micromachine movement.

    By the end of the 1980s, this movement was bursting out of the insular world of research, thanks mainly to the Tom Thumb appeal of microscopic wheels and rotors, which regularly landed MEMS in the news. The field rode its cuteness factor for years, says Markus. But a whimsical reputation was threatening to sap the MEMS momentum. “Bedbugs on merry-go-rounds trivialize things,” Markus says, especially if you don't start making devices people can use. Researchers were churning out mechanical wonders in the lab that would never earn their keep.

    It would take a few bold companies to transform MEMS from toys into products. First, they had to overcome key technical challenges in carving minuscule, but intricate, patterns in silicon. For example, the complex procedure for patterning shapes like gears onto silicon substrates and then etching them into freely moving parts often left residual strains that reduced a device's durability; researchers overcame this by such means as adding an hourlong annealing step that makes the silicon crystal structure especially uniform and thus relatively free of internal strain. Proving that MEMS are marketable were products such as the air-bag accelerometers, which hit the market about 5 years ago, and more recently the Digital Micromirror Display—a computer screen made by Texas Instruments (TI) in which a million or more swiveling mirrors etched onto a MEMS chip blend the three primary colors—red, green, and blue light—to convert electronic photo or video files into high-resolution images. These first fruits, says Petersen, “gave the field the credibility it needed.”

    Storing data, locking nukes

    Now that MEMS are on the shopping lists of the systems folks who figure out how to assemble technology into complicated devices such as bar-code scanners, fax machines, and automated teller machines, the doors are open for an invasion of MEMS into the technoscape.

    Take Quinta Corp., a 2-year-old firm caught up in the industrywide chase to pack more data into disk drives. Its researchers hope to multiply disk storage density by incorporating MEMS devices into Winchester technology, the standard rotating hard-disk drive systems in most PCs. In these disks, data are stored in concentric tracks—high-end machines pack as many as 4000 lines per centimeter. MEMS, predicts Quinta co-founder Joseph Davis, will boost that to 40,000.

    The central challenge is to find a way to keep the magnetic head, which reads and writes data, trained on much narrower tracks. Even in today's best drives, the actuators that keep the head on course have nothing like the needed finesse, says Davis. “The only way to go to the next level,” he says, “is to come up with a secondary device that rides on the actuator and enables you to move the head onto the right track”—a kind of coxswain, that is. “This is where MEMS come in.”

    Davis had in mind a laser guidance system that would force the head to stick to the skinnier tracks. He set out to try MEMS after a colleague faxed him an article about TI's micromirrors, which had proven adept at rapid, precise maneuvering. Quinta's twist was to design an “analog mirror” that rotates to many positions, compared to TI's “digital” mirror, which assumes only two positions.

    Here's how the system under development works: After an actuator brings the head to within 10 tracks of the target, a solid-state laser shoots a beam through optical fibers spanning the actuator's length. A MEMS mirror deflects the beam downward through a lens that focuses light onto the disk's surface. The mirror's position dictates which track the light will strike. If the laser beam strays off course, reflections from pits flanking the track light up photodetectors, which feed back into the magnetic head's control circuitry. “Without MEMS technology, we could not do what we are doing,” says Quinta's Phil Montero. Data-storage giant Seagate Technology Inc. of Scotts Valley, California, would seem to agree: It acquired Quinta last year for $320 million.
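    The feedback loop described above can be sketched as a step-by-step correction (the track numbers and step budget are invented, and the real system is continuous analog control, not discrete code): the coarse actuator lands near the target, then photodetector feedback nudges the mirror one track at a time until the beam sits where it should.

```python
# Schematic sketch of the two-stage seek: coarse actuator first,
# then fine MEMS-mirror correction driven by photodetector feedback.
# Track numbers and the step budget are illustrative.

def settle_on_track(start_track: int, target_track: int, max_steps: int = 20):
    """Step the mirror toward the target track, as the photodetectors
    report which side of the track the beam has strayed to.
    Returns (final_track, locked_on)."""
    track = start_track
    for _ in range(max_steps):
        if track == target_track:
            return track, True
        track += 1 if track < target_track else -1   # feedback correction
    return track, False

print(settle_on_track(1007, 1000))   # coarse seek overshot by 7 tracks
```

    The sketch mirrors the division of labor Davis describes: the actuator only has to get within about 10 tracks, and the MEMS mirror supplies the finesse the actuator lacks.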

    Better data storage is no small prize in this information age, but technologies for preventing nuclear weapons from detonating accidentally command a greater sense of urgency. “Our goal is to enhance the security for nuclear weapons,” says Paul McWhorter, director of a multimillion-dollar MEMS program at Sandia National Laboratories in Albuquerque, New Mexico. Not that today's fist-sized security systems are problematic, he hastens to add, but MEMS have some potential advantages. For one thing, “micromachines, just by being small, are more rugged,” McWhorter says. They can take a beating without failing. What's more, he says, the space saved by MEMS-based systems opens up precious real estate inside the tightly packed control units of nuclear weapons. That can allow designers to add instruments that might, for example, improve the accuracy of missile targeting.

    To reap these benefits, Sandia's Steve Rogers has created what many experts tout as the most complex MEMS ever made: a locking device whose creation required 14 photolithographic masks and more than 240 processing steps similar to those required to forge integrated circuits. McWhorter hopes the prototype will evolve into a device that prevents "abnormal events" such as fires, plane crashes, or terrorist bombings from arming a weapon.

    Rogers's Rube Goldberg machine writ small has won kudos from his peers, including a grand prize from the engineering magazine Design News. It responds to a 24-bit computer code that specifies the unlocking sequence. Each bit is a command for a pin, the size of a red blood cell, that fits into a linear maze etched on a microscopic gear. As the gear rotates, the pin must go up or down at turns in the maze. Given the correct bit, the pin takes the turn that allows it to proceed to the next turn. Given the wrong bit, the pin hits a dead end, which blocks the rest of the unlocking sequence and keeps the weapon locked and disarmed.
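The maze logic just described amounts to a simple bit-matching state machine: each command bit must match the turn the maze expects, and a single wrong bit dead-ends the pin and blocks the rest of the sequence. The toy model below captures that behavior; the class, its interface, and the stand-in bit pattern are hypothetical, not Sandia's actual design or code.

```python
class MazeLock:
    """Toy model of the pin-in-maze unlocking logic."""

    def __init__(self, secret_turns):
        self.secret_turns = secret_turns   # the maze's 24 up/down turns
        self.jammed = False

    def try_unlock(self, command_bits):
        if self.jammed or len(command_bits) != len(self.secret_turns):
            return False
        for cmd, turn in zip(command_bits, self.secret_turns):
            if cmd != turn:                # pin hits a dead end
                self.jammed = True         # rest of the sequence is blocked
                return False
        return True                        # pin cleared every turn: unlocked

secret = [1, 0, 1, 1] * 6                  # stand-in 24-bit unlocking code
lock = MazeLock(secret)
```

Note that a wrong bit does more than fail the attempt: the jammed pin physically blocks any further unlocking, which the model mimics with the `jammed` flag.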

    If the pin successfully navigates all 24 turns, a second mechanism pops micromachined mirrors up from the surface of the silicon-based device. After that, conventional technology takes over: The mirror shunts a laser beam to circuitry that arms the weapon. The whole device, says McWhorter, "looks like a speck" to the naked eye.

    MEMSmerizing prospects

    To Petersen, the disk drive and nuke projects are prime examples of how engineers “are looking at MEMS as a credible technology to fit into their systems.” Other experts envision entirely new MEMS systems. DARPA is sponsoring a project at TRW Space and Electronics Group in Redondo Beach, California, to develop a MEMS-based system for steering miniature satellites, which might be “as big as your fist [and] might weigh as little as 1 kilogram,” says TRW project scientist David Lewis. Swarms of such satellites could, for instance, be arrayed as huge radio telescopes.

    The MEMS system would serve for nudging each satellite into a precise spot, then holding it there, so that it works in sync with the others. To create this propulsion system, Lewis and colleagues patterned more than 100 microthrusters into a silicon wafer. Each thruster, which can be controlled individually, includes a cavity of a specific shape, a nozzle, and heaters that ignite dollops of propellant. “We have built these. We have fired them in the laboratory. They work,” says Lewis.
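Because each thruster ignites a single dollop of propellant, the array behaves like a bank of individually addressable, one-shot devices. The sketch below models that constraint; the grid size and interface are assumptions made for illustration, not TRW's design.

```python
class ThrusterArray:
    """One-shot microthruster array: each cavity fires exactly once."""

    def __init__(self, count=100):
        self.loaded = [True] * count       # every cavity starts fueled

    def fire(self, index):
        """Ignite thruster `index`; True only if propellant remained."""
        if self.loaded[index]:
            self.loaded[index] = False     # propellant is spent after one shot
            return True
        return False

    def remaining(self):
        return sum(self.loaded)

array = ThrusterArray()
array.fire(7)                              # one nudge toward formation position
```

Station-keeping software would budget these one-shot firings over the satellite's lifetime, since a spent thruster cannot be reloaded in orbit.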

    But hurdles must be surmounted before a micromachine revolution catches up to—let alone eclipses—the microelectronics revolution. One key issue is to develop standards of performance, reliability, and failure as rigorous as those in the microelectronics industry. “With integrated circuits, there is so much infrastructure in place that you know what the pitfalls are,” McWhorter says. “What are the equivalent failure mechanisms for MEMS? How do you model these? How do you develop algorithms that allow you to burn in and screen thousands of freshly minted MEMS? How can I ensure that I can put a micromechanical locking device in a weapon, that it can sit in a silo for 25 years, and the first time you want it to work, it will turn over and operate correctly?”

    To Markus, known to her colleagues as the “Queen Mother of MEMS” for nurturing the field, these issues are growing pains that MEMS will soon leave behind. As a sign of the field's growth, Markus's MEMS program, which in the last several years has converted about 1000 blueprints into working MEMS, is going private next year.

    Markus says MEMS have an appeal that is rare in today's high-tech world: They hark back to last century's machine age, when you could see how something worked just by looking at it. "When you look at an integrated circuit under a microscope, you see a bunch of lines," she says. Nothing moves. With MEMS, however, you see motors driving shafts turning gears that turn other gears, push plungers, and so on. "When you design a MEM, you design it with parts that move," Markus says. "And when they are done, you watch them move and do things."