News this Week

Science  03 Aug 2001:
Vol. 293, Issue 5531, pp. 774

    Etna Eruption Puts Volcano Monitoring to the Test

    1. Richard Stone

    MOUNT ETNA, SICILY—It's well past midnight, but Patrick Allard has no desire to knock off for the night. He fiddles with a computer in the back of a Land Cruiser parked on hardened lava 2600 meters up Etna's south flank, watching the raw data from a Doppler radar feed onto the screen. Fighting hunger and fatigue—Allard has been at this for 4 weeks—the research director at CNRS, France's basic research agency, gestures at the breathtaking sight rising, unnervingly, just half a kilometer away. “Ten days ago, there was nothing there,” he says. Now, in the early morning of 30 July, a volcanic cone already 150 meters tall and growing is shooting magma and gas high into the sky. Every few seconds, hundreds of bombs burst like fireworks from Etna's newest “parasitic” cone, falling in a graceful arc and landing on the steep slope with the rhythmic sound of a breaking wave. The glowing lava blobs tumble in chaotic, seemingly slow-motion patterns resembling the movements of swarms of fireflies.

    Allard, who collaborates with the Catania section of the National Institute of Geophysics and Volcanology (INGV) in Sicily, Italy, is bouncing radio waves off the erupting magma blocks to measure their velocity and volume and thus get a handle on the amount of energy streaming from the cone. Set incongruously against the lights of Catania at Etna's foot, the cone seems to pause and catch its breath for several seconds. “Get ready,” Allard says with evident glee, as acrid sulfur dioxide gas drifts down from the main crater. Suddenly, a supersonic magma jet tears from the cone with a thunderous clap, disgorging car-sized bombs and unleashing a shock wave that presses clothes against body. “Wow,” Allard says. “That was off the scale.” Little wonder he doesn't want to pack up. One of the world's most active volcanoes is on fire—and the science is pretty hot, too.

    Etna's latest eruptions began on 17 July, and they have attracted scores of volcanologists like Allard eager to use the latest technologies to study this awesome display. So far, Etna's pyrotechnics show no signs of abating. As Science went to press, the half-million-year-old volcano's activity posed little threat to surrounding villages or to individuals who stay off the mountain. Lava flows freely from its vents, as its magma is not as viscous as that in volcanoes such as Mount St. Helens that build lava domes and blow their tops in violent, explosive eruptions.

    But Etna is not a simple volcano, and INGV scientists are vigilant for signs that it may be entering a more dangerous phase. Researchers are keeping a 24-hour watch on the volcano, monitoring seismic activity, ground deformations, gas emissions, and gravity changes that track magma upwelling. If Doppler radar and other new techniques prove themselves during this eruption, they could sharpen monitoring and prediction at other hot spots around the world. The stakes are high. Nine times out of ten, says Clive Oppenheimer of the University of Cambridge, U.K., apparent signs of an imminent eruption are red herrings. “You can't evacuate people and get it wrong,” he says, as many people would inevitably ignore the next evacuation order.

    Earth tremors and ground deformations are the key observations that help emergency management authorities predict when a volcano may blow and, often just as critically, when it should begin to simmer down. Although volcanologists have long wanted to expand their monitoring toolkit, the need to do so was brought into sharp relief by Montserrat, where volcanic eruptions in 1997 obliterated the tiny Caribbean island's capital. When the eruptions ceased in March 1998, seismic and deformation measurements pointed to the volcano entering a quiescent mode. “It looked like there would be a period where people would be able to move back in,” says Oppenheimer. Offering contrary evidence, however, were analyses of venting gases suggesting that magma remained in the volcano's upper reservoir. In November 1999, Montserrat erupted anew; its violent activity continues to this day.

    Sniffing venting gases could improve the forecasting abilities of volcano oracles. Toward this end, the young discipline of volcanic geochemistry is the beneficiary of a military spin-off: portable Fourier transform infrared (FTIR) spectrometers developed for detecting chemical weapons on a battlefield. INGV's Mike Burton is pioneering the use of FTIR spectrometry in monitoring a dynamic volcano such as Etna. “No one's been able to do this before,” says Oppenheimer, who with the late Peter Francis helped develop the technique, which allows scientists to analyze gas clouds rapidly at a safe distance.

    Gas attack.

    INGV's infrared spectrometer draws a bead on Etna.


    Burton put the machine through its paces one recent blistering afternoon near Catania. Working in a fine rain of black ash perceptible only from a rustling in the foliage and the way it faintly pricks the skin, Burton measured the absorption of the sun's rays passing through the volcano's gas cloud. From this he could decipher the relative amounts of gases—such as sulfur dioxide, hydrogen chloride, and hydrogen fluoride—venting from the volcano.

    Sifting through earlier gas data, Burton has found an intriguing correlation. Just 4 days before the eruption began, the ratio of sulfur dioxide to hydrogen chloride rose more than twofold. It remains to be seen whether such a sign will presage future eruptions.

    More pressingly, the gas ratios should help researchers gauge the stamina of each of Etna's now five active vents by determining which are drawing from the main magma reservoir. “It's difficult to tell if the main central system is feeding all these vents,” says geologist Renato Cristofolini of the University of Catania. Ebbing sulfur dioxide might suggest less welling up and degassing of magma—and the eruption tapering off. A lengthy eruption—such as Etna's last major one, which lasted from December 1991 to March 1993—would be cause for concern, as it would increase the likelihood of lava tube formation. These hardened lava conduits would funnel molten lava faster and farther down the mountain, perhaps threatening towns. So far, however, all signs point to ample magma—and no end in sight to Etna's latest outburst. Although monitoring tools are getting better, says Oppenheimer, “no one could give you reliable odds on how long the eruption will go.”


    Japan Readies Rules That Allow Human Embryonic Stem Cell Research

    1. Dennis Normile

    TOKYO—Japanese scientists would be allowed to derive and conduct research on human embryonic stem cells under guidelines expected to be approved this week by a top-level advisory body. Researchers say they are satisfied with the guidelines, which have been drawn up with little of the rancor that has characterized the debate in the United States.

    A committee working under Japan's highest science advisory body was set to finalize its recommended guidelines at a meeting scheduled for 1 August. Ultimately the guidelines will have to be approved by the education minister, whose concurrence is widely expected. Barring unforeseen glitches, the guidelines could be put into practice as early as this fall, clearing the way for any researcher in Japan to establish human embryonic stem cell lines and start using them for research. “We can now go ahead in making plans for research in this very exciting field,” says Norio Nakatsuji, a developmental biologist at Kyoto University who is likely to be one of the first in Japan to establish such cell lines.

    Green light.

    Norio Nakatsuji is looking forward to creating cell lines under new guidelines.


    Human embryonic stem cells, which theoretically can develop into any of the body's cells, may ultimately provide laboratory-grown replacement organs and treatments for such diseases as Parkinson's and Alzheimer's. But embryos are destroyed when stem cells are harvested, making their use ethically controversial. Unlike in the United States, there has been no organized lobbying against their use in Japan, and few politicians have addressed the issue. However, public concern over the possible commercialization of human embryos and potential misuse of the cells has led the panel to recommend tough guidelines. “Strict regulation is necessary to obtain public support,” agrees Nakatsuji.

    Under the proposed guidelines, all plans to establish embryonic stem cell lines and all research using the cells will have to be approved and monitored by each institution's ethical review board and by a newly established review board under the Ministry of Education, Science, Technology, Sports, and Culture. Researchers must have demonstrated an ability to handle stem cells through prior work with animal stem cells. Stem cells may only be harvested from “spare” embryos resulting from in vitro fertilization. The embryos must be donated, with donors giving written informed consent for their use. Clinics or hospitals planning to gather embryos for the isolation of stem cells must have their own review boards.

    The resulting cell lines are to be used only for basic research. Use of the cells for reproductive purposes, cloning, medical treatment, or drug screening is expressly prohibited. The guidelines apply to both public and private sector research. Public sector violators could lose their funding. Although the guidelines don't carry the force of law, private firms are unlikely to risk the bad publicity that would come with flouting public policy. As yet, however, the private sector has shown little interest in the field.


    Satellite Shutdown Stirs Controversy

    1. Andrew Lawler*
    1. With reporting by Richard A. Kerr.

    NASA last week abruptly decided to shut down a venerable research satellite that has been gathering critical global climate change data for a decade. The decision, made for fiscal reasons, surprised and angered atmospheric researchers, who were planning a festive 10th anniversary celebration next month for the Upper Atmosphere Research Satellite (UARS).

    NASA officials say it's probably only the first in a series of similar shutdowns resulting from a decision several years ago to put industry in charge of satellite operations. The planned cost savings never materialized, however, forcing project scientists to make some tough decisions. “It's not a pleasant situation,” says Paul Ondrus, project manager for operational missions at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Now, NASA managers are faced with another tough question: how to bring the school bus-sized UARS back to Earth.

    Launched in 1991, UARS is still beaming data from five of its 10 instruments that are monitoring global warming factors, such as water vapor and solar radiation, as well as chemicals, such as chlorine, that destroy stratospheric ozone. The observations have already revealed a mysterious rise in stratospheric water vapor with climate implications and confirmed that ozone-destroying chemicals have peaked, thanks to an international agreement. Although the satellite is well past its 3-year design lifetime, project scientists had hoped to keep it operating until this fall, when the European Space Agency had planned to launch Envisat, a sophisticated environmental monitoring satellite. That would have provided some continuity of data. But the launch has been delayed because of the recent failure of an Ariane rocket.

    Heads up.

    The massive UARS satellite, here being placed in orbit, must either be brought back by the shuttle or be left to an uncontrolled descent.


    NASA officials now intend to shut off UARS's instruments next week. “Giving up that overlap is difficult,” says Anne Douglass, deputy project scientist. “I'm shocked,” says Paul Crutzen of the Max Planck Institute for Chemistry in Mainz, Germany, who shared the Nobel Prize in chemistry for discovering the ozone threat. “It would be a tremendous loss.”

    NASA also must decide how to dispose of the 7-ton satellite. Most large satellites—such as the Mir space station—are designed so that they can be guided into the Pacific Ocean. But UARS was built in an era when engineers envisioned the space shuttle routinely orbiting and returning scientific spacecraft, and it lacks the thrust capacity to be placed on a path for controlled reentry. The shuttle is now busy building the international space station, however, and it may be tough to reserve one to reclaim a defunct satellite as well as find the $50 million needed for such a mission.

    Left on its own, UARS would remain aloft for another 20 years. But a slow decay of its orbit would increase the chances that it would break into large chunks containing toxic batteries and fuel. Alternatively, NASA could adjust the orbit of the spacecraft in the coming year for the best possible flight path and vent the toxic fuel, but a truly controlled reentry is not possible. “There's no guarantee where it would come down,” says Ondrus.

    The pending shutdown of UARS “puts a downer on our [anniversary] party,” says Douglass. “But we're still going ahead. This was a successful mission, and we have a lot to celebrate.”


    Berkeley Crew Unbags Element 118

    1. Charles Seife

    The superheavy element 118 just displayed an exotic property that nobody predicted: the ability to vanish into thin air. Physicists who thought they had created the most massive chemical element have retracted their claim in a short statement submitted to Physical Review Letters.

    Two years ago, scientists at Lawrence Berkeley National Laboratory in California presented evidence that they had bagged element 118 along with its slightly lighter cousin, element 116 (Science, 11 June 1999, p. 1751). The news came as a shock to many scientists in the field, who thought that the method of the Berkeley team—gently colliding krypton nuclei with lead ones in the hopes that the two would fuse—had already been exhausted. “I was really surprised in May of '99,” says Sigurd Hofmann, a nuclear physicist at the Institute for Heavy Ion Research (GSI) in Darmstadt, Germany. “If we had believed in fusion to make element 118, we certainly would have tried it here earlier.” But in the face of the experimental data—three chains of alpha-particle decays that seemed to indicate the existence of a new superheavy element—teams across the world attempted to replicate the results.

    Those attempts, at GSI, the GANIL heavy-ion research lab in France, and the Institute of Physical and Chemical Research (RIKEN) in Japan, all came to naught. But the extreme rarity of the new nuclei left it possible that a slight difference in the experimental setup or even a statistical fluke could be responsible for the failures. “Our experiment really did not disprove Berkeley's detection. There's a relatively high probability that the other experiments would see nothing,” says Hofmann. So Berkeley tried, last year and this year, to repeat their own experiment.

    They failed. In the wake of that failure, Berkeley researchers went back and reanalyzed their original data. “Those analyses showed that the chains reported are not there,” says Kenneth Gregorich, a member of the Berkeley team. Gregorich has little idea what caused the false readings. “One of the possibilities is an analysis problem,” he says. “The problem we have now is that none of the possibilities look very likely.”

    When they bagged element 118, the Berkeley team was in a hot race with a group at the Joint Institute for Nuclear Research in Dubna, near Moscow. But Gregorich doesn't think the rivalry was responsible for the error. “In '99, things did go fairly quickly,” he acknowledges, noting that researchers felt pressure to complete their work rapidly before other labs could perform similar experiments. “But we're trying to get away from the rivalry aspects of the different labs. It's pretty much a different generation of scientists from when there was a lot of rivalry in the '70s.”

    Hofmann says that Dubna's observations of elements 114 and 116 suffer from uncertainties similar to those of the Berkeley experiment, but their results have an internal consistency that gives him more confidence in the Dubna data. He praises the Berkeley team's candor, and, along with the rest of the heavy-ion community, hopes a fuller accounting will reveal what went wrong. Dieter Ackermann, also of GSI, says, “The problem now for me is that I need an explanation.”


    First Light on Genetic Roots of Bt Resistance

    1. Erik Stokstad

    For the last 5 years, farmers, particularly cotton growers, have been able to reduce their use of chemical pesticides by planting crops genetically engineered to make insecticidal proteins from the bacterium Bacillus thuringiensis (Bt). But insects can adapt to these natural toxins, just as they do to synthetic chemical pesticides. For example, some populations of diamondback moths, a devastating pest of cabbage and related crops, are no longer bothered by sprays of Bt bacteria used by organic farmers. This has raised worries that extensive use of the modified crops will lead to widespread resistance that could render both the crops and the Bt sprays useless. Now scientists have taken a big step toward understanding how Bt resistance arises—a key to predicting the occurrence of such resistance.

    In work reported in this issue of Science, two teams, one led by Linda Gahan of Clemson University in South Carolina and David Heckel of the University of Melbourne in Australia and the other by Raffi Aroian of the University of California, San Diego, have identified the first resistance genes for Bt. “It's a huge leap forward,” says Bruce Tabashnik, an entomologist at the University of Arizona, Tucson. The most practical payoff may be an easy DNA test for detecting resistance in insect pests; this could help alert farmers to burgeoning resistance in time to stop planting Bt crops and switch to chemical pesticides for a while.

    For their experiments, which are described on page 857, Gahan and Heckel used a lab strain of the tobacco budworm that was developed by Fred Gould of North Carolina State University in Raleigh. This strain, known as YHD2, resists the Bt toxin designated Cry1Ac, which is present in a genetically modified cotton produced by Monsanto Corp. of St. Louis.

    In 1997, the Gahan-Heckel team, working with Gould, obtained evidence indicating that the gene responsible for the budworm's Bt resistance is located on chromosome 9. After narrowing the location of the putative gene, which they called BtR-4, the team checked that stretch of the chromosome for known genes that code for proteins that bind the Bt toxin. Resistance might reside in one of those genes, they thought, because of the way Bt toxins kill—by binding to cells in the midguts of insects that eat them, causing the cells to burst. A mutation that could prevent that binding, either directly or indirectly, could thus confer Bt resistance.

    Lab studies have identified two classes of proteins that bind to Bt: the aminopeptidases, enzymes used by insects to help digest proteins in their gut, and cadherins, some of which are located on cell surfaces and are involved in cell adhesion. Heckel and Gahan quickly ruled out two aminopeptidase genes, as they weren't located on the same chromosome as BtR-4.

    So the researchers turned to the cadherins. They used the polymerase chain reaction to isolate a fragment of a cadherin gene that mapped to the same location as BtR-4. The fact that the cadherin gene maps to the same area as BtR-4 provides “almost irrefutable evidence” that it's the Bt resistance gene, Tabashnik says. “The odds of that being a coincidence are essentially nil.”

    The pair went on to show that this cadherin is made in the right place to confer resistance—the budworm's midgut. What's more, the researchers have evidence that the gene has been inactivated in the resistant YHD2 budworm strain. In that lab strain, but not in nonresistant budworms, the gene's coding sequence was interrupted by the insertion of a retrotransposon—a bit of movable DNA that can jump from place to place in the genome. Such an insertion would likely disable the gene, presumably preventing the Bt toxin from latching onto—and killing—the cells of the budworm's midgut. Finding such a disabling mutation was “totally unexpected,” says Heckel, as insecticide targets are usually very important to the insect and can't tolerate such large changes.

    Because large mutations such as retrotransposon insertions are easy to detect, researchers should be able to develop a rapid test for this type of resistance to Bt. But a single test won't suffice. “Insects can have more than one mechanism of resistance,” explains Ian Denholm of the Institute of Arable Crops Research's Rothamsted Experimental Station in Harpenden, United Kingdom. Indeed, that message is brought home by the Aroian team's paper, which appears on page 860.

    Aroian and his colleagues study Bt resistance in the roundworm Caenorhabditis elegans, which like insects suffers intestinal damage from Bt toxins. Last year the group located five genes, dubbed bre for Bt resistance, that when mutated confer resistance to a Bt toxin called Cry5B. Now they have cloned one of those genes, bre-5, and confirmed that blocking its activity, as a mutation might do, does in fact make the worm resistant to Cry5B and also to Cry14A.

    The BRE-5 protein turned out to be an enzyme called β-1,3-galactosyltransferase, which adds carbohydrates to lipids and proteins. Aroian's team has evidence suggesting that such carbohydrate addition to the Bt protein receptor is needed for toxin binding in the gut. The researchers also showed that losing the enzyme creates resistance. “It's an important mechanism to understand,” Aroian says, because losing the enzyme could be an effective way to gain resistance to many Bt toxins at once. If it works this way in insects, a mutation in the enzyme might help insect pests defeat the next generation of genetically modified crops, which are being endowed with multiple Bt toxins to help prevent resistance.


    Dinosaur Nostrils Get a Hole New Look

    1. Erik Stokstad

    When a snarling Tyrannosaurus rex fills the screen at your local multiplex this summer, here's a tip for remembering that the beast's not real: The nostrils are all wrong. You can even feel smug, since most paleontologists would miss the error.

    Dinosaur artists have always placed the fleshy nostril relatively high and back from the tip of the snout. But Lawrence Witmer of Ohio University College of Osteopathic Medicine in Athens argues on page 850 that the sniffers ought to be farther forward and closer to the mouth. “It may appear dramatic and bizarre, but from a scientific point of view, it's a much more conservative hypothesis,” Witmer says. “It basically says that dinosaurs are like almost all other animals today.”

    Witmer ought to know. He's been studying animal noses of all kinds for several years, as part of his DinoNose project. His interest is more than aesthetic. The position of the nostril matters, Witmer says, because something important was happening in the noses of many dinosaurs. Triceratops, for example, devoted half the volume of its skull to its nasal cavity, perhaps to cool its brain (Science, 5 November 1999, p. 1071). “It certainly is important to know where the fleshy nostril is,” says Jeffrey Wilson of the University of Michigan Museum of Paleontology in Ann Arbor.

    Pick a nose.

    X-rays and dissections of living relatives suggest that dinosaurs' fleshy nostrils were located at the front of a bony opening, near the end of the snout in crocodiles and dinosaurs.


    Still, locating the nostrils was a daunting prospect. Some ceratopsian dinosaurs had nasal openings in their skull, also called nostrils, more than half a meter long—and the fleshy nostril could theoretically have been anywhere along there. “The nostril project was one I was almost scared to get into,” Witmer confesses.

    His first step was to look at the location of the fleshy nostril in birds and crocodiles—the closest living relatives of dinosaurs—as well as other animals. The challenge was to find the relation between the position of the fleshy nostril, which is not preserved in dinosaurs, and the nasal opening. To do this, Witmer painted the fleshy nostrils of modern animals with latex and barium sulfate to make them opaque to x-rays. Then he x-rayed the heads. “What was neat is that a picture started to emerge that was surprising,” Witmer says. Time and again, the fleshy nostril was located toward the front of the bony nostril.

    If the pattern was true of the dinosaurs' closest living relatives, it was probably also the case for the dinosaurs themselves, Witmer says. But to be sure, he looked for additional evidence preserved in the skulls. Modern crocodiles and lizards have erectile tissue inside the nose, next to the fleshy nostril. The blood vessels that feed this tissue leave distinct marks in the bone. When Witmer checked dinosaur skulls of many kinds, he found similar traces of blood vessels near the front of the bony nostril. His conclusion: “The nostrils were pretty much like everybody else's, parked out in front.”

    Why is that design so popular? Having fleshy nostrils positioned forward on the snout, Witmer says, might enhance the sense of smell. It would also give creatures that depend on smell, such as shrews or tapirs, more information, because the nostril would cut a wider swath as they sweep their heads from side to side. Fleshy nostrils near the mouth might also improve the sense of taste. Not least, the position of the nostril is important for determining how air flows through the nose. A nostril toward the back of the bony nostril would mean dead air in the nose.

    So why did artists put dinosaurs' nostrils so far back? Witmer believes the idea dates back to the 1880s, when scientists thought that the gigantic sauropods must have lived in water to support their weight. Sauropods have large, bony nostrils that are close to the top of the skull; this seemed perfectly suited to be a sort of snorkel. But the bony nostrils actually extend farther down the sauropod's snout, and near the front, Witmer found the diagnostic marks of blood vessels.

    Wilson and others say Witmer's evidence for moving the nostrils is strong and credit him with setting the record straight. “He's looking at something that a lot of us took for granted and applying some common sense to it,” says paleontologist Christopher Brochu of the University of Iowa in Iowa City. “It really demonstrates the need to look at assumptions carefully and how they work in other animals.” And that's nothing to sneeze at.


    Imperial College Fined Over Hybrid Virus Risk

    1. John Pickrell

    HERTFORDSHIRE, U.K.—One of the United Kingdom's top research institutes has been ordered to pay almost $65,000 in fines and legal fees for risking the release of a potentially deadly hybrid virus. Government inspectors had charged Imperial College, London, with failure to follow health and safety rules in a study that involved the creation of a chimera of the hepatitis C and dengue fever viruses, both of which cause severe illness. On 23 July, a crown court judge upheld the charges and found the college guilty of failing to adequately protect laboratory workers and the public.

    Hepatitis C, which infects about 200 million people worldwide, has proved difficult to study because it does not replicate well in the lab. The molecular biology group based at Imperial College's St. Mary's Hospital campus was trying to create a stable form of hepatitis C to speed research into vaccines and new drugs. The group—headed by molecular biologist John Monjardino—hoped to coax the virus to grow by splicing in a number of key dengue fever virus genes. But the experiment ended after inspectors from the Health and Safety Executive (HSE) filed a devastating report on safety violations, following a laboratory inspection in 1998. Specialist Inspector Simon Warne says HSE found inadequate safety cabinets, a lack of proper equipment to fumigate the laboratory, poor facilities for waste disposal, and “confused, inadequate, and apparently untested” onsite lab rules.

    Although other researchers concur with the aims of the project, they backed the government's action. “I am very supportive of this kind of research, but there is never any excuse to take risks with health and safety,” says John Oxford, a virologist at St. Bartholomew's and the Royal London School of Medicine and Dentistry.

    Dangerous liaisons?

    Researchers hoped to splice dengue fever virus (above) and hepatitis C virus.


    Scientists in the project declined comment, but Imperial College issued a statement expressing regret and emphasizing that no one was hurt. A spokesperson says that since the safety breach was identified, the college has hired extra staff devoted to monitoring and safety and that it does not intend to continue work in this area.

    Predicting the virulence of a hybrid virus is tricky, scientists say, and for that reason this work requires the highest safety standards. “The problem,” says Richard Sullivan, an expert on bioweapons issues at University College London, is that “no matter how cautious you are, you get situations where you create something of a far higher risk than predicted.” Usually a hybrid virus is less virulent than either of its parents, says Warne. But there are exceptions. A striking example: In January, Australian researchers accidentally created a highly deadly mousepox virus (Science, 26 January, p. 585). “We are all on a big learning curve; the golden rule is always to assume the worst and have much greater security than you think you should have,” concurs Oxford.

    The court ruling is the second major embarrassment this year for Imperial College, which was fined about $28,000 in March after a similar court hearing for exposing the public to unacceptable risk by manufacturing HIV in an inadequately sealed hospital laboratory. However, Warne does not see a deeper safety problem. Noting that the college has recently incorporated many disparate institutes, he says “it's inevitable that in a large research organization standards will vary across the board.”


    MIT Military Critic Rejects Secrecy Claims

    1. Eliot Marshall

    Physicist Ted Postol—a relentless critic of missile defense schemes—is fighting a Pentagon allegation that he has given away classified information. This is not the first time Postol has been targeted for a security investigation. But it may be the first time that he has accused his superiors at the Massachusetts Institute of Technology (MIT) of agreeing to help the Pentagon.

    In mid-July, Postol says, he learned by chance that he was being investigated by the U.S. Defense Security Service (DSS) for distributing a report on a defensive missile test. Although the report was labeled “unclassified draft” last year when Postol obtained it, the government has since ruled that it includes secret information. Postol rejects the notion that he can be held accountable for a retroactive decision like this—especially since the material has “gone around the world” on the Internet.

    Postol claims the Pentagon is trying to silence him. He also charges that Defense officials pressured MIT to search his office and “retrieve” materials. Pentagon officials say they're just trying to protect national secrets. When the dispute surfaced in The New York Times on 27 July, MIT President Charles Vest issued a careful statement noting that MIT “abides by the laws that protect national security” but also defends Postol and “the right of our faculty to serve as responsible critics within the limits of the law.”

    Postol is not reassured.

    The contretemps has roots in Postol's decades-old battle with the military over access to data on weapons design and testing. During the Persian Gulf War, for example, Postol argued that the U.S. defensive weapon, the PATRIOT missile, was unable to stop Iraq's SCUD missiles. Afterward, military agents began investigating him for a possible security violation but later dropped the inquiry when Congress intervened. During the past year, Postol has been publicly accusing scientists at MIT's Lincoln Laboratory in Lexington, Massachusetts, a research center largely funded by the military, of giving “fraudulent” support to a contractor's claims about missile testing. He brushed off a warning that he was at risk of violating his security clearance. Then on 17 April, Postol sent detailed fraud allegations—including data from the now-secret report—to the General Accounting Office (GAO), a congressional agency. The GAO forwarded his letter to the Pentagon and Lincoln Laboratory, apparently provoking the Pentagon to seek a formal investigation of Postol.

    Postol says he learned of the inquiry a few weeks ago from a campus security official. DSS wrote to MIT and Lincoln Laboratory on 10 July, informing them that the U.S. Ballistic Missile Defense Organization (BMDO) had reported Postol for a security breach—sending out a report “that BMDO has determined to be classified SECRET.” The Pentagon asked MIT to make Postol stop sharing the report, to “retrieve” it, and to investigate the violation. Postol appealed to Vest for support.

    Vest responded to Postol on 24 July that he had been “scoping out” possibilities that would “maximally defend” Postol's rights “without violating our contractual obligations” to enforce security. Vest warned that MIT “may be contractually obligated to move forward with at least the initial steps that we have been ordered to take by DSS.” Indeed, Vest sought to have an MIT attorney explore how to recover the report. Vest noted, however, that if the report is as widely available as claimed, “the Provost and I intend to work privately through the best channels” to have the government withdraw “what seems to be a pointless request to MIT to take action.”

    Claiming Vest “was ready to throw me to the dogs,” Postol has refused to cooperate and threatened to go to court. He accuses MIT of risking the freedom of “other scholars who don't have the notoriety that allows me to fight back.” Vest had no comment beyond last week's prepared statement, withholding a full discussion “until MIT has learned all the necessary facts.”


    Assembling Nanocircuits From the Bottom Up

    1. Robert F. Service

    As conventional silicon chips race toward their physical limits, researchers are seeking the Next Small Thing in electronics through chemistry

    LOS ANGELES, CALIFORNIA—Nothing says “high tech” like the sight of computer-chip engineers padding around a yellow-tinted clean room covered head to toe in jumpsuits resembling surgical scrubs. James Heath's chemistry lab on the third floor of the University of California, Los Angeles's (UCLA's) geology building has none of those trappings; just a few grad students and postdocs wearing the usual academic uniform of T-shirts and jeans, hunched over blacktopped lab benches. Intel territory this is not—at least, not yet.

    In a 10-centimeter glass dish atop one of these benches, some of the smallest computer circuitry ever dreamed of is in the making. The setting may not look impressive, but what's happening here may provide a glimpse of the future for the multibillion-dollar microelectronics industry.

    Mike Diehl, one of Heath's graduate students, pulls back a piece of crinkled aluminum foil from atop the dish to reveal four wedge-shaped portions of a silvery silicon wafer in a bath of clear organic solvent. On each wedge are two gold squares connected by a stair-step-shaped wire. What's not visible, says Diehl, is that the step portion of the wire is actually two parallel wires close together. Spanning the gap between them are carbon nanotubes, each a three-dimensional straw of carbon atoms about 1 nanometer across and perhaps a micrometer long. Using electrical voltages applied between the pairs of invisible wires and a separate step involving moving fluids, Diehl and others in the Heath lab have come up with methods to array these nanotubes in perpendicular rows, one atop another in a crossbar arrangement. When an electrical current is applied to a nanotube in one row, it can pass that current to intersecting nanotubes. Next, Heath's group plans to put a layer of organic molecules between the nanotubes that will act as transistorlike switches. If all goes well, within weeks they'll have arrays of some of the smallest circuits ever produced.

    Crossbars are simple stuff compared with the intricate patterns on everyday semiconductor chips. What's impressive is the scale. By making devices from small groups of molecules, researchers may be able to pack computer chips with billions of transistors, more than 30 times as many as current technology can achieve. That could open the door to fanciful computing applications such as computers that recognize and respond to everyday speech and translate conversations on the fly. And it's all happening with just the beakers, solvents, and fluid flow chambers of benchtop chemistry.

    Molecular memory circuit.

    In a promising array design, currents passed between perpendicular nanowires alter the conductivity of organic molecules sandwiched in between.


    Heath and a growing cadre of chemists, materials scientists, and physicists are pursuing molecular electronics: the attempt to use chemistry to build circuits from the bottom up instead of carving bigger pieces of matter into smaller and smaller chunks, as chip manufacturers now do. The idea has long had critics and detractors, who say the approach will never achieve the reliability and performance chipmakers demand. But as the field begins to grow in sophistication, it's starting to earn new respect. In the past few years, researchers have fashioned an impressive array of chiplike devices from handfuls of individual molecules. Now they are beginning to take the next key step, linking those individual components into more complex circuits such as the Heath group's crossbars.

    Within just the last 2 years, “the progress has been pretty mind-boggling,” says William Warren, who heads the molecular electronics initiative at the Defense Advanced Research Projects Agency, which sponsors molecular electronics research at labs around the United States. Adds chemist Tom Mallouk of Pennsylvania State University, University Park: “People are publishing really interesting stuff that has the potential to change the field at a very heady clip.”

    Breaking the law

    Silicon-based electronics has been moving along at a steady clip itself. For the past 35 years, chipmakers have managed to double the number of transistors on computer chips every 18 months by shrinking their size, a trend known as Moore's Law after Intel co-founder Gordon Moore, who noted the trend in 1965. Today, chip engineers can make features close to 100 nanometers across, and they're already eyeing a version of the technology that could cut that in half (see p. 787).
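The 18-month doubling described above compounds quickly. As a back-of-the-envelope illustration (the starting transistor count and dates in this sketch are hypothetical, not figures from the article):

```python
# Illustrative Moore's Law projection: transistor counts double roughly
# every 18 months. The starting count below is an assumption chosen for
# the example, not a figure reported in the article.
def transistors(start_count, years, doubling_period_years=1.5):
    """Project a transistor count forward, doubling every 18 months."""
    return start_count * 2 ** (years / doubling_period_years)

# Starting from a hypothetical 40 million transistors:
after_6_years = transistors(40e6, years=6)   # 6 years -> 4 doublings
print(f"{after_6_years / 40e6:.0f}x growth")  # prints "16x growth"
```

Six years of this pace multiplies the transistor budget sixteenfold, which is why even modest slips in the doubling period matter so much to the industry.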

    Molecular electronics has the potential to go much smaller, with components composed of just tens or hundreds of molecules. That would clearly accelerate the march of Moore's Law. But it could do much more as well. For one, it might solve a problem that is already beginning to vex chipmakers: heat. Wires carved into silicon by the standard technique of lithography are riddled with imperfections along their edges. As wires shrink, electrons coursing through them run an ever greater chance of smashing into one of these defects and generating unwanted heat. Pack too many circuits onto a chip and you get burnout. Lacking such imperfections, molecules such as nanotubes are expected to do a better job of preventing electrical losses as well as containing electrons that travel along their lengths.

    Perhaps most important, a shift from silicon to molecules could also break Moore's Second Law, a corollary to the first, which states that the cost of new chip-fabrication plants increases exponentially as the features get smaller. By 2015, experts suggest, they will cost somewhere between $50 billion and $200 billion apiece. Because molecular electronics relies on molecules to assemble themselves rather than on lithography, self-assembly “is likely to beat [Moore's Second Law] before it beats the first,” says Mallouk. Adds Mark Ratner, a chemist at Northwestern University in Evanston, Illinois, and one of the fathers of the field: “It's cheap to make molecules. It's expensive to make fabs.”

    Small-time start

    The idea of wiring small numbers of molecules into logic and memory devices has been around almost as long as the silicon chips they may one day replace. Ratner and IBM's Ari Aviram first suggested making molecular-scale electronic devices back in 1974. But at the time it was a pipe dream, because key techniques weren't available. That began to change in the mid-1980s with the invention of scanning probe microscopes, which enabled researchers to see individual atoms on surfaces and arrange them at will.

    The advance prompted Jim Tour of Rice University in Houston and other chemists to take up the challenge anew in the early 1990s. They designed organic molecules that they calculated could either store electrons, like a tiny memory device, or alter their conductivity much as a transistor controls the ability of an electrical current to flow between two electrodes.

    But turning those molecules into working molecular-scale devices was no simple matter. “Nanoelectronics are a physicist's dream but an engineer's nightmare,” says Warren. The tide finally began to turn in July 1999 when Heath, UCLA's J. Fraser Stoddart, and collaborators at Hewlett-Packard Corp. (HP) published a paper describing a molecular fuse that, when hit with the right voltage, altered the shape of molecules trapped at a junction between two wires (Science, 16 July 1999, p. 391). The change destroyed the molecules' ability to carry current across the junction. For this initial demonstration, the UCLA-HP team used lithography to make crossbars. They also wired several switches together to perform rudimentary logic operations. The devices worked well, but they had a big drawback: Once they flipped to the off position, they couldn't be turned back on again.

    Within months, Tour, Yale University's Mark Reed, and their colleagues published a separate approach to make a device that could switch current on and off like a transistor. And last year, the UCLA-HP group fired back with an improved organic compound pinned at the junction between two wires that could also be switched (Science, 18 August 2000, p. 1172).

    Meanwhile, related work on very different devices was progressing on a separate track: Several teams were making more or less conventional transistors sporting a few nanoscale components. In 1998, for example, physicist Cees Dekker of Delft University in the Netherlands reported using semiconducting nanotubes as the key charge-conducting layer, called the channel, in transistors.

    Pouring it on.

    Charles Lieber's team aligns stacks of nanowires with liquid streaming through a mold.


    Dekker's original nanotube transistors marked a breakthrough in the use of nanotubes in working electrical devices, but they performed poorly. One reason was the poor electrical contact between the nanotubes and the electrodes to which they were connected. At the March American Physical Society (APS) meeting in Seattle, however, Phaedon Avouris and his colleagues at IBM's T. J. Watson Research Center in Yorktown Heights, New York, reported solving this problem with a technique for welding the ends of nanotubes to the metal electrodes on either end of a transistor's channel. That gave individual nanotubes performance rivaling that of conventional silicon transistors. They also devised a way to chemically alter the nanotubes so they could conduct both negatively charged electrons and positive charges called holes, which are essentially the absence of electrons in a material. That feat enables them to make both “n-type” devices that conduct electrons and “p-type” devices that transport positively charged holes. In today's chips, combinations of n- and p-type transistors form the building blocks for complex circuits.

    Putting it together

    Avouris's team at IBM has already started making such circuits with nanotube transistors. At the APS meeting, Avouris described how he and his colleagues combined chemistry and conventional lithography to pattern a pair of transistors into a simple device called an inverter, a basic component of more complex circuitry. They also constructed an array of nanotube transistors, although they have yet to wire them together to carry out specialized logic or memory functions.

    Like the IBM team, several others are shrinking some components down to the molecular scale, while leaving other portions larger and thus easier to wire together and to the outside world. In April, Heath's group reported a key success with this hybrid approach. At the American Chemical Society meeting in San Diego, Heath reported having made a 16-bit memory cell. The cell, the most complex of its kind to date, used the same crossbar arrangement of nanowires that Diehl is perfecting with nanotubes. The nanowires were made by e-beam lithography, a high-resolution patterning technique that is painfully slow and thus impractical for large-scale manufacturing. Nevertheless, Heath is excited about his team's early success. “It's the first nanocircuit that works,” he says. And within a year and a half, Heath expects that his lab will complete the first molecular-electronics-based integrated circuit complete with logic elements and memory circuits that can talk to one another in computer-friendly 0's and 1's.

    Heath will have competition for that prize. Another group in the hunt is led by Harvard University chemist Charles Lieber. In a trio of papers published earlier this year in Science and Nature, Lieber's team reported making nanowires out of a variety of semiconductors, which they could then arrange into either individual devices or more complex crossbar arrays.

    Lieber contends that semiconductor nanowires are better building blocks for molecular electronics than carbon nanotubes are, as their electronic properties can be more precisely controlled. Although nanotubes can conduct like either metals or semiconductors, depending on their geometry, there is no way yet to synthesize a pure batch of one type or the other. That makes it hard to get the same performance from each device, Lieber says. By contrast, the electronic properties of semiconductor nanowires can be precisely controlled by adding trace amounts of “dopant” elements during synthesis. This “doping” is a key feature of today's semiconductor chips, because it allows engineers to make both n-type and p-type devices, and the same holds true on the nanoscale, says Lieber.

    In the 4 January issue of Nature, Lieber's team reported having wired n-type and p-type indium phosphide nanowires into nanosized field-effect transistors, the devices at the heart of today's microelectronics. And in the 2 February issue of Science (p. 851), the team described related devices made from n-type and p-type silicon, the mainstay material of today's semiconductor industry.

    Lieber and colleagues showed that they could up the level of complexity as well. In the 26 January issue of Science (p. 630), they reported using a combination of prepatterned lines of adhesive compound and moving fluids to arrange nanowires in parallel arrays, triangles, and crossbars, resembling the crossbars made by Heath's group. To make the crossbars, the team started by crafting a flat, rubbery mold prepatterned with tiny parallel channels. They placed this mold atop a silicon substrate and flowed a suspension of nanowires in an ethanol solution through the channels, aligning one layer of nanowires in the same orientation. They then turned their mold 90 degrees and repeated the procedure, depositing another row of parallel nanowires atop the first. (See figure, p. 784.)

    To show that the arrays were electrically active, Lieber's team made a 2 × 2 crossbar that resembled a tic-tac-toe board. They then used e-beam lithography to place tiny electrical contacts to the outside world at each end of the four wires in the array. By applying voltages between the various pads, they showed that they could produce transistorlike performance at any of the four junctions they chose. “Using solution phase, bottom-up assembly, we can make functional devices,” says Lieber.

    A hybrid future?

    Crossbars and nanotubes may be fine for basic research. But many researchers doubt whether they will ever produce circuitry that can run Quake, surf the Web, or even handle a simple word processor. Early on, researchers in the field “made all kinds of crazy promises,” says Edwin Chandross, a materials chemist at Bell Laboratories, the research arm of Lucent Technologies in Murray Hill, New Jersey. In particular, says Chandross, molecular electronics researchers pushed the notion that engineers would make computing devices out of single molecules. That's nonsense, he says, because a single unruly molecule could disrupt a device and thus corrupt the larger system. Today, Chandross is pleased by what he sees as a more realistic approach of using ensembles of molecules to work together in individual devices. Still, “it's a real long way off from being practical,” says Rick Lytel, a physicist at Sun Microsystems in Palo Alto, California. Sunlin Chou, who helps direct Intel's work on advanced circuitry, agrees: “It's very blue sky.”

    “Gee, the vacuum tube guys said that too” about semiconductor electronics, says Heath, undaunted. “If we can make a nano-integrated circuit and interface it to lithography, you've got to argue that's pretty interesting,” he says. “I want to know how far we can go.”

    Heath, Mallouk, and many others expect that even increasingly sophisticated molecular-electronics devices are unlikely to make it into the computing world on their own. Rather, they will form a hybrid technology that combines self-assembling molecular electronics components with traditional silicon electronics made by lithography. “I think the most likely approach will use lithography to get down to submicrometer dimensions and then self-assemble the little pieces inside,” says Mallouk.

    Even in the established world of silicon electronics, that vision is opening eyes. In addition to Hewlett-Packard, companies including IBM and Motorola are starting to pump research dollars into the area. So are start-ups such as Molecular Electronics Corp. of State College, Pennsylvania. “A number of companies are looking at this, because none of them want to be in a position of not being up on the technology if and when the breakthroughs come,” Mallouk says.

    Those breakthroughs may or may not ultimately make circuitry smaller than high-tech silicon fabs can achieve today. But if molecular-electronics researchers can teach circuits to assemble themselves, that trick will give them a cost advantage that no chipmakers will be able to ignore.


    Yet Another Role for DNA?

    1. Dennis Normile,
    2. Robert F. Service

    As they struggle to join nanotubes and nanowires into simple X shapes, molecular electronics researchers dream of making much more complex circuitry. “Everybody is trying to make larger arrays” of devices, says Tom Mallouk, a chemist at Pennsylvania State University, University Park. “What we're seeing now is just the beginning.” To move from the simple to the complex, though, scientists will need to develop a much defter touch.

    Some think the key to that dexterity lies in that consummate molecular sleight-of-hand artist, DNA. By taking advantage of DNA's ability to recognize molecules and self-assemble—not to mention the huge toolkit of enzymes and techniques biologists have developed for working with the molecule—they hope to use DNA as a template for crafting metallic wiring, or even to wire circuits with strands of DNA itself.

    Mallouk's group, also led by chemist Christine Keating and electrical engineers Tom Jackson and Theresa Mayer, starts by growing metal nanowires in the tiny pores of commercially available filtration membranes. Because the researchers can vary the composition of the metals laid down in the pores, they make nanowires with one type of metal, such as platinum, on the ends, and another metal, such as gold, in the middle. By attaching gold-linking thiol groups to single-stranded DNA, they can bind the DNA to the gold midsections of the nanowire. To coax the nanowires to assemble into different shapes, they simply attach complementary DNA strands to the gold segments of other nanowires. The complementary strands then bind to each other, welding pairs of wires together.

    In initial experiments, the team has used the technique to make simple shapes such as crosses and triangles. And they are currently using it in an attempt to assemble more complex circuitry, Keating says: “You can envision using this to carry out the deterministic assembly of a circuit.” That hasn't happened yet, in part because the DNA on some nanowires tends to bind indiscriminately to other noncomplementary DNA rather than its partner strand. But because biochemists have learned to solve this problem with applications such as DNA chips, Keating is confident that DNA will soon become a type of addressable glue for a wide variety of molecular electronics components.


    A piece of single-stranded DNA links corresponding sequences on nanowires to forge a cross.


    Erez Braun and his group at the Technion-Israel Institute of Technology in Haifa take a different approach. Instead of using DNA to join wires together, they make wires by silver-plating DNA itself (Science, 20 March 1998, p. 1967). The researchers start with a pair of gold electrodes 1200 nanometers apart on a sheet of glass. First they attach snippets of DNA 12 nucleotides long to each electrode. Then they immerse the electrodes in a solution containing short lengths of viral DNA. The viral DNA attaches itself to the snippets, creating a DNA bridge between the electrodes. Next, by soaking the bridge in a solution containing silver ions, Braun and colleagues coat it with silver. The result is a nanometer-scale metallic wire between the electrodes, with properties that can be varied by fiddling with the developing conditions.

    Braun says they have extended the approach and are now close to completing a three-terminal switching device that would function much like a transistor. They are also studying how they might scale up these processes to create more complex networks.

    More exotically, it's even possible that wires might be made of DNA itself. First, though, researchers will need a much better understanding of DNA's basic electrical properties. Since the first report, in 1993, that DNA can carry current, measurements of its conductivity have ranged from zero, a perfect insulator, to superconductivity when the electrodes are spaced very closely together. Christian Schönenberger, a physicist at the Swiss Nanoscience Center in Basel, says most researchers now think that DNA is a semiconductor whose conductivity depends on how it is “doped” with foreign molecules. The wide range of conductivity is good news, Schönenberger says. “It means that we can, in principle, tailor the doping and control the conductivity.” To make electronic devices, though, scientists must sort out precisely which parts of DNA's complex chemistry do the doping—and that may be no simple task.


    Optical Lithography Goes to Extremes--And Beyond

    1. Robert F. Service

    In search of ever finer detail, chipmakers are pushing conventional printing techniques toward the physical limits of light

    LIVERMORE, CALIFORNIA—The speed of light sets an upper limit for those dabbling in relativity, but makers of computer chips are concerned with another of light's limiting properties: its wavelength. Current chipmaking technologies may soon bump up against that limit, say proponents of molecular electronics and other futuristic computing schemes, and that will confound the semiconductor industry's ability to shrink transistors and other devices. Craig Barrett, CEO of Intel, the world's largest chipmaker, begs to differ.

    In April, Barrett and other leaders of a chip-patterning research consortium gathered here to unveil a first-of-its-kind machine that uses extreme ultraviolet light to print features on chips. The new machine has already created features as small as 80 nanometers across on silicon wafers, a resolution that is expected to boost the speed of integrated circuits from 1.5 gigahertz today to 10 gigahertz in 2005–06. Ultimately, Barrett and others argue, the technology will be able to turn out features as small as 10 nanometers, nearly the same scale as molecular electronic devices.

    The triumph makes the technique, known as extreme ultraviolet (EUV) lithography, “one of the leading horses in the race” to succeed today's optical lithography for carving ever smaller features into silicon, Barrett says. Today, conventional lithography patterns chips by shining ultraviolet light through a stencil with slits in the shape of features to be transferred onto a chip. Lenses below the stencil then reduce that pattern to one-quarter its original size and project it onto a region of a silicon wafer coated with a polymer known as a resist. The light transforms the resist so that chemical etchants can eat away either the region hit with the light or the shaded region. Engineers can then carve away part of the silicon wafer below and fill the space with metals, insulators, or other materials for making transistors.

    Mirror, mirror.

    EUV lithography employs a complex arrangement of reflective optics to pattern chips.


    The shorter the wavelength of the light used, the smaller the features that can be printed on chips. As a general rule of thumb, a given wavelength can make features down to half its length. Because the current generation of optical lithography tools uses 248-nanometer light, the smallest features they can make are about 120 nanometers.
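The half-wavelength rule of thumb above can be applied to each generation of lithography the article mentions. A minimal sketch (the wavelengths are from the article; the helper function itself is just illustrative arithmetic, and real resolution also depends on factors such as lens numerical aperture):

```python
# Rule of thumb from the article: the smallest printable feature is
# roughly half the exposure wavelength. This ignores real-world factors
# like numerical aperture, so treat the numbers as rough estimates.
def min_feature_nm(wavelength_nm):
    """Approximate smallest printable feature, in nanometers."""
    return wavelength_nm / 2

for wl in (248, 193, 157, 13):  # generations discussed in the article
    print(f"{wl:>3} nm light -> ~{min_feature_nm(wl):.0f} nm features")
```

Running this reproduces the article's progression: roughly 120-nanometer features from 248-nanometer light, and potentially single-digit-nanometer features from 13-nanometer EUV light.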

    Recent advances in lithography have come by continually switching to shorter and shorter wavelengths. Chipmakers are now in the process of converting their equipment from using 248-nanometer light to 193-nanometer light. In a few years they expect to switch again to light 157 nanometers long. That's likely to be an excruciating change, as it means lithography toolmakers will have to give up reliable fused silica lenses—which are not transparent at that wavelength—and adopt ones made from calcium fluoride, a soft, temperamental, and rare material.

    Still, most chip engineers believe that technological leap will be simple compared to what comes next. No materials are transparent to electromagnetic radiation below 157 nanometers—the realm of EUV light, or soft x-rays. As a result, lenses cannot be made to focus light patterns. That limitation will finally put an end to optical lithography's incredible run. “When you go to EUV, everything changes,” says David Merkle, the chief technical officer with Ultratech Stepper, a lithography toolmaker in San Jose, California.

    To get EUV lithography to work, the consortium's engineers have had to transform some parts of the technology almost beyond recognition. Most important, they switched from using transparent lenses to reflective mirrors to reduce the size of image patterns. First, to generate EUV light, researchers focus a laser on a jet of xenon gas. The gas emits 13-nanometer light that is then focused on a reflective stencil. To reduce this image, researchers engineered curved mirrors coated with 80 alternating layers of silicon and molybdenum, polished with atomic-scale precision. Because air absorbs EUV radiation, the entire apparatus must be placed in a vacuum. The result is a 3-meter-by-3-meter machine that stands some 4 meters high and is ensconced in a clean room to keep out possible contaminants. The fact that such a demanding scheme can work at all “is truly a major technical achievement,” says Merkle, whose company is not affiliated with the EUV consortium.

    Assembly required.

    Technician works on the EUV machine.


    The accomplishment didn't come quickly. EUV research started in the 1980s at AT&T Bell Laboratories, now a part of Lucent Technologies, and Japan's Nippon Telegraph and Telephone. Sandia National Laboratories and Lawrence Livermore National Laboratory did early work as well, and along with AT&T and Intel formed two cooperative research programs on the topic in 1991. Congress cut funds for the programs in 1996, calling them corporate welfare. The change prompted Bell Labs to bail out. But feeling the need for a successor to optical lithography, Intel, Advanced Micro Devices, and other semiconductor companies stepped in, committing $250 million over 5 years to develop a prototype EUV machine.

    Even with a working prototype in place, EUV's success isn't sealed. IBM and others continue to pursue alternative technologies, including methods of patterning chips with tight beams of electrons. But some of these alternatives have taken hits recently. Last year, IBM phased out an initiative that used more energetic hard x-rays to pattern chips, a program it had supported for decades. And in March, IBM announced that it was joining the EUV consortium. Another contender, which uses beams of ions, also appears to be faltering without a major corporate backer. And although IBM, as well as Canon—a major lithography tool producer in Japan—continue to pursue e-beam technology, “EUV is the prime contender,” Merkle says. Barrett says Intel hopes to begin using EUV lithography for chipmaking in 2005. But to get the prototype technology ready for the factory floor, Merkle estimates that chip and lithography toolmakers will have to spend another $2.5 billion.

    That illustrates the lengths to which the chip industry will go in pursuing ever greater processing power. It also underscores the semiconductor industry's desire to squeeze every ounce of juice out of silicon circuits, rather than turning to novel computing schemes such as molecular electronics or quantum computing. Computer chips are now a nearly $1-trillion-a-year business. And that constitutes a potent economic driver for continued improvements to silicon-based technologies, says Rick Stulen, who heads the U.S.-based EUV consortium. “I learned a long time ago never to bet against an economic driver,” he adds. For researchers hoping molecular electronics will have a future, that means they'd do well to find a way to work with silicon electronics rather than try to overthrow it.


    World's Smallest Transistor

    1. Robert F. Service

    Suppose researchers slash the size of silicon-chip circuit elements to one-tenth their current size, or about 20 nanometers. Will standard transistors still work normally? Many researchers fear not. Metal wires, they suspect, will lose their ability to keep electrons from wandering off into the surrounding material, dissipating energy and information. But in June, Intel researchers led by Robert Chau reported making a standard three-terminal transistor—the architecture used in today's chips—with the smallest features ever reported, including an insulating barrier only three atoms thick. Far from being crippled, the tiny device could switch at a blinding 1.5 trillion times per second, more than 10 times faster than devices that sit on current chips.


    The End--Not Here Yet, But Coming Soon

    1. Dennis Normile*
    1. With reporting by Robert F. Service.

    For years researchers showed up skeptics by cramming chips with more and more transistors. One day the skeptics will be right

    Ever since the advent of silicon chips, doomsayers have seen limits to the technology just over the horizon. Challenges in controlling the lithography used in drawing the patterns on the chips and limitations of the materials themselves have loomed as insurmountable roadblocks at one time or another. So far, researchers have found ways around all these obstacles. But this time, many say, real physical barriers may be in sight.

    The barriers threaten Moore's Law, the 1965 prediction by Intel co-founder Gordon Moore that manufacturers would double the number of transistors on a chip every 18 months, with prices declining and performance increasing as a result. The semiconductor industry has stayed on track for 4 decades. But the latest edition of the annual International Technology Roadmap for Semiconductors—a joint effort of semiconductor industry associations in Europe, Japan, Korea, Taiwan, and the United States—lists reasons for thinking that may soon change.

    The Roadmap explores “technology nodes”—advances needed to keep shrinking the so-called DRAM half-pitch, half the spacing between cells in memory chips. Currently, the industry is moving to a DRAM half-pitch of 130 nanometers, about three-thousandths the width of the proverbial human hair. The Roadmap forecasts that researchers must lower that figure to 35 nanometers by 2014, simply to continue doubling the number of transistors. In the year 2000 update, available online, 12 working groups representing various aspects of chipmaking assess whether those technology node targets can be achieved.
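The density arithmetic behind those half-pitch figures is easy to check. The sketch below is purely illustrative (the 130- and 35-nanometer endpoints come from the paragraph above; the variable names and the inverse-square assumption are mine): it estimates how many transistor-density doublings the half-pitch shrink alone can supply, since the number of cells that fit in a given area scales roughly as the inverse square of the half-pitch.

```python
import math

# Illustrative back-of-the-envelope arithmetic, not taken from the Roadmap itself:
# cell density varies roughly as the inverse square of the DRAM half-pitch.
current_hp = 130.0   # nanometers: the node the industry is moving to now
target_hp = 35.0     # nanometers: the Roadmap's 2014 target

density_gain = (current_hp / target_hp) ** 2   # ~13.8x more cells per unit area
doublings = math.log2(density_gain)            # ~3.8 doublings from the shrink alone

print(f"density gain: {density_gain:.1f}x, doublings: {doublings:.1f}")
```

Shrinking alone accounts for only about four doublings over 13 years; historically, the rest of the transistor count has come from larger dies and cleverer circuit design, which is part of why every cell of the Roadmap matters.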

    Their conclusions are laid out in dozens of spreadsheets in the Roadmap, giving such arcana of semiconductor production as the wavelength of the light needed in the lithography and the required grain size in the silicon substrates. The spreadsheets are color coded. White cells show technologies in production; yellow ones, known solutions under development; red ones, problems with no known solutions.

    The 2000 edition shows plenty of red cells. Experts can't imagine how to get silicon wafer grain sizes down to the 60 nanometers believed necessary for the chips of 2003. Nor do they agree on the lithography methods that will be needed in 2003 (see p. 785).

    How much Moore?

    Problems loom for efforts to pack more transistors on chips.


    Despite the red flags, industry officials are confident of finding solutions in time to meet at least the near-term technology node targets. “Historically, the industry has always faced these red areas, but with sufficient research they have all been overcome,” says Juri Matisoo, vice president for technology at the Semiconductor Industry Association, a San Jose, California-based industry group.

    So far, manufacturers have crammed more transistors into the same space by simply shrinking, or scaling in industry parlance, all of the features of an integrated circuit in roughly equal proportions. But around 2014—the last year the current Roadmap covers—the technology will hit what one researcher says is truly the ultimate “red brick wall”: A key feature called the gate oxide will become so thin that it will effectively disappear. “It will be the end of scaling as we know it,” Matisoo says.

    The gate oxide is a layer of material that separates the chip's gate electrode—which controls the flow of electrons through the device—from the channel through which that current flows. It might be thought of as a wall that keeps electrons traveling down the proper corridor. As the gate oxide gets thinner, electrons are more prone to break out of the corridor and leak away.

    Currently this gate oxide is made of silicon dioxide. Experts have long debated just how thin it could get before leakage becomes unacceptable. In what may be a brush with the ultimate limit, last December Intel researchers led by S. J. Lin reported making a standard three-terminal transistor —the architecture used in today's chips—with a gate oxide a mere three atoms thick (see sidebar, p. 786). The little transistor not only works, but it precisely follows the pattern of increased speed and lower voltage requirements that has always accompanied scaling. “It's amazing,” says Jason Jenq, head of the front-end processing research program at Semiconductor Research Corp., an industry consortium based in Research Triangle Park, North Carolina, that funds semiconductor research at universities.

    “People have always speculated when transistors are going to stop working,” says Gerald Marcyk, director of component research at an Intel laboratory near Portland, Oregon. “That's part of the reason we did [this transistor]: I was tired of seeing stories on the end of Moore's Law.” But even Marcyk agrees that three atomic layers is the limit for silicon dioxide.

    To improve the oxide, researchers are working with materials such as zirconium and hafnium oxides, which are better insulators than silicon dioxide and thus are better at confining electrons. Unfortunately, so far interaction between the silicon substrate and the zirconium or hafnium oxides has kept them from achieving their full insulating potential. Jenq says researchers are trying various surface treatments to condition the silicon before the oxides are deposited. If the problem can be overcome, scaling could continue until about 2010.

    Looking farther into the future, researchers are trying to modify slightly the current planar gate architecture, in which the gate is a simple layer of material on top of the oxide, which in turn sits on top of the conducting channel. One of the most promising approaches is to use a double gate, in which the conducting channel is a small vertical fin with thin vertical layers of oxide and gate material on both sides. Such structures could allow scaling to continue until 2014 or possibly later.

    As for what comes next, Jenq says, all bets are off. “It could be molecular devices or single electron devices or something with nanotubes,” he says. But it will certainly be something different from what lies at the heart of semiconductors in use today.


    Developmental Progress Fills the Air in Kyoto

    1. Dennis Normile

    KYOTO, JAPAN—Organizers of the 14th International Congress of Developmental Biology arranged for a special fireworks display to accent Kyoto's temples, shrines, and hot springs. But the 1400 biologists who gathered here already had plenty to get excited about, from new insights into limb and segmentation development to a heroic effort to define the regulatory network controlling early sea urchin development.

    The End of the Progress Zone?

    For the past 50 years, the progress zone model of limb development has been one of the most firmly entrenched notions of developmental biology. But now that model has come under fire.

    The progress zone model is based on classic experiments in which scientists produced dramatic changes in limb structure by removing a specialized layer of tissue called the apical ectodermal ridge (AER) from the end of the developing front limb of the chick embryo. Cutting off the AER early produced a limb with a normal upper bone (humerus) but nothing farther out, while removing it progressively later in development allowed more and more of the limb to develop normally. The results were taken to indicate that the AER continually reprograms the cells nearest to it in the so-called progress zone, directing them to become first shoulder, then humerus, elbow, and so on. This model was strengthened by more recent experiments showing that replacing the AER with a bead soaked with fibroblast growth factors (FGFs), proteins involved in many developmental processes, produced normal limbs.

    In Kyoto, however, Cliff Tabin, a developmental biologist at Harvard Medical School in Boston, described results from a postdoc, Andrew Dudley, that upset the progress zone model. “The term progress zone is very misleading,” Tabin says. “There is no special zone, and it is not progressive.”

    Dudley set out to define the size of the progress zone by injecting dye and virus markers into the limb bud at different depths. The model predicts that markers placed in the zone should spread through the limb as cells proliferate. But Dudley never found a trail of markers. Instead, their final locations depended only on the depth at which they were first placed. Tabin says this leads to a very different model of limb bud development. Instead of having a progress zone, he suggests, the fate of the cells that produce the various limb structures is determined quite early; they then proliferate to produce the fully developed limb.

    To confirm that the cells' fates were set early on, Dudley took the tips of limb buds from embryos of various ages and grafted them onto the stubs of limb buds that had been cut off. Under the progress zone model, the fate of the cells that produce the digits is determined last. But Dudley found that regardless of whether the tips came from early or late limb buds, they always produced digits—a strong indication that the fate of the cells that produced them had been determined early instead.

    Tabin also offered a new interpretation of the older results. Building on work suggesting that removing the AER causes cell death to a depth of about 200 micrometers into the limb bud, the team analyzed the size of the limb bud at different stages of development. They concluded that the loss of 200 micrometers' worth of cells under the AER at an early stage would wipe out the entire limb. Later on, the loss of 200 micrometers would knock out just the digits.

    Support for this idea came when Dudley again removed the AER of a number of limb buds at different stages of development. He then replaced it, either immediately or 10 hours later when the cells had died, with a bead containing FGFs, which were presumed to be involved in progress zone development. Only when the bead was implanted immediately did the full limb develop. That result suggests it is not the FGFs but the prevention of cell death achieved by covering the wound that allows normal development to continue. Further experiments tended to support the idea that cell death, rather than the absence of the AER, caused the limb deformations of the early experiments.

    Other researchers had previously questioned the progress zone model. But Miguel Torres, a developmental biologist at Spain's National Center for Biotechnology in Madrid, says it “will be a shock to the community” if it needs to be discarded.

    Segmentation Gets Pieced Together

    Nature likes segmentation, the making of repetitive embryonic units that serve as the building blocks of all insect bodies and those of many higher animals as well. Although the mechanics are different, segmentation can be seen in the shells of crustaceans, the body rings of the earthworm, the stripes on the abdomens of bees, and the vertebrae of vertebrates. But developmental biologists have only just begun to understand what drives segment formation, particularly in higher organisms.

    In previous work, Olivier Pourquié, a developmental biologist at the Université de la Méditerranée in Marseille, France, showed that in vertebrates many of the genes involved in forming somites, the segmental units that develop into the vertebrae and muscles of the torso, repeatedly cycle on and off. He suggested that a “segmentation clock” controls the timing of somite formation. Two presentations at the conference, the basis for papers appearing in the 27 July issue of Cell, shed further light on the interaction of this clock with other factors controlling segmentation.

    This time Pourquié and his team focused on fibroblast growth factor 8 (FGF8). The somites form from head to tail by repeatedly budding off from groups of columnlike cells, called the presomitic mesoderm, that line up along either side of the neural tube, which runs along the posterior surface of the embryo and eventually forms the spinal cord. FGF8 is expressed at the posterior end of these columns, and the concentration of FGF8 surrounding the budding somites forms a gradient, decreasing with distance from the end of the columns toward the forming somites.

    To test FGF8's role in somite formation, the team either blocked its activity by treating chick embryos with a drug that inhibits FGF binding to receptors or increased its concentration by inserting an FGF8-saturated bead into the embryos alongside the presomitic mesoderm. Reducing FGF8's concentration produced bigger somites, while increasing it led to smaller ones. These results suggest, Pourquié says, that FGF8 signaling, above a certain threshold, prevents the presomitic cells from beginning the differentiation that allows them to be incorporated into somites even as the segmentation clock keeps ticking. “The boundaries [between somites] are determined at a given time, and if fewer cells are available you get smaller somites,” he says. In other words, the clock determines when the boundaries form, whereas the FGF8 signaling gradient controls where they form.

    Another piece of the segmentation puzzle has been put in place by Denis Duboule and colleagues at the University of Geneva, Switzerland. Using mice, they closely monitored the timing and level of expression of Hox genes, which are known to be involved in giving each somite its identity, telling it where in the somite chain it is located so it can develop into the proper bone and muscle tissue. The researchers found that the same signaling that starts the formation of a somite triggers the expression of the Hox genes, a finding that suggests that the genes are linked to the segmentation clock.

    Yoshiko Takahashi, a developmental biologist at Nara Institute of Science and Technology in Japan, calls the work “astonishing.” The results from the two groups, she says, “are having a great impact on understanding a very important phenomenon.” They also increase the need to better understand how the clock works and the role of somite formation in segmentation.

    Coming to Grips With Gene Regulation

    Most researchers content themselves with unraveling the functions of one gene or gene family, or maybe with tracing one signaling pathway. Not Eric Davidson, a developmental biologist at the California Institute of Technology in Pasadena. His team is trying to define the entire network of genes and regulatory signals needed to form the endomesoderm, the primordial cell layers that produce all the organs and tissues of the sea urchin.

    Davidson is particularly interested in the cis-regulatory elements, DNA sequences associated with each gene that turn it on or off in response to the various developmental and environmental signals conveyed by the regulatory pathways. He believes that the different shapes and sizes of animals are due primarily to cis-regulatory activity, which controls where within an embryo and when the genes are active. “These regulatory networks are the key to really understanding development and evolution,” Davidson says.

    To identify the cis-regulatory elements, the researchers turned to comparative genomics. They performed a computer analysis of sequence data from two evolutionarily distant species of sea urchin, looking for evolutionarily conserved sequences that are likely to be cis-regulatory elements.

    Another part of this Herculean task is identifying all the genes involved in the development of that endomesoderm. Because the organism is a popular experimental model, bits and pieces of the immense network of regulatory pathways that control its development were already known.

    In the spotlight.

    Work with the embryos of sea urchins is providing insights into gene regulatory networks.


    To fill in the rest, Davidson and his team disrupt the known regulatory pathways in a variety of ways. For example, they inject embryos with messenger RNAs that interfere with the expression of known genes or overexpress regulatory genes. Using microarrays bearing tens of thousands of cloned complementary DNAs, they compare levels of gene expression between control embryos and embryos in which the pathways have been disrupted. In this way, they have identified nearly 100 genes not previously linked to endomesoderm specification. They also tied the expression of the genes to particular stages of development and areas of the embryo. They then knocked out or altered the expression of these genes to check their effect on other genes.

    The end result of all these analyses is a computational model developed in collaboration with Hamid Bolouri of the University of Hertfordshire, U.K. (Continually updated diagrams of the regulatory networks can be viewed online.) The model shows the flow of developmental regulatory information both in time and in the different spatial domains of the embryo. This makes it possible, Davidson says, to see which genes come on and when as the embryo forms first the endomesodermal precursor cells and then the endodermal, mesodermal, and skeletogenic domains. “It's a very complex diagram,” he says, but one with “a tremendous amount of explanatory value.”

    Others aren't so sure. One researcher who doesn't want to be identified says the diagram is too complicated to be useful. But many others think Davidson is on to something. John Coleman, a developmental biologist at Brown University in Providence, Rhode Island, says it demonstrates how the use of microarrays “can lead to new insights rather than just masses of expression data.” Noriyuki Satoh, a developmental biologist at Kyoto University, also likes the fact that Davidson has avoided the simplification that most developmental biologists do out of necessity. “To understand real evolutionary and developmental processes, we need to understand more of the details in gene networks,” he says.


    Fathoming the Chemistry of the Deep Blue Sea

    1. Robert Irion

    In the distinguished career of ocean chemist Peter Brewer, his startling new research on carbon dioxide may make the most waves

    MOSS LANDING, CALIFORNIA—Peter Brewer doesn't usually talk about windmills. His hot topic these days is carbon dioxide—specifically, how to store the greenhouse gas in the deep ocean. That tactic might slow the buildup of CO2 in Earth's atmosphere, but many people are dead set against even testing it. They cite the dangers of perturbing the sea and the risks of harming wildlife. Some of their arguments puzzle Brewer, and that's where the windmills come in.

    “People say, ‘I vote for renewables,’ but look at this,” he says, displaying a recent column in the San Francisco Chronicle. Brewer supports green energy, but the column reminds him that no option is squeaky clean—not even wind power. “Wind turbines at Altamont Pass have killed 1025 birds, including 149 golden eagles,” he says. “We have video of fish swimming up to a blob of liquid CO2 on the sea floor and happily chowing down on worms, but if an eagle flies up to a windmill, it dies. If I'd killed 149 eagles, I'd have the world on my case.”

    The responsible thing, Brewer says, is to do both: Pursue alternatives to fossil fuels and try to squirrel away some of the CO2 that nations will keep churning into the air. After all, he notes, the upper ocean already soaks up millions of tons of CO2 each day, so a grand test of how the sea will respond is under way, like it or not. “We need to be levelheaded about this issue,” he says. “Maybe the deep ocean is a better place for CO2 than the atmosphere or the surface ocean, but we don't know yet. Until we ask the questions in an objective way, we won't get reasonable answers.”

    Brewer, age 60, is speaking in his impeccable office here at the Monterey Bay Aquarium Research Institute (MBARI). His windows overlook a sweeping beach and the ebbs and flows of what he calls “the greatest fluid on Earth.” Pelicans soar past, momentarily obscuring the horizon. Over that horizon, beyond the Monterey Bay National Marine Sanctuary, lies a spot 3.6 kilometers deep where Brewer and his colleagues are conducting experiments unlike any others in the world.

    At the crushing pressures on the ocean floor, CO2 combines with seawater to form odd, semiliquid compounds called hydrates that should slowly dissolve into the abyss, segregating the gas from the atmospheric part of the global carbon cycle. On three cruises during June and July, the team set up the first controlled tests on the unpredictable nature of CO2 hydrates. The sea-floor lab, erected by a remotely operated vehicle (ROV), included time-lapse cameras, pH probes, and several pens to observe the reactions of animals to the strange brew. The results, now being analyzed, should help shape the growing debate on whether society can handle its CO2 problem by shoving some of it out of sight.

    “The sense is that [ocean sequestration] is technologically feasible and that it's effective for hundreds of years if it's deep enough,” says climate modeler Ken Caldeira of Lawrence Livermore National Laboratory in Livermore, California. “The big question, and the potential showstopper, is biological impact. Those kinds of experiments are really important, and without Peter they wouldn't be done.”

    Liverpool to Woods Hole

    Probing the sea hardly crossed Brewer's mind when he was growing up in Ulverston, a market town in northwestern England. The young Brewer loved walking and exploring nature, but moving vehicles were another story. “I couldn't go 10 miles [16 km] in a car without throwing up,” he recalls. “I'm amazed I ended up on a ship.” But he did, thanks to ocean chemist John Riley of the University of Liverpool. In his senior year as an unhappy chemistry major, Brewer talked to Riley about ocean sciences. “All of a sudden, life seemed to get better,” he says.

    His expeditions as a doctoral student at Liverpool were whoppers: two 9-month voyages on the Indian Ocean in 1963–64 as part of an international team. “I was graduate student slave labor, and it was immensely tedious. We did thousands of measurements by hand,” he says. The final data, taken in the Red Sea, exposed a deep pool of hot and salty brines. Brewer's first paper, published with co-workers in Nature, described what others later identified as the world's first hydrothermal vents.

    The tedium Brewer experienced on those first cruises led to his lifelong focus on building automated instruments to study the ocean and to nail data beyond doubt. “He has pushed the community to be meticulously accurate in its measurements,” says oceanographer Mary Scranton of the State University of New York, Stony Brook, one of Brewer's students. “He forced people to worry about the analytical details and to make sure their answers were right to the fifth decimal place.”

    Brewer's research on the Red Sea brines caught the attention of John Hunt, chemistry chair at the Woods Hole Oceanographic Institution (WHOI) in Massachusetts. Hunt recruited Brewer, who crossed the pond in 1967 with his wife, Hilary. “Our plan was to go for 2 years,” Brewer says. Instead, it became two dozen years as his career took off.

    The highlight of Brewer's first decade at WHOI was his work on the Geochemical Ocean Sections Program. On several cruises in the 1970s, researchers derived pictures of 3D circulation patterns in the ocean basins. Their tracers were carbon-14, tritium, radium, and other radioactive isotopes. Brewer became an expert on the chemistry of suspended particles and the ocean's carbon cycle. However, researchers were at an impasse about how to sort the marine CO2 signal into its various sources and fates.

    During a chat with new WHOI director John Steele in 1978, Brewer volunteered to try to identify the imprint of CO2 in the ocean from the burning of fossil fuels. He devised equations that separated out the biological influences on oceanic CO2, leaving behind a clean, inorganic signal from fossil fuels. His paper caused an uproar. “The calculation was so simple that it raised suspicion,” he recalls. But his method has survived the test of time with few modifications.

    The carbon work led to Brewer's primary legacy: the Joint Global Ocean Flux Study (JGOFS). Brewer drove the early stages of this sweeping program, which gave researchers their first global view of carbon flux to the deep ocean and other cycles that link climate to the biogeochemistry of the sea. “JGOFS needed a strong person to move it into the international arena,” says Steele. “Peter took that step when it wasn't at all obvious what it would mean for the rest of his career. But it became one of the dominant elements of marine science in the last decade.”

    A billionaire calls

    Unbeknownst to Brewer, another marine force was emerging at the same time. It was MBARI, conceived by the sheer will of engineer David Packard.

    Packard, co-founder of Hewlett-Packard Corp., established MBARI in 1987. “He wanted to make an experimental attack on the deep ocean with a world-class lab, the best ship, and the best ROV,” Brewer says. It took time to overcome growing pains, including a culture clash between scientists and engineers and turnover among directors. When the MBARI board coaxed a résumé from Brewer in 1990, Packard called him 2 days later. They clicked, and Brewer crossed the continent in 1991 as the new director.

    Brewer steered the creation of MBARI's three key assets by 1996: its waterfront lab and office building, the twin-hulled Western Flyer research vessel, and the ROV Tiburon, which can dive to a depth of 4 kilometers. All told, those big-ticket items cost $52 million, funded by the David and Lucile Packard Foundation. Brewer also broadened the institute's reach beyond Monterey Bay and its dramatic canyon.

    Packard's death in 1996 was one factor in Brewer's decision to step down as director. “The death of a billionaire is a traumatic experience,” he says. “There was internal strife. After five-and-a-half years, I'd shot off most of my silver bullets.” Brewer's management style also caused friction, some oceanographers say. “Peter can be fairly opinionated, and as a result he is not beloved by everyone in the community,” one comments. Still, that's not what matters to most researchers. “He has not gone out of his way to be best friends with colleagues, but few would argue with his intellectual integrity and his desire to come up with the right answer,” says Scranton.

    Indeed, Brewer's standing is evident in “Ocean Sciences at the Millennium,” a report issued in May by the National Science Foundation. NSF convened a “decadal committee” of marine scientists and asked Brewer to co-chair it with Ted Moore of the University of Michigan, Ann Arbor. The report, which resounds with Brewer's forthrightness, calls upon NSF and the community to move toward technologically advanced and agile observing platforms to capture the variability of the ocean environment.

    Brewer echoes NASA Administrator Dan Goldin when he discusses the report. “We need to have smaller, faster, smarter observing systems that we can deploy to capture complex events,” he says. “The old strategy of doing it by brute force has got to yield to more sophisticated ways of measurement and scientific observation. I feel very strongly about that.”

    Gases under pressure

    Brewer's recent work on greenhouse gases in the deep sea is anything but brute. At first he was fascinated by the properties of gas hydrates—high-pressure, low-temperature phases that behave as liquids with icy skins. However, it became clear that the research meant far more than playing with eerie liquid forms of methane or CO2.

    Nothing's fishy.

    This Pacific grenadier seemed oblivious to blobs of liquid CO2 on the sea floor near Monterey Bay last month. Researchers have set up tests to look for more subtle effects on deep-sea life.


    Grim forecasts of accelerating fossil fuel use had led some scientists to ponder the viability of capturing CO2 in power plant exhaust and pumping it to the sea floor. Italian chemical engineer Cesare Marchetti first raised the notion in 1977, and a few teams worked on models or lab studies in high-pressure vessels, but no one had data from the ocean. Then, a report from the U.S. President's Council of Advisors on Science and Technology in 1997 mentioned sequestration via CO2 hydrates. That caught the eye of Brewer and his colleagues, including geologist Franklin Orr of Stanford University.

    The weird chemistry of the deep ocean was made plain by Brewer's team 2 years ago (Science, 7 May 1999, p. 943). That paper—and an accompanying video—showed the unearthly sight of CO2 hydrates bubbling over the top of a beaker at a depth of 3650 meters and rolling along the sea floor like tumbleweeds. The CO2 sucked in seawater more voraciously than anyone had thought, and the icy hydrate skin appeared surprisingly impervious. In one striking sequence, a deep-dwelling fish swam up to the transparent blob.

    “That little video clip, which he has shown around the world, has had a tremendous impact,” says Steele. Adds MBARI oceanographer Ed Peltzer: “If we didn't have this on videotape, I don't think we could convince anybody that we saw it. The images were startling and totally unexpected.”

    For the latest missions, MBARI engineers developed a new system for the ROV that injects not just a few liters of liquid CO2, but about 50 liters. That's about as much CO2 as the United States churns into the air for each citizen, every day. The scientists have just started analyzing the latest oddities, but they can't hide their excitement. One image shows a “frost heave” rising within a corral filled with liquid CO2—the first evidence that hydrates can penetrate into sediments. Another shows a pH electrode plunging deep into a hydrate blob without breaking the skin, like a finger pushing into a water-filled balloon.

    No one can hazard a guess about what all this means for deep-sea sequestration of CO2. The biological impacts, Caldeira's “potential showstopper,” are still unknown. Nearby fishes have shown no ill effects to date, save for one that swam into a CO2 plume and fell asleep. “From what we've observed so far, it looks pretty good,” Brewer says.

    However, MBARI biological oceanographer Jim Barry is concerned about potential sublethal effects, such as slower growth rates or inability to reproduce. “There are reasons to suspect that deep-water organisms may be more sensitive to pH changes or CO2 changes in comparison to shallow-water organisms,” he says. To try to discern such effects, Barry placed sea cucumbers and sea urchins into corrals containing CO2 for about 3 weeks, while others went into control corrals. He'll use genetic analysis to search for impacts.

    Beyond biology, policy-makers will weigh many sequestration pros and cons. Not least among them are economics, technological capability, and public acceptance. Brewer hopes his team's data will help keep those discussions on track.

    “I suspect most CO2 probably will be disposed of underground, because it's been done and people are comfortable with that idea,” Brewer says. “But we shouldn't exclude the ocean from our thinking. We're already putting 25 million tons of CO2 into the surface ocean every day through the atmospheric loop. Why is the deep ocean any different? It's a much larger place, and it's far more benign.”

    Brewer draws guidance from the words of a Japanese student, who spoke at a Kyoto workshop on the ethics of CO2 disposal. “‘Proceed with caution, and have the courage to stop if necessary,’” he says. “I like that.”


    Randomly Distributed Slices of π

    1. Charles Seife

    Mathematicians slowly circle in on a proof that π's unpredictable digits really are as random as they seem

    The digits of π dance about so unpredictably that scientists and statisticians have long used them as a handy stand-in for randomly generated numbers in applications from designing clinical trials to performing numerical simulations. But surprisingly, mathematicians have been completely at sea when they try to prove that the digits of π (or of any other important irrational number, for that matter) are indeed randomly distributed. When a number's digits are randomly distributed, knowing every digit so far gives you no information about what the next one will be. Now two mathematicians have taken a large step toward proving π's randomness, perhaps opening the door to a solution of a centuries-old conundrum.

    The problem has been around for some 900 years, says Richard Crandall, a computational mathematician at Reed College in Portland, Oregon. But mathematicians have precious little to show for their centuries of work, according to David Bailey, a mathematician at Lawrence Berkeley National Laboratory in California. “It is basically a blank,” he says. “It's embarrassing.”

    The degree of embarrassment is hard to overstate. The overwhelming majority of numbers have digits that behave as if they were truly random when expressed in a given base—a property called normality. In a normal number, any string of digits appears, in the long run, exactly as often as you'd expect it to: in base 10, the digit 1 appears a tenth of the time, for example, and the string 111 appears 1/1000 of the time. But even though almost all numbers are normal, mathematicians have failed to prove the normality of even a single number other than a handful of oddballs carefully constructed for the purpose. “As far as the naturally occurring constants of math are concerned, like π, the square root of 2, log 2, and [natural] log 10, there are basically no results,” says Bailey.
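    The frequency property is easy to check empirically, even if it is stubbornly hard to prove in the limit. A minimal sketch (the digit string below is simply the first 50 decimal places of π, hard-coded for illustration):

```python
from collections import Counter

# First 50 decimal places of pi, as a digit string.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

# Tally how often each digit 0-9 occurs and compare with the
# ideal frequency of 1/10 that normality predicts in the limit.
counts = Counter(PI_DIGITS)
for d in "0123456789":
    print(d, counts[d], counts[d] / len(PI_DIGITS))
```

    Over such a short stretch the observed frequencies only roughly match 1/10 per digit; normality is a claim about the frequencies as the number of digits grows without bound, and that is precisely what no one has proved for π.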

    Now Bailey and Crandall have breathed new life into the randomness problem by building on a discovery that flabbergasted the math world 5 years ago. In 1996, Bailey, along with two mathematicians at Simon Fraser University in Vancouver, Canada, Peter Borwein and Simon Plouffe, came up with an algorithm for calculating any hexadecimal digit of π without having to calculate all the digits that precede it—unlike every other known π recipe. If you want to know, say, the 289th hex digit of π, plug 289 into the formula, dubbed BBP. Out will pop a number between 0 and 1. This number, written out in base 16, reveals the digit you're after. “It was a pleasant surprise,” says Jonathan Borwein (Peter's brother, also a mathematician at Simon Fraser).
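    The trick behind BBP can be sketched in a few lines of Python. The formula is π = Σ_{k≥0} 16⁻ᵏ [4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6)]; multiplying by 16ⁿ and keeping only the fractional part isolates the hex digits starting at position n+1, and modular exponentiation keeps the intermediate numbers small. This is a rough illustrative sketch, not the authors' own code, and floating-point round-off limits it to modest values of n:

```python
def pi_hex_digit(n):
    """Return the (n+1)-th hexadecimal digit of pi's fractional part
    via the BBP digit-extraction trick: compute frac(16**n * pi)."""
    def series(j):
        # Fractional part of sum over k of 16**(n-k) / (8k + j).
        s = 0.0
        # Head (k <= n): 16**(n-k) is an integer, so reduce it mod (8k+j)
        # with three-argument pow to keep the arithmetic small.
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        # Tail (k > n): terms shrink geometrically; stop when negligible.
        k = n + 1
        while True:
            t = 16.0 ** (n - k) / (8 * k + j)
            if t < 1e-17:
                break
            s = (s + t) % 1.0
            k += 1
        return s

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(frac * 16)
```

    In hex, π = 3.243F6A…, so `pi_hex_digit(0)` yields 2, `pi_hex_digit(1)` yields 4, and so on; no earlier digit is ever computed along the way, which is exactly what made the formula such a shock.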

    The formula looked as if it might help mathematicians solve the centuries-old conundrum of the randomness of π's digits. “If you can stick your hand down into the digits that way, then it's strong evidence that the numbers are independent,” adds Jonathan Borwein. This thought struck Bailey when he came up with the BBP formula. “My immediate reaction was, ‘Oh my God, this might allow us to work on the normality of π,’” he says. “I was consumed with this.”

    Bailey and Crandall have now hypothesized that formulas such as BBP (except for particularly boring ones) spit out values that skitter chaotically between 0 and 1 as successive digit positions are plugged in. If true, this chaotic motion ensures that the output of the BBP formula would be essentially random for any given digit position. That, in turn, would mean that π's digits are random as well. As the two mathematicians report in the summer 2001 issue of Experimental Mathematics, if the hypothesis is true, it would prove not only π's randomness but also that of other constants that have BBP-type formulas, such as the natural log of 2.

    Although their hypothesis is as yet unproven, it has restated the ancient problem in a new language. Instead of attacking the problem with the mathematical tools of older disciplines such as number theory or measure theory, Bailey and Crandall's hypothesis turns the normality of π into a problem of chaotic dynamics—the sort of discipline that attracts applied mathematicians, computer scientists, and even cryptographers. Jonathan Borwein hopes that this insight will finally allow mathematicians to prove that π's digits are random. “Whenever you recast an old problem in a new language, there's hope that the new language will provide a new impetus,” he says. “It can open up better avenues for looking at these things.”

    But even Crandall himself expects a mere “10% chance of a partial solution” to the hypothesis in the next decade. For mathematicians, apparently, π is not a piece of cake.