News this Week

Science  13 Aug 2004:
Vol. 305, Issue 5686, pp. 926



    NSF Takes the Plunge on a Bigger, Faster Research Sub

    1. David Malakoff

    Deciding who will go down in history as Alvin's last crew may be the biggest issue still on the table now that the U.S. government has decided to retire its famous research submarine and build a faster, roomier, and deeper-diving substitute. Last week, the National Science Foundation (NSF) put an end to a decade of debate about the sub's future by announcing that it will shelve the 40-year-old Alvin in late 2007 and replace it with a $21.6 million craft packed with features long coveted by deep-sea scientists.

    “It's a bittersweet moment. Alvin is a beloved symbol of ocean exploration,” says Robert Gagosian, president of the Woods Hole Oceanographic Institution (WHOI) in Massachusetts, which operates Alvin and will run the new craft. “But there's a lot of excitement about the new things we'll be able to do.”

    Going Down.

    New submersible will be able to dive 6500 meters.


    The 6 August decision ended an often feisty debate over how to replace Alvin, which entered service in 1967 and is one of five research subs in the world that can dive below 4000 meters (Science, 19 July 2002, p. 326). Its storied, nearly 4000-dive career has witnessed many high-profile moments, including the discovery of sulfur-eating sea-floor ecosystems and visits to the Titanic. Some researchers argued for replacing the aging Alvin with cheaper, increasingly capable robotic vehicles. Others wanted a human-piloted craft able to reach the 11,000-meter bottom of the deepest ocean trench—far deeper than Alvin's 4500-meter rating, which enables it to reach just 63% of the sea floor. Last year, after examining the issues, a National Research Council panel endorsed building a next-generation Alvin, but put a higher priority on constructing a $5 million robot that could dive to 7000 meters (Science, 14 November 2003, p. 1135).

    Coming out.

    Alvin's last dive is scheduled for late 2007.


    That vehicle has yet to appear, although NSF officials say an automated sub currently under construction at WHOI partly fills the bill. And NSF and WHOI have chosen what the panel judged the riskiest approach to building a new Alvin: starting from scratch with a new titanium hull able to reach 6500 meters or 99% of the sea floor. The panel had suggested using leftover Russian or U.S. hulls rated to at least 4500 meters, partly because few shipyards know how to work with titanium. WHOI engineers, however, are confident that hurdle can be overcome.

    Overall, the new submarine will be about the same size and shape as the current Alvin, so that it can operate from the existing mother ship, the Atlantis. But there will be major improvements.

    One change is nearly 1 cubic meter more elbowroom inside the sphere that holds the pilot and two passengers. It will also offer five portholes instead of the current three, and the scientists' views will overlap with the pilot's, eliminating a long-standing complaint. A sleeker design means researchers will sink to the bottom faster and be able to stay longer. Alvin currently lingers about 5 hours at 2500 meters; the new craft will last up to 7 hours. A new buoyancy system will allow the sub to hover in midwater, allowing researchers to study jellyfish and other creatures that spend most of their lives suspended. And an ability to carry more weight means researchers will be able to bring more instruments—and haul more samples from the depths.

    At the same time, improved electronics will allow colleagues left behind to participate in real time. As the new vehicle sinks, it will spool out a 12-kilometer-long fiber-optic cable to relay data and images. “It will put scientists, children in classrooms, and the public right in the sphere,” says NSF's Emma Dieter.

    Officials predict a smooth transition between the two craft. The biggest effect could be stiffer competition for time on board, because the new submersible will be able to reach areas—such as deep-sea trenches with interesting geology—once out of reach.

    In the meantime, Alvin's owner, the U.S. Navy (NSF will own the new craft), must decide its fate. NSF and WHOI officials will also choose a name for the new vessel, although its current moniker, taken from a 1960s cartoon chipmunk, appears to have considerable support.


    NIH Declines to March In on Pricing AIDS Drug

    1. David Malakoff

    The National Institutes of Health (NIH) has rejected a controversial plea to use its legal muscle to rein in the spiraling cost of a widely used AIDS drug. NIH Director Elias Zerhouni last week said his agency would not “march in” and reclaim patents on a drug it helped develop because pricing issues are best “left to Congress.”

    The decision disappointed AIDS activists, who said it opened the door to price gouging by companies. But major research universities were quietly pleased. “This was the only decision NIH could make [based] on the law,” says Andrew Neighbour, an associate vice chancellor at the University of California, Los Angeles.

    The 4 August announcement was NIH's answer to a request filed in January by Essential Inventions, a Washington, D.C.-based advocacy group (Science, 4 June, p. 1427). It asked NIH to invoke the 1980 Bayh-Dole Act, which allows the government to reclaim patents on taxpayer-funded inventions if companies aren't making the resulting products available to the public. Specifically, the group asked NIH to march in on four patents held by Abbott Laboratories of Chicago, Illinois. All cover the anti-AIDS drug Norvir, which Abbott developed in the early 1990s with support from a 5-year, $3.5 million NIH grant.

    Last year, Abbott increased U.S. retail prices for some Norvir formulations by up to 400%, prompting the call for NIH to intervene and allow other manufacturers to make the drug. University groups and retired government officials who wrote the law, however, argued that such a move would be a misreading of Bayh-Dole and would undermine efforts to commercialize government-funded inventions.

    In a 29 July memo, Zerhouni concluded that Abbott has made Norvir widely available to the public and “that the extraordinary remedy of march-in is not an appropriate means of controlling prices.” The price-gouging charge, he added, should be investigated by the Federal Trade Commission (which is looking into the matter). Essential Inventions, meanwhile, says it will appeal to NIH's overseer, Health and Human Services Secretary Tommy Thompson. Observers doubt Thompson will intervene.


    NASA Climate Satellite Wins Reprieve

    1. Andrew Lawler

    Facing pressure from Congress and the White House, NASA agreed last week to rethink plans to retire a climate satellite that weather forecasters have found useful for monitoring tropical storms. The space agency said it would extend the life of the $600 million Tropical Rainfall Measuring Mission (TRMM) until the end of the year and ask the National Research Council (NRC) for advice on its future.

    Eye opener.

    TRMM monitored the season's first hurricane, Alex, as it approached the North Carolina coast last week.


    TRMM, launched on a Japanese rocket in 1997, measures rainfall and latent heating in tropical oceans and land areas that traditionally have been undersampled. Although designed for climate researchers, TRMM has also been used by meteorologists eager to improve their predictions of severe storms. “TRMM has proven helpful in complementing other satellite data,” says David Johnson, director of the National Oceanic and Atmospheric Administration's (NOAA's) weather service, which relies on a fleet of NOAA spacecraft.

    Climate and weather scientists protested last month's announcement by NASA that it intended to shut off TRMM on 1 August. NASA officials pleaded poverty and noted that the mission had run 4 years longer than planned. The agency said it needed to put the satellite immediately into a slow drift out of orbit before a controlled descent next spring, a maneuver that would avoid a potential crash in populated areas.

    The satellite's users attracted the attention of several legislators, who complained that shutting down such a spacecraft at the start of the Atlantic hurricane season would put their constituents in danger. “Your Administration should be able to find a few tens of millions of dollars over the next 4 years to preserve a key means of improving coastal and maritime safety,” chided Representative Nick Lampson (D-TX) in a 23 July letter to the White House. “A viable funding arrangement can certainly be developed between NASA and the other agencies that use TRMM's data if you desire it to happen.” In an election year, that argument won the ear of the Bush Administration, in particular, NOAA Chief Conrad C. Lautenbacher Jr., who urged NASA Administrator Sean O'Keefe to rethink his decision.

    On 6 August, O'Keefe said he would keep TRMM going through December. He joined with Lautenbacher in asking NRC, the operating arm of the National Academies, to hold a September workshop to determine if and how TRMM's operations should be continued. Whereas NOAA is responsible for weather forecasting, NASA conducts research and would prefer to divest itself of TRMM. “We'd be happy to give it to NOAA or a university,” says one agency official. Keeping the satellite going through December will cost an additional $4 million to $5 million—“and no one has decided who is going to pay,” the official added. By extending TRMM's life, NASA hopes “to aid NOAA in capturing another full season of storm data,” says Ghassem Asrar, deputy associate administrator of NASA's new science directorate.

    Technically, satellite operators could keep TRMM operating another 18 months, but this would come with a hidden cost. NASA would have to monitor the craft for a further 3 years before putting it on a trajectory to burn up. That option would cost about $36 million. Now that TRMM has so many highly placed friends, its supporters hope that one of them will also have deep pockets.


    Proposed Leukemia Stem Cell Encounters a Blast of Scrutiny

    1. Jennifer Couzin

    A prominent California stem cell lab says it has hit on a cadre of cells that helps explain how a form of leukemia transitions from relative indolence to life-threatening aggression. In an even more provocative claim, Irving Weissman of Stanford University and his colleagues propose in this week's New England Journal of Medicine that these cells, granulocyte-macrophage progenitors, metamorphose into stem cells as the cancer progresses. Some cancer experts doubt the solidity of the second claim, however.


    Immature blood cells proliferate wildly as a CML blast crisis takes hold.


    The concept that stem cells launch and sustain a cancer has gained credence as scientists tied such cells to several blood cancers and, more recently, to breast cancer and other solid tumors (Science, 5 September 2003, p. 1308). Weissman's group explored a facet of this hypothesis, asking: Can nonstem cells acquire such privileged status in a cancer environment? The investigators focused on chronic myelogenous leukemia (CML), which the drug Gleevec has earned fame for treating.

    The researchers gathered bone marrow samples from 59 CML patients at different stages of the disease. A hallmark of CML is its eventual shift, in patients who don't respond to early treatment, from a chronic phase to the blast crisis, in which patients suffer a massive proliferation of immature blood cells. Weissman, his colleague Catriona Jamieson, and their team noticed that among blood cells, the proportion of granulocyte-macrophage progenitors, which normally differentiate into several types of white blood cells, rose from 5% in chronic-phase patients to 40% in blast-crisis patients.

    When grown in the lab, these cells appeared to self-renew—meaning that one granulocyte-macrophage progenitor spawned other functionally identical progenitor cells rather than simply giving rise to more mature daughter cells. This self-renewal, a defining feature of a stem cell, seemed dependent on the β-catenin pathway, which was previously implicated in a number of cancers, including a form of acute leukemia. Weissman and his co-authors postulate that the pathway could be a new target for CML drugs aiming to stave off or control blast crisis.

    Forcing expression of β-catenin protein in granulocyte-macrophage progenitors from healthy volunteers enabled the cells to self-renew in lab dishes, the researchers report. Whereas the first stage of CML is driven by a mutant gene called bcr-abl, whose protein Gleevec targets, Weissman theorizes that a β-catenin surge in granulocyte-macrophage progenitors leads to the wild cell proliferation that occurs during the dangerous blast phase.

    Some critics, however, say that proof can't come from the petri dish. “To ultimately define a stem cell” one needs to conduct tests in animals, says John Dick, the University of Toronto biologist who first proved the existence of a cancer stem cell in the 1990s. Studies of acute myelogenous leukemia uncovered numerous progenitor cells that seemed to self-renew, notes Dick. But when the cells were given to mice, many turned out not to be stem cells after all.

    Michael Clarke of the University of Michigan, Ann Arbor, who first isolated stem cells in breast cancer, is more impressed with Weissman's results. The cells in question “clearly self-renew,” he says. “The implications of this are just incredible.” The suggestion that nonstem cells can acquire stemness could apply to other cancers and shed light on how they grow, he explains.

    All agree that the next step is injecting mice with granulocyte-macrophage progenitors from CML patients to see whether the cells create a blast crisis. Weissman's lab is conducting those studies, and results so far look “pretty good,” he says.

    “What we really need to know is what cells persist in those patients” who progress to blast crisis, concludes Brian Druker, a leukemia specialist at Oregon Health & Science University in Portland. That question still tops the CML agenda, although Weissman suspects that his team has found the culprits.


    Bone Study Shows T. rex Bulked Up With Massive Growth Spurt

    1. Erik Stokstad

    Tyrannosaurus rex was a creature of superlatives. As big as a bull elephant, T. rex weighed 15 times as much as the largest carnivores living on land today. Now, paleontologists have for the first time charted the colossal growth spurt that carried T. rex beyond its tyrannosaurid relatives. “It would have been the ultimate teenager in terms of food intake,” says Thomas Holtz of the University of Maryland, College Park.

    Growth rates have been studied in only a half-dozen dinosaurs, none of them large carnivores. That's because the usual method of telling ages—counting annual growth rings in the leg bone—is a tricky task with tyrannosaurids. “I was told when I started in this field that it was impossible to age T. rex,” recalls Gregory Erickson, a paleobiologist at Florida State University in Tallahassee, who led the study. The reason is that the weight-bearing bones of large dinosaurs become hollow with age and the internal tissue tends to get remodeled, thus erasing growth lines.


    Growth rings (right) in a rib show that Sue grew fast during its teenage years.


    But leg bones aren't the only place to check age. While studying a tyrannosaurid called Daspletosaurus at the Field Museum of Natural History (FMNH) in Chicago, Illinois, Erickson noticed growth rings on the end of a broken rib. Looking around, he found similar rings on hundreds of other bone fragments in the museum drawers, including the fibula, gastralia, and the pubis. These bones don't bear substantial loads, so they hadn't been remodeled or hollowed out.

    Switching to modern alligators, crocodiles, and lizards, Erickson found that the growth rings accurately recorded the animals' ages. He and his colleagues then sampled more than 60 bones from 20 specimens of four closely related tyrannosaurids. Counting the growth rings with a microscope, the team found that the tyrannosaurids had died at ages ranging from 2 years to 28.

    By plotting the age of each animal against its mass—conservatively estimated from the circumference of its femur—they constructed growth curves for each species. Gorgosaurus and Albertosaurus, both more primitive tyrannosaurids, began to put on weight more rapidly at about age 12. For 4 years or so, they added 310 to 480 grams per day. By about age 15, they were full-grown at about 1100 kilograms. The more advanced Daspletosaurus followed the same trend but grew faster and maxed out at roughly 1800 kilograms.

    T. rex, in comparison, was almost off the chart. As the team describes this week in Nature, it underwent a gigantic growth spurt starting at age 14 and packed on 2 kilograms a day. By age 18.5 years, the heaviest of the lot, FMNH's famous T. rex named Sue, weighed more than 5600 kilograms. Jack Horner of the Museum of the Rockies in Bozeman, Montana, and Kevin Padian of the University of California, Berkeley, have found the same growth pattern in other specimens of T. rex. Their paper is in press at the Proceedings of the Royal Society of London, Series B.
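    The reported rates can be sanity-checked with simple arithmetic. The short Python sketch below multiplies the article's daily growth rates by the length of each growth spurt; the ages and rates are the approximate figures quoted above, so the totals are rough.

```python
# Back-of-the-envelope check of the reported tyrannosaurid growth figures.
# All numbers come from the article; ages and rates are approximate.

DAYS_PER_YEAR = 365.25

def mass_gained(rate_kg_per_day, start_age, end_age):
    """Total mass added at a constant daily rate over an age span in years."""
    return rate_kg_per_day * (end_age - start_age) * DAYS_PER_YEAR

# Gorgosaurus/Albertosaurus: 310-480 g/day for ~4 years starting around age 12
small = mass_gained(0.4, 12, 16)   # midpoint rate, ~0.4 kg/day
# T. rex: ~2 kg/day from age 14 until roughly age 18.5
trex = mass_gained(2.0, 14, 18.5)

print(f"Smaller tyrannosaurids added ~{small:.0f} kg during the spurt")
print(f"T. rex added ~{trex:.0f} kg during its spurt")
```

    The totals square with the adult masses in the text: a few hundred kilograms of spurt growth for the roughly 1100-kilogram Gorgosaurus and Albertosaurus, versus more than 3000 kilograms for a T. rex the size of Sue.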

    It makes sense that T. rex would grow this way, experts say. Several lines of evidence suggest that dinosaurs had a higher metabolism and faster growth rates than living reptiles do (although not as fast as birds'). Previous work by Erickson showed that young dinosaurs stepped up the pace of growth, then tapered off into adulthood; reptiles, in contrast, grow more slowly, but they keep at it for longer. “Tyrannosaurus rex lived fast and died young,” Erickson says. “It's the James Dean of dinosaurs.”

    Being able to age the animals will help shed light on the population structure of tyrannosaurids. For instance, the researchers determined the ages of more than half a dozen Albertosaurs that apparently died together. They ranged in age from 2 to 20 in what might have been a pack. “You've got really young living with the really old,” Erickson says. “These things probably weren't loners.”

    The technique could also help researchers interpret the medical history of individuals. Sue, in particular, is riddled with pathologies, and the growth rings might reveal at what age various kinds of injuries occurred. “We could see if they had a really rotten childhood or lousy old age,” Holtz says. And because a variety of scrap bones can be analyzed for growth rings, more individuals can be examined. “Not many museums will let you cut a slice out of the femur of a mounted specimen,” notes co-author Peter Makovicky of FMNH. “A great deal of the story about Sue was still locked in the drawers,” Erickson adds.


    Los Alamos's Woes Spread to Pluto Mission

    1. Andrew Lawler

    The impact of the shutdown of Los Alamos National Laboratory in New Mexico could ripple out to the distant corners of the solar system. The lab's closure last month due to security concerns (Science, 23 July, p. 462) has jeopardized a NASA mission to Pluto and the Kuiper belt. “I am worried,” says S. Alan Stern, a planetary scientist with the Southwest Research Institute in Boulder, Colorado, who is the principal investigator.

    That spacecraft, slated for a 2006 launch, is the first in a series of outer planetary flights. In those far reaches of space, solar power is not an option. Instead, the mission will be powered by plutonium-238, obtained from Russia and converted by Los Alamos scientists into pellets. But the 16 July “stand down” at the lab has shut down that effort, which already was on a tight schedule due to the lengthy review required for any spacecraft containing nuclear material.

    The 2006 launch date was chosen to make use of a gravity assist from Jupiter to rocket the probe to Pluto by 2015. A 1-year delay could cost an additional 3 to 4 years in transit time. “It won't affect the science we will be able to do in a serious way, but it will delay it and introduce risks,” says Stern. Some researchers fear that Pluto's thin atmosphere could freeze and collapse later in the next decade, although the likelihood and timing of that possibility are in dispute.

    Los Alamos officials are upbeat. “Lab activity is coming back on line,” says spokesperson Nancy Ambrosiano. Even so, last week lab director George “Pete” Nanos suspended four more employees in connection with the loss of several computer disks containing classified information, and Nanos says that it could take as long as 2 months before everyone is back at work. NASA officials declined comment, but Stern says “many people are working to find remedies.”


    Do Black Hole Jets Do the Twist?

    1. Robert Irion

    Among the dark secrets that nestle in galactic cores, one of the most vexing is how the gargantuan energy fountains called radio-loud quasars propel tight beams of particles and energy across hundreds of thousands of light-years. Astrophysicists agree that the power comes from supermassive black holes, but they differ sharply about how the machinery works. According to a new model, the answer might follow a familiar maxim: One good turn deserves another.

    Winding up.

    Coiled magnetic fields launch jets from massive black holes, a model claims.


    On page 978, three astrophysicists propose that a whirling black hole at the center of a galaxy can whip magnetic fields into a coiled frenzy and expel them along two narrow jets. The team's simulations paint dramatic pictures of energy spiraling sharply into space. “It has a novelty to it—it's very educational and illustrative,” says astrophysicist Maurice van Putten of the Massachusetts Institute of Technology in Cambridge. But the model's simplified astrophysical assumptions allow other explanations, he says.

    The paper, by physicist Vladimir Semenov of St. Petersburg State University, Russia, and Russian and American colleagues, is the latest word in an impassioned debate about where quasars get their spark. Some astrophysicists think the energy comes from a small volume of space around the black holes themselves, which are thought to spin like flywheels weighing a billion suns or more. Others suspect the jets blast off from blazingly hot “accretion disks” of gas that swirl toward the holes. Astronomical observations aren't detailed enough to settle the argument, and computer models require a complex mixture of general relativity, plasma physics, and magnetic fields. “We're still a few years away from realistic time-dependent simulations,” says astrophysicist Ken-Ichi Nishikawa of the National Space Science and Technology Center in Huntsville, Alabama.

    Semenov and his colleagues depict the churning matter near a black hole as individual strands of charged gas, laced by strong magnetic lines of force. Einstein's equations of relativity dictate the outcome, says co-author Brian Punsly of Boeing Space and Intelligence Systems in Torrance, California. The strands get sucked into the steep vortex of spacetime and tugged around the equator just outside the rapidly spinning hole, a relativistic effect called frame dragging. Tension within the magnetized ribbons keeps them intact. Repeated windings at close to the speed of light torque the stresses so high that the magnetic fields spring outward in opposite directions along the poles, expelling matter as they go.

    The violent spin needed to drive such outbursts arises as a black hole consumes gas at the center of an active galaxy, winding up like a merry-go-round getting constant shoves, Punsly says. In that environment, he notes, “Frame dragging dominates everything.”

    Van Putten agrees, although his research suggests that parts of the black hole close to the axis of rotation also play a key role in forming jets by means of frame dragging.

    Still, the basic picture—a fierce corkscrew of magnetized plasma unleashed by a frantically spinning black hole—is valuable for quasar researchers, says astrophysicist Ramesh Narayan of the Harvard-Smithsonian Center for Astrophysics in Cambridge. “This gives me a physical sense for how the black hole might dominate over the [accretion] disk in terms of jet production,” he says.


    Three Degrees of Consensus

    1. Richard A. Kerr

    Climate researchers are finally homing in on just how bad greenhouse warming could get—and it seems increasingly unlikely that we will escape with a mild warming

    PARIS—Decades of climate studies have made some progress. Researchers have convinced themselves that the world has indeed warmed by 0.6°C during the past century. And they have concluded that human activities—mostly burning fossil fuels to produce the greenhouse gas carbon dioxide (CO2)—have caused most of that warming. But how warm could it get? How bad is the greenhouse threat anyway?

    For 25 years, official assessments of climate science have been consistently vague on future warming. In report after report, estimates of climate sensitivity, or how much a given increase in atmospheric CO2 will warm the world, fall into the same subjective range. At the low end, doubling CO2—the traditional benchmark—might eventually warm the world by a modest 1.5°C, or even less. At the other extreme, temperatures might soar by a scorching 4.5°C, or even more, given all the uncertainties.


    At an international workshop* here late last month on climate sensitivity, climatic wishy-washiness seemed to be on the wane. “We've gone from hand waving to real understanding,” said climate researcher Alan Robock of Rutgers University in New Brunswick, New Jersey. Increasingly sophisticated climate models seem to be converging on a most probable sensitivity. By running a model dozens of times under varying conditions, scientists are beginning to pin down statistically the true uncertainty of the models' climate sensitivity. And studies of natural climate changes from the last century to the last ice age are also yielding climate sensitivities.

    Although the next international assessment is not due out until 2007, workshop participants are already converging on a moderately strong climate sensitivity. “Almost all the evidence points to 3°C” as the most likely amount of warming for a doubling of CO2, said Robock. That kind of sensitivity could make for a dangerous warming by century's end, when CO2 may have doubled. At the same time, most attendees doubted that climate's sensitivity to doubled CO2 could be much less than 1.5°C. That would rule out the feeble greenhouse warming espoused by some greenhouse contrarians. But at the high and especially dangerous end of climate sensitivity, confidence faltered; an upper limit to possible climate sensitivity remains highly uncertain.

    Hand-waving climate models

    As climate modeler Syukuro Manabe of Princeton University tells it, formal assessment of climate sensitivity got off to a shaky start. In the summer of 1979, the late Jule Charney convened a committee of fellow meteorological luminaries on Cape Cod to prepare a report for the National Academy of Sciences on the possible effects of increased amounts of atmospheric CO2 on climate. None of the committee members actually did greenhouse modeling themselves, so Charney called in the only two American researchers modeling greenhouse warming, Manabe and James Hansen of NASA's Goddard Institute for Space Studies (GISS) in New York City.

    On the first day of deliberations, Manabe told the committee that his model warmed 2°C when CO2 was doubled. The next day Hansen said his model had recently gotten 4°C for a doubling. According to Manabe, Charney chose 0.5°C as a not-unreasonable margin of error, subtracted it from Manabe's number, and added it to Hansen's. Thus was born the 1.5°C-to-4.5°C range of likely climate sensitivity that has appeared in every greenhouse assessment since, including the three by the Intergovernmental Panel on Climate Change (IPCC). More than one researcher at the workshop called Charney's now-enshrined range and its attached best estimate of 3°C so much hand waving.

    Model convergence, finally?

    By the time of the IPCC's second assessment report in 1995, the number of climate models available had increased to 13. After 15 years of model development, their sensitivities still spread pretty much across Charney's 1.5°C-to-4.5°C range. By IPCC's third and most recent assessment report in 2001, the model-defined range still hadn't budged.

    Now model sensitivities may be beginning to converge. “The range of these models, at least, appears to be narrowed,” said climate modeler Gerald Meehl of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, after polling eight of the 14 models expected to be included in the IPCC's next assessment. The sensitivities of the 14 models in the previous assessment ranged from 2.0°C to 5.1°C, but the span of the eight currently available models is only 2.6°C to 4.0°C, Meehl found.

    If this limited sampling really has detected a narrowing range, modelers believe there's a good reason for it: More-powerful computers and a better understanding of atmospheric processes are making their models more realistic. For example, researchers at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey, recently adopted a better way of calculating the thickness of the bottommost atmospheric layer—the boundary layer—where clouds form that are crucial to the planet's heat balance. When they made the change, the model's sensitivity dropped from a hefty 4.5°C to a more mainstream 2.8°C, said Ronald Stouffer, who works at GFDL. Now the three leading U.S. climate models—NCAR's, GFDL's, and GISS's—have converged on a sensitivity of 2.5°C to 3.0°C. They once differed by a factor of 2.

    Less-uncertain modeling

    If computer models are increasingly brewing up similar numbers, however, they sometimes disagree sharply about the physical processes that produce them. “Are we getting [similar sensitivities] for the same reason? The answer is clearly no,” Jeffrey Kiehl of NCAR said of the NCAR and GFDL models. The problems come from processes called feedbacks, which can amplify or dampen the warming effect of greenhouse gases.

    The biggest uncertainties have to do with clouds. The NCAR and GFDL models might agree about clouds' net effect on the planet's energy budget as CO2 doubles, Kiehl noted. But they get their similar numbers by assuming different mixes of cloud properties. As CO2 levels increase, clouds in both models reflect more shorter-wavelength radiation, but the GFDL model's increase is three times that of the NCAR model. The NCAR model increases the amount of low-level clouds, whereas the GFDL model decreases it. And much of the United States gets wetter in the NCAR model when it gets drier in the GFDL model.

    In some cases, such widely varying assumptions about what is going on may have huge effects on models' estimates of sensitivity; in others, none at all. To find out, researchers are borrowing a technique weather forecasters use to quantify uncertainties in their models. At the workshop and in this week's issue of Nature, James Murphy of the Hadley Center for Climate Prediction and Research in Exeter, U.K., and colleagues described how they altered a total of 29 key model parameters one at a time—variables that control key physical properties of the model, such as the behavior of clouds, the boundary layer, atmospheric convection, and winds. Murphy and his team let each parameter in the Hadley Center model vary over a range of values deemed reasonable by a team of experts. Then the modelers ran simulations of present-day and doubled-CO2 climates using each altered version of the model.
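    The perturbed-physics procedure can be illustrated with a toy Monte Carlo. The sketch below is emphatically not the Hadley Centre model: the parameter names, ranges, and the stand-in "model" are invented for illustration. What it shows is the procedure itself, namely sampling each uncertain parameter over an expert-judged range, rerunning the model, and reading probability bounds off the resulting spread of sensitivities.

```python
# Toy "perturbed physics" ensemble: vary uncertain model parameters over
# plausible ranges and examine the spread of the resulting sensitivity.
# Parameter names, ranges, and the stand-in model are invented for this sketch.
import random

random.seed(42)

# Each uncertain parameter gets a range judged plausible (toy values, deg C
# of feedback contribution).
PARAM_RANGES = {
    "cloud_feedback": (0.0, 1.2),
    "boundary_layer": (-0.3, 0.4),
    "convection":     (-0.2, 0.5),
}

def toy_sensitivity(params):
    """Stand-in for a doubled-CO2 run: a base sensitivity plus feedback terms."""
    return 2.0 + sum(params.values())

samples = []
for _ in range(10_000):
    draw = {name: random.uniform(low, high)
            for name, (low, high) in PARAM_RANGES.items()}
    samples.append(toy_sensitivity(draw))

samples.sort()
lo, med, hi = (samples[int(q * len(samples))] for q in (0.05, 0.5, 0.95))
print(f"5-95% range: {lo:.1f}-{hi:.1f} deg C, median {med:.1f} deg C")
```

    Varying parameters one at a time, as the Hadley team first did, or jointly, as in their follow-up work, changes the shape of the resulting probability curve, which is why the group's range shifted when many parameters were perturbed at once.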

    Volcanic chill.

    Debris from Pinatubo (right) blocked the sun and chilled the world (left), thanks in part to the amplifying effect of water vapor.


    Using this “perturbed physics” approach to generate a curve of the probability of a whole range of climate sensitivities (see figure), the Hadley group found a sensitivity a bit higher than they would have gotten by simply polling the eight independently built models. Their estimates ranged from 2.4°C to 5.4°C (a 5% to 95% confidence interval), with a most probable climate sensitivity of 3.2°C. In a nearly completed extension of the method, many model parameters are being varied at once, Murphy reported at the workshop. That is dropping the range and the most probable value slightly, making them similar to the eight-model value as well as Charney's best guess.

    Murphy isn't claiming they have a panacea. “We don't want to give a sense of excessive precision,” he says. The perturbed physics approach doesn't account for many uncertainties. For example, decisions such as the amount of geographic detail to build into the model introduce a plethora of uncertainties, as does the model's ocean. Like all model oceans used to estimate climate sensitivity, it has been simplified to the point of having no currents in order to make the extensive simulations computationally tractable.

    Looking back

    Faced with so many caveats, workshop attendees turned their attention to what may be the ultimate reality check for climate models: the past of Earth itself. Although no previous change in Earth's climate is a perfect analog for the coming greenhouse warming, researchers say modeling paleoclimate can offer valuable clues to sensitivity. After all, all the relevant processes were at work in the past, right down to the formation of the smallest cloud droplet.

    One telling example from the recent past was the cataclysmic eruption of Mount Pinatubo in the Philippines in 1991. The debris it blew into the stratosphere, which stayed there for more than 2 years, was closely monitored from orbit and the ground, as was the global cooling that resulted from the debris blocking the sun. Conveniently, models show that Earth's climate system generally does not distinguish between a shift in its energy budget brought on by changing amounts of greenhouse gases and one caused by a change in the amount of solar energy allowed to enter. From the magnitude and duration of the Pinatubo cooling, climate researcher Thomas Wigley of NCAR and his colleagues have recently estimated Earth's sensitivity to a CO2 doubling as 3.0°C. A similar calculation for the eruption of Agung in 1963 yielded a sensitivity of 2.8°C. And estimates from the five largest eruptions of the 20th century would rule out a climate sensitivity of less than 1.5°C.
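The logic of such volcano-based estimates can be illustrated with a one-box energy-balance model: a higher sensitivity means a weaker restoring response, hence deeper and longer-lasting cooling after a forcing pulse. This is a sketch of the reasoning only, not Wigley's actual method, and all numbers are rounded textbook values.

```python
import math

# One-box energy-balance model forced by a Pinatubo-like aerosol pulse.
# Rounded illustrative values, not Wigley's inputs.
SECONDS_PER_YEAR = 3.15e7
C = 3.0e8          # J/m^2/K, heat capacity of ~70 m ocean mixed layer
F_2XCO2 = 3.7      # W/m^2, forcing of doubled CO2
F_PEAK = -3.0      # W/m^2, approximate peak volcanic forcing
TAU_AEROSOL = 1.0  # years, e-folding decay of the stratospheric debris

def peak_cooling(sensitivity_c, years=5.0, dt=0.01):
    """Integrate C dT/dt = F(t) - T/lambda; return the deepest cooling (°C)."""
    lam = sensitivity_c / F_2XCO2   # K per W/m^2, sensitivity parameter
    t, T, coldest = 0.0, 0.0, 0.0
    while t < years:
        F = F_PEAK * math.exp(-t / TAU_AEROSOL)
        T += (F - T / lam) * dt * SECONDS_PER_YEAR / C
        coldest = min(coldest, T)
        t += dt
    return coldest

for S in (1.0, 2.0, 3.0, 4.5):
    print(f"sensitivity {S:.1f} °C  ->  peak cooling {peak_cooling(S):.2f} °C")
```

Running the model across candidate sensitivities and comparing the simulated dip with the observed cooling is, in spirit, how an eruption constrains sensitivity; the real analysis also uses the duration of the recovery.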

    Probably warm.

    Running a climate model over the full range of parameter uncertainty suggests that climate sensitivity is most likely a moderately high 3.2°C (red peak).

    CREDIT: J. MURPHY ET AL., NATURE 430 (2004)

    Estimates from such a brief shock to the climate system would not include more sluggish climate system feedbacks, such as the expansion of ice cover that reflects radiation, thereby cooling the climate. But the globally dominant feedbacks from water vapor and clouds would have had time to work. Water vapor is a powerful greenhouse gas that's more abundant at higher temperatures, whereas clouds can cool or warm by intercepting radiant energy.

    More climate feedbacks come into play over centuries rather than years of climate change. So climate researchers Gabriele Hegerl and Thomas Crowley of Duke University in Durham, North Carolina, considered the climate effects from 1270 to 1850 produced by three climate drivers: changes in solar brightness, calculated from sunspot numbers; changing amounts of greenhouse gases, recorded in ice cores; and volcanic shading, also recorded in ice cores. They put these varying climate drivers in a simple model whose climate sensitivity could be varied over a wide range. They then compared the simulated temperatures over the period with temperatures recorded in tree rings and other proxy climate records around the Northern Hemisphere.
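The fitting procedure itself is simple to sketch: vary sensitivity over a grid, simulate temperatures from the forcing series, and keep the value that best matches the proxy record. Everything below, including the forcing series and the "proxy" data, is fabricated for illustration; only the overall logic follows the study.

```python
import math
import random

random.seed(0)
F_2XCO2 = 3.7
years = list(range(200))
# Fabricated forcing series (W/m^2): a slow solar wiggle plus volcanic spikes.
forcing = [0.3 * math.sin(2 * math.pi * y / 90) for y in years]
for spike_year in (40, 110, 170):
    forcing[spike_year] = -3.0

def simulate(sensitivity_c):
    """Zero-inertia response: T = lambda * F (good enough for a toy)."""
    lam = sensitivity_c / F_2XCO2
    return [lam * f for f in forcing]

# Synthetic "proxy record": truth generated with sensitivity 2.5 °C plus noise.
proxy = [t + random.gauss(0, 0.05) for t in simulate(2.5)]

def misfit(sensitivity_c):
    """Sum of squared differences between simulation and proxy record."""
    return sum((s - p) ** 2 for s, p in zip(simulate(sensitivity_c), proxy))

grid = [s / 10 for s in range(5, 61)]   # candidate sensitivities, 0.5..6.0 °C
best = min(grid, key=misfit)
print(f"Best-fit sensitivity: {best:.1f} °C")
```

In the real study the "truth" is unknown and the proxies are noisy tree rings and ice cores, which is why the recovered range is broad rather than a single number.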

    The closest matches to observed temperatures came with sensitivities of 1.5°C to 3.0°C, although a range of 1.0°C to 5.5°C was possible. Other estimates of climate sensitivity on a time scale of centuries to millennia have generally fallen in the 2°C-to-4°C range, Hegerl noted, although all would benefit from better estimates of past climate drivers.

    The biggest change in atmospheric CO2 in recent times came in the depths of the last ice age, 20,000 years ago, which should provide the best chance to pick the greenhouse signal out of climatic noise. So Thomas Schneider von Deimling and colleagues at the Potsdam Institute for Climate Impact Research (PIK) in Germany have estimated climate sensitivity by modeling the temperature at the time using the perturbed-physics approach. As Stefan Rahmstorf of PIK explained at the workshop, they ran their intermediate complexity model using changing CO2 levels, as recorded in ice cores. Then they compared model-simulated temperatures with temperatures recorded in marine sediments. Their best estimate of sensitivity is 2.1°C to 3.6°C, with a range of 1.5°C to 4.7°C.

    More confidence

    In organizing the Paris workshop, the IPCC was not yet asking for a formal conclusion on climate sensitivity. But participants clearly believed that they could strengthen the traditional Charney range, at least at the low end and for the best estimate. At the high end of climate sensitivity, however, most participants threw up their hands. The calculation of sensitivity probabilities goes highly nonlinear at the high end, producing a small but statistically real chance of an extreme warming. This led to calls for more tests of models against real climate. They would include not just present-day climate but a variety of challenges, such as the details of El Niño events and Pinatubo's cooling.

    Otherwise, the sense of the 75 or so scientists in attendance seemed to be that Charney's range is holding up amazingly well, possibly by luck. The lower bound of 1.5°C is now a much firmer one; it is very unlikely that climate sensitivity is lower than that, most would say. Over the past decade, some contrarians have used satellite observations to argue that the warming has been minimal, suggesting a relatively insensitive climate system. Contrarians have also proposed as-yet-unidentified feedbacks, usually involving water vapor, that could counteract most of the greenhouse warming to produce a sensitivity of 0.5°C or less. But the preferred lower bound would rule out such claims.

    Most meeting-goers polled by Science generally agreed on a most probable sensitivity of around 3°C, give or take a half-degree or so. With three complementary approaches—a collection of expert-designed independent models, a thoroughly varied single model, and paleoclimates over a range of time scales—all pointing to sensitivities in the same vicinity, the middle of the canonical range is looking like a good bet. Support for such a strong sensitivity ups the odds that the warming at the end of this century will be dangerous for flora, fauna, and humankind. Charney, it seems, could have said he told us so.

    * Workshop on Climate Sensitivity of the Intergovernmental Panel on Climate Change Working Group I, 26–29 July 2004, Paris.


    A General Surrenders the Field, But Black Hole Battle Rages On

    1. Charles Seife

    Stephen Hawking may have changed his mind, but questions about the fate of information continue to expose fault lines between relativity and quantum theories

    Take one set of the Encyclopedia Britannica. Dump it into an average-sized black hole. Watch and wait. What happens? And who cares?

    Physicists care, you might have thought, reading last month's breathless headlines from a conference in Dublin, Ireland. There, Stephen Hawking announced that, after proclaiming for 30 years that black holes destroy information, he had decided they don't (Science, 30 July, p. 586). All of which, you might well have concluded, seems a lot like debating how many angels can dance on the head of a pin.

    Eternal darkness?

    Spherical “event horizon” marks the region where a black hole's gravity grows so intense that even light can't escape. But is the point of no return a one-way street?


    Yet arguments about what a black hole does with information hold physicists transfixed. “The question is incredibly interesting,” says Andrew Strominger, a string theorist at Harvard University. “It's one of the three or four most important puzzles in physics.” That's because it gives rise to a paradox that goes to the heart of the conflict between two pillars of physics: quantum theory and general relativity. Resolve the paradox, and you might be on your way to resolving the clash between those two theories.

    Yet, as Hawking and others convince themselves that they have solved the paradox, others are less sure—and everybody is desperate to get real information about what goes on at the heart of a black hole.

    The hairless hole

    A black hole is a collapsed star—and a gravitational monster. Like all massive bodies, it attracts and traps other objects through its gravitational force. Earth's gravity traps us, too, but you can break free if you strap on a rocket that gets you moving beyond Earth's escape velocity of about 11 kilometers per second.

    Black holes, on the other hand, are so massive and compressed into so small a space that if you stray too close, the escape velocity exceeds the speed of light. According to the theory of relativity, no object can move that fast, so nothing, not even light, can escape the black hole's trap. It's as if the black hole is surrounded by an invisible sphere known as an event horizon. This sphere marks the region of no return: Cross it, and you can never cross back.
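The numbers in the two paragraphs above follow from two standard formulas (not spelled out in the article): the Newtonian escape velocity, and the Schwarzschild radius, the size to which a mass must be squeezed before its escape velocity formally reaches the speed of light.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
C_LIGHT = 2.998e8    # m/s, speed of light
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m
M_SUN = 1.989e30     # kg

def escape_velocity(mass, radius):
    """Newtonian escape velocity: v = sqrt(2GM/r)."""
    return math.sqrt(2 * G * mass / radius)

def schwarzschild_radius(mass):
    """Radius at which escape velocity formally reaches c: r = 2GM/c^2."""
    return 2 * G * mass / C_LIGHT ** 2

print(f"Earth escape velocity: {escape_velocity(M_EARTH, R_EARTH) / 1e3:.1f} km/s")
print(f"Sun's Schwarzschild radius: {schwarzschild_radius(M_SUN) / 1e3:.1f} km")
print(f"Earth's Schwarzschild radius: {schwarzschild_radius(M_EARTH) * 1e3:.1f} mm")
```

The first line reproduces the article's 11 km/s figure; the other two show how extreme the compression must be: the Sun squeezed into about 3 kilometers, Earth into under a centimeter.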

    Cosmic refugees.

    Virtual particles that escape destruction near a black hole (case 3) create detectable radiation but can't carry information.


    The event horizon shields the star from prying eyes. Because nothing can escape from beyond the horizon, an outside observer will never be able to gather any photons or other particles that would reveal what's going on inside. All you can ever know about a black hole are the characteristics that you can spot from a distance: its mass, its charge, and how fast it's spinning. Beyond that, black holes lack distinguishing features. As Princeton physicist John Wheeler put it in the 1960s, “A black hole has no hair.” The same principle applies to any matter or energy a black hole swallows. Dump in a ton of gas or a ton of books or a ton of kittens, and the end product will be exactly the same.

    Not only is the information about the infalling matter gone; so is any information encoded upon it. If you take an atom and put a message on it somehow (say, make it spin up for a “yes” or spin down for a “no”), that message is lost forever if the atom crosses a black hole's event horizon. It's as if the message were completely destroyed. So sayeth the theory of general relativity. And therein lies a problem.

    The laws of quantum theory say something entirely different. The mathematics of the theory forbids information from disappearing. Particle physicists, string theorists, and quantum scientists agree that information can be transferred from place to place, that it can dissipate into the environment or be misplaced, but it can never be obliterated. Just as someone with enough energy and patience (and glue) could, in theory, repair a shattered coffee cup, a diligent observer could always reconstitute a chunk of information no matter how it's abused—even if you dump it down a black hole.

    “If the standard laws of quantum mechanics are correct, for an observer outside the black hole, every little bit of information has to come back out,” says Stanford University's Leonard Susskind. Quantum mechanics and general relativity are telling scientists two contradictory things. It's a paradox. And there's no obvious way out.

    Can the black hole be storing the information forever rather than actually destroying it? No. In the mid-1970s, Hawking realized that black holes don't live forever; they evaporate thanks to something now known as Hawking radiation.

    One of the stranger consequences of quantum theory is that the universe is seething with activity, even in the deepest vacuum. Pairs of particles are constantly winking in and out of existence (Science, 10 January 1997, p. 158). But the vacuum near a black hole isn't ordinary spacetime. “Vacua aren't all created equal,” says Chris Adami, a physicist at the Keck Graduate Institute in Claremont, California. Near the edge of the event horizon, particles are flirting with their demise. Some pairs fall in; some pairs don't. And they collide and disappear as abruptly as they appeared. But occasionally, the pair is divided by the event horizon. One falls in and is lost; the other flies away partnerless. Without its twin, the particle doesn't wink out of existence—it becomes a real particle and flies away (see diagram). An outside observer would see these partnerless particles as a steady radiation emitted by the black hole.

    Like the particles of any other radiation, the particles of Hawking radiation aren't created for free. When the black hole radiates, a bit of its mass converts to energy. According to Hawking's equations, this slight shrinkage raises the “temperature” of the black hole by a tiny fraction of a degree; it radiates more strongly than before. This makes it shrink faster, which makes it radiate more strongly, which makes it shrink faster. It gets smaller and brighter and smaller and brighter and—flash!—it disappears in a burst of radiation. This process takes zillions of years, many times longer than the present lifetime of the universe, but eventually the black hole disappears. Thus it can't store information forever.
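The runaway described above follows from the standard Hawking temperature and lifetime formulas, in which temperature is inversely proportional to mass. These are textbook expressions, not given in the article.

```python
import math

HBAR = 1.055e-34     # J s, reduced Planck constant
C = 2.998e8          # m/s, speed of light
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
K_B = 1.381e-23     # J/K, Boltzmann constant
M_SUN = 1.989e30     # kg
SECONDS_PER_YEAR = 3.15e7

def hawking_temperature(mass):
    """T = hbar c^3 / (8 pi G M k_B): smaller holes are hotter."""
    return HBAR * C ** 3 / (8 * math.pi * G * mass * K_B)

def evaporation_time(mass):
    """Lifetime t = 5120 pi G^2 M^3 / (hbar c^4), converted to years."""
    return 5120 * math.pi * G ** 2 * mass ** 3 / (HBAR * C ** 4) / SECONDS_PER_YEAR

print(f"Solar-mass hole: T = {hawking_temperature(M_SUN):.1e} K, "
      f"lifetime = {evaporation_time(M_SUN):.1e} years")
```

A solar-mass hole radiates at well under a microkelvin and outlives the current age of the universe by dozens of orders of magnitude, which is the "zillions of years" the text alludes to; but because lifetime scales as mass cubed, the final stages are explosively fast.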

    If the black hole isn't storing information eternally, can it be letting swallowed information escape somehow? No, at least not according to general relativity. Nothing can escape from beyond the event horizon, so that idea is a nonstarter. And physicists have shown that Hawking radiation can't carry information away either. What passes the event horizon is gone, and it won't come out as the black hole evaporates.

    This seeming contradiction between relativity and quantum mechanics is one of the burning unanswered questions in physics. Solving the paradox, physicists hope, will give them a much deeper understanding of the rules that govern nature—and that hold under all conditions. “We're trying to develop a new set of physical laws,” says Kip Thorne of the California Institute of Technology in Pasadena.

    Paradox lost

    Clearly, somebody's old laws will have to yield—but whose? Relativity experts, including Stephen Hawking and Kip Thorne, long believed that quantum theory was flawed and would have to abandon its no-information-destruction dictum. Quantum theorists such as Caltech's John Preskill, on the other hand, held that the relativistic view of the universe must be overlooking something that somehow salvages information from the jaws of destruction. That hope was more than wishful thinking; indeed, the quantum camp argued its case convincingly enough to sway most of the scientific community.

    The clincher, many quantum and string theorists believed, lay in a mathematical correspondence rooted in a curious property of black holes. In the 1970s, Jacob Bekenstein of Hebrew University in Jerusalem and Stephen Hawking came to realize that when a black hole swallows a volume of matter, that volume can be entirely described by the increase in the surface area of the event horizon. In other words, if the dimension of time is ignored, the essence of a three-dimensional object that falls into the black hole can be entirely described by its “shadow” on a two-dimensional object.

    In the early 1990s, Susskind and the University of Utrecht's Gerard 't Hooft generalized this idea to what is now known as the “holographic principle.” Just as information about a three-dimensional object can be entirely encoded in a two-dimensional hologram, the holographic principle states that objects that move about and interact in our three-dimensional world can be entirely described by the mathematics that resides on a two-dimensional surface that surrounds those objects. In a sense, our three-dimensionality is an illusion, and we are truly two-dimensional creatures—at least mathematically speaking.
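The quantitative seed of the holographic principle is the Bekenstein-Hawking area law: a black hole's entropy, and hence the information it can hold, scales with the area of its horizon in Planck units, not with its volume. The formulas below are standard, not taken from the article.

```python
import math

HBAR = 1.055e-34   # J s
C = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg

# Planck length: l_p = sqrt(hbar G / c^3), roughly 1.6e-35 m.
PLANCK_LENGTH = math.sqrt(HBAR * G / C ** 3)

def horizon_area(mass):
    """Area of the event horizon of a Schwarzschild black hole."""
    r = 2 * G * mass / C ** 2
    return 4 * math.pi * r ** 2

def information_bits(mass):
    """Bekenstein-Hawking entropy in bits: A / (4 l_p^2 ln 2)."""
    return horizon_area(mass) / (4 * PLANCK_LENGTH ** 2 * math.log(2))

print(f"A solar-mass horizon encodes ~{information_bits(M_SUN):.1e} bits")
```

That a region's information capacity is set by its boundary area is exactly the counting that makes a two-dimensional "shadow" description of three-dimensional contents plausible.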

    Most physicists accept the holographic principle, although it hasn't been proven. “I haven't conducted any polls, but I think that a very large majority believes in it,” says Bekenstein. Physicists also accept a related idea proposed in the mid-1990s by string theorist Juan Maldacena, currently at the Institute for Advanced Study in Princeton, New Jersey. Maldacena's so-called AdS/CFT correspondence shows that the mathematics of gravitational fields in a volume of space is essentially the same as the nice clean gravity-free mathematics of the boundary of that space.

    Gambling on nature.

    The 1997 wager among physicists Preskill, Thorne, and Hawking (left) became famous, but Hawking's concession (right) left battle lines drawn.


    Although these ideas seem very abstract, they are quite powerful. With the AdS/CFT correspondence in particular, the mathematics that holds sway upon the boundary automatically conserves information; like that of quantum theory, the boundary's mathematical framework simply doesn't allow information to be lost. The mathematical equivalence between the boundary and the volume of space means that even in a volume of space where gravity runs wild, information must be conserved. It's as if you can ignore the troubling effects of gravity altogether if you consider only the mathematics on the boundary, even when there's a black hole inside that volume. Therefore, black holes can't destroy information; paradox solved—sort of.

    “String theorists felt they completely nailed it,” says Susskind. “Relativity people knew something had happened; they knew that perhaps they were fighting a losing battle, but they didn't understand it on their own terms.” Or, at the very least, many general relativity experts didn't think that the matter was settled—that information would still have to be lost, AdS/CFT correspondence or no. Stephen Hawking was the most prominent of the naysayers.

    Paradox regained

    Last month in Dublin, Hawking reversed his 30-year-old stance. Convinced by his own mathematical analysis that was unrelated to the AdS/CFT correspondence, he conceded that black holes do not, in fact, destroy information—nor can a black hole transport information into another universe as Hawking once suggested. “The information remains firmly in our universe,” he said. As a result, he conceded a bet with Preskill and handed over a baseball encyclopedia (Science, 30 July, p. 586).

    Despite the hoopla over the event, Hawking's concession changed few minds. Quantum and string theorists already believed that information was indestructible, thanks to the AdS/CFT correspondence. “Everybody I know in the string theory community was completely convinced,” says Susskind. “What's in [Hawking's] own work is his way of coming to terms with it, but it's not likely to paint a whole new picture.” Relativity experts in the audience, meanwhile, were skeptical about Hawking's mathematical method and considered the solution too unrealistic to be applied to actual, observable black holes. “It doesn't seem to me to be convincing for the evolution of a black hole where you actually see the black hole,” says John Friedman of the University of Wisconsin, Milwaukee.

    With battle lines much as they were, physicists hope some inspired theorist will break the stalemate. Susskind thinks the answer lies in a curious “complementarity” of black holes, analogous to the wave-particle duality of quantum mechanics. Just as a photon can behave like either a wave or a particle but not both, Susskind argues, you can look at information from the point of view of an observer behind the event horizon or in front of the event horizon but not both at the same time. “Paradoxes were apparent because people tried to mix the two different experiments,” Susskind says.

    Other scientists look elsewhere for the resolution of the paradox. Adami, for instance, sees an answer in the seething vacuum outside a black hole. When a particle falls past the event horizon, he says, it sparks the vacuum to emit a duplicate particle in a process similar to the stimulated emission that makes excited atoms emit laser light. “If a black hole swallows up a particle, it spits one out that encodes precisely the same information,” says Adami. “The information is never lost.” When he analyzed the process, Adami says, a key equation in quantum information theory—one that limits how much classical information quantum objects can carry—made a surprise appearance. “It simply pops out. I didn't expect it to be there,” says Adami. “At that moment, I knew it was all over.”

    Although it might be all over for Hawking, Susskind, and Adami, it's over for different reasons—none of which has completely convinced the physics community. For the moment, at least, the black hole is as dark and mysterious as ever, despite legions of physicists trying to wring information from it. Perhaps the answer lies just beyond the horizon.


    The River Doctor

    1. David Malakoff

    Dave Rosgen rides in rodeos, drives bulldozers, and has pioneered a widely used approach to restoring damaged rivers. But he's gotten a flood of criticism too

    STOLLSTEIMER CREEK, COLORADO—“Don't be a pin-headed snarf. … Read the river!” Dave Rosgen booms as he sloshes through shin-deep water, a swaying surveying rod clutched in one hand and a toothpick in the other. Trailing in his wake are two dozen rapt students—including natural resource managers from all over the world—who have gathered on the banks of this small Rocky Mountain stream to learn, in Rosgen's words, “how to think like a river.” The lesson on this searing morning: how to measure and map an abused waterway, the first step toward rescuing it from the snarfs—just one of the earthy epithets that Rosgen uses to describe anyone, from narrow-minded engineers to loggers, who has harmed rivers. “Remember,” he says, tugging on the wide brim of his cowboy hat, “your job is to help the river be what it wants to be.”

    It's just another day at work for Rosgen, a 62-year-old former forest ranger who is arguably the world's most influential force in the burgeoning field of river restoration. Over the past few decades, the folksy jack-of-all-trades—equally at home talking hydrology, training horses, or driving a bulldozer—has pioneered an approach to “natural channel design” that is widely used by government agencies and nonprofit groups. He has personally reconstructed nearly 160 kilometers of small- and medium-sized rivers, using bulldozers, uprooted trees, and massive boulders to sculpt new channels that mimic nature's. And the 12,000-plus students he's trained have reengineered many more waterways. Rosgen is also the author of a best-selling textbook and one of the field's most widely cited technical papers—and he just recently earned a doctorate, some 40 years after graduating from college.

    Class act.

    Dave Rosgen's system for classifying rivers is widely used in stream restoration—and detractors say commonly misused.


    “Dave's indefatigable, and he's had a remarkable influence on the practice of river restoration,” says Peggy Johnson, a civil engineer at Pennsylvania State University, University Park. “It's almost impossible to talk about the subject without his name coming up,” adds David Montgomery, a geomorphologist at the University of Washington, Seattle.

    But although many applaud Rosgen's work, he's also attracted a flood of criticism. Many academic researchers question the science underpinning his approach, saying it has led to oversimplified “cookbook” restoration projects that do as much harm as good. Rosgen-inspired projects have suffered spectacular and expensive failures, leaving behind eroded channels choked with silt and debris. “There are tremendous doubts about what's being done in Rosgen's name,” says Peter Wilcock, a geomorphologist who specializes in river dynamics at Johns Hopkins University in Baltimore, Maryland. “But the people who hold the purse strings often require the use of his methods.”

    All sides agree that the debate is far from academic. At stake: billions of dollars that are expected to flow to tens of thousands of U.S. river restoration projects over the next few decades. Already, public and private groups have spent more than $10 billion on more than 30,000 U.S. projects, says Margaret Palmer, an ecologist at the University of Maryland, College Park, who is involved in a new effort to evaluate restoration efforts. “Before we go further, it would be nice to know what really works,” she says, noting that such work can cost $100,000 a kilometer or more.

    Going with the flow

    Rosgen is a lifelong river rat. Raised on an Idaho ranch, he says a love of forests and fishing led him to study “all of the ‘-ologies’” as an undergraduate in the early 1960s. He then moved on to a job with the U.S. Forest Service as a watershed forester—working in the same Idaho mountains where he fished as a child. But things had changed. “The valleys I knew as a kid had been trashed by logging,” he recalled recently. “My trout streams were filled with sand.” Angry, Rosgen confronted his bosses: “But nothing I said changed anyone's mind; I didn't have the data.”

    Rosgen set out to change that, doggedly measuring water flows, soil types, and sediments in a bid to predict how logging and road building would affect streams. As he waded the icy waters, he began to have the first inklings of his current approach: “I realized that the response [to disturbance] varied by stream type: Some forms seemed resilient, others didn't.”

    In the late 1960s, Rosgen's curiosity led him to contact one of the giants of river science, Luna Leopold, a geomorphologist at the University of California, Berkeley, and a former head of the U.S. Geological Survey. Invited to visit Leopold, the young cowboy made the trek to what he still calls “Berzerkley,” then in its hippie heyday. “Talk about culture shock,” Rosgen says. The two men ended up poring over stream data into the wee hours.

    By the early 1970s, the collaboration had put Rosgen on the path to what has become his signature accomplishment: Drawing on more than a century of research by Leopold and many others, he developed a system for lumping all rivers into a few categories based on eight fundamental characteristics, including the channel width, depth, slope, and sediment load (see graphic, below). Land managers, he hoped, could use his system (there are many others) to easily classify a river and then predict how it might respond to changes, such as increased sediment. But “what started out as a description for management turned out to be so much more,” says Rosgen.

    A field guide to rivers.

    Drawing on data from more than 1000 waterways, Rosgen grouped streams into nine major types.


    In particular, he wondered how a “field guide to rivers” might help the nascent restoration movement. Frustrated by traditional engineering approaches to flood and erosion control—which typically called for converting biologically rich meandering rivers to barren concrete channels or dumping tons of ugly rock “rip rap” on failing banks—river advocates were searching for alternatives. Rosgen's idea: Use the classification scheme to help identify naturally occurring, and often more aesthetically pleasing, channel shapes that could produce stable rivers—that is, a waterway that could carry floods and sediment without significantly shifting its channel. Then, build it.

    In 1985, after leaving the Forest Service in a dispute over a dam he opposed, Rosgen retreated to his Colorado ranch to train horses, refine his ideas—and put them into action. He founded a company—Wildland Hydrology—and began offering training. (Courses cost up to $2700 per person.) And he embarked on two restoration projects, on overgrazed and channelized reaches of the San Juan and Blanco rivers in southern Colorado, that became templates for what was to come.

    After classifying the target reaches, Rosgen designed new “natural” channel geometries based on relatively undisturbed rivers, adding curves and boulder-strewn riffles to reduce erosion and improve fish habitat. He then carved the new beds, sometimes driving the earthmovers himself. Although many people were appalled by the idea of bulldozing a river to rescue it, the projects—funded by public and private groups—ultimately won wide acceptance, including a de facto endorsement in a 1992 National Research Council report on restoration.

    Two years later, with Leopold's help, Rosgen won greater visibility by publishing his classification scheme in Catena, a prestigious peer-reviewed journal. Drawing on data he and others had collected from 450 rivers in the United States, Canada, and New Zealand, Rosgen divided streams into seven major types and dozens of subtypes, each denoted by a letter and a number. (Rosgen's current version has a total of 41 types.) Type “A” streams, for instance, are steep, narrow, rocky cascades; “E” channels are gentler, wider, more meandering waterways.
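For a flavor of how a letter-based scheme keys off field measurements, here is a deliberately crude classifier. The thresholds and the three input variables are invented for this sketch and are not Rosgen's published criteria, which rest on more factors (entrenchment ratio, sediment load, and others) and many finer subtypes.

```python
# Toy stream classifier. Thresholds are made up for illustration only;
# they are NOT Rosgen's published criteria.

def classify_stream(slope_percent, width_to_depth, sinuosity):
    """Return a coarse, hypothetical stream-type letter."""
    if slope_percent > 4.0:
        return "A"   # steep, narrow, rocky cascade
    if sinuosity > 1.5 and width_to_depth < 12:
        return "E"   # gentle, narrow, deep, meandering channel
    if width_to_depth > 40:
        return "F"   # wide, shallow, often degraded
    return "C"       # moderate riffle-pool channel

print(classify_stream(6.0, 8, 1.1))    # steep mountain cascade -> "A"
print(classify_stream(0.5, 10, 1.8))   # meandering lowland stream -> "E"
```

Even this caricature shows why the approach is "deceptively accessible": a few field numbers yield a confident-looking label, while the watershed-scale processes the critics worry about never enter the calculation.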

    Although the 30-page manifesto contains numerous caveats, Rosgen's system held a powerful promise for restorationists. Using relatively straightforward field techniques—and avoiding what Rosgen calls “high puke-factor equations”—users could classify a river. Then, using an increasingly detailed four-step analysis, they could decide whether its channel was currently “stable” and forecast how it might alter its shape in response to changes, such as increased sediment from overgrazed banks. For instance, they could predict that a narrow, deep, meandering E stream with eroding banks would slowly degrade into a wide, shallow F river, then—if given enough time—restore itself back to an E. But more important, Rosgen's system held out hope of predictably speeding up the restoration process, say by reducing the sediment load and carving a new E channel.

    The Catena paper—which became the basis for Rosgen's 1996 textbook, Applied River Morphology—distilled “decades of field observations into a practical tool,” says Rosgen. At last, he had data. And people were listening—and flocking to his talks and classes. “It was an absolute revelation listening to Dave back then,” recalls James Gracie of Brightwater Inc., a Maryland-based restoration firm, who met Rosgen in 1985. “He revolutionized river restoration.”

    Rough waters

    Not everyone has joined the revolution, however. Indeed, as Rosgen's reputation has grown, so have doubts about his classification system—and complaints about how it is being used in practice.

    Much of the criticism comes from academic researchers. Rosgen's classification scheme provides a useful shorthand for describing river segments, many concede. But civil engineers fault Rosgen for relying on nonquantitative “geomagic,” says Richard Hey, a river engineer and Rosgen business associate at the University of East Anglia in the United Kingdom. And geomorphologists and hydrologists argue that his scheme oversimplifies complex, watershed-wide processes that govern river behavior over long time scales.

    Last year, in one of the most recent critiques, Kyle Juracek and Faith Fitzpatrick of the U.S. Geological Survey concluded that Rosgen's Level II analysis—a commonly used second step in his process—failed to correctly assess stream stability or channel response in a Wisconsin river that had undergone extensive study. A competing analytical method did better, they reported in the June 2003 issue of the Journal of the American Water Resources Association. The result suggested that restorationists using Rosgen's form-based approach would have gotten off on the wrong foot. “It's a reminder that classification has lots of limitations,” says Juracek, a hydrologist in Lawrence, Kansas.

    Rosgen, however, says the paper “is a pretty poor piece of work … that doesn't correctly classify the streams. … It seems like they didn't even read my book.” He also emphasizes that his Level III and IV analyses are designed to answer just the kinds of questions the researchers were asking. Still, he concedes that classification may be problematic on some kinds of rivers, particularly urban waterways where massive disturbance has made it nearly impossible to make key measurements. One particularly problematic variable, all sides agree, is “bankfull discharge,” the point at which floodwaters begin to spill onto the floodplain. Such flows are believed to play a major role in determining channel form in many rivers.

    Overall, Rosgen says he welcomes the critiques, although he gripes that “my most vocal critics are the ones who know the least about what I'm doing.” And he recently fired back in a 9000-word essay he wrote for his doctorate, which he earned under Hey.

    Rosgen's defenders, meanwhile, say the attacks are mostly sour grapes. “The academics were working in this obscure little field, fighting over three grants a year, and along came this cowboy who started getting millions of dollars for projects; there was a lot of resentment,” says Gracie.

    River revival?

    The critics, however, say the real problem is that many of the people who use Rosgen's methods—and pay for them—aren't aware of their limits. “It's deceptively accessible; people come away from a week of training thinking they know more about rivers than they really do,” says Matthew Kondolf, a geomorphologist at the University of California, Berkeley. Compounding the problem is that Rosgen can be a little too inspirational, adds Scott Gillilin, a restoration consultant in Bozeman, Montana. “Students come out of Dave's classes like they've been to a tent revival, their hands on the good book, proclaiming ‘I believe!’”

    The result, critics say, is a growing list of failed projects designed by “Rosgenauts.” In several cases in California, for instance, they attempted to carve new meander bends reinforced with boulders or root wads into high-energy rivers—only to see them buried and abandoned by the next flood. In a much cited example, restorationists in 1995 bulldozed a healthy streamside forest along Deep Run in Maryland in order to install several curves—then watched the several-hundred-thousand-dollar project blow out, twice, in successive years. “It's the restoration that wrecked a river reach. … The cure was worse than the disease,” says geomorphologist Sean Smith, a Johns Hopkins doctoral student who monitored the project.

    Errors on trial.

    Rosgen's ideas have inspired expensive failures, critics say, such as engineered meanders on California's Uvas Creek (left) that were soon destroyed by floods.


    Gracie, the Maryland consultant who designed the Deep Run restoration, blames the disaster on inexperience and miscalculating an important variable. “We undersized the channel,” he says. But he says he learned from that mistake and hasn't had a similar failure in dozens of projects since. “This is an emerging profession; there is going to be trial and error,” he says. Rosgen, meanwhile, concedes that overenthusiastic disciples have misused his ideas and notes that he's added courses to bolster training. But he says he's had only one “major” failure himself—on Wolf Creek in California—out of nearly 50 projects. “But there [are] some things I sure as hell won't do again,” he adds.

    What works?

    Despite these black marks, critics note, a growing number of state and federal agencies are requiring Rosgen training for anyone they fund. “It's becoming a self-perpetuating machine; Dave is creating his own legion of pin-headed snarfs who are locked into a single approach,” says Gillilin, who believes the requirement is stifling innovation. “An expanding market is being filled by folks with very limited experience in hydrology or geomorphology,” adds J. Steven Kite, a geomorphologist at West Virginia University in Morgantown.

    Kite has seen the trend firsthand: One of his graduate students was recently rejected for a restoration-related job because he lacked Rosgen training. “It seemed a bit odd that years of academic training wasn't considered on par with a few weeks of workshops,” he says. The experience helped prompt Kite and other geomorphologists to draft a recent statement urging agencies to increase their training requirements and universities to get more involved. “The bulldozers are in the water,” says Kite. “We can't just sit back and criticize.”

    Improving training, however, is only one need, says the University of Maryland's Palmer. Another is improving the evaluation of new and existing projects. “Monitoring is woefully inadequate,” she says. In a bid to improve the situation, a group led by Palmer and Emily Bernhardt of Duke University in Durham, North Carolina, has won funding from the National Science Foundation and others to undertake the first comprehensive national inventory and evaluation of restoration projects. Dubbed the National River Restoration Science Synthesis, it has already collected data on more than 35,000 projects. The next step: in-depth analysis of a handful of projects in order to make preliminary recommendations about what's working, what's not, and how success should be measured. A smaller study evaluating certain types of rock installations—including several championed by Rosgen—is also under way in North Carolina. “We're already finding a pretty horrendous failure rate,” says Jerry Miller of Western Carolina University in Cullowhee, a co-author of one of the earliest critiques of Rosgen's Catena paper.

    A National Research Council panel, meanwhile, is preparing to revisit the 1992 study that helped boost Rosgen's method. Many geomorphologists criticized that study for lacking any representatives from their field. But this time, they've been in on study talks from day one.

    Whatever these studies conclude, both Rosgen's critics and supporters say his place in history is secure. “Dave's legacy is that he put river restoration squarely on the table in a very tangible and doable way,” says Smith. “We wouldn't be having this discussion if he hadn't.”

  11. The Hydrogen Backlash

    1. Robert F. Service

    As policymakers around the world evoke grand visions of a hydrogen-fueled future, many experts say that a broader-based, nearer-term energy policy would mark a surer route to the same goals


    In the glare of a July afternoon, the HydroGen3 minivan threaded through the streets near Capitol Hill. As a Science staffer put it through its stop-and-go paces, 200 fuel cells under the hood of the General Motors prototype inhaled hydrogen molecules, stripped off their electrons, and fed current to the electric engine. The only emissions: a little extra heat and humidity. The result was a smooth, eerily quiet ride—one that, with H3s priced at $1 million each, working journalists won't be repeating at their own expense anytime soon.

    Hydrogen-powered vehicles may be rarities on Pennsylvania Avenue, but in Washington, D.C., and other world capitals they and their technological kin are very much on people's minds. Switching from fossil fuels to hydrogen could dramatically reduce urban air pollution, lower dependence on foreign oil, and reduce the buildup of greenhouse gases that threaten to trigger severe climate change.


    With those perceived benefits in view, the United States, the European Union, Japan, and other governments have sunk billions of dollars into hydrogen initiatives aimed at revving up the technology and propelling it to market. Car and energy companies are pumping billions more into building demonstration fleets and hydrogen fueling stations. Many policymakers see the move from oil to hydrogen as manifest destiny, challenging but inevitable. In a recent speech, Spencer Abraham, the U.S. secretary of energy, said such a transformation has “the potential to change our country on a scale of the development of electricity and the internal combustion engine.”

    The only problem is that the bet on the hydrogen economy is at best a long shot. Recent reports from the U.S. National Academy of Sciences (NAS) and the American Physical Society (APS) conclude that researchers face daunting challenges in finding ways to produce and store hydrogen, convert it to electricity, supply it to consumers, and overcome vexing safety concerns. Any of those hurdles could block a broad-based changeover. Solving them simultaneously is “a very tall order,” says Mildred Dresselhaus, a physicist at the Massachusetts Institute of Technology (MIT), who has served on recent hydrogen review panels with the U.S. Department of Energy (DOE) and APS as well as serving as a reviewer for the related NAS report.

    As a result, the transition to a hydrogen economy, if it comes at all, won't happen soon. “It's very, very far away from substantial deployed impact,” says Ernest Moniz, a physicist at MIT and a former undersecretary of energy at DOE. “Let's just say decades, and I don't mean one or two.”

    In the meantime, some energy researchers complain that, by skewing research toward costly large-scale demonstrations of technology well before it's ready for market, governments risk repeating a pattern that has sunk previous technologies such as synfuels in the 1980s. By focusing research on technologies that aren't likely to have a measurable impact until the second half of the century, the current hydrogen push fails to address the growing threat from greenhouse gas emissions from fossil fuels. “There is starting to be some backlash on the hydrogen economy,” says Howard Herzog, an MIT chemical engineer. “The hype has been way overblown. It's just not thought through.”

    A perfect choice?

    Almost everyone agrees that producing a viable hydrogen economy is a worthy long-term goal. For starters, worldwide oil production is expected to peak within the next few decades, and although supplies will remain plentiful long afterward, oil prices are expected to soar as international markets view the fuel as increasingly scarce. Natural gas production is likely to peak a couple of decades after oil. Coal, tar sands, and other fossil fuels should remain plentiful for at least another century. But these dirtier fuels carry a steep environmental cost: Generating electricity from coal instead of natural gas, for example, releases twice as much carbon dioxide (CO2). And in order to power vehicles, they must be converted to a liquid or gas, which requires energy and therefore raises their cost.

    Even with plenty of fossil fuels available, it's doubtful we'll want to use them all. Burning fossil fuels has already increased the concentration of CO2 in the atmosphere from 280 to 370 parts per million (ppm) over the past 150 years. Unchecked, it's expected to pass 550 ppm this century, according to New York University physicist Martin Hoffert and colleagues in a 2002 Science paper (Science, 1 November 2002, p. 981). “If sustained, [it] could eventually produce global warming comparable in magnitude but opposite in sign to the global cooling of the last Ice Age,” the authors write. Development and population growth can only aggravate the problems.

    Over a barrel.

    The world is growing increasingly dependent on fossil fuels.


    On the face of it, hydrogen seems like the perfect alternative. When burned, or oxidized in a fuel cell, it emits no pollution, including no greenhouse gases. Gram for gram, it releases more energy than any other fuel. And as a constituent of water, hydrogen is all around us. No wonder it's being touted as the clean fuel of the future and the answer to modern society's addiction to fossil fuels. In April 2003, Wired magazine laid out “How Hydrogen Can Save America.” Environmental gadfly Jeremy Rifkin has hailed the hydrogen economy as the next great economic revolution. And General Motors has announced plans to be the first company to sell 1 million hydrogen fuel cell cars by the middle of the next decade.

    Last year, the Bush Administration plunged in, launching a 5-year, $1.7 billion initiative to commercialize hydrogen-powered cars by 2020. In March, the European Commission launched the first phase of an expected 10-year, €2.8 billion public-private partnership to develop hydrogen fuel cells. Last year, the Japanese government nearly doubled its fuel cell R&D budget to $268 million. Canada, China, and other countries have mounted efforts of their own. Car companies have already spent billions of dollars trying to reinvent their wheels—or at least their engines—to run on hydrogen: They've turned out nearly 70 prototype cars and trucks as well as dozens of buses. Energy and car companies have added scores of hydrogen fueling stations worldwide, with many more on the drawing boards (see p. 964). And the effort is still gaining steam.

    The problem of price

    Still, despite worthwhile goals and good intentions, many researchers and energy experts say current hydrogen programs fall pitifully short of what's needed to bring a hydrogen economy to pass. The world's energy infrastructure is too vast, they say, and the challenges of making hydrogen technology competitive with fossil fuels too daunting unless substantially more funds are added to the pot. The current initiatives are just “a start,” Dresselhaus says. “None of the reports say it's impossible,” she adds. However, Dresselhaus says, “the problem is very difficult no matter how you slice it.”

    Economic and political difficulties abound, but the most glaring barriers are technical. At the top of the list: finding a simple and cheap way to produce hydrogen. As is often pointed out, hydrogen is not a fuel in itself, as oil and coal are. Rather, like electricity, it's an energy carrier that must be generated using another source of power. Hydrogen is the most common element in the universe. But on Earth, nearly all of it is bound to other elements in molecules, such as hydrocarbons and water. Hydrogen atoms must be split off these molecules to generate dihydrogen gas (H2), the form it needs to be in to work in most fuel cells. These devices then combine hydrogen and oxygen to make water and liberate electricity in the process. But every time a fuel is converted from one source, such as oil, to another, such as electricity or hydrogen, it costs energy and therefore money.

    Today, by far the cheapest way to produce hydrogen is by using steam and catalysts to break down natural gas into H2 and CO2. But although the technology has been around for decades, current steam reformers are only 85% efficient, meaning that 15% of the energy in natural gas is lost as waste heat during the reforming process. The upshot, according to Peter Devlin, who runs a hydrogen production program at DOE, is that it costs $5 to produce the amount of hydrogen that releases as much energy as a gallon of gasoline. Current techniques for liberating hydrogen from coal, oil, or water are even less efficient. Renewable energy such as solar and wind power can also supply electricity to split water, without generating CO2. But those technologies are even more expensive. Generating electricity with solar power, for example, remains 10 times more expensive than doing so with a coal plant. “The energy in hydrogen will always be more expensive than the sources used to make it,” said Donald Huberts, chief executive officer of Shell Hydrogen, at a hearing before the U.S. House Science Committee in March. “It will be competitive only by its other benefits: cleaner air, lower greenhouse gases, et cetera.”

    The good news, Devlin says, is that production costs have been coming down, dropping about $1 per gallon ($0.25/liter) of gasoline equivalent over the past 3 years. The trouble is that DOE's own road map projects that drivers will buy hydrogen-powered cars only if the cost of the fuel drops to $1.50 per gallon of gasoline equivalent by 2010 and even lower in the years beyond. “The easy stuff is over,” says Devlin. “There are going to have to be some fundamental breakthroughs to get to $1.50.”

    There are ideas on the drawing board. In addition to stripping hydrogen from fossil fuels, DOE and other funding agencies are backing innovative research ideas to produce hydrogen with algae, use sunlight and catalysts to split water molecules directly, and siphon hydrogen from agricultural waste and other types of “biomass.” Years of research in all of these areas, however, have yet to yield decisive progress.

    To have and to hold

    If producing hydrogen cheaply has researchers scratching their heads, storing enough of it on board a car has them positively stymied. Because hydrogen is the lightest element, far less of it can fit into a given volume than other fuels. At room temperature and pressure, hydrogen takes up roughly 3000 times as much space as gasoline containing the same amount of energy. That means storing enough of it in a fuel tank to drive 300 miles (483 kilometers)—DOE's benchmark—requires compressing it, liquefying it, or using some other form of advanced storage.

    Unfortunately, none of these solutions is up to the task of carrying a vehicle 300 miles on a tank. Nearly all of today's prototype hydrogen vehicles use compressed gas. But these are still bulky. Tanks pressurized to 10,000 pounds per square inch (70 MPa) take up to eight times the volume of a current gas tank to store the equivalent amount of fuel. Because fuel cells are twice as efficient as gasoline internal combustion engines, they need fuel tanks four times as large to propel a car the same distance.
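The volume figures above follow from hydrogen's low energy content per liter. A quick back-of-envelope check, using standard physical constants that are our own textbook approximations rather than numbers from the article:

```python
# Rough check of the hydrogen storage arithmetic. All constants are
# standard approximations, not figures taken from the article.
LHV_H2 = 120.0               # MJ/kg, lower heating value of hydrogen
LHV_GASOLINE = 32.0          # MJ/L, approximate energy density of gasoline
RHO_H2_AMBIENT = 8.24e-5     # kg/L at ~25 degC and 1 atm (ideal-gas value)
RHO_H2_70MPA = 3.9e-2        # kg/L at 70 MPa (approximate real-gas density)

def volume_ratio_vs_gasoline(rho_h2_kg_per_liter):
    """How many liters of H2 carry the energy of one liter of gasoline."""
    h2_mj_per_liter = LHV_H2 * rho_h2_kg_per_liter
    return LHV_GASOLINE / h2_mj_per_liter

print(round(volume_ratio_vs_gasoline(RHO_H2_AMBIENT)))   # prints 3236 (~3000x)
print(round(volume_ratio_vs_gasoline(RHO_H2_70MPA), 1))  # prints 6.8
```

The roughly sevenfold raw-volume penalty at 70 MPa, plus tank walls and fittings, squares with the "up to eight times" figure; halving it for the fuel cell's twofold efficiency advantage gives the factor of four quoted above.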

    Liquid hydrogen takes up much less room but poses other problems. The gas liquefies at −253°C, only about 20 degrees above absolute zero. Chilling it to that temperature requires about 30% of the energy in the hydrogen. And the heavily insulated tanks needed to keep liquid fuel from boiling away are still larger than ordinary gasoline tanks.

    Other advanced materials are also being investigated to store hydrogen, such as carbon nanotubes, metal hydrides, and substances such as sodium borohydride that produce hydrogen by means of a chemical reaction. Each material has shown some promise. But for now, each still has fatal drawbacks, such as requiring high temperature or pressures, releasing the hydrogen too slowly, or requiring complex and time-consuming materials recycling. As a result, many experts are pessimistic. A report last year from DOE's Basic Energy Sciences Advisory Committee concluded: “A new paradigm is required for the development of hydrogen storage materials to facilitate a hydrogen economy.” Peter Eisenberger, vice provost of Columbia University's Earth Institute, who chaired the APS report, is even more blunt. “Hydrogen storage is a potential showstopper,” he says.


    Current hydrogen storage technologies fall short of both the U.S. Department of Energy target and the performance of petroleum.


    Breakthroughs needed

    Another area in need of serious progress is the fuel cells that convert hydrogen to electricity. Fuel cells have been around since the 1800s and have been used successfully for decades to power spacecraft. But their high cost and other drawbacks have kept them from being used for everyday applications such as cars. Internal combustion engines typically cost $30 for each kilowatt of power they produce. Fuel cells, which are loaded with precious-metal catalysts, are 100 times more expensive than that.

    If progress on renewable technologies is any indication, near-term prospects for cheap fuel cells aren't bright, says Joseph Romm, former acting assistant secretary of energy for renewable energy in the Clinton Administration and author of a recent book, The Hype About Hydrogen: Fact and Fiction in the Race to Save the Climate. “It has taken wind power and solar power each about twenty years to see a tenfold decline in prices, after major government and private sector investments, and they still each comprise well under 1% of U.S. electricity generation,” Romm said in written testimony in March before the House Science Committee reviewing the Administration's hydrogen initiative. “A major technology breakthrough is needed in transportation fuel cells before they will be practical.” Various technical challenges—such as making fuel cells rugged enough to withstand the shocks of driving and ensuring the safety of cars loaded with flammable hydrogen gas—are also likely to make hydrogen cars costlier to engineer and slower to win public acceptance.

    If they clear their internal technical hurdles, hydrogen fuel cell cars face an obstacle from outside: the infrastructure they need to refuel. If hydrogen is generated in centralized plants, it will have to be trucked or piped to its final destination. But because of hydrogen's low density, it would take 21 tanker trucks to haul the amount of energy a single gasoline truck delivers today, according to a study by Switzerland-based energy researchers Baldur Eliasson and Ulf Bossel. A hydrogen tanker traveling 500 kilometers would devour the equivalent of 40% of its cargo.

    Ship the hydrogen as a liquid? Commercial-scale coolers are too energy-intensive for the job, Eliasson and Bossel point out. Transporting hydrogen through long-distance pipelines wouldn't improve matters much. Eliasson and Bossel calculate that 1.4% of the hydrogen flowing through a pipeline would be required to power the compressors needed to pump it for every 150 kilometers the gas must travel. The upshot, Eliasson and Bossel report: “Only 60% to 70% of the hydrogen fed into a pipeline in Northern Africa would actually arrive in Europe.”
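Eliasson and Bossel's per-segment loss compounds geometrically with distance. A minimal sketch of that arithmetic, where the 1.4%-per-150-km loss rate is theirs but the 4500-km pipeline distance is our own assumption for a Northern Africa-to-Europe route:

```python
# Compounding pipeline losses: a fixed fraction of the gas powers the
# compressors every 150 km, so the delivered fraction decays geometrically.
LOSS_PER_SEGMENT = 0.014   # 1.4% of the flow per 150 km (Eliasson & Bossel)
SEGMENT_KM = 150.0

def fraction_delivered(distance_km):
    """Fraction of injected hydrogen that reaches the far end of the line."""
    return (1.0 - LOSS_PER_SEGMENT) ** (distance_km / SEGMENT_KM)

print(round(fraction_delivered(4500), 2))  # prints 0.66
```

At the assumed distance, roughly two-thirds of the hydrogen arrives, in line with the 60% to 70% figure quoted above.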

    To lower those energy penalties, some analysts favor making hydrogen at fueling stations or in homes where it will be used, with equipment powered by the existing electricity grid or natural gas. But onsite production wouldn't be cheap, either. Eliasson and Bossel calculate that to supply hydrogen for 100 to 2000 cars per day, an electrolysis-based fueling station would require between 5 and 81 megawatts of electricity. “The generation of hydrogen at filling stations would make a threefold increase of electric power generating capacity necessary,” they report. And at least for the foreseeable future, that extra electricity is likely to come from fossil fuels.

    Whichever approach wins out, it will need a massive new hydrogen infrastructure to deliver the goods. The 9 million tons of hydrogen (enough to power between 20 million and 30 million cars) that the United States produces yearly for use in gasoline refining and chemical plants pale beside the needs of a full-blown transportation sector. For a hydrogen economy to catch on, the fuel must be available in 30% to 50% of filling stations when mass-market hydrogen cars become available, says Bernard Bulkin, former chief scientist at BP. A recent study by Marianne Mintz and colleagues at Argonne National Laboratory in Illinois found that creating the infrastructure needed to fuel 40% of America's cars would cost a staggering $500 billion or more.

    Energy and car companies are unlikely to spend such sums unless they know mass-produced hydrogen vehicles are on the way. Carmakers, however, are unlikely to build fleets of hydrogen vehicles without stations to refuel them. “We face a ‘chicken and egg’ problem that will be difficult to overcome,” said Michael Ramage, a former executive vice president of ExxonMobil Research and Engineering, who chaired the NAS hydrogen report, when the report was released in February.

    CO2 free.

    To be a clean energy technology, hydrogen must be generated from wind, solar, or other carbon-free sources.


    Stress test

    Each of the problems faced by the hydrogen economy—production, storage, fuel cells, safety, and infrastructure—would be thorny enough on its own. For a hydrogen economy to succeed, however, all of these challenges must be solved simultaneously. One loose end and the entire enterprise could unravel. Because many of the solutions require fundamental breakthroughs, many U.S. researchers question their country's early heavy emphasis on expensive demonstration projects of fuel cell cars, fueling stations, and other technologies.

    To illustrate the dangers of that approach, the APS report cites the fate of synfuels research in the 1970s and '80s. President Gerald Ford proposed that effort in 1975 as a response to the oil crisis of the early 1970s. But declining oil prices in the 1980s and unmet expectations from demonstration projects undermined industrial and congressional support for the technology. For hydrogen, the report's authors say, the “enormous performance gaps” between existing technology and what is needed for a hydrogen economy to take root means that “the program needs substantially greater emphasis on solving the fundamental science problems.”

    Focusing the hydrogen program on basic research will naturally give it the appropriate long-term focus it deserves, Romm and others believe. In the meantime, they say, the focus should be on slowing the buildup of greenhouse gases. “If we fail to limit greenhouse gas emissions over the next decade—and especially if we fail to do so because we have bought into the hype about hydrogen's near-term prospects—we will be making an unforgivable national blunder that may lock in global warming for the U.S. of 1 degree Fahrenheit [0.56°C] per decade by mid-century,” Romm told the House Science Committee in March in written testimony.

    To combat the warming threat, funding agencies should place a near-term priority on promoting energy efficiency, research on renewables, and development of hybrid cars, critics say. After all, many researchers point out, as long as hydrogen for fuel cell cars is provided from fossil fuels, much the same environmental benefits can be gained by adopting hybrid gasoline-electric and advanced diesel engines. As MIT chemist and former DOE director of energy research John Deutch and colleagues point out on page 974, hybrid electric vehicles—a technology already on the market—would improve energy efficiency and reduce greenhouse gas emissions almost as well as fuel cell vehicles that generate hydrogen from an onboard gasoline reformer, an approach that obviates the need for building a separate hydrogen infrastructure.

    Near-term help may also come from capturing CO2 emissions from power and industrial plants and storing them underground, a process known as carbon sequestration (see p. 962). Research teams from around the world are currently testing a variety of schemes for doing that. But the process remains significantly more expensive than current energy. “Until an economical solution to the sequestration problem is found, net reductions in overall CO2 emissions can only come through advances in energy efficiency and renewable energy,” the APS report concludes.

    In response to the litany of concerns over making the transition to a hydrogen economy, JoAnn Milliken, who heads hydrogen-storage research for DOE, points out that DOE and other funding agencies aren't promoting hydrogen to the exclusion of other energy research. Renewable energy, carbon sequestration, and even fusion energy all remain in the research mix. Criticism that too much is being spent on demonstration projects is equally misguided, she says, noting that such projects make up only 13% of DOE's hydrogen budget, compared with 85% for basic and applied research. Both are necessary, she says: “We've been doing basic research on hydrogen for a long time. We can't just do one or the other.” Finally, she points out, funding agencies have no illusions about the challenge in launching the hydrogen economy. “We never said this is going to be easy,” Milliken says. The inescapable truth is that “we need a substitute for gasoline. Gas hybrids are going to improve fuel economy. But they can't solve the problem.”

    Yet, if that's the case, many energy experts argue, governments should be spending far more money to lower the technical and economic barriers to all types of alternative energy—hydrogen included—and bring it to reality sooner. “Energy is the single most important problem facing humanity today,” says Richard Smalley of Rice University in Houston, Texas, a 1996 Nobel laureate in chemistry who has been campaigning for increased energy sciences funding for the last 2 years. Among Smalley's proposals: a 5-cent-per-gallon tax on gasoline in the United States to fund $10 billion annually in basic energy sciences research. Because of the combination of climate change and the soon-to-be-peaking production in fossil fuels, Smalley says, “it really ought to be the top project in worldwide science right now.”

    Although not all researchers are willing to wade into the political minefield of backing a gasoline tax, few disagree with his stand. “I think he's right,” Dresselhaus says of the need to boost the priority of basic energy sciences research. With respect to the money needed to take a realistic stab at making an alternative energy economy a reality, Dresselhaus says: “Most researchers think there isn't enough money being spent. I think the investment is pretty small compared to the job that has to be done.” Even though it sounds like a no-brainer, the hydrogen economy will take abundant gray matter and greenbacks to bring it to fruition.

  12. The Carbon Conundrum

    1. Robert F. Service

    En route to hydrogen, the world will have to burn huge amounts of fossil fuels—and find ways to deal with their climate-changing byproducts

    Even if the hydrogen economy were technically and economically feasible today, weaning the world off carbon-based fossil fuels would still take decades. During that time, carbon combustion will continue to pour greenhouse gases into the atmosphere—unless scientists find a way to reroute them. Governments and energy companies around the globe have launched numerous large-scale research and demonstration projects to capture and store, or sequester, unwanted carbon dioxide (see table). Although final results are years off, so far the tests appear heartening. “It seems to look more and more promising all the time,” says Sally Benson, a hydrogeologist at Lawrence Berkeley National Laboratory in California. “For the first time, I think the technical feasibility has been established.”

    Burial at sea.

    A million tons a year of CO2 from the Sleipner natural-gas field in the North Sea are reinjected underground.


    Last hope?

    Fossil fuels account for most of the 6.5 billion tons (gigatons) of carbon—the amount present in 25 gigatons of CO2—that people around the world vent into the atmosphere every year. And as the amount of the greenhouse gas increases, so does the likelihood of triggering a debilitating change in Earth's climate. Industrialization has already raised atmospheric CO2 levels from 280 to 370 parts per million, which is likely responsible for a large part of the 0.6°C rise in the average global surface temperature over the past century. As populations explode and economies surge, global energy use is expected to rise by 70% by 2020, according to a report last year from the European Commission, much of it to be met by fossil fuels. If projections of future fossil fuel use are correct and nothing is done to change matters, CO2 emissions will increase by 50% by 2020.

    To limit the amount of CO2 pumped into the air, many scientists have argued for capturing a sizable fraction of that CO2 from electric plants, chemical factories, and the like and piping it deep underground. In June, Ronald Oxburgh, Shell's chief in the United Kingdom, called sequestration essentially the last best hope to combat climate change. “If we don't have sequestration, then I see very little hope for the world,” Oxburgh told the British newspaper The Guardian.

    Although no one has adopted the strategy on a large scale, oil companies have been piping CO2 underground for decades to coax more oil from aging wells; the gas lowers the oil's viscosity, helping it flow. Because they weren't trying to maximize CO2 storage, companies rarely tracked whether the CO2 remained underground or caused unwanted side effects.

    That began to change in the early 1990s, when researchers began to consider sequestering CO2 to keep it out of the atmosphere. The options for doing so are limited, says Robert Kane, who heads carbon-sequestration programs at the U.S. Department of Energy in Washington, D.C. You can grow plants that consume CO2 to fuel their growth, or pipe the gas to the deep ocean or underground. But planted vegetation can burn or be harvested, ultimately returning the CO2 back into the atmosphere. And placing vast amounts of CO2 into the ocean creates an acidic plume, which can wreak havoc on deep-water ecosystems (Science, 3 August 2001, p. 790). As a result, Kane and others say, much recent research has focused on storing the CO2 underground in depleted oil and gas reservoirs, coal seams that are too deep to mine, and underground pockets of saltwater called saline aquifers.

    “Initially, it sounded like a wild idea,” Benson says, in part because the volume of gas that would have to be stored is enormous. For example, storing just 1 gigaton of CO2—about 4% of what we vent annually worldwide—would require moving 4.8 million cubic meters of gas a day, equivalent to about one-third the volume of all the oil shipped daily around the globe. But early studies suggest that there is enough underground capacity to store hundreds of years' worth of CO2 injection, and that potential underground storage sites exist worldwide. According to Benson, studies in the mid-1990s pegged the underground storage capacity between 1000 and 10,000 gigatons of CO2. More detailed recent analyses are beginning to converge around the middle of that range, Benson says. But even the low end is comfortably higher than the 25 gigatons of CO2 humans produce each year, she notes.
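    The volume comparison above can be sanity-checked with simple arithmetic. One assumption is needed that the article doesn't state: the density of the injected CO2. The value used below (~570 kg/m³, typical of supercritical CO2 at reservoir conditions) is chosen for illustration:

```python
# Daily volume needed to inject 1 gigaton of CO2 per year, assuming the
# gas is compressed to a supercritical density of ~570 kg/m^3 (density
# at actual reservoir depths varies; this value is illustrative).
GIGATON_KG = 1e12          # kg in one gigaton
DENSITY = 570.0            # kg/m^3, assumed supercritical CO2

daily_mass = GIGATON_KG / 365        # kg of CO2 per day
daily_volume = daily_mass / DENSITY  # m^3 per day
print(f"{daily_volume / 1e6:.1f} million m^3/day")  # ~4.8, as cited
```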

    To test the technical feasibility, researchers have recently begun teaming up with oil and gas companies to study their CO2 piping projects. One of the first, and the biggest, is the Weyburn project in Saskatchewan, Canada. The site is home to an oil field discovered in 1954. Since then, about one-quarter of the reservoir's oil has been removed, producing 1.4 billion barrels. In 1999, the Calgary-based oil company EnCana launched a $1.5 billion, 30-year effort to pipe 20 million metric tons of CO2 into the reservoir after geologists estimated that it would increase the field's yield by another third. For its CO2, EnCana teamed up with the Dakota Gasification Co., which operates a plant in Beulah, North Dakota, that converts coal into a hydrogen-rich gas used in industry and that emits CO2 as a byproduct. EnCana built a 320-km pipeline to carry pressurized CO2 to Weyburn, where it's injected underground.

    In September 2000, EnCana began injecting an estimated 5000 metric tons of CO2 a day 1500 meters beneath the surface. The technology is simple: Compressors force pressurized CO2 down a long pipe drilled into the underground reservoir. To date, nearly 3.5 million metric tons of CO2 have been locked away in the Weyburn reservoir.

    When the project began, the United States was still party to the Kyoto Protocol, the international treaty designed to reduce greenhouse gas emissions. So the United States, Canada, the European Union, and others funded $28 million worth of modeling, monitoring, and geologic studies to track the fate of Weyburn's underground CO2.

    For the first phase of that study, which ended in May, 80 researchers including geologists and soil scientists monitored the site for 4 years. “The short answer is it's working,” says geologist and Weyburn team member Ben Rostron of the University of Alberta in Edmonton: “We've got no evidence of significant amounts of injected CO2 coming out at the surface.” That was what they expected, Rostron says: Wells are sealed and capped, and four layers of rock thought to be impermeable to CO2 lie between the oil reservoir and the surface.

    A similar early-stage success story is under way in the North Sea off the coast of Norway. Statoil, Norway's largest oil company, launched a sequestration pilot project from an oil rig there in 1996 to avoid a $55-a-ton CO2 tax that the Norwegian government levies on energy producers. The rig taps a natural gas field known as Sleipner, which also contains large amounts of CO2. Normally, gas producers separate the CO2 from the natural gas before feeding the latter into a pipeline or liquefying it for transport. The CO2 is typically vented into the air. But for the past 8 years, Statoil has been injecting about 1 million tons of CO2 a year back into a layer of porous sandstone, which lies between 550 and 1500 meters beneath the ocean floor. Sequestering the gas costs about $15 per ton of CO2 but saves the company $40 million a year in tax.

    Researchers have monitored the fate of the CO2 with the help of seismic imaging and other tools. So far, says Stanford University petroleum engineer Franklin Orr, everything suggests that the CO2 is staying put. Fueled by these early successes, other projects are gearing up as well. “One can't help but be struck by the dynamism in this community right now,” says Princeton University sequestration expert Robert Socolow. “There is a great deal going on.”

    Despite the upbeat early reviews, most researchers and observers are cautious about the prospects for large-scale sequestration. “Like every environmental issue, there are certain things that happen when the quantity increases,” Socolow says. “We have enough history of getting this [type of thing] wrong that everyone is wary.”

    Safety tops the concerns. Although CO2 is nontoxic (it constitutes the bubbles in mineral water and beer), it can be dangerous. If it percolates into a freshwater aquifer, it can acidify the water, potentially leaching lead, arsenic, or other dangerous trace elements into the mix. If the gas rises to the subsurface, it can affect soil chemistry. And if it should escape above ground in a windless depression, the heavier-than-air gas could collect and suffocate animals or people. Although such a disaster hasn't happened yet with sequestered CO2, the threat became tragically clear in 1986, when an estimated 80 million cubic meters of CO2 erupted from the Lake Nyos crater in Cameroon, killing 1800 people.

    Money is another issue. Howard Herzog, an economist at the Massachusetts Institute of Technology in Cambridge and an expert on the cost of sequestration, estimates that large-scale carbon sequestration would add 2 to 3 cents per kilowatt-hour to the cost of electricity delivered to the consumer—about one-third the average cost of residential electricity in the United States. (A kilowatt-hour of electricity can power 10 100-watt light bulbs for an hour.) Says Orr: “The costs are high enough that this won't happen on a big scale without an incentive structure” such as Norway's carbon tax or an emissions-trading program akin to that used with sulfur dioxide, a component of acid rain.
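    Herzog's estimate is easy to put in percentage terms. The base residential rate below (8.5 cents per kilowatt-hour, roughly the U.S. average at the time) is an assumed figure inferred from the article's "about one-third" comparison, not a number Herzog gives:

```python
# Percentage increase in residential electricity cost from sequestration,
# using Herzog's 2-3 cents/kWh estimate against an assumed base rate.
BASE_RATE = 8.5  # cents/kWh, assumed mid-2000s US residential average

for added in (2.0, 3.0):
    pct = 100 * added / BASE_RATE
    print(f"+{added:.0f} cents/kWh -> ~{pct:.0f}% increase")
```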

    But although sequestration may not be cheap, Herzog says, “it's affordable.” Generating electricity with coal and storing the carbon underground still costs only about 14% as much as solar-powered electricity. And unlike most renewable technologies, sequestration can be scaled up relatively easily and retrofitted onto existing power and chemical plants. That's particularly important for dealing with the vast amounts of coal that are likely to be burned as countries such as China and India modernize their economies. “Coal is not going to go away,” Herzog says. “People need energy, and you can't make energy transitions easily.” Sequestration, he adds, “gives us time to develop 22nd century energy sources.” That could give researchers a window in which to develop and install the technologies needed to power the hydrogen economy.

  13. Choosing a CO2 Separation Technology

    1. Robert F. Service

    Dark victory.

    Coal can be made cleaner for a price.


    If governments move to deep-six carbon dioxide, much of the effort is likely to target emissions from coal-fired power plants. Industrial companies have used detergent-like chemicals and solvents for decades to “scrub” CO2 from flue gases, a technique that can be applied to existing power plants. The downside is that the technique is energy intensive and reduces a coal plant's efficiency by as much as 14%. Another option is to burn coal with pure oxygen, which produces only CO2 and water vapor as exhaust gases. The water vapor can then be condensed, leaving just the CO2. But this technology too consumes a great deal of energy to generate the pure oxygen in the first place and reduces a coal plant's overall efficiency by about 11%. A third approach removes the carbon before combustion: The coal is first converted into a hydrogen-rich gas, and the CO2 is stripped out before the gas is burned. This technique is expected to be cheaper and more efficient, but it requires building plants based on a newer technology, known as Integrated Gasification Combined Cycle. Whichever approach wins out, it will take a carbon tax or some other incentive to drive utility companies away from proven electricity-generating technology.

  14. Fire and ICE: Revving Up for H2

    1. Adrian Cho

    The first hydrogen-powered cars will likely burn the stuff in good old internal combustion engines. But can they drive the construction of hydrogen infrastructure?

    In the day we sweat it out in the streets of a runaway American dream.

    At night we ride through mansions of glory in suicide machines,

    Sprung from cages out on highway 9,

    Chrome wheeled, fuel injected

    and steppin' out over the line …


    Fear not, sports car aficionados and Bruce Springsteen fans: Even if the hydrogen economy takes off, it may be decades before zero-emission fuel cells replace your beloved piston-pumping, fuel-burning, song-inspiring internal combustion engine. In the meantime, however, instead of filling your tank with gasoline, you may be pumping hydrogen.

    A handful of automakers are developing internal combustion engines that run on hydrogen, which burns more readily than gasoline and produces almost no pollutants. If manufacturers can get enough of them on the road in the next few years, hydrogen internal combustion engine (or H2 ICE) vehicles might spur the construction of a larger infrastructure for producing and distributing hydrogen—the very same infrastructure that fuel cell vehicles will require.

    If all goes as hoped, H2 ICE vehicles could solve the chicken-or-the-egg problem of which comes first, the fuel cell cars or the hydrogen stations to fuel them, says Robert Natkin, a mechanical engineer at Ford Motor Co. in Dearborn, Michigan. “The prime reason for doing this is to get the hydrogen economy under way as quickly as possible,” Natkin says. In fact, some experts say that in the race to economic and technological viability, the more cumbersome, less powerful fuel cell may never catch up to the lighter, peppier, and cheaper H2 ICE. “If the hydrogen ICEs work the way we think they can, you may never see fuel cells” powering cars, says Stephen Ciatti, a mechanical engineer at Argonne National Laboratory in Illinois.

    BMW, Ford, and Mazda expect to start producing H2 ICE vehicles for government and commercial fleets within a few years. But to create demand for hydrogen, those cars and trucks will have to secure a niche in the broader consumer market, and that won't be a drive in the countryside. The carmakers have taken different tacks to keeping hydrogen engines running smoothly and storing enough hydrogen onboard a vehicle to allow it to wander far from a fueling station, and it remains to be seen which approach will win out. And, of course, H2 ICE vehicles will require fueling stations, and most experts agree that the public will have to help pay for the first ones.

    Most important, automakers will have to answer a question that doesn't lend itself to simple, rational analysis: At a time when gasoline engines run far cleaner than they once did and sales of gas-guzzling sport utility vehicles continue to grow in spite of rising oil prices, what will it take to put the average driver behind the wheel of an exotic hydrogen-burning car?


    Hydrogen engines, such as the one that powers Ford's Model U concept car, may provide the technological steppingstone to fuel-cell vehicles.


    Running lean and green

    An internal combustion engine draws its power from a symphony of tiny explosions in four beats. Within an engine, pistons slide up and down within snug-fitting cylinders. First, a piston pushes up into its cylinder to compress a mixture of air and fuel. When the piston nears the top of its trajectory, the sparkplug ignites the vapors. Next, the explosion pushes the piston back down, turning the engine's crankshaft and, ultimately, the wheels of the car. Then, propelled by inertia and the other pistons, the piston pushes up again and forces the exhaust from the explosion out valves in the top of the cylinder. Finally, the piston descends again, drawing a fresh breath of the air-fuel mixture into the cylinder through a different set of valves and beginning the four-stroke cycle anew.

    A well-tuned gasoline engine mixes fuel and air in just the right proportions to ensure that the explosion consumes essentially every molecule of fuel and every molecule of oxygen—a condition known as “running at stoichiometry.” Of course, burning gasoline produces carbon monoxide, carbon dioxide, and hydrocarbons. And when running at stoichiometry, the combustion is hot enough to burn some of the nitrogen in the air, creating oxides of nitrogen (NOx), which seed the brown clouds of smog that hang over Los Angeles and other urban areas.

    In contrast, hydrogen coughs up almost no pollutants. Burning hydrogen produces no carbon dioxide, the most prevalent heat-trapping greenhouse gas, or other carbon compounds. And unlike gasoline, hydrogen burns even when the air-fuel mixture contains far less hydrogen than is needed to consume all the oxygen—a condition known as “running lean.” Reduce the hydrogen-air mixture to roughly half the stoichiometric ratio, and the temperature of combustion falls low enough to extinguish more than 90% of NOx production. Try that with a gasoline engine and it will run poorly, if at all.
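    The stoichiometric balance described above can be worked out from molar masses. A sketch, treating air as 23.2% oxygen by mass (a standard approximation, not a figure from the article):

```python
# Stoichiometric air-to-fuel mass ratio for hydrogen: 2 H2 + O2 -> 2 H2O.
M_H2, M_O2 = 2.016, 31.998     # g/mol
O2_MASS_FRAC = 0.232           # approximate mass fraction of O2 in air

o2_per_h2 = (M_O2 / 2) / M_H2  # kg of O2 needed per kg of H2
afr = o2_per_h2 / O2_MASS_FRAC # kg of air per kg of H2
print(f"stoichiometric ratio: ~{afr:.0f} kg air per kg H2")

# "Running lean" at roughly half the stoichiometric fuel fraction means
# supplying the same air with only half the hydrogen, i.e. an
# equivalence ratio of about 0.5.
```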

    But before they can take a victory lap, engineers working on H2 ICEs must solve some problems with engine performance. Hydrogen packs more energy per kilogram than gasoline, but it's also the least dense gas in nature, which means it takes up a lot of room in an engine's cylinders, says Christopher White, a mechanical engineer at Sandia National Laboratories in Livermore, California. “That costs you power because there's less oxygen to consume,” he says. At the same time, it takes so little energy to ignite hydrogen that the hydrogen-air mixture tends to go off as soon as it gets close to something hot, like a sparkplug. Such “preignition” can make an engine “knock” or even backfire like an old Model T.

    Power play

    To surmount such problems, BMW, Ford, and Mazda are taking different tacks. Ford engineers use a mechanically driven pump called a supercharger to force more air and fuel into the combustion chamber, increasing the energy of each detonation. “We basically stuff another one-and-a-half times more air, plus an appropriate amount of fuel, into the cylinders,” says Ford's Natkin. Keeping the hydrogen-air ratio very lean—less than 40% of the stoichiometric ratio—prevents preignition and backfire, he says. A hydrogen-powered Focus compact car can travel about 240 kilometers before refueling its hydrogen tanks, which are pressurized to 350 times atmospheric pressure. And with an electric hybrid system and tanks pressurized to 700 atmospheres, or 70 MPa, Ford's Model U concept car can range twice as far.

    Mazda's H2 ICE prototype also carries gaseous hydrogen, but it burns it in a rotary engine driven by two triangular rotors. To overcome hydrogen's propensity to displace air, Mazda engineers force the hydrogen into the combustion chamber only after the chamber has filled with air and the intake valves have closed. As well as boosting power, such “direct injection” eliminates backfiring by separating the hydrogen and oxygen until just before they're supposed to detonate, explains Masanori Misumi, an engineer at Mazda Motor Corp. in Hiroshima, Japan. Mazda's hydrogen engine will also run on gasoline.

    When BMW's H2 ICE needs maximum power, it pours on the hydrogen to run at stoichiometry. Otherwise, it runs lean. A hydrogen-powered Beemer also carries denser liquid hydrogen, boiling away at −253°C inside a heavily insulated tank, which greatly increases the distance a car can travel between refueling stops. In future engines, the chilly gas might cool the air-fuel mixture, making it denser and more potent than a warm mixture. The cold hydrogen gas might also cool engine parts, preventing backfire and preignition. BMW's H2 ICE can run on gasoline as well.

    Unlike Ford and Mazda, BMW has no immediate plans to pursue fuel cell technology alongside its H2 ICEs. A fuel cell can wring more useful energy from a kilogram of hydrogen, but it cannot provide the wheel-spinning power that an internal combustion engine can, says Andreas Klugescheid, a spokesperson for the BMW Group in Munich. “Our customers don't buy a car just to get from A to B, but to have fun in between,” he says. “At BMW we're pretty sure that the hydrogen internal combustion engine is the way to satisfy them.”

    The first production H2 ICE vehicles will likely roll off the assembly line within 5 years, although the automakers won't say precisely when. “We would anticipate a lot more hydrogen internal combustion engine vehicles on the road sooner rather than later as we continue to develop fuel cell vehicles,” says Michael Vaughn, a spokesperson for Ford in Dearborn. Focusing on the market for luxury performance cars, BMW plans to produce some hydrogen-powered vehicles in the current several-year model cycle of its flagship 7 Series cars. Automakers will introduce the cars into commercial and government fleets, taking advantage of the centralized fueling facilities to carefully monitor their performance.

    What a gas!

    H2 ICE vehicles, such as BMW's and Mazda's prototypes, promise performance as well as almost zero emissions.


    Supplying demand

    In the long run, most experts agree, the hydrogen fuel cell holds the most promise for powering clean, ultraefficient cars. If they improve as hoped, fuel cells might usefully extract two-thirds of the chemical energy contained in a kilogram of hydrogen. In contrast, even with help from an electric hybrid system, an H2 ICE probably can extract less than half. (A gasoline engine makes use of about 25% of the energy in its fuel, the rest going primarily to heat.) And whereas an internal combustion engine will always produce some tiny amount of pollution, a fuel cell promises true zero emissions. But H2 ICE vehicles enjoy one advantage that could bring them to market quickly and help increase the demand for hydrogen filling stations, says James Francfort, who manages the Department of Energy's Advanced Vehicle Testing Activity at the Idaho National Engineering and Environmental Laboratory in Idaho Falls. “The car guys know how to build engines,” Francfort says. “This looks like something that could be done now.”

    Developers are hoping that H2 ICE vehicles will attract enough attention in fleets that a few intrepid individuals will go out of their way to buy them, even if they have to fuel them at fleet depots and the odd publicly funded fueling station. If enough people follow the trendsetters, eventually the demand for hydrogen refueling stations will increase to the point at which they become profitable to build and operate—or so the scenario goes.

    All agree that if H2 ICEs are to make it out of fleets and into car dealers' showrooms, they'll need a push from the public sector. They're already getting it in California, which gives manufacturers environmental credits for bringing H2 ICE vehicles to market. And in April, California Governor Arnold Schwarzenegger announced a plan to line California's interstate highways with up to 200 hydrogen stations by 2010, just in time to kick-start the market for H2 ICEs.

    Ultimately, the fate of H2 ICE vehicles lies with consumers, who have previously turned a cold shoulder to alternative technologies such as cars powered by electricity, methanol, and compressed natural gas. With near-zero emissions and an edge in power over fuel cells, the H2 ICE might catch on with car enthusiasts yearning to go green, a demographic that has few choices in today's market, says BMW's Klugescheid. If the H2 ICE can help enough gearheads discover their inner tree-hugger, the technology might just take off. “There are enough people who are deeply in love with performance cars but also have an environmental conscience,” Klugescheid says.

    Developers hope that the H2 ICE vehicles possess just the right mixture of environmental friendliness, futuristic technology, and good old-fashioned horsepower to capture the imagination of the car-buying public. A few short years should tell if they do. In the meantime, it wouldn't hurt if Bruce Springsteen wrote a song about them, too.

  15. Will the Future Dawn in the North?

    1. Gretchen Vogel*
    1. *With reporting by Daniel Clery.

    With geothermal and hydroelectric sources supplying almost all of its heat and electricity, Iceland is well on the way to energy self-sufficiency. Now it is betting that hydrogen-fueled transportation will supply the last big piece of the puzzle

    REYKJAVIK—As commuters in this coastal city board the route 2 bus, some get an unexpected chemistry lesson. The doors of the bus are emblazoned with a brightly painted diagram of a water molecule, H2O. When they swing open, oxygen goes right and hydrogen left, splitting the molecule in two. The bus's sponsors hope that soon all of Iceland's nearly 300,000 residents not only will know this chemical reaction but will be relying on it to get around: By 2050, they say, Iceland should run on a completely hydrogen-based energy economy. The hydrogen-powered fuel cell buses—three ply the route regularly—are the first step toward weaning Iceland off imported fossil fuels, says Thorsteinn Sigfusson, a physicist at the University of Iceland in Reykjavik and founding chair of Icelandic New Energy, a company launched to test and develop hydrogen-based transport and energy systems.

    Although other European countries are fielding similar pilot projects, this volcanic island nation is uniquely poised to tap hydrogen's potential. Iceland already uses water—either hot water from geothermal sources or falling water from hydroelectric dams—to provide hot showers, heat its homes, and light its streets and buildings through the long, dark winter just south of the Arctic Circle. “This is a sustainable Texas,” says Sigfusson, referring to the plentiful energy welling from the ground. But although the country has the capacity to produce much more energy than it currently needs, it still imports 30% of its energy as oil to power cars and ships. Converting those vehicles to hydrogen fuel could make the country self-sufficient in energy.

    Iceland started taking the idea of a hydrogen economy seriously in the 1990s, after the Kyoto Protocol required that it cut its nonindustrial CO2 emissions. Having already converted more than 95% of its heat and electricity generation to hydroelectric and geothermal energy, which emit almost no CO2, the country focused on the transportation sector. Spurred by an expert commission that recommended hydrogen fuels as the most promising way to convert its renewable energy into a fuel to power cars, the government formed Icelandic New Energy in 1997. Today 51% of the shares are owned by Icelandic sources, including power companies, the University of Iceland, and the government itself. The rest are held by international corporations Norsk Hydro, DaimlerChrysler, and Shell Hydrogen. With $4 million from the European Union and $5 million from its investors, the company bought three hydrogen-powered buses for the city bus fleet and built a fueling station to keep them running.

    The fueling station is part of the country's busiest Shell gasoline station, clearly visible from the main road out of Reykjavik. It boldly proclaims to all passersby in English that hydrogen is “the ultimate fuel” and “We're making the fuel of the future.” The hydrogen is produced overnight using electricity from the city's power grid to split water into hydrogen and oxygen. The hydrogen is then stored as a compressed gas. It takes just over 6 minutes to fill a bus with enough fuel to run a full day's journey. The buses are built from standard bodies outfitted with rooftop hydrogen tanks that make them slightly taller than their diesel cousins. “The first cars were horse carriages retrofitted with engines. We're seeing the same process here,” Sigfusson says. Piped to the fuel cells, the hydrogen combines with oxygen from the air and produces electricity, which drives a motor behind the rear wheels.

    As the buses tour the city, their exhaust pipes emit trails of steam strikingly reminiscent of those that waft from the hot springs in the mountains outside the city. The exhaust is more visible than that from other cars, Sigfusson says, because the fuel cell runs at about 70°C and so the steam is close to the saturation point. But it is almost pure water, he says, so clean “you can drink it.” And because the buses are electric, they are significantly quieter than diesel buses.

    The pilot project has not been trouble-free. On a recent Friday, for example, two of the three buses were out of service for repairs. The buses are serviced in a garage equipped with special vents that remove any highly explosive hydrogen that might escape while a bus is being repaired. On cold winter nights the vehicles must be kept warm in specially designed bays, lest the water vapor left in the system freeze and damage the fuel cells. “They need to be kept like stallions in their stalls,” says Sigfusson, who notes that newer generations of fuel cells are drier and may not need such coddling.

    All aboard.

    Fuel cell buses and a planned car fleet are the latest entries in Iceland's marathon push for total energy independence.


    But despite the hiccups, Sigfusson says the project so far has been encouraging. In 9 months, the buses have driven a total of 40,000 kilometers, while surveys show that public support for a hydrogen economy has remained at a surprising 93%. The next step is a test fleet of passenger cars, Sigfusson says. Icelandic New Energy is negotiating to buy more than a dozen hydrogen-powered cars for corporate or government employees, he says.

    Economic leaders are also optimistic. “I am not a believer that we will have a hydrogen economy tomorrow,” says Friðrik Sophusson, a former finance minister and now managing director of the government-owned National Power Co., a shareholder in Icelandic New Energy. But he believes the investment will pay long-term dividends—not least to his company, which will supply electricity needed to produce the gas. “In 20 years, I believe we will have vehicles running on hydrogen efficiently generated from renewable sources,” Sophusson says. “We are going to produce hydrogen in a clean way, and if the project takes off, we will be in business.”

    Fill 'er up.

    Reykjavik's lone hydrogen station, which manufactures the fuel on site by splitting water molecules, can fill a bus with pressurized gas in 6 minutes.


    Although Iceland may harbor the most ambitious vision of a fossil fuel-free future, other countries without its natural advantages in renewable energy are also experimenting with hydrogen-based technologies. Sigfusson thinks the gas's biggest potential could lie in developing countries that have not yet committed themselves to fossil fuels (see sidebar), but industrialized nations are also pushing hard. In a project partly funded by the European Union (E.U.), nine other European cities now have small fleets of buses—similar to the ones in Reykjavik—plying regular routes. The E.U. imports 50% of its oil, and that figure is expected to rise to 70% over the next 20 to 30 years. In January, European Commission President Romano Prodi pledged to create “a fully integrated hydrogen economy, based on renewable energy sources, by the middle of the century.”

    Realizing that bold ambition is now the job of the Hydrogen and Fuel Cell Technology Platform, an E.U. body. Its advisory panel of 35 prominent industry, research, and civic leaders will coordinate efforts in academia and industry at both the national and European levels and will draw up a research plan and deployment strategy. Planned demonstration projects include a fossil fuel power plant that will produce electricity and hydrogen on an industrial scale while separating and storing the CO2 it produces and a “hydrogen village” where new technologies and hydrogen infrastructure can be tested. In all, the E.U.'s Framework research program intends to spend $300 million on hydrogen and fuel cell research during the 5-year period from 2002 to 2006. Political and public interest in a hydrogen economy “is like a snowball growing and growing,” says Joaquín Martín Bernejo, the E.U. official responsible for research into hydrogen production and distribution.

    As Iceland (not an E.U. member) treads that same path on a more modest scale, its biggest hurdle remains the conversion of its economically vital shipping fleet, which uses half of the country's imported oil. Boats pose tougher technical problems than city buses do. Whereas a bus can run its daily route on only 40 kilograms of hydrogen, Sigfusson says, a small trawler with a 500-kilowatt engine must carry a ton of the gas to spend 4 to 5 days at sea. One way to store enough fuel, he says, might be to sequester the gas in hydrogen-absorbing compounds called metal hydrides. The compound could even serve as ballast for the boat instead of the standard concrete keel ballast.

    Although Iceland's leaders are eager for the hydrogen economy to take off, Sigfusson acknowledges that it will have to appeal directly to Iceland's drivers and fishers. Generous tax breaks to importers of hydrogen vehicles will help, he says, if hydrogen can match the price of heavily taxed fossil fuels: “People will be willing to pay a little more [for a hydrogen vehicle], but they're not willing to pay a lot more. The market has to force down the price of an installed kilowatt.” According to Sigfusson, that is already happening, especially as research investments are ramped up: “There has been a paradigm shift. We had had decades of coffee-room discussions that never led anywhere.” Now the buses with their chemistry lesson, he says, are pointing the way to the future.

  16. Can the Developing World Skip Petroleum?

    1. Gretchen Vogel

    If technologies for hydrogen fuel take off, one of the biggest winners could be the developing world. Just as cell phones have allowed poor countries to leapfrog land-line networks that were never built, hydrogen from renewable sources—in an ideal world—could enable developing countries to leap over the developed world to energy independence. “The opportunity is there for them to become leaders in this area,” says Thorsteinn Sigfusson of the University of Iceland, one of the leaders of the International Partnership for a Hydrogen Economy (IPHE), a cooperative effort of 15 countries, including the United States, Iceland, India, China, and Brazil, founded last year to advance hydrogen research and technology development.

    With their growing influence in global manufacturing, their technical expertise, and their low labor costs, Sigfusson says, countries such as China and India could play extremely important roles in developing more efficient solar or biotech sources of hydrogen—as well as vehicles and power systems that use the fuel. “They have the opportunity to take a leap into the hydrogen economy without all the troubles of going through combustion and liquid fuel,” he says. The impact would be huge. The IPHE members already encompass 85% of the world's population, he notes.

    Different future.

    Countries not yet committed to fossil fuels might go straight to hydrogen.


    The current steps are small. For example, a joint U.S.-Indian project is working to build a hydrogen-powered three-wheel test vehicle. The minicar, designed for crowded urban streets, needs only one-tenth as much storage space as a standard passenger car. India's largest auto manufacturer, Mahindra and Mahindra, has shipped two of its popular gasoline-powered three-wheelers (currently a huge source of urban air pollution) to the Michigan-based company Energy Conversion Devices (ECD). Engineers at ECD are working to convert the engine to run on hydrogen stored in a metal hydride. One model will return to India for testing, and one will remain in the United States. The small project “is just the beginning,” says a U.S. Department of Energy official. “But the point of bringing in these countries is that they are huge energy consumers. They simply have to be part of the partnership, especially as we start to use the technologies.”

    Ideally, the developing world will be able to harness the solar energy plentiful in the tropics to power hydrogen systems, Sigfusson says. “The most important renewable will be the sun,” he says. “Mankind started as a solar civilization. We spent 200 years flirting with fossil fuels, but I believe we'll soon go back to being a solar civilization.”